How I Use LLMs
How many Rs are there in the word strawberry?
There's a temptation to think of LLMs simply as productivity tools or coding assistants, or worse, like calculators for solving problems. You've likely seen the strawberry eval above, but I don't think their real value lies in solving these deterministic problems.
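To see why, note that a deterministic question like the strawberry one is a couple of lines of ordinary code, no model needed. A trivial Python sketch, just to make the point:

```python
# Counting letters is a deterministic problem; plain code answers it exactly.
word = "strawberry"
print(word.count("r"))  # prints 3
```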
Since LLMs are trained on the corpus of human knowledge and they're effectively statistical models that predict the next token, they really excel at solving problems that have already been solved.
Common workflows
I've found that people (myself included, at times) can use LLMs in a low-leverage way. They tend to work within their own expertise, which drastically reduces their solution space. They might ask the LLM to solve the problem for them without the additional context that comes from exploring options, and inevitably they end up unaware of potential solutions because they aren't tasking the LLM well. The reality is that you're limited by what you know exists, not by what actually exists.
During my math degree, I learned how mathematical concepts are applied in other domains. The most common one is physics; you quickly discover that physics is more or less applied mathematics. What was more surprising was seeing university-level math pop up in domains I didn't expect.
For example, university-level math pops up a lot in finance. Brownian motion was long used in physics to model particle movement, but it turns out it can also model the price movement of financial products.
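To make that crossover concrete, here's a minimal sketch of geometric Brownian motion, the standard Brownian-motion-based model behind many asset price models (e.g. Black-Scholes). The drift, volatility, and step counts below are made-up illustration values, not recommendations.

```python
import numpy as np

def simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, years=1.0, steps=252, seed=42):
    """Simulate one geometric Brownian motion price path:
    S_{t+dt} = S_t * exp((mu - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * Z)."""
    rng = np.random.default_rng(seed)
    dt = years / steps
    z = rng.standard_normal(steps)  # standard normal increments drive the Brownian term
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_returns))

path = simulate_gbm()
print(f"Simulated price after one year: {path[-1]:.2f}")
```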
Existing solutions are everywhere; you just need to make the right connections. The best way to do that is to expand the solution space, particularly into the parts of it you don't know.
Expanding the solution space
So what does "expanding the solution space" mean?
I often think of this great comic regarding human knowledge and PhDs. I've adapted it below with one addition: the frontier.

Imagine the entirety of human knowledge as a large circle. Our own knowledge covers only a portion of it, narrowing as we specialize through education and experience.
The frontier1 sits at the edge and holds the most difficult and niche problems. These problems require research and significant rigor to solve, hence the frontier is typically the realm of researchers (both academic and corporate).
Surprisingly, most of our problems aren't truly novel. We're typically solving a problem for which some form of the solution already exists.
When solving a problem alone, we're limited to our own experience and knowledge. This is why teams are much better at solving problems than single individuals: they can explore a larger solution space thanks to their diverse expertise.
This isn't solely about solutions, but also about problem-solving approaches. A problem might have several varied solutions or require a specific approach, and if you don't know a given strategy, your ability to solve the problem could be limited.

The goal is to get the LLM to search its knowledge base2, which is far larger than yours. If you can tap into it, you've given yourself a tool that can explore a much larger solution space than you could reach on your own.

My workflow
This is why reports and deep research are a major focus for AI companies.3 Why spend weeks digging for a needle in a haystack when you can task an LLM? The goal is to generate a diverse set of options and reduce the unknown unknowns. Your job as the human-in-the-loop should be to reason through them and then implement.
Here's how I typically use LLMs:
- Work through a problem myself and draft some ideas.
- Present my problem and understanding to the LLM.
- Ask it to explore new ideas, approaches, and gaps.
- Think over the response and revise my approach.
- Repeat. Critique its ideas and have it critique mine.
The key is step 5. Don't just offload your thinking to the LLM. You need to engage and think deeply about its suggestions. Humans are inherently limited by time; we can't research everything under the sun. What we do excel at is synthesising data and deciding.4 A rough sketch of what this loop can look like is below.
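Here's a sketch of driving the same back-and-forth through an API instead of a chat window. It assumes the OpenAI Python SDK and a gpt-4o model name purely for illustration; the example problem and prompts are hypothetical, and any chat interface works just as well.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Steps 1-2: my own framing of the problem and a first-pass idea (hypothetical).
problem = "Deduplicate ~50M user records with fuzzy name matches."
my_draft = "Normalise names, then compare pairs with Levenshtein distance."

messages = [
    {"role": "system", "content": "You are a critical engineering collaborator."},
    {
        "role": "user",
        "content": (
            f"Problem: {problem}\n"
            f"My current approach: {my_draft}\n"
            "What approaches am I missing? Point out gaps, risks, "
            "and alternatives from other domains."  # Step 3: explore
        ),
    },
]

for round_number in range(3):  # Step 5: repeat a few critique rounds.
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    print(f"--- Round {round_number + 1} ---\n{reply}\n")

    # Step 4 happens offline: read the reply, revise your approach yourself,
    # then feed your critique of its ideas back into the conversation.
    my_critique = input("Your critique / revised approach: ")
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": my_critique})
```

The important part isn't the code; it's that your critique goes back in each round rather than the model iterating on its own output unchecked.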
LLMs let you explore solution spaces you either weren't exposed to or didn't know existed. This doesn't mean you don't need to think, but it does make thinking easier. Give the flow above a try if you haven't; you'll be surprised by how many new ideas and approaches you're presented with.
Notes
1. You could define the frontier as outside the edge, as we haven't solved those problems yet, but I like to define those as problems we don't even know exist. Hence, the frontier is problems we do know exist, just haven't solved yet - i.e. within the universe of human knowledge.
2. Hallucinations are a common concern here. There are a number of strategies to mitigate them, and in the end you shouldn't rely only on LLMs for knowledge. Use research modes that cite information, and research things yourself if something seems off. This is where your intuition needs to shine.
3. I first heard about this from a great article by Jason Liu.
4. Thinking models are good at this too, but I've found that they can get stuck in thought spirals. So you should still review an LLM's reasoning to ensure you understand it.