An Introduction to Critical Thinking
The introduction of computers, cellphones, and the internet fundamentally shifted how the world produces and consumes information. What was once a localized and expensive commodity became globally available for free. That shift is happening again, and the stakes are higher than ever. The simplicity of these tools, and their seeming credibility, make them easy to trust. The most important skill of this new age is one that will quietly fade unless actively practiced: critical thinking.
This blog isn’t meant to scare you. It’s a reminder of the core loop that underlies what it means to think critically: question, research, validate, reflect — and repeat.
The onion routing protocol — which now powers much of the dark web and foreign intelligence operations — is only functional because of the civilian traffic that travels through it. The US Naval Research Laboratory released it publicly because an anonymity network used exclusively by a single agency provides no anonymity at all — every connection on it is obviously government traffic. Public adoption wasn’t a deception; it was a technical requirement. But the result is the same: a tool built for intelligence work depends on civilian users who have no reason to consider that their traffic serves as cover for operations they’ll never know about.
From the tools you use to the history you’re taught, rarely can anyone reach a sensible conclusion without thinking critically.
Question
Critical thinking starts with earnest questions. Are you invested in a specific answer, or are you seeking the truth? You’ve probably been in a situation where you’ve placed all your hope in getting a specific answer. You might have asked someone on a date, asked for feedback on your project, or even asked the mechanic about the status of your car. In all of these situations, you’ve probably just wanted the positive: the yes, the great, the amazing. Expectations will almost certainly cause disappointment. Always work from first principles. Once you strip away your assumptions, what truths lie in front of you?
Then there’s the matter of ego. You can’t be afraid of sounding like a fool. The uncomfortable reality is that if you’re genuinely challenging yourself, you will sound like you don’t know what you’re talking about. That’s the whole point. Courage isn’t asking questions when you already know the answer. It’s asking when you don’t. This is true of anyone who’s ever learned anything. The quality of your questions only improves if you keep asking them and reflecting on what comes back. A great question satisfies your curiosity — and often opens the door to even better questions.
Research
What comes next is not a single step, but a process — one that most people get wrong in one of two ways. The first is confirmation bias: exploring information only until they find data that confirms their existing beliefs. The second is unwarranted extrapolation: jumping to conclusions based on their own assumptions rather than genuine investigation, effectively bypassing meaningful exploration altogether.
Honest evidence is rarely one-sided: of a dozen relevant sources, half may support your belief and half contradict it. Confirmation bias means stopping at the half you like.
Unwarranted extrapolation is the mirror problem: fitting a trend to limited data and projecting it far beyond where the evidence holds.
To some extent, this is understandable. Few people are willing to spend significant time aggregating information without a clear goal in mind. Academic researchers dedicate years to investigating sources as part of a broader objective — and even then, only a small minority would continue if they knew their work would yield no meaningful results. This dynamic is part of why publication bias remains a persistent problem in academia.
Before LLMs, research meant potentially spending hours, days, or years gathering information from disparate sources. LLMs changed this by enabling tools and systems that aggregate that information into coherent summaries. But like any tool, they make mistakes. Search engines surface what’s popular, not what’s true. Books reflect their authors’ biases. LLMs are no different. They’re trained on massive corpora, then fine-tuned on curated datasets that make them more agreeable, and the companies hosting them embed system prompts shaped by internal guidelines and philosophies.
This creates a subtler problem: you’ll often get the “safe answer.” Ask a question with an assumption baked in, and the model will usually confirm it. That’s not research — that’s validation. This is where personas become useful. Prompting an LLM to reason like a skeptic, a domain expert, or someone from a different background shifts the lens it operates through — and you’ll surface meaningfully different perspectives on the same question. The goal isn’t finding one right answer; it’s stress-testing your assumptions from multiple angles before deciding what to believe. Of course, this assumes you know how to construct the right persona — which brings its own learning curve.
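As a sketch of the persona idea: the snippet below only constructs the prompts, one per persona, for the same question. The commented-out `ask_llm` call is a hypothetical placeholder for whatever chat API you actually use, and the persona wordings are illustrative, not prescriptive.

```python
# Persona-based prompting sketch. Only the prompt construction is real;
# `ask_llm` is a hypothetical stand-in for your chat-completion API.

PERSONAS = {
    "skeptic": "You are a rigorous skeptic. Challenge every assumption "
               "embedded in the question before answering it.",
    "domain_expert": "You are a senior practitioner in this field. Answer "
                     "with concrete mechanisms, not generalities.",
    "outsider": "You are intelligent but new to this field. Question the "
                "jargon and ask what evidence would change the answer.",
}

def build_prompts(question: str) -> dict[str, list[dict]]:
    """Produce one chat-message list per persona for the same question."""
    return {
        name: [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ]
        for name, system in PERSONAS.items()
    }

prompts = build_prompts("Is a microservices architecture right for a three-person team?")
for name, messages in prompts.items():
    print(name, "->", messages[0]["content"][:40])
    # answers[name] = ask_llm(messages)  # then compare all three answers
```

The point is the comparison step: one answer validates your framing, while three differently-framed answers stress-test it.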
Not all LLMs are equal, and not all prompts are either. The uncomfortable reality is that getting depth out of these tools still requires knowing how to ask. Companies building these models need mass adoption, and mass adoption requires accessibility, which means they inevitably trade depth for approachability. There will always be a tradeoff between a fast, easy answer and a deep, precise one.
Treat LLMs as tools that enable productivity, not solutions to your problems. Personas and careful prompting reduce the bias problem — they don’t eliminate it. LLMs don’t sidestep the need for research; they accelerate traversal of information, but deserve the same scrutiny you’d apply to any other source.
Validate
Validation is hard; there’s no clearer way to state it. Even established frameworks like the scientific method must continually adapt, and none of them is sufficient alone or guaranteed to produce correct answers. No single evaluation framework exists for complex, multi-variable problems. Any problem embedded in an open system has no guaranteed solution, let alone an optimal one. There will always be a depth tradeoff. The real question isn’t whether you can be certain (you can’t); it’s what level of knowledge is sufficient for your needs, and how wrong you are willing to be.
Reliable knowledge at scale is typically said to require independent verification by parties with conflicting interests; when independent lines of inquiry converge on the same answer, that convergence is called consilience. The reason conflicting interests matter is that parties with aligned interests share incentives to be wrong in the same direction. Adversarial pressure, in theory, corrects for this. But this only holds if people are actually willing to change their minds when confronted with contradicting evidence, and history suggests that’s the exception, not the rule. Paradigm shifts are slow, costly, and usually require the old guard to die out rather than concede. Peer review gets gamed. Entire fields calcify around bad assumptions for decades. Religion persists not because of evidence but because of social reinforcement, and it’s far from unique in that regard. Consensus, at any scale, is not the same thing as truth.
What consilience actually offers isn’t certainty — it’s a slower, noisier, but marginally more reliable filter than any individual perspective alone. Internally, the same principle applies: validation means determining whether sufficient evidence exists for a claim and whether the same conclusion can be reached independently. Learning rarely produces guarantees. What it produces is calibrated confidence — and a more honest sense of how wrong you’re willing to be.
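A small, concrete analogue of reaching the same conclusion independently: compute one quantity by two unrelated methods and check that they agree. The example below estimates π by Monte Carlo sampling and by the Leibniz series; the sample sizes and tolerance are illustrative choices, and agreement raises confidence without proving correctness.

```python
# Toy "independent verification": two unrelated estimates of the same
# quantity. Correlated errors are unlikely across unrelated methods,
# so agreement within tolerance is evidence (not proof) of correctness.

import random

def pi_monte_carlo(n: int = 200_000, seed: int = 0) -> float:
    """Fraction of random points in the unit square that land inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * hits / n

def pi_leibniz(terms: int = 200_000) -> float:
    """Partial sum of the alternating Leibniz series 4 * (1 - 1/3 + 1/5 - ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

a, b = pi_monte_carlo(), pi_leibniz()
print(a, b, abs(a - b))
assert abs(a - b) < 0.05  # two independent methods converging on one value
```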
Reflect & Repeat
Often, you’ll need to refine your approach — ask a more specific question, research further, validate, then ask again — before arriving at an answer you’re satisfied with. Learning is shaped by curiosity the same way evolutionary algorithms are shaped by environmental pressure: iteratively, with a goal in mind, but no guarantee of reaching the best possible solution. You’re effectively running a grid search across a vast, unstructured solution space — and like any search process, you’ll frequently land on local optima rather than global ones. Sometimes you’ll miss the answer entirely, or only recover part of it. But over time, your questions improve, your research improves, and you establish reasonable validation thresholds.
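The local-versus-global optimum point can be made concrete with a toy hill climb. The two-peak "knowledge landscape" below is invented purely for illustration: greedy search from the wrong starting point climbs the smaller peak and stops there, never discovering that a taller one exists.

```python
# Toy illustration of landing on a local optimum. Greedy improvement
# stops at whichever peak is nearest the starting point.

def landscape(x: float) -> float:
    # Two peaks: a local one near x = 2 (height 4) and a global one near x = 8 (height 9).
    return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2, 0)

def hill_climb(x: float, step: float = 0.1) -> float:
    """Greedily move in whichever direction improves, until neither does."""
    while True:
        best = max((x - step, x, x + step), key=landscape)
        if best == x:
            return x
        x = best

print(hill_climb(1.0))  # climbs the local peak near x = 2, never reaches 8
print(hill_climb(7.0))  # starting closer to the global peak finds it
```

Escaping a local optimum requires something greedy search lacks: a willingness to temporarily get worse, which in learning terms means revisiting a question you thought was settled.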
The loop doesn’t end at an answer — it ends at a reflection. Once you’ve evaluated what you found, you need to turn inward: what took longer than it should have, where you were wrong, what biases you caught yourself in, what you didn’t expect to learn. Reflection is just self-directed questioning — and it will often pull you down a slightly different path before returning you to the original problem. That detour isn’t a distraction; it’s the mechanism by which your questions get sharper. You repeat this loop until the cost of acquiring new information exceeds the value it returns. That’s not failure — that’s a rational stopping condition.
Snippet of the Week
This pattern of questioning, researching, validating, reflecting, and repeating maps closely onto the feedback loop in genetic evolution (GE).
In simple terms, GE poses a problem to a population: “can you secure resources?” In most scenarios, some portion of the population will be capable of answering that question. Those individuals compete to secure what they can — and those who succeed earn the ability to reproduce, having passed the selection process. During reproduction, characteristics from each parent are exchanged to produce offspring (crossover). Occasionally, a mutation occurs, making the individual more or less adapted to the problem nature is posing — this adaptability is called fitness. Over time, you expect the population’s average fitness to increase as individuals better suited to the problem proliferate, and those who aren’t gradually decay away.
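The loop above can be sketched as a minimal genetic algorithm. Every number here (genome length, population size, mutation rate) is an illustrative choice, and the fitness function is the classic one-max toy problem (count the 1-bits), not a model of any real resource competition.

```python
# Minimal genetic algorithm mirroring the loop above: fitness answers
# "can you secure resources?", selection keeps the fitter half,
# crossover splices two parents, and mutation flips an occasional bit.

import random

rng = random.Random(42)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):            # adaptedness to the posed problem
    return sum(genome)

def crossover(a, b):            # exchange characteristics at a random cut
    cut = rng.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):  # occasional random change
    return [bit ^ (rng.random() < rate) for bit in genome]

population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
start_avg = sum(map(fitness, population)) / POP_SIZE

for _ in range(GENERATIONS):
    # Selection: only the fitter half earns the ability to reproduce.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(*rng.sample(survivors, 2)))
                  for _ in range(POP_SIZE)]

end_avg = sum(map(fitness, population)) / POP_SIZE
print(f"average fitness: {start_avg:.1f} -> {end_avg:.1f}")
```

Average fitness climbs over the generations, but nothing guarantees the population reaches the global optimum, which is exactly the caveat that applies to your own question-research-validate-reflect loop.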