Artificial Intelligence is Not Really Intelligent

It seems like everyone in computer science is rushing to strike gold in the AI boom (or selling the shovels, in NVIDIA’s case). And who can blame them? With everyone talking about generative AI taking over coding jobs, it feels like the clock is ticking. If the robots might steal your job, why not join their side? If you can’t beat ‘em, join ‘em, right?

Well, not so fast. Let’s pause for a moment to ask a thought-provoking question: Can AI truly generate novel ideas, or is it doomed to regurgitate patterns from its dataset?

Right now, our AI systems rely on pattern recognition. Advanced architectures like transformers and attention mechanisms enable incredibly sophisticated computation, but at its core it is still glorified statistics. This is why AI shines in domains where relationships can be quantified and strictly defined: language processing, image recognition, and beyond. But AI doesn’t think, dream, or intuit. It predicts, based on probability distributions learned from its training data.

That’s the crux: AI’s current paradigm has a ceiling. Its limitations are not just computational (though more power is always welcome) but also architectural. The von Neumann architecture, with its separation of memory and processing, just isn’t built for the massive parallelism AI demands. It’s time to ask the bigger question:

If today’s AI is “statistics with extra steps,” how do we build AI that transcends it?

We can start by talking about abduction and counterfactual thinking in AI.

You know, this reminds me of the summer program I attended when I was in middle school: CTY, at Johns Hopkins University. It was a course on both propositional and quantificational logic (the book used was Logic by Paul Tomassi, for anyone curious about my notation style).

Of course, if we have P -> Q as a premise, then ~Q implies ~P. However, Q does not imply P. What the heck am I even talking about? Well, assume the following:
P = It is raining
Q = The ground is wet.

In this case, the premise P -> Q reads as follows: if it is raining, then the ground is wet.

This makes sense! Obviously, the ground will be wet if it is raining. The contrapositive, ~Q implies ~P, makes sense too: if the ground is not wet, it cannot be raining, since rain would have made the ground wet.

The part that stumps people is this: Q does not imply P. The tempting claim is that if the ground is wet, then it is raining. Do you see the flaw? What if the ground is wet because of the sprinklers? What if someone spilled a bottle of water?
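
To make the distinction concrete, here is a tiny truth-table check in Python (a toy sketch, nothing AI-specific): it brute-forces every truth assignment and confirms that P -> Q together with ~Q entails ~P, while P -> Q together with Q does not entail P.

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

def entails(premises, conclusion):
    # The conclusion must hold in every truth assignment
    # that satisfies all of the premises.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# P -> Q together with ~Q entails ~P (modus tollens).
print(entails([lambda p, q: implies(p, q), lambda p, q: not q],
              lambda p, q: not p))   # True

# P -> Q together with Q does NOT entail P (affirming the consequent).
print(entails([lambda p, q: implies(p, q), lambda p, q: q],
              lambda p, q: p))       # False
```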

The point I am getting to is that AI is not very good at this sort of abduction. If you ask AI about logic, it will give you a pretty good overview, but when you ask it to make an abduction, it often fails to apply its own advice and may even hallucinate to force an answer.

How can AI do this? Enter Bayesian reasoning, a mathematical framework for balancing prior knowledge and new evidence (and also responsible for giving me many sleepless nights). Using Bayesian inference, AI could propose hypotheses and assign probabilities based on their plausibility. For instance, an AI tasked with diagnosing a patient could suggest diseases based on symptoms and prior probabilities from medical data.
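
As a concrete illustration, here is a minimal Bayes’ rule sketch in Python applied to the wet-ground example (every prior and likelihood below is a made-up number, purely for illustration): it ranks candidate explanations by posterior probability, which is essentially abduction as “inference to the most probable explanation.”

```python
# Which explanation of "the ground is wet" is most plausible?
# All probabilities are invented for illustration, not real data.

priors = {"rain": 0.30, "sprinkler": 0.10, "spilled water": 0.01}
likelihoods = {            # P(ground is wet | hypothesis)
    "rain": 0.95,
    "sprinkler": 0.90,
    "spilled water": 0.20,
}

# Bayes' rule: P(hypothesis | wet) is proportional to
# P(wet | hypothesis) * P(hypothesis), then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

for hypothesis, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {prob:.2f}")
# "rain" comes out on top, but nothing is ruled out.
```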

Now add counterfactual thinking and come back to my logic example of the ground being wet: What if you hosed the ground? What if the sprinkler system was on? Counterfactuals allow us to simulate “what if” scenarios. Structural Causal Models (SCMs) give AI a formal way to model these hypothetical worlds, enabling deeper reasoning beyond simple correlations. However, these models are very experimental and rudimentary. I have neither the time nor ability to explain SCMs in detail, but modeling how humans think is the first step in creating truly “innovative” AI.
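
Here is a toy sketch of that idea in Python, loosely inspired by Pearl’s classic sprinkler example (the structural equations are invented and deterministic, so this is far simpler than a real SCM): each variable is computed from its causes, and an intervention simply overrides a variable’s equation.

```python
# A toy structural causal model for the wet-ground story.
# Each variable is a function of its causes; an intervention
# replaces a variable's mechanism with a fixed value.

def simulate(raining, do=None):
    do = do or {}
    # Structural equations (deterministic to keep the sketch tiny).
    rain = do.get("rain", raining)
    sprinkler = do.get("sprinkler", not rain)   # assume sprinklers run only on dry days
    wet = do.get("wet", rain or sprinkler)
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# Observed world: it is raining, so the ground is wet.
print(simulate(raining=True))

# "What if" query: force the rain and the sprinkler off.
# Would the ground still be wet?
print(simulate(raining=True, do={"rain": False, "sprinkler": False}))
```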

Something else worth discussing is how synthesizing creative works is more than just randomness. We can use fancy mathematics like stochastic processes to introduce unpredictability (side thought: are we truly adding entropy if today’s hardware can only give us pseudorandomness?), but truly creative synthesis involves combining unrelated ideas under meaningful constraints. Think of randomness as throwing darts at a wall: you need a good set of rules to find patterns in the chaos.

Evolutionary algorithms are one example of constrained randomness. These systems mimic biological evolution: they generate variations (random mutations), evaluate them against a fitness function (constraints), and iterate. By balancing exploration and structure, such models can discover novel solutions, from optimized neural networks to groundbreaking drug designs.
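
A minimal sketch of that loop in Python (the fitness function is a toy stand-in, not a real design objective such as network accuracy or binding affinity): random mutation supplies the exploration, and selection against the fitness function supplies the structure.

```python
import random

def fitness(x):
    # Toy objective: maximize, with a peak at (1.0, -2.0, 0.5).
    target = (1.0, -2.0, 0.5)
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def mutate(x, scale=0.3):
    # Variation: small random Gaussian perturbations.
    return [xi + random.gauss(0, scale) for xi in x]

population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for generation in range(100):
    # Every individual produces a mutated offspring.
    offspring = [mutate(ind) for ind in population]
    # Selection: keep the fittest from parents + offspring.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

best = population[0]
print([round(v, 2) for v in best], round(fitness(best), 4))
```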

For AI to develop intuition, it needs something akin to abstract reasoning. Mathematical embeddings—where concepts are represented as points in a high-dimensional space—can help. For example, word embeddings in NLP (like Word2Vec) allow AI to map relationships like "king - man + woman = queen." Extending this idea, embeddings could represent more complex relationships across domains.
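
Here is a toy Python sketch of that arithmetic (the vectors are hand-picked three-dimensional stand-ins, not learned embeddings): subtract, add, and pick the nearest remaining word by cosine similarity.

```python
import numpy as np

# Toy embeddings, hand-picked for illustration only; real systems learn
# hundreds of dimensions from large corpora (e.g., Word2Vec, GloVe).
# Dimensions here loosely stand for (royalty, maleness, femaleness).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.05, 0.1, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land nearest to queen.
query = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(query, emb[w]))
print(best)  # queen
```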

Self-evolving architectures push this further. Think of Neural Architecture Search (NAS), where AI designs and optimizes its own neural network. This meta-learning approach mimics the adaptability of biological systems and paves the way for self-improving AI.
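
The simplest flavor of this is random search over a small architecture space. The Python sketch below is only a stand-in: a real NAS run would actually train and validate each candidate network, which is the expensive part, whereas here the score is a made-up proxy.

```python
import random

# Sample candidate architectures from a search space and keep the best scorer.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8, 16],
    "hidden_units": [64, 128, 256, 512],
    "activation": ["relu", "gelu", "tanh"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def score(arch):
    # Placeholder objective, NOT a real accuracy estimate:
    # arbitrarily prefers moderate depth and width.
    return -abs(arch["num_layers"] - 8) - abs(arch["hidden_units"] - 256) / 64

best = max((sample_architecture() for _ in range(50)), key=score)
print(best)
```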

On top of this, AI kinda sucks at causality right now. Causality is the backbone of reasoning. Unlike correlations (e.g., ice cream sales and drowning rates), causal relationships explain why things happen. AI can begin to grasp causality through techniques like SCMs, which model cause-and-effect relationships between variables.

But causality gets exciting with interventions. What happens if we tweak one variable? For instance, in marketing, what if we reduce the price of a product? Intervention modeling helps AI simulate outcomes, making it more effective in decision-making and planning.
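
To make that concrete, here is a toy Python sketch of simulating the intervention do(price = p) under an invented demand equation (every coefficient is made up; a real system would estimate the structural equations from data):

```python
import random

def simulate_revenue(price, n=10_000):
    # Average revenue under the intervention do(price = price),
    # using an invented noisy demand equation.
    total = 0.0
    for _ in range(n):
        demand = max(0.0, 500 - 30 * price + random.gauss(0, 20))
        total += price * demand
    return total / n

for price in [5, 8, 10, 12, 15]:
    print(f"do(price={price}): expected revenue ~ {simulate_revenue(price):,.0f}")
```

Under these made-up numbers, revenue peaks around a price of 8: the kind of answer an agent could use for planning, rather than merely noting that price and demand are correlated.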

Look, everything from counterfactual thinking to causality is interesting, but how do we actually get to the next AI revolution? Is it even possible? Everything from abandoning the von Neumann architecture to quantum computing has been proposed, but I think we can get to the next level of AI by studying the antithesis of computers: humans.

Almost every theory and algorithm can benefit from K.I.S.S.: Keep It Simple, Stupid. Our human brains are the most power-efficient computers out there, running on roughly 20 watts. Look, I’m not saying we don’t need complex algorithms, but drawing inspiration from our brains seems much better than brute-forcing a relationship with math (wow, human brain intuition at work). Neuromorphic computing, a field that takes its cues from the brain, could revolutionize AI. Unlike traditional architectures, neuromorphic chips excel at low-power, parallel processing, making them ideal for real-time AI tasks like robotics or sensory data processing.
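
As a flavor of what those chips compute, here is a toy Python simulation of a single leaky integrate-and-fire neuron, the basic spiking unit neuromorphic hardware implements natively (the parameters are illustrative, not tied to any particular chip).

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    # Simulate one leaky integrate-and-fire neuron over discrete time steps.
    v = v_rest
    spikes = []
    for t, current in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:            # threshold crossed: emit a spike, reset
            spikes.append(t)
            v = v_rest
    return spikes

# Constant drive for 200 time steps; the neuron fires at a regular rate.
print(lif_neuron([1.5] * 200))
```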

We’re standing at the threshold of AI’s next revolution. Building AI that can reason, create, and intuit requires breaking free from our current limitations—exploring new architectures, embracing causal reasoning, and blending randomness with structure. While today’s AI might be “glorified statistics” that doesn’t truly fit the term “intelligence”, tomorrow’s AI could become our most powerful collaborator, solving problems we’ve only begun to imagine.
