The Road to AGI

Why ChatGPT May Not Get Us There

Hello Coders! 👾

Over the past few years we’ve seen huge developments in artificial intelligence. One of the most notable is ChatGPT, the language model that appears to understand and generate human-like text. It’s a great piece of engineering, no doubt, but is it the gateway to Artificial General Intelligence? Is it even intelligent?

From ChatGPT to AGI

Artificial General Intelligence, better known as AGI, refers to highly autonomous systems capable of outperforming humans at most economically valuable work. It’s the holy grail of AI: the point where machines can truly understand, learn, and apply knowledge across a wide range of tasks. The key word here is ‘understand’. And that’s where the road gets rocky for ChatGPT.

At its core, ChatGPT is a prediction machine, a statistical model. It uses vector algebra and other mathematical techniques to predict what comes next in a sequence of words. It does not actually ‘understand’ the text the way humans do. It’s like a parrot trained to mimic human speech: it can sound incredibly convincing, but does the parrot really understand what it’s saying? Probably not.
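To make the “prediction machine” idea concrete, here’s a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and always predicts the most frequent successor. Real language models use learned vector representations and far more context, but the core objective is the same shape: given what came before, predict what comes next.

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often
```

No understanding is involved anywhere in that loop: the model has no idea what a cat is, only that the token “cat” frequently follows the token “the”.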

ChatGPT is a manifestation of what we call “narrow AI”. It’s brilliant at what it does, but it can’t generalize beyond the specific tasks it’s been trained on. It can’t make connections between different domains or think creatively the way humans can. It isn’t capable of independent thought or decision-making, the key hallmarks of AGI.

This isn’t to downplay the achievements of ChatGPT or similar language models. They are undeniably impressive and have a wide range of practical applications. But to mistake their capability for a stepping stone to AGI would be to misunderstand the fundamental difference between task-specific prediction and general understanding. These models lack genuine comprehension: they don’t form beliefs, desires, or fears, and they lack an internal representation of what they are dealing with, a key facet of intelligence.

To reach AGI, we need to build systems that don’t just mimic patterns in data, but truly understand and reason about the world. That’s a challenge of a different magnitude, one that will require new breakthroughs and innovations. ChatGPT and other narrow AI systems are part of the journey, but they aren’t the destination.

The buzz around the idea that ChatGPT and similar models are bringing us closer to AGI is, in my view, causing unnecessary confusion. It’s leading to misinformation, unwarranted fears, and false expectations. This could misdirect funding toward areas that promise more than they can deliver, while also spreading misunderstanding about what AI truly is and can achieve.

What is needed?

One promising area is the field of cognitive architectures, such as SOAR and ACT-R, which aim to provide a complete and unified theory of cognition. These architectures are designed to understand, learn, and adapt, much like a human mind.


SOAR stands for “State, Operator, And Result.” It’s a cognitive architecture that attempts to emulate human cognitive processes, such as decision-making, problem-solving, learning, and perception. The main idea behind SOAR is that all cognitive tasks are accomplished through a decision-making process. This process involves creating and testing hypotheses, generating and executing plans, and learning from the outcomes. SOAR uses production rules (if-then statements) to represent knowledge and uses this knowledge to make decisions.
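As a rough sketch of the production-rule idea, here is a heavily simplified decision loop in Python. This is not the real SOAR kernel; the state, rules, and operator names are all illustrative. Each rule pairs an if-condition with an operator, and on every cycle the system fires the first rule whose condition matches the current state.

```python
# A minimal production system in the spirit of SOAR's decision cycle.
# Each rule is (condition, operator_name, effect) — all illustrative.
state = {"hungry": True, "has_food": False}

rules = [
    (lambda s: s["hungry"] and not s["has_food"], "get-food",
     lambda s: s.update(has_food=True)),
    (lambda s: s["hungry"] and s["has_food"], "eat",
     lambda s: s.update(hungry=False, has_food=False)),
]

trace = []
while state["hungry"]:
    for condition, operator, effect in rules:
        if condition(state):   # the "if" part of the production matched
            trace.append(operator)
            effect(state)      # apply the operator, changing the state
            break

print(trace)  # ['get-food', 'eat']
```

Real SOAR adds much more on top of this skeleton, notably impasse handling (what to do when no rule clearly applies) and chunking, its mechanism for learning new rules from the outcomes of decisions.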

While SOAR is a powerful model, it’s still a work in progress. It needs to incorporate more aspects of human cognition, such as emotions and unconscious processes, to truly mirror human cognitive abilities.


ACT-R, or “Adaptive Control of Thought—Rational”, is another cognitive architecture aimed at simulating human cognition. It’s based on the idea that human cognition can be understood as the interaction of a set of modules, each responsible for a specific cognitive function (like vision, hearing, and motor control), with a central production system. ACT-R uses a combination of declarative memory (facts and information) and procedural memory (skills and habits) to perform cognitive tasks. It’s been used to build models that perform tasks ranging from basic arithmetic to air traffic control. Like SOAR, ACT-R is continuously being developed and refined, and it too needs to incorporate more aspects of human cognition to fully emulate human cognitive processes.
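The declarative/procedural split can be sketched in a few lines. This is a toy loosely inspired by ACT-R, not its actual API: the facts (“chunks”) and the retrieval skill below are invented for illustration. Declarative memory stores what you know; procedural knowledge is the fixed skill that decides how to act, here by attempting a retrieval and handling failure.

```python
# Declarative memory: chunks of factual knowledge (illustrative).
declarative = {
    ("3", "+", "4"): "7",
    ("capital", "France"): "Paris",
}

def solve(goal):
    """Procedural knowledge: a skill that acts by retrieving a
    matching chunk from declarative memory, or reports failure."""
    chunk = declarative.get(goal)
    if chunk is not None:
        return chunk          # retrieval succeeded
    return "no chunk found"   # retrieval failure

print(solve(("3", "+", "4")))       # 7
print(solve(("capital", "Spain")))  # no chunk found
```

In real ACT-R models, retrievals are noisy and take time that depends on how recently and frequently a chunk was used, which is what lets the architecture reproduce human error rates and reaction times.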

In summary, both SOAR and ACT-R are attempts to create a computational model of human cognition. While they aren’t perfect, they represent significant steps towards creating AI systems that can think and learn like humans, bringing us closer to the goal of AGI.


In conclusion, the road to AGI is long and filled with unknowns. While language models like ChatGPT represent significant progress, they are, in essence, still glorified prediction engines. They demonstrate the power and potential of AI, but they aren’t the final stop on the journey to AGI. To get there, we’ll need to continue pushing the boundaries of what’s possible, exploring old and new techniques, and redefining our understanding of intelligence itself.

Until then, let’s appreciate the journey and continue to marvel at the milestones we achieve along the way. The future of AI is a thrilling unknown, and I can’t wait to see where we end up.

Happy Coding! 🚀