People have been loudly proclaiming that AGI is "10 years away" roughly every decade since the 1950s. Whether or not that ambitious wish has already come true, it is worth noting that there is still no widely accepted consensus on what the term AGI even means; we know what it stands for, i.e., Artificial General Intelligence, but we do not know what it really means. A simple argument for the "hard problem of intelligence" may actually stem from this lack of consensus on AGI's definition. It may not be the most professional way to open an argument like this, but sometimes the devil in the details ought to be exposed before anything else, if only to start smelling the leakage in the sewer.
Alan Turing introduced the well-known Turing Machine (TM) in his 1936 paper "On Computable Numbers", sparking a revolution in the field of Informatics. As much as I respect the man for such incredible work, which brought us concepts such as computational universality and Turing completeness, I cannot walk through the history of the field without crediting another brilliant man who got there slightly before Turing, inventing an equivalent formalism, yet never receiving as much credit for it. The man's name is Alonzo Church, and he invented the lambda calculus. Luckily for Turing, Church then became his PhD mentor; it must be another "lucky coincidence" that Turing went on to write his PhD thesis at Princeton under the supervision of Church. Hopefully, you get the sarcasm in the tone. Anyhow, Turing had a vision for what these number-crunching machines could eventually be capable of achieving. For him (and some others, too), the ultimate goal of the field became solving intelligence. However, solving intelligence has only become more daunting over recent decades.
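To make "essentially the same concepts" a bit more concrete: in the lambda calculus, everything, even a number, is a function. Below is a minimal sketch in Python (my own illustration, not taken from either man's writings) of Church numerals, doing arithmetic with nothing but function application: the very arithmetic a TM would grind out by shuffling symbols on a tape.

```python
# Church numerals: a natural number n is encoded as the function that
# takes f and returns "apply f exactly n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))               # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n
mul  = lambda m: lambda n: lambda f: m(n(f))                  # m * n


def to_int(n):
    """Decode a Church numeral back into a plain Python int."""
    return n(lambda k: k + 1)(0)


one, two = succ(zero), succ(succ(zero))
print(to_int(add(one)(two)))   # 3
print(to_int(mul(two)(two)))   # 4
```

That two formalisms this different compute exactly the same class of functions is the content of the Church-Turing thesis.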
Many predictions have been made throughout the decades, but most of them have been overly optimistic about how fast general intelligence will arrive in our machines. There are also certain people who claim that achieving AGI is not even possible, due either to the non-existence of such a thing as "general intelligence" or to the computational impossibility of having it. Roger Penrose, for instance, has argued that human consciousness has non-computational aspects[1], which would make it impossible for a computational system to possess general intelligence. However, not many people deeply believe in such claims. In fact, many people are over-optimistic about reaching AGI sooner rather than later. Let's take a look at some of the predictions made from the 1950s to the current date.
- In 1957, Herbert Simon stated that computers would be the world's chess champion within 10 years. He then predicted in 1965 that human-level intelligence would be achieved within 20 years.[2] Neither came true: no computer took the world chess title within 10 years (that took until Deep Blue beat Kasparov in 1997), and human-level intelligence did not emerge within 20.
- In 1967, Marvin Minsky predicted that AGI would be reached in the 1970s-1980s.[3] As of 2025, it still has not happened.
- In 1982, Edward Feigenbaum and the researchers behind Japan's Fifth Generation project predicted AGI by the end of the 1980s or early 1990s.[4] Instead, the second AI winter hit in the late 1980s, and we still do not have AGI in 2025.
- In 1990, Ray Kurzweil predicted that highly intelligent systems would emerge in specific domains by the late 1990s and early 2000s, with AGI to follow soon after. He later refined this prediction, giving 2029 as the (rough) date for AGI.[5] Based on the current state of the theoretical frameworks, I do not think we will have AGI by then.
- In 2005-2006, Ben Goertzel predicted that AGI might be 5-10 years away.[6] We are in 2025, and AGI is still not here.
- In 2020, Elon Musk predicted AGI would be 5 to 10 years away (i.e., by 2025-2030)[7], along with warnings about the (existential) risks to be prevented. Humans have not yet faced those exaggerated existential risks, and AGI has not shown up either.
- In 2020-2021, Sam Altman and others predicted another 5-10 years before reaching AGI[8]. In 2023, Altman predicted 2028-2033[9], and Dario Amodei predicted ~2030 for AGI[10]. Once again, based on the current state of the theoretical frameworks, I do not think we will have AGI by then.
- In 2025, Demis Hassabis predicted 5-10 years for AGI.[11] I am waiting to be surprised…
There is a pattern in the history of predictions above: humans tend to underestimate human intelligence, let alone a "general intelligence" even more powerful than what humans possess. Personally, I believe that "general intelligence," in the sense of intelligence more powerful than humans', is probably achievable, and I think it may take another ~30 years to get there, if not much more. The thing is, each time we think we have finally gotten it, we are hit by many, many edge cases that are unbelievably easy for humans to handle but hard for AI systems to crack. Then we start questioning whether we have gotten even 20% of it right. For example, some people may claim that ChatGPT is smarter than humans, yet it fails at addition problems that most humans can solve, given enough time. Some will object that we are using the wrong tool for adding numbers, that we should just use a calculator or let ChatGPT use one. But then, how would you trust and rely on the superior intelligence of a system that uses a calculator to add two numbers not because it does not want to waste computation/time on such a simple problem, but because it does not know how to do it by itself?
Lots of folks underestimate human intelligence nowadays. I think that cracking intelligence is a challenging task because understanding the whys and hows of intelligence is hard. The following perspectives explain further why I think this way.
Measuring Intelligence is Hard. Trying to measure intelligence is like trying to make predictions on the stock market. It is totally different from measuring, say, the distance between two points A and B on a 2D plane. The reason is that the two points on the plane are passive, meaning that they do not react or change according to the actions of the measurer. Once a system becomes reactive, it is harder to make an objective measurement of it, because it reacts to the actions of the measurer in some way. For example, if you were to measure the distance between the two points using a tape measure or a ruler, the points would stay in place regardless of the measurement device of your choice. However, if the points were reactive, then one of them might not like the ruler for some reason and decide to move an inch away from it; each time you bring the other end of your ruler toward that point, it moves according to how fast the tip approaches, and you get a different measurement. Once you have understood how the system reacts to your actions and to the environment around it, though, you can make an accurate measurement: you would, for example, intentionally move the ruler in a certain direction and then recover the true distance between the two points by taking the movement of the tip into account.

That being said, there is also a third type of system, classified as proactive. Proactive systems act without any explicit external stimuli. The two-point system would be proactive if one or both points moved in "seemingly random" directions even before you decided to use a ruler, a tape measure, or anything else. Making an objective (i.e., universally true/reproducible) measurement of such systems is much harder than making one of the passive and reactive systems mentioned earlier. A system of sufficient intelligence is almost always proactive in nature, making it far harder to measure correctly.

Let's take an example. Say you wanted a custom suit for an event, and a tailor needs to take a bunch of measurements of you. You have opened your arms in a T shape, and the tailor moves toward you to take a measurement. The tailor puts the cloth measuring tape on you and starts adjusting its two ends for an accurate reading, but then your back itches so badly that you cannot resist pulling your shoulder blades together. That is when you mess up the whole process, and now you will have to walk around with your shoulder blades pulled together once that suit is ready. The reason tailors ask you to stay still is that they want to convert your system from proactive to passive, or reactive at most (reactive enough that you still respond when asked to switch positions, or when told you can pick up your suit a few days later). As a proactive intelligent system, you compromise and act like a reactive one. In other words, you make yourself dumber so that a measurer can take correct measurements of you. However, it is not this easy to measure a proactive/intelligent system without an established two-way communication ground. You can imagine a French tailor trying to take the measurements of an English person.¹
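To make the passive/reactive/proactive distinction concrete, here is a minimal toy simulation in Python. It is my own sketch, not a standard model: the 1-D world, the `tip_speed` parameter, and class names like `ProactivePoint` are all made up for illustration. The passive point always returns the true distance, the reactive point flinches away as a lawful function of how fast the ruler tip approaches, and the proactive point drifts on its own whether you measure it or not.

```python
import random

TRUE_DISTANCE = 10.0  # the ground-truth quantity we are trying to measure


class PassivePoint:
    """Passive system: ignores the measurer entirely."""
    def observe(self, tip_speed):
        return TRUE_DISTANCE


class ReactivePoint:
    """Reactive system: moves away from the ruler, but only as a
    lawful function of how the measurement is performed."""
    def observe(self, tip_speed):
        return TRUE_DISTANCE + 1.0 * tip_speed  # reaction can be modeled and undone


class ProactivePoint:
    """Proactive system: drifts on its own, measured or not."""
    def __init__(self):
        self.drift = 0.0

    def observe(self, tip_speed):
        self.drift += random.gauss(0.0, 1.0)  # moves regardless of the ruler
        return TRUE_DISTANCE + self.drift + 1.0 * tip_speed


def measure(point, trials=5):
    """Take several readings, each with a different ruler-tip speed."""
    return [round(point.observe(tip_speed=random.uniform(0.5, 2.0)), 2)
            for _ in range(trials)]


random.seed(0)
print("passive:  ", measure(PassivePoint()))    # identical readings
print("reactive: ", measure(ReactivePoint()))   # varies, but lawfully with tip speed
print("proactive:", measure(ProactivePoint()))  # varies even after modeling the ruler
```

The reactive readings differ between trials, but since the reaction is a known function of `tip_speed`, you can invert it and recover `TRUE_DISTANCE`; the proactive readings carry a hidden, self-generated drift that no model of your own measuring procedure can remove.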
Turing Machine versus Cellular Automaton. These two different ways of computing have made me think about the nature of computational intelligence many times. The dichotomy is the following: Singular versus Collective Intelligence, or Centralized versus Distributed Intelligence, or Apparent versus Emergent Intelligence. The Turing Machine (TM) represents Singular/Centralized/Apparent Computation, whereas the Cellular Automaton (CA) represents Collective/Distributed/Emergent Computation. It has become natural for humans to think that we, as intelligent entities, perform intelligent actions in the former manner. To elaborate: we often tend to think that intelligence is singular, that is, each human being possesses intelligence regardless of the other (so-called intelligent) entities; centralized, that is, intelligence resides or comes into existence in the brain, of all the different organs; and apparent, that is, we are aware and conscious of the end goals of our intelligent actions. Although these beliefs seem reasonable in certain cases, there are other cases where one cannot help but question whether the opposite is also true.
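For a concrete taste of the CA side of this dichotomy, below is a minimal sketch in Python of an elementary cellular automaton running Rule 110, which is known to be Turing-complete. The point to notice is that each cell only ever sees itself and its two immediate neighbors; whatever computation happens is smeared across the whole row and emerges from purely local updates.

```python
RULE = 110  # elementary CA rule number; Rule 110 is Turing-complete


def step(cells, rule=RULE):
    """One synchronous update. Each cell's next state depends only on
    itself and its two neighbors -- there is no global controller."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


# Start from a single live cell and watch structure emerge from local rules.
cells = [0] * 63 + [1] + [0] * 63
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

No single cell "knows" anything about the global pattern, and yet, with the right input encoding, this local rule can simulate any Turing Machine. That is Collective/Distributed/Emergent computation in its purest form.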
The reason why you might think that intelligence is singular could be the following imaginary scenario: if every human being on Earth suddenly disappeared, you would still be as intelligent as you were yesterday. In contrast, the reason why you might think that intelligence is collective could be the following argument: you were not born with the intelligence you possess right now, nor was it given to you at birth; you learned how to be intelligent by going to school and learning from the knowledgeable people around you. So, if you had been born in the jungle, you would have ended up like Tarzan at best, if not dead. Is intelligence singular or collective now?
As for the centralized versus distributed aspects of intelligence, you may think that it is the brain that produces your intelligence, but even if it is the brain that does the amazing stuff, it is our level of abstraction that determines whether the intelligence produced looks centralized or distributed. If we speak at the level of neurons, well, the brain is full of them, and hence intelligence seems distributed among all the neurons in the brain. It is also worth questioning whether the brain is even the only organ that produces intelligent behavior; it might be that it produces most of it, and we are overlooking other organs that have a small but important share in the process. So, is intelligence centralized or distributed now?
All decisions that we consciously make in our everyday lives are, by definition, made with conscious consideration. Human cognition plays an important role in the decisions we make when faced with certain situations. If I asked you why you put your cap and shoes on, you might just say because you wanted to go out, and that is simply why. If I asked you why you cooked a meal, you might just say because you were hungry and needed to eat something, and that is simply why. So, it is somewhat easy for us to believe that we do what we do because we know what we want to do. However, I would argue that it depends on the level of granularity at which we are willing to define what we want to do. For example, if you were a game developer in a big company whose job was to implement ray tracing for realistic graphics in the company's commercial game engine, you would know the reason for what you are doing only at a very localized level of the company's overall goal. You know that the company wants realistic graphics support, but you do not know exactly why. In other words, every developer and engineer in that company works under a very localized set of goals without actually knowing the ultimate objective of the company they work for. The company itself almost becomes an intelligent entity whose intelligence is distributed across its human workers. Another way to look at this: no single person in the company knows how to run the whole thing by themselves, and yet the whole thing runs smoothly by leveraging the manpower of hundreds or thousands of such workers. With each individual pursuing different localized goals, the company produces a AAA game with realistic graphics, an appealing storyline, and everything else that no single person could have anticipated beforehand. So, is intelligence apparent or emergent now?
Properties of Intelligence are vague and/or unknown. Making measurements of a significantly intelligent system is hard, as described earlier. However, we could try to make separate measurements of different aspects of such a system. Doing so requires knowing those different aspects in a clearly defined way beforehand. And here comes the problem: we do not even know for sure what the different aspects of intelligence are. The ones we claim to know are almost never clearly defined, leaving intelligence a vague and only imprecisely measurable notion. So, one wonders what the different aspects of intelligence are. Resource efficiency, goal completion, goal creation, self-replication, or something else? Most theories of intelligence do not take resource-efficiency constraints into account while developing a formal groundwork for building intelligent systems. Yet most of us would probably agree that for a system to even start becoming intelligent, it would first need to survive in the wild, which requires resource efficiency to some extent. For example, if I had mindlessly gambled away all my money, I would not be able to write this blog post right now. So, we cannot just ignore how important the handling of resources is to intelligence. Whether for the sake of self-preservation or goal completion, resource efficiency seems to be an important part of life. That said, there may be many other aspects, such as goal completion, goal creation, self-replication, and so on. However, measuring some of these aspects may turn out to be even trickier than one would initially imagine.
A field of disillusioned people. You can imagine that there are many people in the AI business because of trends, fame, or money. Roughly speaking, these groups pollute the workspace for the real ones who are simply in love with doing research and coming up with new, cool stuff. As the field gets more and more popular, it becomes more prone to factors that are not pure science. Is it politics? Is it ethics? Is it something else? Maybe it is all of the above… "Is the AI politically correct? Guys, let's spend a significant amount of time fixing that." (While the AI is not even mathematically correct, but who cares, I guess?) "Is the AI ethical? Guys, let's make sure it won't be racist or end up killing everyone on Earth eventually." (While AI is probably already heavily used by governments to build weapons, for obvious reasons, but who cares, I guess?) The smell of hypocrisy is once again so strong in the air that one simply cannot get away from it…
References
- [1] Wikipedia contributors. (2025, January 3). The Emperor’s New Mind. In Wikipedia, The Free Encyclopedia. Retrieved 13:36, April 20, 2025, from https://en.wikipedia.org/w/index.php?title=The_Emperor%27s_New_Mind&oldid=1267023626
- [2] Wikipedia contributors. (2025, April 4). Herbert A. Simon. In Wikipedia, The Free Encyclopedia. Retrieved 11:55, April 20, 2025, from https://en.wikipedia.org/w/index.php?title=Herbert_A._Simon&oldid=1283916532
- [3] Kuipers, B. Opinions on AI progress. https://web.eecs.umich.edu/~kuipers/opinions/AI-progress.html
- [4] Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4
- [5] Wikipedia contributors. (2025, March 14). Ray Kurzweil. In Wikipedia, The Free Encyclopedia. Retrieved 12:15, April 20, 2025, from https://en.wikipedia.org/w/index.php?title=Ray_Kurzweil&oldid=1280478194
- [6] The Kurzweil Library. "We could get to the Singularity in ten years." https://www.thekurzweillibrary.com/we-could-get-to-the-singularity-in-ten-years
- [7] Reuters (April 8, 2024). "Tesla's Musk predicts AI will be smarter than the smartest human next year." https://www.reuters.com/technology/teslas-musk-predicts-ai-will-be-smarter-than-smartest-human-next-year-2024-04-08/
- [8] Video interview (timestamped). https://youtu.be/xXCBz_8hM9w?si=4FiUs8MmWTpWJ4ns&t=2772
- [9] YourStory (February 2025). "Sam Altman's 2035 AI prediction." https://yourstory.com/2025/02/sam-altmans-2035-ai-prediction
- [10] Video interview (timestamped). https://youtu.be/ugvHCXCOmm4?si=egwFP-Pf4LDefkr_&t=8231
- [11] CNBC (March 17, 2025). "Human-level AI will be here in 5 to 10 years, DeepMind CEO says." https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html
- ¹ If you have been in France for a few months, you will get what I am saying.