OpenAI CEO Sam Altman has reignited one of the tech world’s favorite debates: whether we will soon see the advent of superintelligent AI. In a recent blog post, he wrote that such a system, one surpassing human cognitive abilities, could emerge in “a few thousand days,” ushering in a revolution in global progress.
Altman’s musings on highly advanced artificial intelligence systems have sparked a flurry of responses from researchers and industry observers. As the prospect of AI outperforming humans across a widening range of tasks becomes a hot topic, the tech community is asking an important question: How soon could this change happen, and what might it mean for humanity?
At once provocative and speculative, Altman’s post serves as a Rorschach test for the industry’s hopes and concerns about AI’s trajectory.
The skeptic’s view
While no one denies that AI is advancing quickly, some observers say we are nowhere near superintelligence.
“It’s totally exaggerated,” says Brent Smolinski, IBM VP and Global Head of Technology and Data Strategy. “I don’t think we’re even in the right zip code for getting to superintelligence.”
Despite impressive strides in certain areas, AI still lacks fundamental elements of human-like intelligence, according to Smolinski. “There’s something fundamentally missing that will get us to superintelligence,” he says.
One key issue is the efficiency gap between human and machine learning. Smolinski contrasts AI and human learning processes: “For these large language models, to learn how to dialog, you have to feed it the whole corpus of the internet to get to the point where you can interact with it. Human beings [need] a tiny sliver.”
AI is also far from achieving the versatility humans demonstrate when learning diverse skills, from language to physical tasks like playing golf or driving a car. That breadth of learning, Smolinski argues, marks a fundamental difference between human intelligence and current AI capabilities.
Smolinski outlines several elements of true superintelligence: inductive and deductive reasoning abilities, creativity, knowledge representation through mental models, real-time learning and adaptation, and consciousness.
Quantum computing might ease some of AI’s computational constraints, potentially “push[ing] the envelope of what AI could do,” Smolinski says. But its impact on achieving true superintelligence remains uncertain.
Another issue Smolinski highlighted is the need for a clear, agreed-upon definition of superintelligence. “If you get a room of six computer scientists and ask them what superintelligence means, you’ll get 12 different answers,” Smolinski says.
The superintelligence camp
Altman is not alone in his predictions about superintelligence. Roman V. Yampolskiy, a Professor of Computer Science and Engineering at the University of Louisville, says that artificial general intelligence, often described as AI that matches human-level cognitive ability, is progressing quickly, “and soon after superintelligence is happening in 3-4 years.”
Yampolskiy warns that once artificial intelligence surpasses human-level intelligence, it may become virtually impossible to maintain control over such a system. This superintelligent AI could operate in ways that are fundamentally unpredictable and beyond our ability to manage or constrain. This lack of control, combined with the possibility that a superintelligent system may not share or prioritize human values, could lead to scenarios that threaten humanity’s existence, the researcher says.
“In comparison, all immediate concerns such as bias, unemployment, deepfakes, misinformation are trivial in terms of [this] negative impact,” he says.
The consciousness conundrum
Consciousness is a particular sticking point in superintelligence discussions: would superintelligent machines need to be conscious in order to outthink us? According to Smolinski, true superintelligence would require not just computational power, but also some form of consciousness or self-awareness—features current AI systems lack.
Current AI models excel at combinatorial creativity, combining existing ideas in novel ways, but they struggle with truly transformative leaps. Smolinski hypothesizes that this type of transformative creativity may be linked to consciousness in ways that remain little understood.
As a result, Smolinski is concerned about exaggerated fears surrounding AI.
“What I worry about is that this kind of feeds a bit of this fearmongering, which leads to things like, ‘Oh, we’ve got to regulate AI,’” Smolinski says. While regulation is important, he argues, it could protect established players while creating barriers for new entrants, potentially hampering progress.
Smolinski offers a final thought, emphasizing the importance of maintaining a balanced perspective on AI development: “AI is a powerful tool that can help us solve complex problems. But we need to approach its development thoughtfully, clearly understanding its current limitations and future possibilities.”