These five historical lessons are worth learning.
On July 9, 2025, NVIDIA became the first publicly traded company to reach a market value of $4 trillion. Where will NVIDIA and the fluctuating field of AI go from here?
While predictions are difficult, there is a wealth of data available, and at a minimum it shows which past predictions failed to materialize, in what ways, and why. That data is history.
What lessons can be learned from the 80-year history of artificial intelligence (AI)? Over that period, funding has fluctuated, research and development approaches have varied widely, and public curiosity has swung between excitement and anxiety.
The history of AI began in December 1943, when the neurophysiologist Warren S. McCulloch and the logician Walter Pitts published a paper on mathematical logic, 'A Logical Calculus of the Ideas Immanent in Nervous Activity.' In it, they speculated about idealized, simplified networks of neurons and how they could perform simple logical operations by transmitting, or failing to transmit, impulses.
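To make that idea concrete, here is a minimal sketch of such an idealized threshold unit. This is an illustrative modern rendering, not McCulloch and Pitts' original formalism; the weights and thresholds below are one common way of expressing AND, OR, and NOT with binary inputs.

```python
# Illustrative sketch of a McCulloch-Pitts-style threshold unit.
# Inputs and outputs are binary: 1 = an impulse is transmitted, 0 = it is not.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Simple logical operations expressed as particular weight/threshold settings:
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)   # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)   # fires if at least one input fires
NOT = lambda a:    mp_neuron([a],    weights=[-1],   threshold=0)   # an inhibitory input suppresses firing

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```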
Ralph Lillie, a pioneer in the study of the physical chemistry of living organisms, described McCulloch and Pitts' work as giving logical and mathematical models 'reality' in the absence of 'empirical facts.' Later, when the paper's assumptions failed empirical testing, Jerome Lettvin of MIT noted that while the fields of neurology and neurobiology ignored the paper, it inspired 'a group destined to become enthusiasts of the new field (now known as AI).'
In fact, the McCulloch-Pitts paper inspired 'connectionism,' the specific variant of AI that dominates today, now referred to as 'deep learning' and recently rebranded simply as 'AI.' Although this approach bears no relation to how the brain actually works, the statistical analysis method underpinning it, artificial neural networks, is routinely described by AI practitioners and commentators as 'mimicking the brain.' Leading AI figures such as Demis Hassabis claimed in 2017 that McCulloch and Pitts' fictional account of how the brain operates, and similar studies, 'continue to lay the foundation for contemporary deep learning research.'
Lesson One:
Be wary of conflating engineering with science, speculation with science, and scientific papers filled with mathematical symbols and formulas with science.
Most importantly, resist the tempting illusion that 'we are as gods': the belief that humans are no different from machines, and that humans can create machines that are just like humans.
This stubborn and pervasive arrogance has been the catalyst for technological bubbles and periodic frenzies in AI over the past eighty years.
This brings us to the idea of artificial general intelligence (AGI): the claim that machines will soon possess human-like intelligence, or even superintelligence.
In 1957, AI pioneer Herbert Simon proclaimed, 'There are now machines that can think, learn, and create.' He also predicted that within a decade a computer would become the world chess champion. In 1970, another AI pioneer, Marvin Minsky, confidently stated, 'In three to eight years, we will have a machine with the intelligence of an average human being... Once the computers take control, we may never get it back. We will survive on their sufferance. If we are lucky, they may decide to keep us as pets.'
The anticipated arrival of AGI has had significant consequences, even shaping government spending and policy. In 1981, Japan allocated $850 million to its Fifth Generation Computer project, aimed at developing machines that think like humans. In response, after a long 'AI winter,' the U.S. Defense Advanced Research Projects Agency planned in 1983 to again fund AI research, with the goal of developing machines that could 'see, hear, speak, and think like humans.'
It took forward-looking governments around the world about a decade and billions of dollars to gain a clear understanding not only of AGI but also of the limitations of traditional AI. But once connectionism finally triumphed over the other AI paradigms in 2012, a new wave of predictions about the impending arrival of AGI swept the globe. OpenAI claimed in 2023 that superintelligent AI, 'the most influential invention in human history,' may arrive within this decade and 'could lead to humanity losing power, or even extinction.'
Lesson Two:
Be wary of shiny new things; examine them carefully, prudently, and wisely. They may not differ much from previous speculations about when machines might possess intelligence similar to humans.
Yann LeCun, one of the 'godfathers' of deep learning, has stated, 'To make machines learn as efficiently as humans and animals, we still lack some critical things; we just don't know what they are yet.'
For many years, AGI has been touted as 'just around the corner,' thanks to the 'first step fallacy.' Yehoshua Bar-Hillel, a pioneer of machine translation, was among the first to discuss the limits of machine intelligence. He noted that many people believed that if a computer could be shown to do something that until recently had been thought impossible, even if it did so poorly, only further technical development was needed for it to do the task perfectly. One merely had to wait patiently for that to happen.
But Bar-Hillel warned as early as the mid-1950s that this was not the case, and reality has proven, time and again, that it is not.
Lesson Three:
The distance from not being able to do something at all to doing it poorly is usually much shorter than the distance from doing it poorly to doing it very well.
In the 1950s and 1960s, the ever-increasing processing speed of the semiconductors driving computers led many into the 'first step fallacy.' As hardware improved each year along the reliable upward trajectory of 'Moore's Law,' it was widely assumed that machine intelligence would advance in lockstep with hardware.
However, alongside the steady improvement of hardware performance, AI development entered a new phase with the introduction of two new elements: software and the collection of data. Beginning in the mid-1960s, expert systems (computer programs that encode specialist knowledge) shifted the focus to acquiring and programming knowledge of the real world, in particular the knowledge of domain experts and their heuristics. Expert systems grew increasingly popular, and by the 1980s an estimated two-thirds of Fortune 500 companies were applying the technology in their daily business activities.
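As a rough illustration of that rule-based approach (a toy sketch, not any particular historical system), an expert system of that era encoded an expert's heuristics as explicit if-then rules and applied them by forward chaining over known facts. The rules below are hypothetical; real systems contained thousands of hand-coded rules.

```python
# Toy sketch of a rule-based expert system using forward chaining.
# Each rule: IF all conditions are known facts THEN add the conclusion as a new fact.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk"}, "refer_to_doctor"),
    ({"rash"}, "allergy_suspected"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "high_risk"}, RULES))
# Derives 'flu_suspected' and then 'refer_to_doctor' (set order may vary when printed).
```

The bottleneck the article describes is visible even in this sketch: every piece of expertise has to be elicited from a human and written down as another rule by hand.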
By the early 1990s, however, this AI boom had completely collapsed. Numerous AI startups went out of business, and major companies froze or canceled their AI projects. In 2003, expert system pioneer Ed Feigenbaum identified the 'critical bottleneck' that led to their demise: scaling up the knowledge acquisition process, 'a very cumbersome, time-consuming, and expensive process.'
Expert systems also struggled with knowledge accumulation: the need to continually add and update rules made them difficult and expensive to maintain. And they exposed the shortcomings of thinking machines relative to human intelligence. They were 'brittle,' making absurd mistakes when faced with unusual inputs; they could not transfer their expertise to new domains; and they had no understanding of the world around them. At the most fundamental level, they could not learn from examples, experience, or their environment the way humans can.
Lesson Four:
Initial success, meaning widespread adoption by businesses and government agencies and significant public and private investment, does not necessarily spawn a lasting 'new industry,' even after ten or fifteen years. Bubbles often burst.
Through the ups and downs, the hype and the setbacks, two distinctly different AI development methodologies have vied for the attention of academia, public and private investors, and the media. For more than forty years, rule-based, symbolic AI dominated. But example-based, statistically driven connectionism, the other major AI approach, also enjoyed brief periods of popularity, in the late 1950s and again in the late 1980s.
Before connectionism's resurgence in 2012, AI research and development was driven primarily by academia, a world marked by the prevalence of dogma (so-called 'normal science') and an either-or choice between symbolic AI and connectionism. In his 2019 Turing Award lecture, Geoffrey Hinton spent much of his time recounting the difficulties he and a handful of deep learning enthusiasts suffered at the hands of mainstream AI and machine learning scholars. Hinton also pointedly downplayed reinforcement learning and the work of his colleagues at DeepMind.
Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left the company), largely in response to the success of OpenAI, which had made reinforcement learning a component of its own AI development. Reinforcement learning's two pioneers, Andrew Barto and Richard Sutton, won the Turing Award in 2025.
Yet there is no sign that DeepMind, OpenAI, or the many 'unicorn' companies dedicated to AGI are focusing on anything beyond the currently popular paradigm of large language models. Since 2012, the center of AI development has shifted from academia to the private sector, but the entire field remains fixated on a single research direction.
Lesson Five:
Don't put all your AI 'eggs' in one 'basket.'
There is no doubt that Jensen Huang is an outstanding CEO and NVIDIA an exceptional company. When the AI opportunity suddenly emerged more than a decade ago, NVIDIA seized it quickly, because its chips (originally designed for the efficient rendering of video games) were well suited to deep learning computations thanks to their parallel processing capabilities. Huang remains vigilant, telling employees, 'Our company is only thirty days away from going out of business.'
Beyond staying vigilant (remember Intel?), the lessons of AI's 80-year history may also help NVIDIA navigate the fluctuations of the next thirty days, or the next thirty years.
This article is republished in collaboration with PANews.