Written by: Gil Press

Compiled by: Felix, PANews

On July 9, 2025, Nvidia became the first publicly traded company to reach a market value of $4 trillion. What comes next for Nvidia, and for the volatile field of AI?

Although prediction is difficult, there is a wealth of data to draw upon. At the very least, it helps clarify where, how, and why past predictions failed to materialize. That is history.

What lessons can be drawn from the 80-year development of artificial intelligence (AI)? During this period, funding has fluctuated, research and development methods have varied widely, and the public has alternated between curiosity, anxiety, and excitement.

The history of AI began in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity." In the paper, they speculated about idealized, simplified networks of neurons and how they could perform simple logical operations by transmitting, or failing to transmit, impulses.
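To make the idea concrete, here is a minimal sketch in Python of such an idealized neuron computing simple logical operations; the weights and thresholds are illustrative choices, not values taken from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts-style neuron: it "fires" (outputs 1) only
# when the weighted sum of its binary inputs reaches a threshold. Weights and
# thresholds here are illustrative, not drawn from the original paper.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Simple logical operations expressed as single neurons.
AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a],    weights=[-1],   threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
    print(f"NOT 0={NOT(0)}  NOT 1={NOT(1)}")
```

By wiring such threshold units together, McCulloch and Pitts argued, networks of neurons could in principle realize more complex logical expressions.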

At the time, Ralph Lillie, who was pioneering the field of histochemistry, described the work of McCulloch and Pitts as giving logical and mathematical models 'realism' in the absence of 'experimental facts.' Later, when the paper's assumptions failed empirical testing, Jerome Lettvin of MIT noted that while neurology and neurobiology ignored the paper, it inspired 'the group destined to become enthusiasts of the new field' (now known as AI).

In fact, the McCulloch and Pitts paper inspired 'connectionism,' the variant of AI that dominates today as 'deep learning' and has recently been rebranded simply as 'AI.' Although this approach bears no relation to how the brain actually works, the statistical analysis method underpinning it, 'artificial neural networks,' is routinely described by AI practitioners and commentators as 'mimicking the brain.' Leading authorities and top AI practitioners, such as Demis Hassabis, asserted in 2017 that McCulloch and Pitts' fictional account of how the brain works, together with similar research, 'continues to lay the foundation for contemporary deep learning research.'

Lesson One: Be cautious about conflating engineering with science, speculation with science, and science with papers full of mathematical symbols and formulas. Above all, resist the tempting illusion that 'we are as gods': the belief that humans are no different from machines and that humans can create machines just like themselves.

This persistent and widespread arrogance has been a catalyst for technology bubbles and the periodic frenzy surrounding AI over the past 80 years.

This inevitably brings to mind the idea of artificial general intelligence (AGI): the claim that machines will soon possess human-like intelligence, or even superintelligence.

In 1957, AI pioneer Herbert Simon proclaimed, 'Machines that can think, learn, and create now exist in the world.' He also predicted that within ten years, computers would become chess champions. In 1970, another AI pioneer, Marvin Minsky, confidently stated, 'In three to eight years, we will have a machine with the intelligence of a normal person... Once computers gain the upper hand, we may never be able to take it back. We will survive by their grace. If we are lucky, they may decide to keep us as pets.'

The expectation of AGI's imminent arrival has been consequential, even shaping government spending and policy. In 1981, Japan committed $850 million to the Fifth Generation Computer Systems project, aimed at developing machines that think like humans. In response, the U.S. Defense Advanced Research Projects Agency, which had pulled back from AI during a long 'AI winter,' planned in 1983 to fund AI research once again, with the goal of developing machines that could 'see, hear, speak, and think like humans.'

It took open-minded governments around the world about a decade and billions of dollars to become clear-eyed not only about artificial general intelligence (AGI) but also about the limitations of the conventional AI of the day. When connectionism finally triumphed over rival AI schools in 2012, however, a new wave of predictions about the imminent arrival of AGI swept the globe. OpenAI claimed in 2023 that superintelligent AI, 'the most impactful invention in human history,' could arrive within this decade and 'could lead to the disempowerment of humanity or even human extinction.'

Lesson Two: Be wary of shiny new things. Examine them carefully, prudently, and wisely; they may differ little from earlier rounds of speculation about when machines will attain human-like intelligence.

One of the 'godfathers' of deep learning, Yann LeCun, has stated: 'We are still missing some key pieces to let machines learn as efficiently as humans and animals; we just don't know yet what they are.'

For years, artificial general intelligence (AGI) has been touted as 'just around the corner,' thanks to the 'first step fallacy.' Machine translation pioneer Yehoshua Bar-Hillel, among the first to discuss the limits of machine intelligence, observed that many people assumed that if a computer demonstrated it could do something long thought impossible, even if it did so poorly, further technical development alone would suffice to perfect it. With enough patience, it was widely believed, the goal would eventually be reached. But Bar-Hillel warned as early as the mid-1950s that this was not so, and reality has proven him right again and again.

Lesson Three: The distance from being unable to do something to doing it poorly is usually much shorter than the distance from doing it poorly to doing it well.

In the 1950s and 1960s, many fell for the 'first step fallacy' because of the ever-increasing processing speed of the semiconductors powering computers. As hardware advanced each year along the reliable upward trajectory of 'Moore's Law,' it was widely assumed that machine intelligence would advance in lockstep with it.

However, beyond the continuous improvement of hardware performance, AI development entered a new stage with two new factors: software and data collection. Starting in the mid-1960s, expert systems (computer programs that encode human expertise as rules) shifted the focus to acquiring and programming knowledge about the real world, especially the knowledge and heuristics of domain experts. Expert systems grew popular, and by the 1980s an estimated two-thirds of Fortune 500 companies were using the technology in their daily business activities.
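As a rough illustration of that rule-based style (not of any particular historical system), the toy Python sketch below encodes a few invented diagnostic rules and applies whichever ones match the known facts.

```python
# A toy, single-pass rule matcher in the spirit of an expert system: domain
# knowledge is written down as IF-THEN rules, and every rule whose conditions
# match the known facts contributes its conclusion. Facts and rules are invented
# purely for illustration.

RULES = [
    ({"engine_cranks": False, "battery_ok": False}, "replace_battery"),
    ({"engine_cranks": True, "fuel_present": False}, "add_fuel"),
    ({"engine_cranks": True, "fuel_present": True, "spark_ok": False}, "check_ignition"),
]

def diagnose(facts):
    """Return the conclusion of every rule whose conditions all hold in `facts`."""
    conclusions = []
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            conclusions.append(conclusion)
    return conclusions

if __name__ == "__main__":
    print(diagnose({"engine_cranks": True, "fuel_present": False}))  # ['add_fuel']
```

Every situation the rules do not anticipate yields no answer, and each new case requires an expert to hand-write more rules, which is exactly the knowledge acquisition bottleneck described below.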

However, by the early 1990s the AI boom had completely collapsed. Numerous AI startups went under, and major companies froze or canceled their AI projects. As early as 1983, expert systems pioneer Ed Feigenbaum had identified the 'key bottleneck' that would lead to their demise: scaling up the knowledge acquisition process, 'a very tedious, time-consuming, and expensive process.'

Expert systems also faced the challenge of accumulating knowledge: the constant need to add and update rules made them difficult and costly to maintain. They also revealed the shortcomings of thinking machines relative to human intelligence. They were 'brittle,' making absurd mistakes when faced with unusual inputs, unable to transfer their expertise to new domains, and lacking any understanding of the world around them. At the most fundamental level, they could not learn from examples, from experience, or from their environment the way humans can.

Lesson Four: Initial success, including widespread adoption by businesses and government agencies and substantial public and private investment, may not necessarily foster a lasting 'new industry,' even after ten or fifteen years. Bubbles often burst.

Amid the ups and downs, the hype and the setbacks, two distinctly different approaches to AI development have competed for the attention of academia, public and private investors, and the media. For more than forty years, rule-based, symbolic AI dominated. But example-based, statistics-driven connectionism, the other major AI approach, also enjoyed brief periods of popularity in the late 1950s and the late 1980s.

Before connectionism's revival in 2012, AI research and development were driven primarily by academia. Academia is characterized by the prevalence of dogma (so-called 'normal science'), and an either-or choice between symbolic AI and connectionism prevailed. In his 2019 Turing Award lecture, Geoffrey Hinton spent much of his time recounting the hardships he and a handful of deep learning enthusiasts endured at the hands of mainstream AI and machine learning scholars. Hinton also pointedly downplayed reinforcement learning and the work of his colleagues at DeepMind.

Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left the company), largely in response to the success of OpenAI, which had itself incorporated reinforcement learning as a component of its AI development. Reinforcement learning's two pioneers, Andrew Barto and Richard Sutton, received the Turing Award in 2025.

However, there is currently no sign that DeepMind, OpenAI, or the many 'unicorn' companies devoted to artificial general intelligence (AGI) are pursuing anything beyond today's dominant paradigm of large language models. Since 2012, the center of gravity of AI development has shifted from academia to the private sector, yet the entire field remains committed to a single research direction.

Lesson Five: Do not put all your AI 'eggs' in one 'basket.'

There is no doubt that Jensen Huang is an outstanding CEO, and Nvidia is an exceptional company. Over a decade ago, when the opportunities for AI suddenly emerged, Nvidia quickly seized the moment, as its chips (originally designed for efficiently rendering video games) were ideally suited for deep learning computation. Huang remains vigilant, telling employees, 'Our company is only 30 days away from bankruptcy.'

In addition to staying vigilant (remember Intel?), the lessons learned from the 80 years of AI development may also help Nvidia weather the upcoming 30 days or 30 years of ups and downs.