#OpenAI
The concept of neural networks was invented back in 1943, long before ChatGPT, Midjourney, and other "magical AI" systems. Two scientists, McCulloch and Pitts, described how a neuron could be modeled with a simple formula. Computers barely existed, yet people were already dreaming of machine intelligence.
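To show just how simple that original idea is, here is a minimal sketch of a McCulloch-Pitts-style threshold neuron in Python. The values and the AND example are illustrative only, not taken from the 1943 paper:

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron.
# Inputs and output are binary; the neuron "fires" (outputs 1) when the sum
# of its excitatory inputs reaches the threshold and no inhibitory input is
# active. The threshold and example below are illustrative assumptions.

def mcculloch_pitts(excitatory, inhibitory, threshold):
    if any(inhibitory):          # any active inhibitory input vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Example: a unit that behaves like logical AND of two binary inputs
print(mcculloch_pitts(excitatory=[1, 1], inhibitory=[], threshold=2))  # -> 1
print(mcculloch_pitts(excitatory=[1, 0], inhibitory=[], threshold=2))  # -> 0
```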
🔴 1943 — Warren McCulloch and Walter Pitts create the first mathematical model of a neuron. Pure theory.
🔴 1957 — Frank Rosenblatt builds the Perceptron — the first trainable neural network, running on a military computer.
🔴 1969 — Minsky and Papert prove that the Perceptron is too limited. Interest in the topic fades for nearly two decades.
🔴 1986 — Hinton and colleagues revive interest in neural networks by popularizing the backpropagation algorithm for training multilayer networks.
🔴 1990s–2000s — A lull. Neural networks work, but slowly and inefficiently: too little data, too little computing power.
🔴 2012 — AlexNet (Krizhevsky, Sutskever, and Hinton again) wins the ImageNet competition. The modern era of deep learning begins.
🔴 2014 — VGG16: deeper, simpler, more powerful. A network with 3×3 convolutions and 16 layers becomes a classic and the foundation for many models.
🔴 2017 — The Transformer architecture (the paper "Attention Is All You Need" from a Google team). Instead of recurrent networks, pure attention (self-attention), which brought a major leap in training speed and quality; a minimal sketch of self-attention follows the timeline below. This is the foundation for BERT, GPT, T5, LLaMA, and nearly all modern language models.
🔴 2018+ — GPT, Midjourney, voice AIs, bots, agents, memes. Neural networks are everywhere.
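To make the "pure attention" idea a bit more concrete, here is a minimal sketch of single-head scaled dot-product self-attention in Python with NumPy. The shapes and random matrices are illustrative assumptions; real Transformers add multiple heads, learned projections per layer, masking, and much more:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    X          : (seq_len, d_model) input token embeddings
    Wq, Wk, Wv : (d_model, d_k) projection matrices
    """
    Q = X @ Wq                               # queries
    K = X @ Wk                               # keys
    V = X @ Wv                               # values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how much each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                       # each output is a weighted mix of all value vectors

# Toy usage: 4 tokens, embedding size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```

Because every token attends to every other token in one matrix multiplication, the whole sequence can be processed in parallel, which is exactly where the training-speed leap over recurrent networks comes from.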
The idea is over 80 years old, but it truly took off only with the advent of powerful GPUs, big data, and the internet.
More interesting news — subscribe