A topic I found interesting: a short history of neural networks

🔴 1943 — Warren McCulloch and Walter Pitts create the first mathematical model of a neuron. Pure theory

🔴 1957 — Frank Rosenblatt builds the perceptron, the first trainable neural network, running on a military computer

🔴 1969 — Minsky and Papert show that the single-layer perceptron is too limited (it cannot even learn XOR). Interest in the topic fades for nearly two decades

🔴 1986 — Rumelhart, Hinton, and Williams revive interest in neural networks by popularizing the backpropagation algorithm for training multi-layer networks (a toy sketch below)
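
A minimal sketch of what backpropagation does, assuming a tiny two-layer network trained on XOR with plain NumPy; the layer sizes, learning rate, and iteration count are illustrative choices, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer (size is an arbitrary choice)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output
    # backward pass: propagate the squared-error gradient layer by layer
    d_out = (out - y) * out * (1 - out)    # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden pre-activation
    # gradient descent step
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # outputs should move toward [[0], [1], [1], [0]] (may vary with the seed)
```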

🔴 1990s–2000s — a lull. Neural networks work, but slowly and inefficiently. Little data, weak hardware

🔴 2012 — AlexNet (Hinton again) wins the ImageNet competition. The modern era of deep learning begins

🔴 2014 — VGG16: deeper, simpler, more powerful. A network with 3×3 convolutions and 16 layers becomes a classic and a foundation for many models.
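
A minimal sketch of the VGG idea, assuming PyTorch: stacks of 3×3 convolutions followed by max pooling. The channel counts below match VGG16's first two blocks, but this is an illustration, not a full 16-layer reproduction.

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """n_convs 3x3 conv+ReLU layers, then a 2x2 max pool that halves the resolution."""
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# the first two of VGG16's five convolutional blocks
features = nn.Sequential(
    vgg_block(3, 64, 2),    # 224x224 -> 112x112
    vgg_block(64, 128, 2),  # 112x112 -> 56x56
)
print(features(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 128, 56, 56])
```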

🔴 2017 — the Transformer architecture (the paper "Attention Is All You Need" from a Google team). Instead of recurrent networks, pure attention (self-attention), which gave a significant boost in training speed and quality. This is the basis for BERT, GPT, T5, LLaMA, and almost all modern language models (a minimal sketch below)
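
A minimal sketch of scaled dot-product self-attention, the core operation of the Transformer, in plain NumPy; the sequence length, dimensions, and random weights are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; returns (seq_len, d_v)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # weighted mix of value vectors

seq_len, d_model, d_k = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))                   # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)                # (5, 8)
```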

🔴 2018+ — GPT, Midjourney, voice AI, bots, agents, memes. Neural networks are everywhere

Source: the DeCenter Telegram channel