Listen, think about it: the Chinese company DeepSeek has made publicly available an AI model so huge that even the most advanced machines struggle just to load it. It's called Prover V2, and it doesn't just chat or write code: it proves mathematical theorems by translating problems into formal logic in Lean 4. It's basically an AI scientist, seriously.
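
To give you a feel for what "formal logic in Lean 4" even means, here is a tiny illustration of my own (not a problem from DeepSeek's benchmarks): a statement written in Lean 4 together with a proof that the Lean checker can verify mechanically. This is the kind of output a prover model is expected to produce.

```lean
-- A toy example of a formalized statement plus a machine-checkable proof.
-- (My own illustration, not taken from DeepSeek's test set.)
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```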

And most importantly, it is completely open. On April 30, they posted the model on Hugging Face under the MIT license, so anyone can download and use it. But there's a caveat: the model weighs about 650 gigabytes, because it has 671 billion parameters, many times more than the previous Prover V1 and V1.5. So technically you can run it, but in practice, unless you have a supercomputer with a lot of video memory, the most you'll manage is to look at the file icon.

To trim the size at least a little, the developers quantized the model: roughly speaking, they reduced the numerical precision of the weights from the standard 16 bits to 8 bits. That cuts the size roughly in half and speeds things up, with only a minimal loss of quality.
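
A quick back-of-the-envelope check, using only the figures mentioned above (671 billion parameters, 16-bit vs 8-bit storage), shows why the download lands around the size it does:

```python
# Rough estimate of how much disk/VRAM the raw weights alone would need.
# The only inputs are the numbers from the post: 671B parameters, 16 vs 8 bits each.

PARAMS = 671e9  # parameters in DeepSeek Prover V2

def weight_size_gb(bits_per_param: float) -> float:
    """Size of the weights in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"16-bit weights: ~{weight_size_gb(16):,.0f} GB")  # ~1342 GB
print(f" 8-bit weights: ~{weight_size_gb(8):,.0f} GB")   # ~671 GB, close to the ~650 GB figure
```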

What's even more interesting: this whole Prover line seems to have grown out of an earlier DeepSeek model, R1, which made a splash earlier this year by nearly catching up with OpenAI's advanced o1 model. It was a real challenge to Western AI, and many called it a "sputnik moment": the point where it became clear that China had truly caught up with (or even overtaken) the leaders in a key technological race.

But along with the excitement came fears. Openly publishing such powerful models is both a good thing (accessibility, scientific progress) and a risk (potential abuse). Unlike closed systems like GPT, nobody can restrict how Prover V2 is used: anyone can modify it, run it offline, or do whatever they want with it. That is both the strength and the vulnerability of open AI.

All of this is made possible by methods like distillation and quantization. First you build a big, smart model, then you teach a smaller, simpler one to behave almost the same way. So although the original Prover V2 is a giant, there are already lighter models based on it that run even on smartphones.
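
For the curious, here is a minimal sketch of the general distillation idea in PyTorch: a small "student" model is trained to match the softened output distribution of a big "teacher" while still fitting the true labels. This is the textbook recipe under my own assumptions, not DeepSeek's actual training code, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft targets: KL divergence between the softened teacher and student distributions.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Hard targets: ordinary cross-entropy against the true labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1 - alpha) * ce

# Tiny usage example with random tensors standing in for real model outputs.
student_logits = torch.randn(4, 10)          # batch of 4 examples, 10 classes
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```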

But here's what I'm getting at. DeepSeek is doing serious work: building an AI that understands and operates on abstract mathematics, and doing it openly. For us as users, that can mean access to tools that were previously available only to universities and labs. But the risks here are huge too.

So, with all that in mind, would you be interested in trying to run such a powerful AI model at home, if it could help with, say, learning, programming, or even solving complex mathematical problems?

#AI #DeepSeek