In the development of AI, a long-standing issue has been the 'black box' problem: models produce results, but it is difficult to prove how they reached their conclusions.
DeepProve-1, launched by Lagrange, has officially changed this. It is the first production-ready zkML system capable of generating zero-knowledge proofs for the complete inference process of large language models (LLMs). In practice, this means models ranging from GPT-2 and LLaMA to open-source models such as Gemma and Mistral can now deliver 'verifiable results'.
The key behind this is DeepProve-1's comprehensive coverage of the Transformer architecture, including core layers such as Attention, LayerNorm, Embedding, and Softmax, along with support for complex computation graphs (DAGs) rather than only simple linear networks. This allows the verification of LLMs to move beyond the laboratory and into production. More importantly, DeepProve-1 introduces a token-by-token inference engine, meaning it will eventually be possible not only to prove the overall correctness of a model's output but also to track and prove each inference step incrementally.
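To make the token-by-token idea concrete, here is a minimal, purely illustrative sketch (not DeepProve-1's actual API): the toy model, the `toy_next_token` step, and the hash-based 'proof' are all hypothetical placeholders standing in for a real Transformer and a real succinct zero-knowledge proof. The sketch only shows the flow in which a prover commits to the model, attaches one proof per generated token, and a verifier checks the whole transcript without ever seeing the weights.

```python
import hashlib
import json

# Toy prover/verifier pair illustrating token-by-token verifiable inference.
# The hash below is only a placeholder for a real zero-knowledge proof; it
# makes the control flow visible but is not cryptographically sound on its own.

def commit(model_weights: dict) -> str:
    """Commitment to the model: the verifier never sees the weights themselves."""
    return hashlib.sha256(json.dumps(model_weights, sort_keys=True).encode()).hexdigest()

def toy_next_token(model_weights: dict, context: list) -> str:
    """Placeholder inference step (a real system proves the full Transformer here)."""
    return model_weights.get(context[-1], "<eos>")

def prove_step(model_weights: dict, context: list, token: str) -> str:
    """Stand-in 'proof' binding (model commitment, context, output token) together."""
    payload = json.dumps({"model": commit(model_weights), "ctx": context, "tok": token})
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_step(model_commitment: str, context: list, token: str, proof: str) -> bool:
    """The verifier checks each step against the commitment, never the weights."""
    payload = json.dumps({"model": model_commitment, "ctx": context, "tok": token})
    return hashlib.sha256(payload.encode()).hexdigest() == proof

# Prover side: generate tokens and attach one proof per step.
weights = {"hello": "world", "world": "<eos>"}
context, transcript = ["hello"], []
for _ in range(2):
    tok = toy_next_token(weights, context)
    transcript.append((list(context), tok, prove_step(weights, context, tok)))
    context.append(tok)

# Verifier side: holds only the model commitment and the step-by-step transcript.
c = commit(weights)
print(all(verify_step(c, ctx, tok, prf) for ctx, tok, prf in transcript))  # True
```

In a real zkML system, each per-token proof would attest that the committed weights actually produce that token, which is what lets an auditor trust the transcript without re-running or inspecting the model.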
The value of this breakthrough is that it lays a foundation for verifiable AI: financial institutions can confirm that models are compliant, medical auditors can verify the transparency of a diagnostic process, and defense applications can confirm that models behave as expected. For the industry, this marks a turning point in the AI trust framework.