Mastering GPT-2 is not enough; Lagrange is accelerating DeepProve towards LLaMA
That DeepProve-1 can prove GPT-2 inference is already impressive, but @Lagrange Official hasn't stopped there; the focus now is on making zkML truly usable in real-world applications. After all, accuracy alone isn't enough; speed and scale matter just as much, especially in high-frequency trading and real-time healthcare scenarios.
They are optimizing aggressively on both fronts. On the cryptography side, they plan to switch to a more efficient polynomial commitment scheme to address today's problems of large proofs and slow verification. On the systems side, they exploit the model's graph structure to parallelize proving, so that proof generation can be split across multiple machines and handle industrial-scale workloads; a rough sketch of that idea follows.
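To make the graph-parallel idea concrete, here is a minimal, purely illustrative Python sketch. It is not Lagrange's or DeepProve's actual API; the names `prove_subgraph`, `aggregate`, and the toy layer list are assumptions used only to show the shape of "partition the model graph, prove each piece on a separate worker, then combine the results".

```python
# Hypothetical sketch: parallel proof generation over a model's computation graph.
# None of these names come from DeepProve's real interface; they only illustrate
# splitting per-layer proving across workers and aggregating the outputs.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass


@dataclass
class SubgraphProof:
    layer_name: str
    proof_bytes: bytes  # stand-in for a real proof object


def prove_subgraph(layer_name: str, weights_commitment: bytes, io_commitment: bytes) -> SubgraphProof:
    """Pretend prover for one layer/subgraph (a placeholder for a zkML prover call)."""
    fake_proof = b"proof::" + layer_name.encode()
    return SubgraphProof(layer_name, fake_proof)


def aggregate(proofs: list[SubgraphProof]) -> bytes:
    """Pretend aggregation step: a real system would recursively combine the
    per-layer proofs or check them against shared inter-layer commitments."""
    return b"|".join(p.proof_bytes for p in proofs)


if __name__ == "__main__":
    # A toy layer-wise partition of a transformer-style model graph.
    layers = ["embedding", "attention_0", "mlp_0", "attention_1", "mlp_1", "lm_head"]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(prove_subgraph, name, b"w", b"io") for name in layers]
        proofs = [f.result() for f in futures]
    final_proof = aggregate(proofs)
    print(f"aggregated {len(proofs)} subgraph proofs into {len(final_proof)} bytes")
```

The key design point the sketch tries to capture is that each subgraph's proof only needs the commitments tying it to its neighbors, so the heavy proving work is embarrassingly parallel and the final combination step stays cheap.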