Has anyone run Lagrange's zkML Demo? How was the experience?

Lagrange's DeepProve-1 has been getting a lot of attention lately, claiming it can verify GPT-2 inference results on-chain. From the documentation, the general flow is: take the model output, generate a zero-knowledge proof of the inference, and then verify that proof on-chain. It sounds very hardcore, but I haven't run the demo myself yet.

I'm particularly curious about a few details:

How long does proof generation take? Seconds, or minutes?

How much gas does on-chain verification cost?

Does it break down if you feed it more complex inputs?
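For anyone who does run it, the first question above is easy to measure yourself. Here is a minimal timing sketch; `generate_proof` is a hypothetical placeholder, not the real DeepProve API, so swap in the actual SDK or CLI call when you test:

```python
import time

def time_call(fn, *args):
    """Measure wall-clock time of a single call; returns (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical stand-in for the real prover call; replace this with the
# actual DeepProve proof-generation invocation when benchmarking.
def generate_proof(model_output):
    return b"proof-bytes"

proof, elapsed = time_call(generate_proof, [0.1, 0.9])
print(f"proof generation took {elapsed:.3f}s")
```

Running the same measurement over a batch of inputs of increasing size would also answer the "does it break down on complex inputs" question with numbers instead of impressions.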

Some community developers have reported that proof generation takes longer than expected, making it better suited to offline batch processing, but I haven't verified this myself. If anyone here has tested it firsthand, could you share your real experience?

👉 I think this kind of real usage data matters more than the white paper. If you've run it, please share in the replies.

@Lagrange Official $LA #lagrange