Delay-Proof (PoDL) Buffering and LA Risk Mitigation
Scenario Pain Points
Some deep AI inference tasks take more than 30 seconds for a single run. Requiring the Prover to submit a complete STARK proof within one block therefore leads to frequent timeouts and expensive gas spent on rework.
PoDL Buffering Solution
| Phase | Operation | Contract Status |
|-------|-----------|------------------|
| T0 | Submit: submitTask() with user payment + LA collateral | deadlineSlot = now + k is set |
| T0+Δ | Exec: Prover calls startExecution(); only a hash is recorded on-chain | TaskStatus = Running |
| T1 | Commit: generate trace segment hashes H_i, periodically call commitChunk() | Merkle roots stored incrementally |
| T2 | Final: generate aggregate proof π_agg, call finalizeProof() | Approved → Success; failed → Slash |
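The phase flow above can be read as a simple task state machine. The following Python sketch is illustrative only: Task, submit_task, start_execution, and commit_chunk are hypothetical names that mirror the on-chain calls, not the contract's actual interface.

```python
from dataclasses import dataclass, field
from enum import Enum

class TaskStatus(Enum):
    PENDING = "Pending"
    RUNNING = "Running"
    SUCCESS = "Success"
    SLASHED = "Slashed"

@dataclass
class Task:
    base_stake: float                 # LA collateral locked at submitTask()
    deadline_slot: int                # now + k, set at submission (T0)
    status: TaskStatus = TaskStatus.PENDING
    segment_roots: list[str] = field(default_factory=list)  # incremental Merkle roots (T1)
    delta_la: float = 0.0             # Σ ΔLA locked by segment commits

def submit_task(now_slot: int, k: int, stake: float) -> Task:
    """T0: user payment + LA collateral; deadlineSlot = now + k."""
    return Task(base_stake=stake, deadline_slot=now_slot + k)

def start_execution(task: Task) -> None:
    """T0+Δ: Prover marks execution start; only a hash goes on-chain."""
    task.status = TaskStatus.RUNNING

def commit_chunk(task: Task, segment_root: str, delta_la: float) -> None:
    """T1: append a segment Merkle root and lock ΔLA until finalization."""
    task.segment_roots.append(segment_root)
    task.delta_la += delta_la
```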
• k is set by governance (default 120 seconds).
• Each segment commit returns a “segment deposit coefficient” ΔLA that stays locked until the final proof is approved; this reduces the risk of a single large Slash.
• Failing to call finalizeProof() before deadlineSlot is treated as a breach, and a linear penalty is applied based on the remaining locked deposit ratio (see the sketch after this list).
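The breach rule is only described as linear in the remaining locked deposit ratio. The sketch below shows one possible reading under stated assumptions: PENALTY_RATE and the function signature are placeholders, not governance-defined values.

```python
PENALTY_RATE = 0.3   # assumed base rate for deadline breaches; governance-set in practice

def deadline_breach_penalty(now_slot: int, deadline_slot: int,
                            effective_stake: float, locked_remaining: float) -> float:
    """Linear breach penalty: scales with the fraction of the deposit still
    locked when deadlineSlot passes without a successful finalizeProof().
    Illustrative reading only."""
    if now_slot <= deadline_slot or effective_stake <= 0:
        return 0.0                                        # no breach, no penalty
    locked_ratio = locked_remaining / effective_stake     # remaining locked deposit ratio
    return PENALTY_RATE * locked_ratio * effective_stake  # linear in the locked ratio
```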
LA Risk Mitigation Model
effectiveStake = baseStake + Σ ΔLA
slash = violationLevel × effectiveStake × β
• β is adjustable by Governance.
• violationLevel is automatically quantified based on the number of missing segments and timeout ratio.
• If the task fails but more than 70% of the segments were submitted, the Slash is applied at only a 0.5× penalty rate, encouraging partial availability (a worked sketch follows this list).
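Putting the two formulas and the 70% partial-availability rule together, here is a hedged numeric sketch. Only effectiveStake = baseStake + Σ ΔLA, slash = violationLevel × effectiveStake × β, and the 0.5× discount come from the text; β, the violationLevel weighting, and the example figures are placeholders.

```python
BETA = 0.2               # placeholder for the governance-adjustable β
PARTIAL_DISCOUNT = 0.5   # applied when > 70% of segments were submitted

def violation_level(missing_segments: int, total_segments: int, timeout_ratio: float) -> float:
    """Illustrative quantification: equal weighting of missing-segment share and timeout ratio."""
    missing_ratio = missing_segments / total_segments if total_segments else 1.0
    return min(1.0, 0.5 * missing_ratio + 0.5 * timeout_ratio)

def slash_amount(base_stake: float, delta_la_sum: float,
                 missing_segments: int, total_segments: int,
                 timeout_ratio: float) -> float:
    effective_stake = base_stake + delta_la_sum           # effectiveStake = baseStake + Σ ΔLA
    level = violation_level(missing_segments, total_segments, timeout_ratio)
    slash = level * effective_stake * BETA                # slash = violationLevel × effectiveStake × β
    submitted_ratio = 1 - missing_segments / total_segments if total_segments else 0.0
    if submitted_ratio > 0.7:                             # partial availability: >70% of segments landed
        slash *= PARTIAL_DISCOUNT
    return slash

# Example: 100 segments, 10 missing, 20% timeout overrun.
# effectiveStake = 500 + 120 = 620; violationLevel = 0.5*0.1 + 0.5*0.2 = 0.15
# slash = 0.15 * 620 * 0.2 = 18.6; >70% submitted → 18.6 * 0.5 ≈ 9.3
print(slash_amount(500.0, 120.0, 10, 100, 0.2))
```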
Security Perspective
• All commitments of segment roots and final proofs are recorded in CommitLog to prevent rollbacks.
• Deliberately submitting incorrect segments is caught by the aggregation check in finalizeProof() and triggers the maximum-penalty Slash (sketched after this list).
• All collateral changes within the delay window are traceable on-chain, ruling out off-chain private settlements.
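The aggregation check can be thought of as binding the final proof to exactly the segment roots recorded in CommitLog. The sketch below stands in for the real STARK aggregation verification: merkle_root and aggregation_check are hypothetical helpers, and the toy Merkle construction is for illustration only.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle root over the committed segment roots (illustrative only)."""
    if not leaves:
        return b"\x00" * 32
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def aggregation_check(commit_log_roots: list[bytes], proof_claimed_root: bytes) -> bool:
    """finalizeProof()-style consistency check: the aggregate proof must bind
    exactly the segment roots recorded in CommitLog; any mismatch indicates a
    deliberately wrong segment and would trigger the maximum-penalty Slash."""
    return merkle_root(commit_log_roots) == proof_claimed_root
```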
Under PoDL buffering, the Coprocessor can support longer-latency inference tasks, while the LA collateral and Slash system preserves equivalent economic security without sacrificing network integrity.