Proposal features: 16 -> 32 bit range check on LogUp #702
An update: one blocking issue needs to be addressed before carrying on with the tasks.

Sparse Fraction-Sum

We already figured out a PCS to verify the sparse commitment. However, there is still one challenge: how to conduct the fraction sum over the sparse polynomial. There are two directions.

direction 1: 2-stage fraction sum

A quick attempt is to convert it into a 2-stage fraction sum.

direction 2: develop a sparse fraction-sum argument

This direction aims to resolve the fraction-sum leaf-layer problem directly. In each layer of the fraction sum we are dealing with a sumcheck.
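For orientation, a generic sketch of the relation checked per layer in a LogUp-GKR style fraction-sum argument (the standard pair-merging rule; Ceno's tower argument may index the layers differently):

$$
\frac{p_0}{q_0} + \frac{p_1}{q_1} = \frac{p_0 q_1 + p_1 q_0}{q_0 q_1},
\qquad
p_{k-1}(x) = p_k(x,0)\,q_k(x,1) + p_k(x,1)\,q_k(x,0),\quad
q_{k-1}(x) = q_k(x,0)\,q_k(x,1).
$$

Each layer halves the number of fractions, each layer relation is verified with a sumcheck, and the single fraction at the top equals the full sum of the leaf fractions.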
Lev hinted at a nice solution. The mindset is to transform the table so it is defined for all x in the boolean hypercube [0, 2^n - 1], and derive the claim from that form.

Furthermore, the fraction-sum first layer keeps the sparsity on the denominator part, so the prover just needs to maintain the non-zero entries. Besides, there are polynomials we no longer need to commit at all. See the reference on sparse sumcheck.
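A sketch of what the first layer can look like under the standard LogUp right-hand side (the concrete leaf assignment below is an assumption for illustration, not taken from the comment):

$$
p_{\text{leaf}}(x) = m(x), \qquad q_{\text{leaf}}(x) = \alpha + T(x).
$$

The first merged layer's numerator, $m(x,0)\,(\alpha + T(x,1)) + m(x,1)\,(\alpha + T(x,0))$, is supported only where $m$ is non-zero, so the prover only tracks those positions, and $T$ itself never needs a commitment because the verifier can evaluate it succinctly.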
Previously, to verify an idea quickly and avoid massive changes, new functionality was added with a `_v2` suffix. After the experiments showed good results, all logic switched to the v2 version long ago and v1 is no longer used. This PR cleans up all leftover v1 code, doing renaming and file replacement without modifying existing logic. In summary:
- `sumcheck/src/prover_v2.rs` -> `sumcheck/src/prover.rs`
- `multilinear_extensions/src/virtual_poly_v2.rs` -> `multilinear_extensions/src/virtual_poly.rs`
- clean up all `V2` suffixes

This addresses the previously outdated PR #162 and serves as preparation for #788, #702.
Background
Ceno heavily relies on LogUp to do range checks, e.g. 16-bit range checks.
See circuit statistics here: #585 (comment)
Most of the lookup operations are contributed via 16-bit range checks. Take the add opcode as an example: its 9 lookups are all contributed via 16-bit range checks, so in total there are 6 + 2 + 1 = 9 lookups.

If we do 32-bit range checks instead, the total is 5 lookups, which crosses the 2^3 = 8 boundary. The leaf layer of the tower sumcheck will be cut to half its size, so the expected latency would be cut roughly in half.
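To spell out the size effect (assuming the leaf layer is padded to the next power of two per instance, which is what the 2^3 = 8 boundary remark refers to):

$$
6 + 2 + 1 = 9 \text{ lookups} \;\Rightarrow\; 2^4 = 16 \text{ padded leaf entries},
\qquad
5 \text{ lookups} \;\Rightarrow\; 2^3 = 8 \text{ padded leaf entries},
$$

i.e. the leaf layer per add instance shrinks by half.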
As a side effect, we also save a bunch of witin columns that only exist to hold 16-bit limbs, so it also benefits mpcs since there are fewer polynomials to commit.
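A minimal counting sketch of where the savings come from (hypothetical builder API, not Ceno's actual code): holding a u32 as two 16-bit limbs costs two witness columns and two lookups per value, while a direct 32-bit range check costs one of each.

```rust
// Self-contained sketch of the witness/lookup savings. `WitIn`,
// `CircuitBuilder` and the lookup methods are hypothetical stand-ins,
// not Ceno's real API.
#[derive(Clone, Copy)]
struct WitIn(usize); // index of a committed witness column

#[derive(Default)]
struct CircuitBuilder {
    witnesses: usize,
    lookups: usize,
}

impl CircuitBuilder {
    fn new_witin(&mut self) -> WitIn {
        self.witnesses += 1;
        WitIn(self.witnesses - 1)
    }
    fn lookup_u16(&mut self, _w: WitIn) { self.lookups += 1; }
    fn lookup_u32(&mut self, _w: WitIn) { self.lookups += 1; }

    // Today: a u32 value is held as two 16-bit limbs, each range-checked.
    fn range_check_u32_as_limbs(&mut self) {
        let lo = self.new_witin();
        let hi = self.new_witin();
        self.lookup_u16(lo);
        self.lookup_u16(hi);
    }

    // Proposed: one witness column and one 32-bit range lookup.
    fn range_check_u32_direct(&mut self) {
        let v = self.new_witin();
        self.lookup_u32(v);
    }
}

fn main() {
    let (mut limbs, mut direct) = (CircuitBuilder::default(), CircuitBuilder::default());
    limbs.range_check_u32_as_limbs();
    direct.range_check_u32_direct();
    println!("limbs:  {} witnesses, {} lookups", limbs.witnesses, limbs.lookups);
    println!("direct: {} witnesses, {} lookups", direct.witnesses, direct.lookups);
}
```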
Design Rationales

On the right-hand side of the LogUp formula we have m(x) and T(x). One nice property of the 32-bit range check table T(x) is that we can skip its commitment & PCS, since the verifier can evaluate T(r) succinctly via the tricks here. So the remaining challenge is how to deal with the huge & sparse polynomial m(x).
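For reference, the standard LogUp identity the paragraph refers to (generic notation: α is the verifier's challenge and the f_i are the looked-up values; sign conventions vary across write-ups):

$$
\sum_i \frac{1}{\alpha + f_i} \;=\; \sum_{x \in \{0,1\}^{32}} \frac{m(x)}{\alpha + T(x)},
$$

where m(x) is the multiplicity of table entry T(x), so it lives over a 2^32 domain but has at most as many non-zero entries as there are distinct looked-up values.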
Via the Spartan paper, p. 29, 7.2.2 sparse polynomial commitment (SPARK), we can view the sparse m(x) as a tuple of 3 dense vectors [(i, j, M(i,j))] and commit 3 dense polynomials respectively, given that the original dense size of m(x) is 2^32. The insight of SPARK is that by splitting into `i` and `j` polynomials, each one's size just matches the number of non-zero entries of m(x). I think the main innovation of breaking the variables into row and col lies in SPARK's offline memory check (memory-in-the-head): it reduces the audit_ts_(row/col) dense size from 2^32 to 2^16.

The prover needs to commit `i`, `j`, `M`, along with `read_ts_row`, `write_ts_row`, `audit_ts_row`, `read_ts_col`, `write_ts_col`, `audit_ts_col` for the SPARK protocol (a sketch of the sparse-to-dense view follows the flow below). With SPARK, the e2e table proving flow will be like this:
1. commit `i`, `j`, `M`, `read_ts_row`, `write_ts_row`, `audit_ts_row`, `read_ts_col`, `write_ts_col`, `audit_ts_col`
2. open([`i`, `j`, `M`, `read_ts_row`, `write_ts_row`, `audit_ts_row`, `read_ts_col`, `write_ts_col`, `audit_ts_col`])
3. `row`, `col` offline memory check, following SPARK
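A minimal sketch of the sparse-to-dense view (assumed layout: the 2^32 index of m(x) is split into a 2^16 row index i and a 2^16 column index j; names are illustrative, not Ceno's actual code):

```rust
use std::collections::HashMap;

/// Dense vectors describing the non-zero entries of a sparse m(x) over a
/// 2^32 domain, viewed as a 2^16 x 2^16 matrix M(i, j). Each vector's length
/// equals the number of non-zero entries, which is what gets committed.
struct SparseMultiplicity {
    row: Vec<u16>, // i: row index of each non-zero entry
    col: Vec<u16>, // j: column index of each non-zero entry
    val: Vec<u64>, // M(i, j): the multiplicity itself
}

impl SparseMultiplicity {
    /// Build the three dense vectors from (looked-up value -> multiplicity) pairs.
    fn from_counts(counts: &HashMap<u32, u64>) -> Self {
        let mut row = Vec::with_capacity(counts.len());
        let mut col = Vec::with_capacity(counts.len());
        let mut val = Vec::with_capacity(counts.len());
        for (&value, &multiplicity) in counts {
            row.push((value >> 16) as u16);    // top 16 bits -> i
            col.push((value & 0xffff) as u16); // low 16 bits -> j
            val.push(multiplicity);
        }
        SparseMultiplicity { row, col, val }
    }
}

fn main() {
    // Example: three distinct 32-bit values were range-checked, one of them twice.
    let counts = HashMap::from([(7u32, 2u64), (0x0001_0000, 1), (0xffff_ffff, 1)]);
    let m = SparseMultiplicity::from_counts(&counts);
    assert_eq!(m.row.len(), 3); // sizes track non-zero entries, not 2^32
    assert_eq!(m.val.iter().sum::<u64>(), 4);
}
```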
What's the overhead

In the table proof part, since a new SPARK proof flow is inserted sequentially into the critical path, the overall latency of the table proof will increase. However, all the opcode proofs will benefit quite a lot from the 32-bit range check. As the opcode proofs account for the major share of the cost, the extra overhead in the table proof is probably negligible.

In a more detailed analysis, the proving-time overhead of SPARK is dominated by the sizes |read_ts_row|, |write_ts_row|, |read_ts_col|, |write_ts_col|, which are tied to the number of non-zero entries of m(x). In real-world workloads, if there are more repeated values to be range-checked, then there are even fewer non-zero entries in m(x), so the cost is reduced quite a lot. The worst case happens when all the looked-up values are distinct.
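As a hedged reading of the sizes involved (following the SPARK description above; the exact vector lengths depend on the memory-checking variant used):

$$
|\text{read\_ts\_row}| = |\text{write\_ts\_row}| = |\text{read\_ts\_col}| = |\text{write\_ts\_col}| = k,
\qquad
|\text{audit\_ts\_row}| = |\text{audit\_ts\_col}| = 2^{16},
$$

where k is the number of non-zero entries of m(x), i.e. the number of distinct looked-up values.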
Sub Task breakdown
Other side effects
This feature relies on the base field being able to hold 32-bit riscv32 values, therefore we need to stick with Goldilocks64.
So the future roadmap will be Goldilocks64 -> Binary Field, without Mersenne31/BabyBear as a transition step.