Paper TLDR Project

$$ \def\field{\mathbb{F}} \def\ZZ{\mathbb{Z}} \def\eq{\text{eq}} \def\enc{\text{Enc}} \def\com{\text{Com}} \def\ring{\mathcal{R}} \def\distro{\mathcal{D}} \def\bc{\pmb{c}} \def\be{\pmb{e}} \def\bm{\pmb{m}} \def\br{\pmb{r}} \def\bt{\pmb{t}} \def\bu{\pmb{u}} \def\bv{\pmb{v}} \def\bw{\pmb{w}} \def\bx{\pmb{x}} \def\by{\pmb{y}} \def\bz{\pmb{z}} $$

This project serves as an intermediate index into my memory of papers I have read and the notes I took.

It also serves as a humble imitation of Oded’s choices.


[Seen 2023/11 for the Nova line of work on folding schemes]

Spartan: Efficient and general-purpose zkSNARKs without trusted setup
  • To bridge R1CS with the sum-check protocol, Setty et al. encode the R1CS matrices into multilinear polynomials via their low-degree extensions (LDEs).
  • The Spartan NIZK version requires the verifier to evaluate the encoded R1CS polynomials at random points, so they apply polynomial commitments to let the prover compute these evaluations for the verifier.
  • R1CS matrices are sparse, i.e., a matrix in $\field^{m \times m}$ with only $n = O(m)$ non-zero entries. They show a gradual improvement in prover complexity:
    • An $O(m^2)$ solution would be to directly commit to the multilinear LDE of the full matrix in $\field^{m \times m}$.
    • An $O(n \log n)$ “strawman” solution would be to take LDEs of the tuples of matrix coordinates (in $\field^\mu$ where $\mu = 2s$, $s = \log m$) and of the non-zero entries (in $\field$).
      • In total this costs $O(n \log n)$, where the extra $\log n$ factor comes from handling the $\field^\mu$ coordinates.
    • Finally, an $O(n)$ solution compresses the sparse matrix in $\field^{m \times m}$ into an $n$-entry list of (row, col, val) tuples, then uses the offline memory checking technique from Spice to construct an $O(n)$-sized, $O(\log n)$-depth circuit.
      • Offline memory checking maintains timestamped tuples for the non-zero entries of the matrices; the trick is init + write = read + audit.
      • The public-coin multiset hash function helps check the equality of the two sets of (coordinate, value, timestamp) tuples (see the sketch after this list).
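
A minimal sketch of the init + write = read + audit check with a public-coin multiset hash, using a toy read-only memory; the field size, tuple layout, and variable names are illustrative and not Spartan's actual implementation.

```python
# Toy sketch of offline memory checking via a public-coin multiset hash.
import random

P = 2**61 - 1  # a prime field modulus (toy choice)

def h(gamma, tup):
    # Collapse an (addr, val, ts) tuple into one field element.
    return sum(x * pow(gamma, i, P) for i, x in enumerate(tup)) % P

def H(beta, multiset):
    # Multiset hash: product of (element - beta).
    acc = 1
    for a in multiset:
        acc = acc * ((a - beta) % P) % P
    return acc

# A read-only memory (e.g. non-zero R1CS entries indexed by address).
mem = {0: 7, 1: 11, 2: 13}
reads = [0, 2, 2, 1, 0]              # addresses accessed by the circuit

# Prover-side bookkeeping: timestamps per address.
ts = {a: 0 for a in mem}
RS, WS = [], []                      # read-set and write-set tuples
for a in reads:
    RS.append((a, mem[a], ts[a]))    # read the current (val, ts)
    ts[a] += 1
    WS.append((a, mem[a], ts[a]))    # write back with ts + 1
init  = [(a, v, 0) for a, v in mem.items()]
audit = [(a, mem[a], ts[a]) for a in mem]

# Verifier samples public coins gamma, beta and checks init + write = read + audit.
gamma, beta = random.randrange(P), random.randrange(P)
lhs = H(beta, [h(gamma, t) for t in init + WS])
rhs = H(beta, [h(gamma, t) for t in RS + audit])
assert lhs == rhs
```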

The hardest part to comprehend (for me) is the memory checking circuit for $\tilde{M}(\pmb{r_x},\pmb{r_y})$ in Section 7, for the SPARK compiler that converts a polynomial commitment into one that handles sparse multilinear polynomials.

Nova: Recursive Zero-Knowledge Arguments from Folding Schemes
  • The motivation: the vanilla R1CS relation does not allow merging two statement-witness pairs into one, and naive constructions of IVC are far from practical.
  • The previous R1CS relation is $(A, B, C \in \field^{m \times m}, \bz \in \field^m)$ where $\bz = 1 \parallel \bx \parallel \bw \in \field^m$, such that $(A \bz) \circ (B \bz) = C\bz$.
  • To make the relation foldable, we introduce relaxation terms $u \in \field$ and $\be \in \field^m$, so that the relaxed relation $(A \bz) \circ (B \bz) = u \cdot (C \bz) + \be$ keeps its form under the folding $\bz = \bz_1 + r \bz_2$ (a toy folding sketch appears after this list).
    • Additionally, we want to commit to the R1CS witness $\bw, \be$ with an additive homomorphic commitment as $\com_w$ and $\com_e$.
  • To make use of the folding scheme, or its non-interactive version (NIFS), inside IVC, we want the circuit to fold two R1CS statements into one:
    • An R1CS statement contains the step’s inputs and outputs, so the tension is that the statement can no longer stay constant-size as steps accumulate.
    • To keep the R1CS statement constant-size, they use a hash of the inputs as the R1CS statement.
    • Such a hash can be viewed as a “constant-size computation binding” to the inputs we want to use at each step.
  • The folding components are chained in series into an IVC; at each step the NIFS prover keeps track of the relaxation terms and of the IVC proof, which is the running R1CS statement-witness pair.
  • To compress the IVC proof, whose size equals that of an R1CS statement-witness pair, they apply Spartan to generate a proof of knowledge of the IVC proof, shrinking the concrete proof size.
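
A toy sketch of folding two relaxed R1CS instances, without the homomorphic commitments $\com_w, \com_e$; the shapes, field, and the single-constraint example are illustrative assumptions, not Nova's code.

```python
# Minimal sketch of folding two relaxed R1CS instances (commitments omitted).
import numpy as np

P = 101  # toy prime field

def hadamard(a, b):
    return (a * b) % P

def is_satisfied(A, B, C, z, u, E):
    # Relaxed R1CS: (A z) o (B z) == u * (C z) + E  (mod P)
    return np.array_equal(hadamard(A @ z % P, B @ z % P), (u * (C @ z) + E) % P)

def fold(A, B, C, inst1, inst2, r):
    (z1, u1, E1), (z2, u2, E2) = inst1, inst2
    Az1, Bz1, Cz1 = A @ z1 % P, B @ z1 % P, C @ z1 % P
    Az2, Bz2, Cz2 = A @ z2 % P, B @ z2 % P, C @ z2 % P
    # Cross term T absorbs the mixed products created by z1 + r*z2.
    T = (hadamard(Az1, Bz2) + hadamard(Az2, Bz1) - u1 * Cz2 - u2 * Cz1) % P
    z = (z1 + r * z2) % P
    u = (u1 + r * u2) % P
    E = (E1 + r * T + (r * r) * E2) % P
    return z, u, E

# Example: one constraint x*x = y, with z = (1, x, y).
A = np.array([[0, 1, 0]]); B = np.array([[0, 1, 0]]); C = np.array([[0, 0, 1]])
i1 = (np.array([1, 3, 9]), 1, np.array([0]))
i2 = (np.array([1, 5, 25]), 1, np.array([0]))
assert is_satisfied(A, B, C, *i1) and is_satisfied(A, B, C, *i2)
z, u, E = fold(A, B, C, i1, i2, r=7)
assert is_satisfied(A, B, C, z, u, E)   # the folded instance still satisfies
```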

The trickiest part is using a hash of the folding scheme’s R1CS statements as the R1CS statement itself, which keeps the statement size constant.

Algebraic Reductions of Knowledge

TODO: SuperNova: Proving universal machine executions without universal circuits

TODO: HyperNova: Recursive arguments for customizable constraint systems

LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems


[Seen 2023/11 for PlonK line of work]

see notes here

TODO: Efficient Zero-Knowledge Argument for Correctness of a Shuffle

PlonK: Permutations over Lagrange-bases for Oecumenical Non-Interactive arguments of Knowledge
  • A permutation argument checks whether one polynomial is a permutation of another via $f(\sigma(x)) = g(x)$.
    • The instantiation of the permutation argument in Plonk is a grand product argument, which constructs $z(x)$ progressing over $g^0, \dots, g^n$ (a toy sketch appears after this block).
    • Each step of $z(x)$ multiplies in a factor $f(x) + \beta x + \gamma$ and divides by a factor $g(x) + \beta \cdot \sigma(x) + \gamma$.
    • The goal is that once the progression finishes, all terms of $z(x)$ cancel, which means $z(g^n) = 1$.
    • $\gamma$ is necessary as a randomized shift for multiset equality, while $\beta$ is necessary to force the prover to stick to the same permutation $\sigma$, which upgrades the multiset argument to a permutation argument.
  • Another perspective on the constraint system: view the computation circuit as $n$ gates and $m$ wires; wire values attached to gates as inputs/outputs should be equal.
    • This yields the motivation of introducing the copy constraints, which is established by permutation argument.
  • A trick to combine multiple zeroity checks is to take a random linear combination of the zeroity terms with powers of a random $\alpha \in \field$.
    • The same trick applies to Spartan in sumcheck on multiple terms.
  • The KZG polynomial commitment supports degrees up to the $n$ fixed by the CRS; linearization allows one to open a polynomial that is a product of sub-polynomials, each of degree lower than $n$.
    • Giving the verifier evaluations of a subset of the sub-polynomials, and swapping those evaluations in for the corresponding sub-polynomials in the original polynomial, we get the linearization polynomial.
    • KZG commitments are additively homomorphic and the evaluations sent over are elements of $\field$, so the verifier can construct the commitment to the linearization polynomial and finish the opening check.
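
A toy sketch of the grand product argument behind the copy constraints, over integer indices instead of powers of $g$ and without the polynomial machinery; all names and parameters are illustrative.

```python
# Toy sketch of PlonK's permutation grand product over indices 0..n-1.
import random

P = 2**61 - 1

def grand_product_check(w, sigma, beta, gamma):
    # z starts at 1 and accumulates one ratio per step; copy constraints
    # hold iff the final product returns to 1.
    z = 1
    for i, wi in enumerate(w):
        num = (wi + beta * i + gamma) % P
        den = (wi + beta * sigma[i] + gamma) % P
        z = z * num % P * pow(den, P - 2, P) % P  # field division
    return z == 1

# Wires 0 and 2 are copies of each other (same value), encoded as a 2-cycle in sigma.
w     = [5, 8, 5, 3]
sigma = [2, 1, 0, 3]
beta, gamma = random.randrange(P), random.randrange(P)
assert grand_product_check(w, sigma, beta, gamma)                   # copy constraints hold
assert not grand_product_check([5, 8, 6, 3], sigma, beta, gamma)    # broken copy fails w.h.p.
```
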
PlookUp: A simplified polynomial protocol for lookup tables
  • A similar application of the permutation-argument trick, where we want to check that all elements of the multiset $F = f_1 \dots f_n$ appear in the table vector $T = t_1 \dots t_d$.
  • We only need multiset equality, so we have no need of the permutation-binding role that $\beta$ plays in Plonk.
  • To check $F \subseteq T$, we merge these two multisets and sort them, yielding a multiset $S’$.
    • Supposing $F \subseteq T$, then in the sorted order every element of $F$ can be placed right after the element of $T$ with the same value.
    • We convert $S’$ into terms $s_i + \beta s_{i+1}$; the motivation is to turn the subset check into a multiset equality check.
    • The grand product over $S’$ of the terms $s_i + \beta s_{i+1}$ should equal the grand product over the terms $f_i (1 + \beta)$ (each $f_i$ duplicates an earlier $t_i$) and $t_i + \beta t_{i+1}$ (see the sketch below).
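
A toy sketch of the plookup grand-product identity under the simplifying assumption that the table $T$ is already sorted numerically; the parameters and tiny example are illustrative, not the paper's notation.

```python
# Toy sketch of the plookup check: every element of f must appear in table t.
import random

P = 2**61 - 1

def plookup_check(f, t, beta, gamma):
    s = sorted(f + t)   # works because t itself is sorted in this toy example
    lhs = 1
    for i in range(len(s) - 1):
        lhs = lhs * ((gamma * (1 + beta) + s[i] + beta * s[i + 1]) % P) % P
    rhs = pow(1 + beta, len(f), P)
    for fi in f:
        rhs = rhs * ((gamma + fi) % P) % P
    for j in range(len(t) - 1):
        rhs = rhs * ((gamma * (1 + beta) + t[j] + beta * t[j + 1]) % P) % P
    return lhs == rhs

t = [1, 2, 3, 5, 8]
beta, gamma = random.randrange(P), random.randrange(P)
assert plookup_check([2, 2, 5], t, beta, gamma)    # all lookups are in the table
assert not plookup_check([2, 4], t, beta, gamma)   # 4 is not in t, fails w.h.p.
```
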
PlonKup: Reconciling PlonK with plookup
  • Combines the above two works into a Plonk with lookup-table support.
  • To compress a table row of two inputs and one output into a single $\field$ element, see Spartan’s multiset hash function: $t = t_1 + \gamma \cdot t_2 + \gamma^2 \cdot t_3$ for random $\gamma \in \field$.
  • The addition to the constraint system is $q_{k_i} (w_i + \gamma \cdot w_{n + i} + \gamma^2 \cdot w_{2n + i} - t_i) = 0$ for all $i \in [n]$, which checks that the intermediate witnesses match table entries.
HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates
  • This paper gave off a good vibe from Spartan, with 2 contributions:
    • Formalization of a sumcheck-based PIOP toolkit to build Plonk-like SNARKs without involving FFTs.
    • Compression of multilinear polynomial commitment size with the help of Commit-and-Prove SNARK, which is similar to Spartan’s Spark compiler in some sense.
  • This paper helped me realize the essence of Plonk, which is a multiset hash function combined with a multiset product argument:
    • Recall the multiset hash function in Spartan, with $h_\gamma(\alpha_1, \alpha_2, \alpha_3) = \alpha_1 + \gamma \alpha_2 + \gamma^2 \alpha_3$ and $H_\beta(\pmb{\alpha}) = \prod (\alpha_i - \beta)$.
    • The constraints in Plonk can be viewed as entries pairing an index with a witness value, merged with $h_\gamma$ above; the final multiset is then hashed with $H_\beta$.
    • Given a permutation that only permutes entries with the same value, constructing the multiset hash through the permutation should yield the same hash as constructing it the vanilla way.
    • We can check the above with a product-check argument: the grand product of the ratios comes out to 1.
  • I personally like the way they organize the PIOPs in section 3, which is very well organized and can be easily read.
    • We saw the sumcheck for polynomials of high individual degree in Spartan before, but the prover complexity is $O(2^\mu d^2)$ rather than $O(2^\mu d \log^2 d)$ in the implementation; this should not be a big issue when $d$ is low, it is just asymptotically different.
    • 3 queries per round (0, 1, $r$) in multilinear sumcheck can be merged into 1, with the help of last round’s claim of sum and multilinear polynomial dividing over $X (1-X)$.
    • They use Spartan’s zero-over-subcube check trick extensively, with a random challenge $\br \in \field^\mu$ and the check $\sum\limits_{\bx \in \{0, 1\}^\mu} f(\bx) \cdot \eq(\bx, \br) = 0$.
    • The product check builds on Quarks, which checks a product over $\mu$ bits by carefully arranging the evaluations ($2^\mu$ of them) and the sub-products ($2^\mu - 1$ of them) into a polynomial over $\mu + 1$ bits.
      • We check the encoding over $\mu + 1$ bits by seeing if it satisfies $v(1, \bx) = v(\bx, 0) \cdot v(\bx, 1)$ over $\mu$ bits of $\bx$.
      • The rest should follow from a standard sumcheck (a minimal sumcheck sketch appears after this block).
    • The permutation PIOP with small soundness error views $\sigma$ as a permutation; by using the inverse permutation $\sigma^{-1}$, they avoid blowing up the degree of the sumcheck round polynomials.
      • The first $\mu$ rounds can be done in $O(\mu 2^\mu)$ total, with each round $O(2^\mu)$: $O(2^{\mu - i})$ on the $\eq$ part and $O(2^i)$ on the multilinear minus term (the zeroity terms).
      • The last $\mu$ rounds can be done in $O(2^\mu \cdot \mu \log^2 \mu)$, since $\sigma^{-1}$ applied inside $\eq$ pushes the degree to $O(\mu)$, and each round is handled by the sumcheck for high-degree polynomials.
    • Lookup PIOP deploys a degree-2 multivariate “generator” polynomial, but it gets linearized by computing $f$ twice rather than computing $f(g(\bx))$ directly.
      • The rest should follow from plookup and multiset check PIOP, for “shift-by-1” table and values over $\mu+1$ bits.
    • Batch opening for multilinear polynomials achieves an optimized prover complexity of $O(k 2^\mu)$ for batch size $k$.
      • This is achieved by rewriting $f(\bx) = \sum\limits_{\bz \in \{0, 1\}^\mu} f(\bz) \cdot \eq(\bz, \bx)$ in its multilinear form (a variant of Spartan’s trick).
      • All these subchecks can be combined via Spartan’s zero-on-subcube check, checking zeroity over the cube of $\mu + \log k$ bits.
      • In the specific case when all the opening points lie on the subcube, we can precompute the $\eq$ terms in Spartan’s trick, and the prover complexity is optimized to $O(2^\mu)$.
  • The linear-time multilinear polynomial commitment scheme from Orion has a concretely large proof size, due to the full Merkle tree openings.
    • Similar to Spartan, to compress both the proof size and the verifier cost, they build an outer SNARK proving that verification of the original proof goes through.
    • To decrease the circuit complexity of Merkle tree verification, they use multilinear KZG as part of the commitment, lowering the SNARK circuit complexity.
    • For random access to the proximity-test query locations, they construct a lookup table for the queries.
    • The consistency check between the witness and the public IO can be moved out of the circuit thanks to the Commit-and-Prove SNARK.
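
Since several bullets above lean on the standard multilinear sumcheck, here is a minimal sketch with the prover and verifier fused into one loop; the field and the example polynomial are toy choices, not HyperPlonk's actual PIOP code.

```python
# Minimal sumcheck sketch for a multilinear polynomial given by its evaluation
# table over the Boolean cube.
import random

P = 2**61 - 1

def sumcheck(evals, mu):
    claimed = sum(evals) % P
    table = list(evals)
    running = claimed
    for _ in range(mu):
        half = len(table) // 2
        # Round polynomial of a multilinear summand is linear: send g(0), g(1).
        g0 = sum(table[:half]) % P            # current variable fixed to 0
        g1 = sum(table[half:]) % P            # current variable fixed to 1
        assert (g0 + g1) % P == running       # verifier's consistency check
        r = random.randrange(P)               # verifier's challenge
        # Fold the table: f(r, x') = (1-r)*f(0, x') + r*f(1, x').
        table = [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]
        running = ((1 - r) * g0 + r * g1) % P
    # Final check: a single evaluation of f at the random point remains.
    assert table[0] % P == running
    return claimed

evals = [3, 1, 4, 1, 5, 9, 2, 6]   # f over {0,1}^3
print(sumcheck(evals, mu=3))        # prints the verified sum, 31
```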

TODO: Succinct Arguments over Towers of Binary Fields


[Seen 2023/12 for efficient polynomial commitment line of work]

Doubly-efficient zkSNARKs without trusted setup (Hyrax) (multilinear PCS from matrix commitment and IPA, $\sqrt{|w|}$ communication complexity)
  • The contributions of this paper are as follows:
    • Two options for squashing two sumchecks into one (“reducing from 2 points to 1 point”): either invoke the evaluation of the summed term twice with slightly larger proof size, or invoke it only once with smaller proof size.
    • Verifier-side cost reduction for sumcheck via a proof of knowledge for dot products, using a random linear combination of the linear checks to form a single dot product.
    • To reduce the verifier’s work on the witness, i.e., the last round of the GKR sumcheck, the verifier opens a multilinear polynomial commitment rather than evaluating the $\tilde{V_x}$ term itself.
  • The multilinear polynomial commitment starts from the observation that $\tilde{\eq}(\bu, \bv)$ factors as $\prod_{1 \le i \le \ell}\tilde{\eq}(u_i, v_i)$.
    • Instead of directly performing the inner product between $\pmb{f} \in \field^{2^\ell}$ and $\tilde{\eq}(\bu, \bt)$ for $\bt \in \{0, 1\}^\ell$, we write $\pmb{f}$ as a $2^{\ell / 2} \times 2^{\ell / 2}$ matrix over $\field$.
    • The commit phase uses Pedersen multi-commitments to commit to each row of $\pmb{f}$.
    • In the evaluation phase the prover linearly combines the multi-commitments with the first half of the $\tilde{\eq}$ tensor, and the verifier runs a proof of dot product with the prover to establish $f(\bu) = y$ (see the sketch below).
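
A sketch of the matrix-split evaluation described above: the evaluation table is viewed as a square matrix so that $f(\bu)$ becomes a row combination followed by a column combination; the Pedersen commitments and the dot-product argument are omitted, so everything here is an illustrative stand-in.

```python
# Hyrax-style evaluation split: f(u) = L^T M R with L, R the two eq-tensor halves.
import random

P = 2**61 - 1

def eq_vector(u):
    # [eq(u, t)] for t in {0,1}^k; index bits follow u's order, most significant first.
    vec = [1]
    for ui in u:
        vec = [v * w % P for v in vec for w in ((1 - ui) % P, ui % P)]
    return vec

def mle_eval(evals, u):
    # Reference evaluation: f(u) = <evals, eq(u, .)>.
    return sum(e * w for e, w in zip(evals, eq_vector(u))) % P

ell = 4
evals = [random.randrange(P) for _ in range(2**ell)]   # f's values over {0,1}^4
u = [random.randrange(P) for _ in range(ell)]

rows = cols = 2**(ell // 2)
M = [evals[i * cols:(i + 1) * cols] for i in range(rows)]   # rows = first ell/2 vars
L, R = eq_vector(u[:ell // 2]), eq_vector(u[ell // 2:])
# The prover would send the L-combination of row commitments; here we combine rows directly.
combined = [sum(L[i] * M[i][j] for i in range(rows)) % P for j in range(cols)]
assert sum(c * R[j] for j, c in enumerate(combined)) % P == mle_eval(evals, u)
```
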
Ligero: Lightweight Sublinear Arguments Without a Trusted Setup (interleaving code proximity test from tensor-code IOPP)
  • The contributions of this paper are twofold:
    • They introduced an interleaved linear code proximity test.
      • Given an interleaved word $U^\ast$ that is far from the interleaved code $L^m$, i.e., $d(U^\ast,L^m) > e$, a random element $\bw^\ast$ of its row span stays far, $d(\bw^\ast, L) > e$, with high probability.
      • Brakedown, Orion, and BCG20 papers follow from the proximity test over interleaved linear code, which plays a role in tensor code based polynomial commitment.
    • This work provides a viewpoint connecting MPC with argument systems: each column of an interleaved linear code can be viewed as a party’s view in MPC.
      • If the linear code is RS-code, then the secret sharing is Shamir secret sharing.
  • We start with the Lemma 4.2/4.3 version, where $e < \dfrac{d}{4}$: if $d(U^\ast,L^m)>e$, let $L^\ast = \text{rowspan}(U^\ast)$, then $\Pr\limits_{\bw^\ast \overset{R}{\gets} L^\ast}[d(\bw^\ast,L) \le e] \le \dfrac{e + 1}{|\field|}$.
    • The main trick is to write $\bw^\ast$ in the affine form $\alpha \cdot U^\ast_i + \bx$; then we argue that for a given $\bx$, at most one $\alpha$ can have $j \in \Delta(U_i,L)$ but $d(\bw^\ast,L) \le e$ and $j \notin \Delta(\bw^\ast,L)$.
    • Otherwise the span of these $\bw^\ast \in L^\ast$ would contradict $j \in \Delta(U_i,L)$, since $U_i$ itself is in the span. (The subtlety here is the unique decoding radius, i.e., $2e < d/2$.)
  • An improved analysis by Roth and Zemor allows $e < \dfrac{d}{3}$: if $d(U^\ast,L^m)>e$, then $\Pr\limits_{\bw^\ast \overset{R}{\gets} L^\ast}[d(\bw^\ast,L) \le e] \le \dfrac{e + 1}{|\field|}$.
    • The improvement on the bound for $e$ comes from escaping the previous constraint imposed by the unique decoding radius.
    • This result derives from a lemma saying that if $d(U^\ast,L^m)>e$, then $\exists \bw^\ast \in L^\ast$ with $d(\bw^\ast,L) > e$.
    • The lemma is proved by contradiction: assume all elements in $L^\ast$ are at most $e$-far from $L$, so in particular the basis vectors $U_i^\ast$ are all $e$-close to $L$.
    • Let $\bv_0^\ast$ be a vector maximizing the distance; then there exists another basis vector $U_i^\ast$ holding error entries outside $\bv_0^\ast$’s, i.e., $\Delta(U_i^\ast,L)\setminus \Delta(\bv^\ast_0,L) \neq \varnothing$.
    • Then the affine form $\alpha \cdot U_i^\ast + \bv_0^\ast \in L^\ast$, together with the unique closest codeword in $L$, yields an element with strictly larger distance to $L$, contradicting $\bv_0^\ast$ maximizing the distance.
  • Based on the interleaved-code proximity test, they develop a way to test whether an interleaved codeword contains a message satisfying linear/quadratic constraints.
    • The trick for the consistency test and the evaluation test is to encode each row as an RS-code polynomial and use a random linear combination to combine all rows into one (see the sketch below).
    • To ZKify this, we pad an extra row of random codeword, which randomizes the prover’s response polynomial and makes it simulatable.
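
A toy sketch of the interleaved-code test structure: the verifier requests a random linear combination of the committed rows and spot-checks a few columns; the RS code, field size, and query count are illustrative choices.

```python
# Toy sketch of Ligero's interleaved-code test with a Reed-Solomon row code.
import random

P = 97
DOMAIN = list(range(1, 31))   # 30 evaluation points, messages of length 8

def rs_encode(msg):
    # Evaluate the polynomial with coefficients `msg` on the fixed domain.
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in DOMAIN]

m, k = 4, 8
msgs = [[random.randrange(P) for _ in range(k)] for _ in range(m)]
U = [rs_encode(row) for row in msgs]               # interleaved codeword (m rows)

# Verifier: random combination coefficients, plus a few random column queries.
r = [random.randrange(P) for _ in range(m)]
queries = random.sample(range(len(DOMAIN)), 6)

# Prover: sends the combined *message* (not the whole combined codeword).
w = [sum(r[i] * msgs[i][j] for i in range(m)) % P for j in range(k)]

# Verifier: Enc(w) must agree with the column-wise combination of U on the queries.
enc_w = rs_encode(w)
for j in queries:
    assert enc_w[j] == sum(r[i] * U[i][j] for i in range(m)) % P
```
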
RedShift: Transparent SNARKs from List Polynomial Commitments (univariate PCS from FRI)
  • This paper introduced a way to construct polynomial commitment from FRI.
    • Specifically, they proposed a construction for List Polynomial Commitment (LPC) from FRI.
    • By taking advantage of randomness, they can “patch” the LPC such that it commits to the actual polynomial uniquely, with minor soundness error.
  • The motivation behind LPC is that distinguishing a sufficiently close function from a valid RS codeword leads to large parameters.
    • This involves a concept called list decoding, that $L_\delta(f)$ denotes a list of valid codewords in RS code that are at most $\delta$-far from $f$.
    • Supposing $\delta$ is sufficiently small (smaller than the unique decoding radius $\delta_0$), then $L_\delta(f)$ contains only one codeword.
    • Here the functions are restricted to the RS evaluation domain $D \subseteq \field$.
    • Instead, we can try to commit to a list of polynomials that are in $L_\delta(f)$, and complying to an additional list of point-evaluation pairs.
  • However, the LPC above does not give the uniqueness of a normal polynomial commitment.
    • The rough idea is to take advantage of randomness sampling, picking points to patch the LPC point-evaluation pairs that $f$ must comply with.
    • The application of a $\mu$-dimensional $\varepsilon$-distinguisher happens in the “offline” stage.
    • It appends $\mu$ pairs of random point-evaluation of $f$ to the list of point-evaluation pairs for LPC.
    • The PPT distinguisher patches in points on which every other $g \in L_\delta(f)$ disagrees with $f$, so the appended evaluations must come from $f$ itself.
    • Soundness error follows from Schwartz-Zippel, that for a $g \in L_\delta(f)$ that is not $f$, we have $\Pr\limits_{x \in \field}\left[f(x) = g(x)\right] \le \dfrac{d}{|\field|}$.
    • By a union bound over the $|L_\delta(f)| - 1$ other polynomials and iterating over the $\mu$ points, $\varepsilon \le \left(\dfrac{d}{|\field|} \cdot (|L_\delta(f)| - 1)\right)^\mu$.
  • Finally, double-column ACM format is anti-human, definitely hate it.

TODO: Transparent Polynomial Delegation and Its Applications to Zero Knowledge Proof

TODO: Linear-Time Arguments with Sublinear Verification from Tensor Codes

Brakedown: Linear-time and field-agnostic SNARKs for R1CS (multilinear PCS from tensor-code IOPP)
  • The main contributions of Brakedown are twofold:
    • They proved extractability of coding-based multilinear polynomial commitments without requiring efficient decoding; the burden is shifted onto the extractor.
    • On top of removing the need for efficient decoding, they constructed an efficient linear code with linear-time encoding from expander codes.
  • The polynomial commitment scheme from BCG20 has the form $g(\br) = \langle \pmb{q}_1 \otimes \pmb{q}_2, \bu\rangle$, where $\bu$ is the vector of Lagrange-basis coefficients of $g$ over $\{0, 1\}^n$ (see the sketch after this block).
  • As for the efficient extractor without need of efficient decoder part:
    • We should start with the definition of the extractor: the extractor should not need to know the odds of the prover convincing the verifier.
    • The first construction of the extractor needs to know the probability $\varepsilon$ that the prover convinces the verifier, for it needs $\varepsilon$ to estimate its runtime.
    • The second extractor construction builds on the top of the first one, that “guesses” the $\varepsilon$.
      • The extractor must run regardless of what $\varepsilon$ is; if $\varepsilon$ is not inverse-polynomial in $m$ and $\lambda$, the first extractor would not run in expected polynomial time.
      • The guessing trick is from Hazay, Lindell, and Goldreich.
    • The second extractor does turn out to run in expected polynomial time, given that it aborts after $2^m$ steps.
  • Claim 1 and Lemma 1 for the testing phase are a bit hard to comprehend; with Ligero’s Lemmas 4.2 and 4.4, things are straightforward.
  • There is a lot to learn in coding theory and in the subtleties of expected-polynomial-time (EPT) extractor proofs; I need to get better at them.
  • The linear code description in Brakedown is a bit hard to comprehend, so we defer this to the following writeup on expander code.
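
A toy sketch of the BCG20/Brakedown-style evaluation $g(\br) = \langle \pmb{q}_1 \otimes \pmb{q}_2, \bu\rangle$, combining the matrix split with column spot-checks; an RS code stands in for the expander code and the Merkle commitments are omitted, so all names and parameters are illustrative.

```python
# Tensor-structured evaluation: q1 combines rows, q2 finishes the inner product.
import random

P = 97
DOMAIN = list(range(1, 41))

def rs_encode(msg):
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in DOMAIN]

rows = cols = 4
u = [random.randrange(P) for _ in range(rows * cols)]       # Lagrange-basis vector of g
U = [u[i * cols:(i + 1) * cols] for i in range(rows)]
committed = [rs_encode(row) for row in U]                    # prover commits to these rows

q1 = [random.randrange(P) for _ in range(rows)]              # derived from the point r
q2 = [random.randrange(P) for _ in range(cols)]

# Prover: row combination in message space.
v = [sum(q1[i] * U[i][j] for i in range(rows)) % P for j in range(cols)]

# Verifier: consistency spot-checks against the committed codewords...
enc_v = rs_encode(v)
for j in random.sample(range(len(DOMAIN)), 8):
    assert enc_v[j] == sum(q1[i] * committed[i][j] for i in range(rows)) % P
# ...then finishes the tensor inner product locally.
eval_via_pcs = sum(v[j] * q2[j] for j in range(cols)) % P
direct = sum((q1[n // cols] * q2[n % cols]) * u[n] for n in range(rows * cols)) % P
assert eval_via_pcs == direct
```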

TODO: Brakedown’s expander code (supplementary file for Brakedown Section 4 construction)

TODO: Gemini: Elastic SNARKs for Diverse Environments (provides a way to convert univariate PCS to a multilinear PCS)

TODO: Orion: Zero Knowledge Proof with Linear Prover Time (from tensor-code IOPP)

BaseFold: Efficient Field-Agnostic Polynomial Commitment Schemes from Foldable Codes (multilinear PCS from Foldable Code IOPP)
  • The paper has the following three contributions:
    • A new foldable linear code family whose generator matrix can be efficiently and randomly sampled, such that with high probability the encoding has high minimum distance.
    • A generalization of the FRI IOPP to foldable linear codes, which applies to a broad class of codes (and can be instantiated with RS codes).
    • A construction of a multilinear PCS based on interleaving the BaseFold IOPP with the sumcheck protocol.
  • The foldable linear code derives from a family of random foldable distributions that has high minimum distance with high probability.
    • The key idea behind the construction follows:
      • Starting from an MDS generator matrix for an MDS code, a natural question is how to encode a message of doubled length more efficiently (see the toy encoder after this block).
      • A construction is that, write $\bm = \bm_l \parallel \bm_r$, the codeword for next level would be $\enc_{i + 1}(\bm) = \enc_i(\bm_l) + \bt_1 \circ \enc_i (\bm_r) \parallel \enc_i(\bm_l) + \bt_2 \circ \enc_i (\bm_r)$.
      • The random foldable distribution additionally requires $\bt_1 \in (\field^\ast)^{n_i}$, and $\bt_1 + \bt_2 = 0$.
    • The construction above requires that the relative minimum distance remains sufficiently large across all $n$ layers of encoding.
    • Applying $\bt_1$ and $\bt_2$ against $\enc_i(\bm_r)$ to form $\enc_{i + 1}(\bm)$ can introduce additional zeros into the Hamming weight.
    • The key lemma introduces the rank-nullity theorem to bound the kernel size of the matrix formed by columns of the generator matrix, which helps lower the parameters in the Hamming weight bound.
    • The additional zeros introduced at each new level of encoding mean the relative minimum distance can only shrink, i.e., $\Delta_{C_i} \ge \Delta_{C_j}$ for $i \le j$.
  • The IOPP for foldable linear codes gives a similar vibe to FRI.
    • The main idea of the foldable linear code IOPP goes somewhat in the opposite direction of the encoding:
      • That is, unapply $\bt_1$ and $\bt_2$ on the left and right halves of $\enc_{i + 1}(\bm)$, and fold $\enc_i(\bm_l)$ and $\enc_i(\bm_r)$ with a challenge $\alpha$.
    • The main idea should derive from Ben-Sasson et al.’s paper in CCC 2018: “Worst-case to average case reductions for the distance to a code”.
  • The polynomial commitment scheme is a combination of BaseFold IOPP and sumcheck protocol.
    • The tricky part: the BaseFold IOPP only rules out that an oracle far from $C_d$ ends up close to some $C_i$ during the folding process.
    • But it does not rule out the case where the prover shifts from folding the previous message to the encoding of a different message.
    • The soundness proof focuses on this bad case, i.e., swapping over to another encoded message instead of folding the previous one.
    • This is done by contradiction: $\Delta_{C_k} \le \Delta_{C_d}$ contradicts $\Delta_{C_i} \ge \Delta_{C_k}$ for any $i \le k$, which holds by the foldable code construction.
    • Knowledge soundness requires constructing an EPT extractor: Gaussian elimination over $2^d$ sampled MLE basis vectors in $\field^{2^d}$ against the function’s coefficient form.
    • The extractor looks somewhat like Brakedown’s, but the difference lies in the predicate forking lemma used for sampling the linearly independent MLE basis vectors.
  • The whole paper is a bit mathematically heavy, which is good practice for my paper reading and proving skills.
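
A minimal sketch of the recursive foldable encoder $\enc_{i + 1}(\bm) = \enc_i(\bm_l) + \bt_1 \circ \enc_i (\bm_r) \parallel \enc_i(\bm_l) - \bt_1 \circ \enc_i (\bm_r)$ (i.e., $\bt_2 = -\bt_1$) with a toy rate-1/2 base code; the base code, field, and sizes are illustrative assumptions, not the paper's construction.

```python
# Toy recursive foldable encoder in the BaseFold style.
import random

P = 2**61 - 1

def base_encode(msg):
    # Toy base code: message [a, b] -> evaluations of a + b*x at 4 points.
    return [(msg[0] + msg[1] * x) % P for x in (1, 2, 3, 4)]

def encode(msg, tables):
    if len(msg) == 2:
        return base_encode(msg)
    half = len(msg) // 2
    left, right = encode(msg[:half], tables), encode(msg[half:], tables)
    t1 = tables[len(msg).bit_length() - 3]     # the random diagonal for this level
    out_l = [(l + t * r) % P for l, t, r in zip(left, t1, right)]
    out_r = [(l - t * r) % P for l, t, r in zip(left, t1, right)]
    return out_l + out_r

depth = 3                                       # messages of length 2^(depth+1) = 16
tables = [[random.randrange(1, P) for _ in range(4 << i)] for i in range(depth)]
msg  = [random.randrange(P) for _ in range(16)]
msg2 = [random.randrange(P) for _ in range(16)]
alpha = random.randrange(P)

# Linearity of the encoding, which the IOPP's folding relies on.
combined = encode([(a + alpha * b) % P for a, b in zip(msg, msg2)], tables)
expected = [(a + alpha * b) % P for a, b in zip(encode(msg, tables), encode(msg2, tables))]
assert combined == expected
```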

TODO: Vortex: A List Polynomial Commitment and its Application to Arguments of Knowledge


[Seen 2023/11 for Reed-Solomon Proximity Test line of work]

TODO: Fast Reed-Solomon Interactive Oracle Proofs of Proximity

TODO: Worst-Case to Average Case Reductions for the Distance to a Code (BaseFold construction derive from this paper)

TODO: DEEP-FRI: Sampling Outside the Box Improves Soundness

TODO: Proximity Gaps for Reed-Solomon Codes

TODO: Lattice (List) Decoding Near Minkowski’s Inequality

TODO: Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes


[MPC and distributed ZK related works]

TODO: Zero-Knowledge Proofs on Secret-Shared Data via Fully Linear PCPs

TODO: Efficient Fully Secure Computation via Distributed Zero-Knowledge Proofs

Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets
  • This paper specialized in introducing MPC techniques into developed SNARK schemes, and implementing relevant components for the collaborative version.
  • The main technique is bringing linear secret sharing onto elliptic-curve group elements, which avoids using MPC to lift shared values onto the curve.
  • The definition of knowledge soundness for collaborative SNARKs does not guarantee extraction from each sub-prover.
    • Rather, when all the provers pool their information together, they can determine a witness.
  • The definition of zero-knowledgeness in collaborative SNARK is also a bit different.
    • For a single prover, the witness has to be valid, otherwise an honest prover cannot generate a proof.
    • For collaborative proofs, each prover holds a share of the witness, and the reconstructed witness may or may not be valid.
    • Thus the simulator needs to know whether the witness itself is valid.
  • To bring the KZG commitment into the collaborative scenario, the key is Lemma 4.
    • The goal is to prove that an MPC scheme which replaces a sub-circuit computing a linear function with local computation over secret shares remains secure.
EOS: Efficient Private Delegation of zkSNARK Provers
  • This paper achieved private delegation of SNARK with efficient throughput, and malicious security at the same time.
  • The differences from OB22 are:
    • EOS uses honest delegator to minimize preprocessing cost, avoiding the cost of generating correlated randomness with PKE.
    • EOS introduces a notion of PIOP consistency check that allows the delegator to check the consistency of the polynomial computations, which yields malicious security at minimal cost.
    • However, EOS requires the delegator to be also involved in the proof generation computation, while OB22 just delegates, and walks away.
zkSaaS: Zero-Knowledge SNARKs as a Service
  • An essence of secret sharing based MPC shown in this paper is the use of randomness:
    • Randomness comes in as jointly generated random secret sharings, is reconstructed and applied to the target, and is then removed by deducting these random sharings.
  • This paper’s main contribution is the use of packed secret sharing:
    • The rewritten sub-protocols give lower memory and communication, performing “SIMD-like” computation over secrets in vector form.
    • The workload of the servers involved in the computation (other than the large server at the center of the star topology) is reduced thanks to packed secret sharing.
  • The nature of FFT leads to an $O(\log m)$ round naive MPC, but the paper provides a 2 round computation from packed secret sharing:
    • By arranging the vector packed in the secret, a party can get from round $\log m$ down to round $\log \ell$ by locally performing linear combinations.
    • Once round $\log \ell$ of the FFT is reached, the parties mask their sharings with randomness and hand the rest of the computation to a server with large memory.
    • The large server takes over the remaining FFT computation and sends out sharings once it is done.
    • The small servers perform only $O(m / \ell \cdot (\log m - \log \ell))$ operations with $O(m / \ell)$ memory, due to the SIMD operation from packed secret sharing.
    • The large server additionally needs $O(m \log \ell)$ operations for the FFT, $O(m \log n)$ operations for secret sharing over $n$ parties, and $O(m)$ memory.

[Argument System in general]

TODO: How to Construct Constant-Round Zero-Knowledge Proof Systems for NP

TODO: Succinct Non-Interactive Arguments via Linear Interactive Proofs

TODO: A Uniform Min-Max Theorem with Applications in Cryptography

TODO: Delegating Computation: Interactive Proofs for Muggles

TODO: Separating SNARGs from All Falsifiable Assumptions

TODO: Fiat-Shamir: From practice to Theory, Part I: Fiat-Shamir From Simpler Assumptions

TODO: Fiat-Shamir: From Practice to Theory, Part II: NIZK and Correlation Intractability from Circular-Secure FHE

TODO: On the (In)security of Kilian-Based SNARGs

TODO: Transparent SNARKs from DARK Compilers

TODO: Does Fiat-Shamir Require a Cryptographic Hash Function?

TODO: Doubly Efficient Interactive Proofs for General Arithmetic Circuits with Linear Prover Time

TODO: Fiat-Shamir via List-Recoverable Codes (or: Parallel Repetition of GMW is not Zero-Knowledge)

TODO: Non-Interactive Batch Arguments for NP from Standard Assumptions

TODO: SNARGs for P from LWE

TODO: Algebraic Reductions of Knowledge

TODO: Batch Arguments for NP and More from Standard Bilinear Group Assumptions

TODO: Lower Bound on SNARGs in the Random Oracle Model

TODO: PPAD is as Hard as LWE and Iterated Squaring

TODO: Non-Interactive Zero-Knowledge from Non-Interactive Batch Arguments

TODO: Probabilistically Checkable Arguments for all NP


[Fiat-Shamir-with-Abort line of work]

Lattice Signatures Without Trapdoors (rejection sampling technique)
  • This paper’s rejection sampling technique made possible a fruitful line of work on lattice-based zero-knowledge proofs, really really impressive.
  • We start with the rejection sampling technique in general, without instantiating it over any specific distribution.
    • This technique gives a way to “hammer” one distribution into looking like another one, by rejecting outputs with some bounded probability.
    • Supposing we want to make a distribution $(h, g_v)$ statistically close to $(h, f)$, where $v \gets h$, and $f(z) / g_v(z) \le M$ w.p. $\ge 1 - \varepsilon$ for $z \gets f$ and any $v$.
    • On the $(h, g_v)$ side, output $z$ w.p. $\min(1, \dfrac{f(z)}{M \cdot g_v(z)})$; while on the $(h, f)$ side, output $z$ w.p. $M^{-1}$ (see the sketch after this block).
    • The statistical distance comes out to $\varepsilon / M$, accounting for the no-output case (SD at most $\varepsilon / M$) and the output case (SD at most $\varepsilon / M$).
  • On instantiating with the Gaussian distribution, the motivation is to hammer $(h, \distro^m_{\bv, \sigma})$, where $\bv \gets h$, into looking like $(h, \distro^m_\sigma)$.
    • This helps arguing the zero-knowledgeness (simulatability) without knowledge of center of distribution $\bv$.
    • A key lemma bounds the probability that $\dfrac{\distro_\sigma^m(\bz)}{\distro^m_{\bv,\sigma}(\bz)} = O(1)$, which plugs into the general rejection sampling lemma above.
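
A toy sketch of the rejection sampling loop over one-dimensional discrete Gaussians, showing that the accepted samples look centered at 0 regardless of the secret shift $v$; sigma, $M$, and the truncation range are illustrative, not the paper's parameters.

```python
# Toy Fiat-Shamir-with-Abort rejection sampling over 1-D discrete Gaussians.
import math
import random

sigma = 20.0

def rho(x, center=0.0):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sample_gaussian(center):
    # Crude rejection sampler for a (truncated) discrete Gaussian.
    while True:
        x = random.randint(int(center) - 160, int(center) + 160)
        if random.random() < rho(x, center):
            return x

def masked_sample(v, M=3.0):
    # z ~ D_{v,sigma}; output z w.p. min(1, D_sigma(z) / (M * D_{v,sigma}(z))),
    # otherwise abort and retry (the "abort" of Fiat-Shamir-with-Abort).
    while True:
        z = sample_gaussian(center=v)
        if random.random() < min(1.0, rho(z, 0.0) / (M * rho(z, v))):
            return z

# The output distribution should look centered at 0 regardless of v.
for v in (0, 2, 5):
    mean = sum(masked_sample(v) for _ in range(2000)) / 2000
    print(f"v = {v}, empirical mean of accepted z = {mean:+.2f}")
```
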
Short Invertible Elements in Partially Splitting Cyclotomic Rings
  • First of all, think geometrically, prove algebraically.
  • Suppose we have a ring represented in CRT form, and write the modulus as $p = \prod p_i^{e_i}$; short elements $v$ with each entry lying in $(-p_i, p_i) \setminus \{0\}$ are invertible.
    • Their CRT form won’t contain 0, thus they are invertible.
    • Their CRT form is a vector of same value $v$.
    • If the $\ell_\infty$-norm bound on the elements goes beyond $p_i$, the challenge space $C$ will contain non-invertible elements, i.e., elements whose CRT form contains a 0.
  • Suppose we have the $m^{th}$ cyclotomic polynomial $\Phi_m(x) \equiv \prod\limits_{i = 1}^{\varphi(m)}(x - r_i) \mod p$, with the $r_i$ being $m^{th}$ primitive roots of unity over $\ZZ_p^\ast$.
    • Then $\Phi_m(x) \equiv \Phi_{\delta(m)}(x^{m / \delta(m)}) \mod p$, with $\delta(m) = \prod\limits_{\text{prime}~p_i | m} p_i$.
    • More generally we have $\Phi_m(x) \equiv \Phi_{z}(x^{m / z}) \mod p$ for $z = \prod p_i^{f_i}$ with $1 \le f_i \le e_i$, because of $\varphi(m) = m / \delta(m) \cdot \varphi(\delta(m)) = m / z \cdot \varphi(z)$.
  • Now, let $\Phi_m(x) \equiv \prod\limits_{i = 1}^{d}f_i(x) \mod p$, and let $\Lambda$ be the set of $\bz \in R = \ZZ[x] / (\Phi_m(x))$ with $\bz \equiv \pmb{0} \mod (f_i(x), p)$. Note that $\Lambda$ forms an ideal lattice in $R$.
    • If a polynomial in $R_p$ is invertible, then its reduction modulo each $(f_i, p)$ must be non-zero.
    • We write $\lambda_1(\Lambda)$ as the Euclidean length of the shortest non-zero vector in $\Lambda$.
    • We have $\det(\Lambda) = | \ZZ^{\varphi(m)} / \Lambda | = p^{\varphi(m) / d}$, and from it we can lower bound $\lambda_1(\Lambda)$ by roughly $\dfrac{\sqrt{\varphi(m)}}{s_1(m)} \cdot \det(\Lambda)^{1/ \varphi(m)}$.
    • Supposing a non-zero $\by \in R$ has $\by \equiv \pmb{0} \mod (f_i(x), p)$ and $\lVert \by \rVert_2 < \lambda_1(\Lambda)$, this contradicts the shortest non-zero vector of the ideal lattice. (It is not long enough to wrap around.)
  • In short, given such an ideal lattice $\Lambda$ in $R$, if all $\bc \in C$ are shorter than its shortest non-zero vector, then they are always invertible.
More Efficient Commitments from Structured Lattice Assumptions (BDLOP commitment)
  • This is one of the few papers I read when I started self-teaching in Algorand years.
  • The BDLOP paper has following contributions:
    • The BDLOP additive commitment itself, that is instantiated to be computationally hiding and statistically binding.
    • A connection between knapsack problem and LWE/SIS, that gives a duality of the search/decision problem being statistically/computationally hard.
    • A proof of opening with relaxed soundness (can be extended to proof of linear relation), which is before the introduction of the proof of exact knowledge from lattice.
  • The search version of knapsack problem ${\sf SKS}^2_{n, k, \beta}$ in the $\ell_2$ norm asks to find a $\by$ that $\lVert \by \rVert_2 \le \beta$ to solve $\begin{bmatrix} {\bf I}_n & {\bf A}’ \end{bmatrix} \cdot \by = \pmb{0}$, with ${\bf A}’$ being uniformly random.
    • Supposing each element in $\by$ is invertible, the probability over ${\bf A}’$ that this fixed $\by$ solves the system above follows a Schwartz-Zippel-style argument.
    • Statistical hardness then derives from a union bound over all $\by$ with $\lVert \by \rVert_2 \le \beta$, each contributing its probability of solving ${\sf SKS}$ over the randomness of ${\bf A}’$.
  • The decision version of knapsack problem ${\sf DKS}^\infty_{n, k, \beta}$ in the $\ell_\infty$ norm asks to tell $({\bf A}’, \begin{bmatrix} {\bf I}_n & {\bf A}’ \end{bmatrix} \cdot \by)$ from uniformly random sample.
    • Statistical hardness derives from Leftover Hash Lemma (LHL), that ${\bf A}’$ serves as a hash function.
  • As the norm bound $\beta$ increases, the ${\sf SKS}$ degrades from statistically to computationally hard, while ${\sf DKS}$ strengthens from computationally to statistically hard.
    • The choice of $\beta$ gives the duality of search/decision problem being statistically/computationally hard.
  • Essentially, the BDLOP commitment hides the message by masking it with an (M)LWE sample, while binding comes from a statistically secure version of SIS, namely ${\sf SKS}$ (see the toy sketch below).
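
A toy sketch of a BDLOP-shaped commitment over $\ZZ_q$ vectors showing the additive homomorphism; the real scheme works over polynomial rings with small-norm randomness, so the dimensions and sampling here are illustrative only.

```python
# Toy BDLOP-shaped commitment over Z_q vectors.
import random

Q = 3329
N, K, L = 4, 8, 2      # rows of A1, randomness length, message length

def rand_matrix(rows, cols):
    return [[random.randrange(Q) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % Q for row in M]

def small_randomness():
    return [random.choice((-1, 0, 1)) for _ in range(K)]   # "short" vector

A1, A2 = rand_matrix(N, K), rand_matrix(L, K)

def commit(msg, r):
    # (c1, c2) = (A1 r, A2 r + msg): c1 binds the short randomness, c2 masks msg.
    c1 = matvec(A1, r)
    c2 = [(t + m) % Q for t, m in zip(matvec(A2, r), msg)]
    return c1, c2

# Additive homomorphism: Com(m1; r1) + Com(m2; r2) = Com(m1 + m2; r1 + r2).
m1, m2 = [[random.randrange(Q) for _ in range(L)] for _ in range(2)]
r1, r2 = small_randomness(), small_randomness()
c1a, c2a = commit(m1, r1)
c1b, c2b = commit(m2, r2)
summed = commit([(a + b) % Q for a, b in zip(m1, m2)], [x + y for x, y in zip(r1, r2)])
assert ([(x + y) % Q for x, y in zip(c1a, c1b)], [(x + y) % Q for x, y in zip(c2a, c2b)]) == summed
```
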
Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs (exact proof of knowledge)
  • This paper and Efficient Lattice-Based Zero-Knowledge Arguments with Standard Soundness: Construction and Applications both solved lattice based exact knowledge of linear system, in a similar flavor.
  • The trick is that: to prove exact knowledge of $\vec{s} \in \ZZ_q^m$ that satisfies $A \in \ZZ^{n \times m}_q$, we can rely on $A$ being a strong (R)SIS instance, that binds a short $\vec{s}$ for linear system $A \vec{s} \equiv \vec{u} \pmod{q}$.
  • The shortness of $\vec{s} \in \{0, 1, 2\}^m$ can be checked through $(\vec{s} - \vec{2}) \circ (\vec{s} - \vec{1}) \circ \vec{s} = \vec{0}$, and can be lifted to NTT form as follows:
    • Suppose $\vec{s}$ can be lifted to an NTT representation of a ring $\ring_q$, written as $\hat{s}$, that $(s - 2)(s - 1)s = 0$ over $\ring_q$.
  • The main protocol of exact knowledge is a sigma-protocol, that:
    • Prover samples a uniform masking polynomial $y \in \ring_q$ that $A \hat{y} \equiv \vec{w} \pmod q$;
    • Verifier samples a uniform challenge $c \in \ZZ_q \subseteq \ring_q$;
    • Finally prover outputs masked secret $z = y + c \cdot s$.
    • This gives extraction of $A (\hat{z} - \hat{z’}) \equiv (c - c’) \cdot \vec{u} \pmod q$.
  • Prover additionally sends out commitment to $y$ and $s$, such that verifier can check $C_z \stackrel{?}{=} c \cdot C_s + C_y$.
    • This improves extractability to $A (c - c’) \hat{s} \equiv (c - c’) \cdot \vec{u} \pmod q$.
    • Cancelling $c - c’$ terms on both sides, the rest is to prove the $\hat{s}$ over $C_s$.
  • Noticing $(z - 2c) (z - c) z = y^3 + 3y^2(s - 1)c + y(3s^2 - 6s + 2)c^2 + (s - 2)(s - 1)s c^3$.
    • The prover additionally commits to $y^3$, $3y^2(s - 1)$, and $y(3s^2 - 6s + 2)$ (the identity is checked numerically in the sketch after this block).
    • Verifier checks $C_{(z - 2c)(z - c)z} \stackrel{?}{=} C_{y^3} + c \cdot C_{3y^2(s-1)} + c^2 \cdot C_{y(3s^2 - 6s + 2)}$.
  • Finally, the verifier needs to check that all the commitment terms are well formed, following the BDLOP commitment proof of opening, which is binding.
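
A quick numeric check of the cubic identity above (with $z = y + c \cdot s$), showing that the $c^3$ coefficient vanishes exactly when $s \in \{0, 1, 2\}$; the field and sampling are illustrative.

```python
# Check (z - 2c)(z - c)z = y^3 + 3y^2(s-1)c + y(3s^2-6s+2)c^2 + (s-2)(s-1)s c^3.
import random

P = 2**61 - 1
for s in (0, 1, 2, 5):                      # 5 is a cheating value: the cubic term survives
    y, c = random.randrange(P), random.randrange(P)
    z = (y + c * s) % P
    lhs = (z - 2 * c) * (z - c) % P * z % P
    rhs = (pow(y, 3, P)
           + 3 * pow(y, 2, P) * (s - 1) * c
           + y * (3 * s * s - 6 * s + 2) * pow(c, 2, P)
           + (s - 2) * (s - 1) * s * pow(c, 3, P)) % P
    assert lhs == rhs
    print(f"s = {s}: cubic coefficient (s-2)(s-1)s = {(s - 2) * (s - 1) * s}")
```
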
Practical Product Proofs for Lattice Commitments (short invertible elements on fully splitting rings, with high probability)
  • One of the main contributions of this paper over Short Invertible Elements in Partially Splitting Cyclotomic Rings is that it allows for short invertible elements in fully splitting rings (namely, no restriction on the modulus $q$) with high probability.
  • An invertible ring element has all its NTT coefficients being non-zero, while an NTT coefficient is a polynomial ring element’s evaluation over a cyclotomic ring root-of-unity.
  • Now write $\bc = \sum\limits^{d - 1}_{i = 0} c_i X^i \in \ZZ_q[X] / (X^d + 1)$, where $\ring_q \cong \prod\limits_{i \in \ZZ_{2 \ell}^\times} \ZZ_q [X] / (X^{d / \ell} - \xi^i)$ (note that $X^{d / \ell} - \xi^i$ does not necessarily have to be irreducible, as it does not matter for the result here).
  • The proof’s main idea is that:
    • Supposing the coefficients of $\bc \in \ring_q$ are sampled i.i.d., $\bc$ and $\sigma(\bc)$ are identically distributed for any $\sigma \in \mathsf{Aut}(\ring_q)$.
    • Since $\ring_q / (X^{d/ \ell} - \xi^i) \cong \ring_q / (X^{d / \ell} - \xi^j)$, the reductions $\bc \bmod (X^{d / \ell} - \xi^i)$ and $\bc \bmod (X^{d / \ell} - \xi^j)$ are identically distributed, for $\sigma(\bc \bmod (X^{d / \ell} - \xi^i)) = \sigma(\bc) \bmod (X^{d / \ell} - \xi^j)$.
    • Thus all the polynomial splits share the same distribution, so we can argue about a single split and the argument carries over to the others.
    • Observe that $\bc \bmod (X^{d / \ell} - \xi)$ can be computed as $\bc = \sum\limits_{k = 0}^{\ell - 1} X^{k d / \ell} \sum\limits_{j = 0}^{d / \ell - 1} c_{k d / \ell + j} X^j \equiv \sum\limits_{k = 0}^{\ell - 1} \xi^k \sum\limits_{j = 0}^{d / \ell - 1} c_{k d / \ell + j} X^j \pmod{X^{d / \ell} - \xi}$.
    • The probability density of each coefficient of the split is a convolution of the (shifted) probability densities of the coefficients of $\bc$, with $\mu_k(0) = p$ and $\mu_k(\xi^{\pm k}) = (1 - p) / 2$ ($\bc$’s own coefficient density is $\mu_0$).
    • The rest follows from Fourier analysis to upper bound the $\ell_\infty$-norm of the probability density of the split’s coefficient $\sum\limits_{k = 0}^{\ell - 1} c_{k d / \ell + j} \xi^k$ for $j \in [0, d/\ell - 1]$, which bounds the probability of that coefficient being 0.

TODO: Practical Exact Proofs from Lattices: New Techniques to Exploit Fully-Splitting Rings

TODO: Practical Lattice-Based Zero-Knowledge Proofs for Integer Relations

TODO: Lattice-Based Zero-Knowledge Proofs and Applications: Shorter, Simpler, and More General


[Lattices]

TODO: Streaming Authenticated Data Structures

TODO: Leveled Fully Homomorphic Signatures from Standard Lattices

TODO: FHE Circuit Privacy Almost For Free

TODO: Succinct Vector, Polynomial, and Functional Commitments from Lattices

TODO: Attribute-Based Encryption for Circuits of Unbounded Depth from Lattices

TODO: Chipmunk: Better Synchronized Multi-Signatures from Lattices

TODO: LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems


[Leakage Resilient Secret Sharing line of work]

TODO: On the Local Leakage Resilience of Linear Secret Sharing Scheme

TODO: Constructing Locally Leakage-resilient Linear Secret-sharing Schemes


[Combinatorics Papers]

An Alternative Proof of The Schwartz-Zippel Lemma

A degree $d$ polynomial in $n$ variables cuts out a hypersurface $S$ in $\field^n$. There are at most $d$ intersection points between a hypersurface $S$ and an independent line (i.e., the intersection is not the whole line).

This explains separating out the homogeneous part $g$ of total degree $d$ from $f$: we need $g$ to find a direction that is independent of the hypersurface $S$ in $\field^n$.

If one can find a direction that is everywhere independent of the hypersurface $S$, one can foliate $\field^n$ by parallel lines in that direction and count intersections within each line.

Question: why do we want to foliate $\field^n$?

Recall that, we want to upper bound the number of $\pmb{\alpha}$’s such that $f(\pmb{\alpha}) = 0$ over $\field^n$, namely, $|S|$ (points over $S$) over $\field^n$.

The foliation is parameterized by the direction’s orthogonal complement, which is isomorphic to $\field^{n-1}$, so all of $\field^n$ is covered by $|\field|^{n-1}$ parallel lines, and we count the intersection points between each line and the hypersurface $S$.

The total number of lines is $|\field|^{n-1}$, while each line contains either at most $d$ points of $S$ or all $|\field|$ points of the line (the latter case is eliminated by independence), thus the total number of hypersurface points across all of $\field^n$ is at most $d \cdot |\field|^{n-1}$.
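
A brute-force check of the bound on a toy polynomial, exhaustively counting zeros over a small field; the polynomial and parameters are illustrative.

```python
# A degree-d polynomial over F_p^n has at most d * p^(n-1) zeros.
from itertools import product

p, n, d = 11, 3, 3

def f(x, y, z):
    # Total degree 3.
    return (x * y * z + x * x + 2 * y + 7) % p

zeros = sum(1 for pt in product(range(p), repeat=n) if f(*pt) == 0)
print(zeros, "<=", d * p ** (n - 1))
assert zeros <= d * p ** (n - 1)
```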

TODO: Multilinear Schwartz-Zippel mod N with Applications to Succinct Arguments