Demonstrating the Accuracy of an AI Model through Zero-Knowledge Proof is Even Faster Than Direct Computation

1. Machine Learning and Zero-Knowledge Proofs: How Did They Meet?
In recent years, with in-depth research on machine learning theory and algorithms, more and more AI products have begun to achieve large-scale market applications, greatly changing the operation mode of many traditional industries. This shift is expected to drive more research, investment, and engineering focus on this field in the future. Especially in the last few years, AI large models represented by GPT have made amazing progress, evolving into indispensable tools for people's daily lives.
As the ability of AI models to handle problems continues to grow, so does the complexity of the models behind them. For example, Google recently announced their new model, Gemini, claiming it to be the most complex AI model to date. The time and money consumed to train such a complex model to achieve the desired performance are prohibitive. However, for common users, this complexity is hidden behind simple APIs, making it challenging to understand the model's specific running process.
From the perspective of AI service providers, this design is reasonable. Training requires a significant investment of time and money, and the resulting parameter set is a valuable asset of the company. Hiding the model behind an API ensures that the parameters are not leaked during use, while also providing convenience to users and lowering the barrier to using AI models.
However, from the user's perspective, this design raises a problem: how can we be sure that the AI model vendor has provided us with the right model? This question may sound confusing, but a simple example makes it concrete: suppose a website needs an AI model that classifies ads by customer group, and the AI model vendor (e.g., Google) offers multiple models with different accuracy levels and costs. The site pays $20,000 for the model with 90% accuracy, but the vendor provides a model with only 85% accuracy. If the difference in prediction results is indistinguishable in ordinary use, the site has no way to tell which model the vendor actually served.
This example may still seem unconvincing because, as users, we tend to focus on the actual results of the model and not on which model produced them. Nevertheless, across the vast range of AI use cases, not all model outputs can be measured so simply. In scenarios like medical diagnosis, financial market investment, or predicting enemy actions in a war, the stakes of each prediction are high, and a wrong prediction is unacceptable.
Fortunately, zero-knowledge proofs (ZKP) can be a perfect solution to this paradox. Generally speaking, a zero-knowledge proof is a proof system that uses cryptographic techniques to ensure that no private information of the prover is disclosed during the proof process. For more details about zero-knowledge proofs, readers can refer to this article; we will not describe them in detail here.
However, it is very difficult to prove a complex AI model directly with ZKP. In ZKP, the model's computation must be converted into an arithmetic circuit representation such as R1CS (rank-1 constraint system), which expresses the computation using only addition and multiplication gates. Converting a complex AI model into such a circuit description is often very inefficient. For example, it takes about 30 minutes to prove a VGG16 model (a common CNN model). For more complex AI models such as the Twitter recommendation algorithm, directly proving them with ZKP is impractical, taking about 6 hours to generate the proof.
Therefore, proving an AI model directly with ZKP seems unwise, and we need other ways to reduce the complexity of the proofs. The key clarification is that our goal is to "validate a model" rather than "compute the model again inside a ZKP." That is, we do not want to "translate" the model's algorithm into some ZKP description, because the complexity of validating an algorithm's result can, in some cases, be even less than the complexity of computing the algorithm itself!
2. Difference between Validation and Computation
Suppose we have three $n \times n$ matrices $A$, $B$, and $C$, and we want to prove that $AB = C$. The most intuitive way is to evaluate every element of $AB$ and check that it equals $C$ at every position, i.e., $(AB)_{ij} = C_{ij}$ for all $i, j$. Alternatively, we could multiply both sides by a random vector $x$ on the right and check $A(Bx) = Cx$, where $x \in \{0,1\}^n$ is chosen uniformly at random. What is the difference? At first sight this may seem counterintuitive, as it introduces extra computation. Let's look at the details.
In the first case, we need $n$ multiplications for every element $(AB)_{ij}$. Given that the dimensions of $A$ and $B$ are $n \times n$, we need $n^3$ multiplications in total, resulting in a time complexity of $O(n^3)$. In the second case, instead of evaluating $AB$ directly, we first evaluate $z = Bx$, where $x$ is an $n$-dimensional vector. This operation needs $n^2$ multiplications, and the result $z$ is another $n$-dimensional vector. Then we evaluate $Az$, still requiring $n^2$ multiplications, and computing $Cx$ costs another $n^2$. So we need about $3n^2$ multiplications in total, and the complexity is $O(n^2)$. The check is probabilistic: if $AB \neq C$, a random $x$ detects the difference with probability at least $1/2$, and repeating the check $k$ times reduces the error probability to $2^{-k}$. This is known as Freivalds' algorithm, given by Rūsiņš Mārtiņš Freivalds in 1977.
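The comparison above can be sketched in a few lines of Python. This is a minimal illustration of Freivalds' check over plain integers (a real proof system would work over a finite field); the function name and round count `k` are our own choices:

```python
import random

def freivalds(A, B, C, k=20):
    """Probabilistically check A @ B == C using O(n^2) work per round."""
    n = len(A)
    for _ in range(k):
        # Random 0/1 challenge vector x.
        x = [random.randint(0, 1) for _ in range(n)]
        # z = Bx, then A(Bx) and Cx: three matrix-vector products, n^2 each.
        z = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * z[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False  # definitely A @ B != C
    return True  # A @ B == C with probability >= 1 - 2**(-k)
```

Note that a `False` answer is always correct, while a `True` answer is wrong with probability at most $2^{-k}$.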
This is the core message we want to convey: "Validation" does not equate to "recalculation." With some mathematical methods and techniques, it is possible to prove a complex algorithm with reduced complexity. In the next section, we will see how to use this idea to prove the convolutional layer in CNN.
3. Example: Validation of 2-D Convolution Even Faster Than Computation
All the content in this section is sourced from zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy, published in 2021. The description in this section involves some mathematical theory and derivation. For the reader's convenience, many derivations and concrete algorithms have been omitted, and some expressions are deliberately simplified and deviate from the formal definitions. For a rigorous treatment, we strongly recommend reading the original paper. You can find the paper here.
The sumcheck protocol is one of the classic interactive proofs in the literature. The sumcheck problem is to verify the sum of a multivariate polynomial over all binary inputs. Simply put, the purpose of the protocol is to check the claim:

$$H = \sum_{b_1 \in \{0,1\}} \sum_{b_2 \in \{0,1\}} \cdots \sum_{b_\ell \in \{0,1\}} f(b_1, b_2, \ldots, b_\ell).$$

For a multilinear polynomial $f$ in $\ell$ variables, the time complexity of the prover is about $O(2^{\ell})$, and the proof size is $O(\ell)$.
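To make the protocol concrete, here is a toy simulation of sumcheck for a multilinear $f$, represented by its table of evaluations on $\{0,1\}^\ell$ over a prime field. The field modulus, variable ordering, and function names are illustrative choices, not the paper's; the verifier's per-round checks are written as assertions:

```python
import random

P = 2**31 - 1  # prime modulus; all arithmetic is mod P (illustrative choice)

def mle_eval(evals, point):
    """Evaluate the multilinear extension of `evals` at `point` in F^l."""
    table = list(evals)
    for r in point:  # fold one variable at a time: f(..) -> f(r, ..)
        table = [(a * (1 - r) + b * r) % P
                 for a, b in zip(table[0::2], table[1::2])]
    return table[0]

def sumcheck(evals):
    """Simulate the sumcheck protocol; returns the verified sum H."""
    H = sum(evals) % P
    claim, table, rs = H, list(evals), []
    while len(table) > 1:
        # Prover sends the round polynomial g via g(0) and g(1) (g is linear).
        g0 = sum(table[0::2]) % P
        g1 = sum(table[1::2]) % P
        assert (g0 + g1) % P == claim  # verifier's round check
        r = random.randrange(P)        # verifier's random challenge
        claim = (g0 * (1 - r) + g1 * r) % P
        table = [(a * (1 - r) + b * r) % P
                 for a, b in zip(table[0::2], table[1::2])]
        rs.append(r)
    # Final check: the last claim must equal f~(r_1, ..., r_l).
    assert claim == mle_eval(evals, rs)
    return H
```

The prover's work here is linear in the table size $2^{\ell}$, and the transcript consists of two field elements per round, matching the $O(\ell)$ proof size.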
2-D convolution is a very common operation in CNNs. It computes $U = X * W$, where $X$ is an $n \times n$ input matrix and $W$ is a $w \times w$ kernel, with $n$ much greater than $w$. When we validate 2-D convolutions in a CNN, the computation can be reduced to a 1-D convolution. It is well known that 1-D convolution is the same as multiplication of two univariate polynomials, and FFT/IFFT is a widely used technique to evaluate univariate polynomial multiplication. The algorithm is:

$$U = \mathrm{IFFT}\big(\mathrm{FFT}(\bar{X}) \odot \mathrm{FFT}(\bar{W})\big),$$

where $\bar{X}$ and $\bar{W}$ denote the matrices flattened into vectors (with suitable padding) and $\odot$ denotes the element-wise product. If we ignore the FFT and IFFT, the complexity of the element-wise product for an $n \times n$ matrix is $O(n^2)$. What remains is to validate the FFT and IFFT themselves.
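As a sanity check of the convolution-via-FFT identity (shown with 1-D signals for simplicity, using NumPy; the function name is ours):

```python
import numpy as np

def conv_fft(x, w):
    """1-D linear convolution via FFT: pad to full length, multiply pointwise."""
    N = len(x) + len(w) - 1           # length of the full linear convolution
    X = np.fft.fft(x, N)              # FFT zero-pads inputs to length N
    W = np.fft.fft(w, N)
    return np.real(np.fft.ifft(X * W))  # IFFT of the element-wise product

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 0.0, -1.0])
print(np.allclose(conv_fft(x, w), np.convolve(x, w)))  # True
```

The zero-padding to length $n + w - 1$ is what turns the FFT's circular convolution into the linear (polynomial-multiplication) convolution.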
If we directly evaluate the FFT, the complexity is $O(n^2 \log n)$. Instead of validating the FFT operation step by step, we will apply the sumcheck protocol to the FFT with some clever observations. Formally speaking, let $c$ be the vector of coefficients of a polynomial, and $a$ be the vector of its evaluations at $\omega^0, \omega^1, \ldots, \omega^{N-1}$, where $\omega$ is an $N$-th root of unity such that $\omega^N = 1$. Then, by the FFT formulation, we have $a = F \cdot c$, where $F$ is:

$$F_{ij} = \omega^{ij}, \quad 0 \le i, j < N.$$
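The matrix form $a = F \cdot c$ can be checked directly against NumPy's FFT (note that `np.fft.fft` uses the convention $\omega = e^{-2\pi i / N}$):

```python
import numpy as np

N = 8
omega = np.exp(-2j * np.pi / N)  # an N-th root of unity (np.fft's sign convention)
# The DFT matrix F with F[i][j] = omega**(i*j).
F = np.array([[omega ** (i * j) for j in range(N)] for i in range(N)])
c = np.arange(N, dtype=complex)  # coefficient vector
print(np.allclose(F @ c, np.fft.fft(c)))  # True
```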
In practice, $a$, $c$, and $F$ are replaced by their multilinear extensions $\tilde{a}$, $\tilde{c}$, and $\tilde{F}$:

$$\tilde{a}(x) = \sum_{y \in \{0,1\}^{\log N}} \tilde{F}(x, y)\,\tilde{c}(y),$$

which is exactly a sumcheck instance.
Suppose we have a function $f : \{0,1\}^{\ell} \to \mathbb{F}$. The complexity of evaluating its multilinear extension $\tilde{f}(u)$ at a point $u \in \mathbb{F}^{\ell}$ is $O(2^{\ell})$. For the matrix polynomial $F : \{0,1\}^{\log N} \times \{0,1\}^{\log N} \to \mathbb{F}$, the complexity of evaluating $\tilde{F}(u, v)$ at a point $u, v \in \mathbb{F}^{\log N}$ is therefore $O(N^2)$. Generally speaking, the prover first evaluates $\tilde{F}(u, y)$ for all $y \in \{0,1\}^{\log N}$ in order to run the sumcheck. The final evaluation the verifier needs is:

$$\tilde{F}(u, v) = \sum_{i, j \in \{0,1\}^{\log N}} \beta(i, u)\,\beta(j, v)\,\omega^{i \cdot j},$$
where $\beta(i, x) = \prod_{k=1}^{\log N} \big(i_k x_k + (1 - i_k)(1 - x_k)\big)$ is the multilinear equality polynomial and $i \cdot j$ denotes the product of the integers encoded by the bit strings $i$ and $j$. Because $\omega$ is an $N$-th root of unity, we have $\omega^N = 1$, so $\omega^{i \cdot j}$ depends only on $i \cdot j \bmod N$ and takes at most $N$ distinct values over all $i, j$. We can precompute all distinct values of $\omega^{i \cdot j}$ in $O(N)$ time, and by exploiting this structure the evaluation can also be done in $O(N)$. So the total complexity of validating the FFT is $O(N)$.
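A quick check of the counting argument: over all $N^2$ index pairs, the exponent $i \cdot j \bmod N$ (and hence $\omega^{i \cdot j}$) takes at most $N$ distinct values:

```python
N = 16
# omega**N == 1, so omega**(i*j) depends only on (i*j) % N:
# at most N distinct values across all N*N = 256 pairs (i, j).
distinct_exponents = {(i * j) % N for i in range(N) for j in range(N)}
print(len(distinct_exponents))  # 16, far fewer than 256
```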
Recall that for the 2-D convolution, $\bar{X}$ is flattened from an $n \times n$ matrix, so $N = n^2$ and the complexity of validating the FFT is $O(n^2)$. The complexity of the element-wise product is also $O(n^2)$. So the total complexity of the validation is $O(n^2)$, which is even faster than the $O(n^2 \log n)$ needed to compute the convolution itself!
Reference
- Zero Knowledge Proofs MOOC: https://zk-learning.org/
- zkCNN paper: https://eprint.iacr.org/2021/673
- ZKML community: https://github.com/zkml-community/awesome-zkml
- Daniel Kang's Medium: https://medium.com/@danieldkang
- zk-mnist: https://github.com/0xZKML/zk-mnist
