Consensus-based Distribution Verification (CDV)

In addition to hardware-level, TEE-based solutions for model verification, this section presents our algorithmic approach to ensuring model integrity. In decentralized inference systems, verifying that each node accurately executes the intended model is crucial for maintaining the integrity and reliability of the system. This verification ensures consistency of inference results across the network, safeguards against malicious modification of the model, and supports adherence to privacy protocols and regulatory compliance. It also prevents computational resources from being wasted on incorrect or unauthorized computations and helps manage operational costs effectively.

Implementation: consensus-based distribution verification

Given the computational and scalability challenges associated with zero-knowledge proofs (ZKPs) for verifying the integrity of LLMs in decentralized systems, Nesa proposes a consensus-based distribution verification (CDV) strategy. This strategy leverages the collective agreement of multiple nodes to ensure the correctness and integrity of model execution without revealing sensitive data. To provide a clear picture, we present the idea step by step, refining it as we go.

Consensus-based Verification: Consider a decentralized network with $N$ nodes, where each node $i$ executes the same inference model $\mathcal{M}$ with parameters $\theta$ on a given input $x$. The output of the model on node $i$ is denoted by $y_i = \mathcal{M}(x; \theta_i)$. The goal is to ensure that all nodes accurately execute the model $\mathcal{M}$, yielding consistent outputs.

The process can be formalized in the following steps:

  1. Redundant Execution: A subset of the network nodes, $\{1, 2, \ldots, k\} \subseteq \{1, \ldots, N\}$, independently computes the output $y_i$ for the same input $x$.

    $y_i = \mathcal{M}(x; \theta), \quad \forall i \in \{1, 2, \ldots, k\}$
  2. Output Collection: The outputs $\{y_1, y_2, \ldots, y_k\}$ are collected for consensus evaluation. This collection phase requires secure and efficient communication protocols to protect the integrity of the transmitted data.

  3. Consensus Determination: Utilizing a consensus algorithm $\mathcal{C}$, the system evaluates the collected outputs to determine the agreed-upon result $y_{\text{con}}$. The consensus result is considered valid if it satisfies a predefined criterion, such as majority agreement or a more sophisticated decision rule based on the specific properties of the outputs.

    $y_{\text{con}} = \mathcal{C}(\{y_1, y_2, \ldots, y_k\})$
  4. Verification and Finalization: If the consensus result $y_{\text{con}}$ aligns with the outputs from a sufficiently large subset of nodes, the model's execution is verified. Otherwise, discrepancies indicate potential integrity issues, triggering further investigation or corrective measures. A minimal sketch of this flow is given after this list.
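
As an illustration of the four steps above, the following Python sketch implements a simple majority-vote consensus over redundant outputs. The SHA-256 digest used to compare outputs and the two-thirds agreement threshold are illustrative assumptions, not prescribed parameters of Nesa's protocol.

```python
import hashlib
from collections import Counter
from typing import Any, Callable, List, Optional

def digest(output: Any) -> str:
    """Hash an output so results can be compared without exchanging raw tensors."""
    return hashlib.sha256(repr(output).encode()).hexdigest()

def consensus(outputs: List[Any], quorum: float = 2 / 3) -> Optional[Any]:
    """Consensus C over the collected outputs {y_1, ..., y_k}: majority vote.

    Returns the agreed-upon result y_con if at least `quorum` of the k outputs
    match, otherwise None (signalling a potential integrity issue).
    """
    votes = Counter(digest(y) for y in outputs)
    winner, count = votes.most_common(1)[0]
    if count / len(outputs) >= quorum:
        return next(y for y in outputs if digest(y) == winner)
    return None

def verify_inference(model: Callable[[Any], Any], x: Any, k: int = 5) -> Any:
    """Steps 1-4: redundant execution, output collection, consensus, verification."""
    outputs = [model(x) for _ in range(k)]  # step 1: in practice, k distinct nodes execute
    y_con = consensus(outputs)              # steps 2-3: collect outputs and evaluate
    if y_con is None:                       # step 4: finalize or flag discrepancies
        raise RuntimeError("Consensus failed: possible faulty or malicious node(s).")
    return y_con
```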

This consensus-based approach not only facilitates the verification of model integrity across decentralized nodes but also provides a robust mechanism for detecting and mitigating the impact of faulty or malicious nodes. Consensus-based Verification thus offers a viable way to ensure the integrity and correctness of decentralized LLM inference, complementing hardware-based protections and filling the gap left by the impracticality of ZKPs for LLMs within Nesa's ecosystem.

Taking Model Sharding into Account: In Nesa's decentralized system, where LLMs may be sharded across multiple nodes for scalability, each node $i$ possesses a unique shard $\mathcal{M}_i$ of the complete model $\mathcal{M}$. This partitioning requires a specialized approach to Consensus-based Verification to accommodate the fragmented nature of model execution.

Consider the complete model $\mathcal{M}$ being divided into $k$ shards, such that $\mathcal{M} = \bigoplus_{i=1}^{k} \mathcal{M}_i$, where $\bigoplus$ denotes the operation of combining the model shards to recover the full model functionality. Given an input $x$, the execution of these shards across $k$ nodes produces a set of partial outputs $\{y_1, y_2, \ldots, y_k\}$, where $y_i = \mathcal{M}_i(x; \theta_i)$.
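
As a concrete example of what $\bigoplus$ can look like, and only as an assumption for illustration, consider a column-wise (tensor-parallel style) split in which every shard consumes the same input $x$ and produces a slice of the output, so that $\bigoplus$ reduces to concatenation:

```latex
% Illustrative assumption: a column-wise split of the model across k shards,
% where each shard computes a slice of the output from the same input x and
% \oplus is concatenation of those slices.
y_i = \mathcal{M}_i(x; \theta_i), \qquad
\mathcal{M}(x; \theta) \;=\; \bigoplus_{i=1}^{k} \mathcal{M}_i(x; \theta_i)
                       \;=\; \bigl[\, y_1 \,\|\, y_2 \,\|\, \cdots \,\|\, y_k \,\bigr]
```

Other partitionings, such as layer-wise (pipeline) sharding in which shard $i$ consumes the activations produced by shard $i-1$, fit the same framework with a different combination operator.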

Verification in the Context of Sharding:

  1. Shard Redundant Execution: For each shard $\mathcal{M}_i$ of the complete model $\mathcal{M}$, redundant execution is performed by a designated subset of nodes. Each node $j$ within the subset responsible for shard $\mathcal{M}_i$ computes the output $y_{i,j}$ for the given input $x$.

    $y_{i,j} = \mathcal{M}_i(x; \theta_{i,j}), \quad \forall j \in \text{Subset of nodes for } \mathcal{M}_i$

    This step introduces computational redundancy: multiple independent executions of the same shard can be cross-verified against one another, strengthening the verification process.

  2. Redundant Output Collection and Verification: The outputs $\{y_{i,1}, y_{i,2}, \ldots, y_{i,m}\}$ for each shard $i$ are collected from the nodes in its subset. A consensus mechanism $\mathcal{C}_i$ specific to shard $i$ then evaluates these collected outputs to determine a shard-specific agreed-upon result $y_{\text{con},i}$.

    $y_{\text{con},i} = \mathcal{C}_i(\{y_{i,1}, y_{i,2}, \ldots, y_{i,m}\})$

    Here, $m$ denotes the number of nodes executing the shard $\mathcal{M}_i$. The redundancy in computation across these nodes allows for a robust verification mechanism, enhancing the detection of discrepancies or faults.

  3. Shard Verification Completion: Upon achieving consensus for a shard $i$, signified by the result $y_{\text{con},i}$, the process ensures the integrity of the shard's computation before proceeding. This step-by-step verification across shards, with redundancy in each shard's computation, significantly reduces the risk of erroneous or malicious model execution.

  4. Model Reconstruction: After each shard has been independently verified, the shard-specific consensus results $\{y_{\text{con},1}, y_{\text{con},2}, \ldots, y_{\text{con},k}\}$ are combined to reconstruct the final model output $Y_{\text{final}}$. Because every constituent shard has been verified, the reconstructed output reflects the integrity of the complete model execution (see the sketch after this list).

    $Y_{\text{final}} = \bigoplus_{i=1}^{k} y_{\text{con},i}$
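
The following Python sketch walks through these four steps under two simplifying assumptions: each per-shard consensus $\mathcal{C}_i$ is a plain majority vote, and the combination operator $\bigoplus$ is modelled as ordered concatenation of the verified shard results.

```python
from collections import Counter
from typing import Any, Callable, Dict, List

def shard_consensus(outputs: List[Any]) -> Any:
    """Per-shard consensus C_i: majority vote over the redundant outputs (an assumption)."""
    votes = Counter(repr(y) for y in outputs)
    winner, count = votes.most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("Shard consensus failed: redundant outputs disagree.")
    return next(y for y in outputs if repr(y) == winner)

def verify_sharded_inference(
    shards: Dict[int, Callable[[Any], Any]],  # shard index i -> shard function M_i
    replicas: Dict[int, int],                 # shard index i -> number m of redundant nodes
    x: Any,
) -> List[Any]:
    """Steps 1-4: redundant shard execution, per-shard consensus, then reconstruction."""
    verified: Dict[int, Any] = {}
    for i, shard in shards.items():
        # Step 1: each of the m nodes assigned to shard i computes y_{i,j}.
        outputs = [shard(x) for _ in range(replicas[i])]  # in practice, m distinct nodes
        # Steps 2-3: collect the redundant outputs and apply the shard-specific consensus.
        verified[i] = shard_consensus(outputs)
    # Step 4: combine the verified shard results; here "⊕" is modelled as concatenation.
    return [verified[i] for i in sorted(verified)]
```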

Selective Random Verification: To optimize the CDV process with model sharding, Nesa employs a strategic, probabilistic method for selecting nodes for verification, termed Selective Random Verification (SRV). Instead of exhaustively verifying the outputs from all sharded model parts across the network, SRV focuses on a randomly chosen subset of nodes. This significantly reduces the computational overhead and network traffic involved in the verification process, making it more scalable and efficient and particularly well suited to large-scale deployments. The SRV process can be formalized as follows:

  1. At each inference task, a verification subset $V \subset \{1, 2, \ldots, k\}$ is randomly selected, where $k$ is the total number of nodes (or model shards) and $V$ contains the indices of nodes chosen for verification. Note that this selection can be performed by our VRF (Verifiable Random Function) Module.

  2. Only the outputs $y_i$ from nodes $i \in V$ undergo the verification process:

    $y_i = \mathcal{M}_i(x; \theta_i), \quad \forall i \in V$
  3. A consensus mechanism $\mathcal{C}$ evaluates the partial outputs from the selected subset to ascertain the model's integrity:

    $y_{\text{con}} = \mathcal{C}(\{y_i \mid i \in V\})$
  4. If the consensus outcome $y_{\text{con}}$ aligns with expected results, the integrity of the model execution within the sampled subset is confirmed. Inconsistent results trigger a more extensive investigation, potentially leading to a wider verification scope (a sketch follows this list).
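
The sketch below illustrates SRV under two assumptions made purely for illustration: the verification subset $V$ is drawn with an ordinary pseudorandom sample standing in for the VRF Module, and the consensus step is approximated by directly re-executing the sampled shards and comparing against the outputs the executing nodes reported.

```python
import random
from typing import Any, Callable, Dict, List, Optional

def select_verification_subset(k: int, fraction: float = 0.2,
                               seed: Optional[int] = None) -> List[int]:
    """Choose V ⊂ {1, ..., k}. A deployment would derive this from the VRF Module;
    random.sample is only a stand-in for illustration."""
    rng = random.Random(seed)
    size = max(1, int(k * fraction))
    return sorted(rng.sample(range(1, k + 1), size))

def srv_check(
    shards: Dict[int, Callable[[Any], Any]],  # shard index i -> shard function M_i
    reported: Dict[int, Any],                 # outputs y_i reported by the executing nodes
    x: Any,
    fraction: float = 0.2,
) -> bool:
    """Re-execute only the shards in V and compare against the reported outputs."""
    V = select_verification_subset(len(shards), fraction)
    for i in V:
        if repr(shards[i](x)) != repr(reported[i]):
            # Inconsistency detected: in the full protocol this would widen
            # the verification scope rather than simply returning False.
            return False
    return True
```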

Consensus-based Distribution Verification: Building upon traditional consensus mechanisms, the CDV strategy introduces an advanced layer of verification by assessing the statistical distribution of model outputs across a decentralized network. This approach is ideally suited for scenarios where the model is not monolithic but is instead distributed as shards across multiple nodes.

CDV is based on the understanding that while individual outputs from model shards might exhibit slight variability due to the stochastic nature of ML models and the complexity of input data, the collective output distribution should remain consistent. This consistency holds provided that the model and its inputs remain unchanged. By evaluating the aggregated statistical characteristics of these outputs, CDV furnishes a sophisticated and robust framework for affirming the uniformity and integrity of the model's behavior, thereby enhancing security and privacy without direct comparison of individual inference results.

Detailed Implementation of CDV: Implementing CDV within Nesa's ecosystem involves a multi-faceted approach:

  • Sharded Execution and Output Synthesis: In the initial phase, each node, housing a shard $\mathcal{M}_i$ of the overarching model $\mathcal{M}$, executes its segment on a shared input $x$, generating partial outputs $\{y_1, y_2, \ldots, y_k\}$. These outputs are synthesized to construct a comprehensive output profile that reflects the combined inference result of the entire model.

  • Advanced Statistical Aggregation: Following output synthesis, the system embarks on advanced statistical analysis, deriving metrics such as the mean $\mu$, standard deviation $\sigma$, and potentially higher-order moments. This stage may also incorporate non-parametric statistics to capture the full essence of the output distribution, offering a nuanced view of the model's performance landscape.

  • Rigorous Distribution Comparison: Utilizing sophisticated statistical methodologies, the derived metrics are juxtaposed with predefined benchmarks or dynamically established norms. Techniques such as hypothesis testing, divergence measures, or similarity indices evaluate the congruence between the observed and expected output distributions, facilitating an objective assessment of model integrity.

  • Enhanced Consensus Mechanism with Adaptive Thresholding: The core of CDV lies in its consensus mechanism, where nodes collectively determine whether the observed distribution is acceptably aligned with the benchmarks. Adaptive thresholding plays a crucial role here, dynamically adjusting sensitivity based on historical data and operational context to pinpoint deviations that truly signify integrity breaches (see the sketch after this list).
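
A minimal sketch of this statistical pipeline, under assumptions made only for illustration: the output profile is a flat vector of per-shard scores (e.g., token-level log-probabilities), the distribution comparison is a two-sample Kolmogorov-Smirnov test against a reference profile, and the "adaptive" threshold is a simple rule keyed to the recent history of p-values.

```python
import numpy as np
from scipy import stats

def output_profile(partial_outputs: list) -> np.ndarray:
    """Output synthesis: combine per-shard score vectors into one profile
    (plain concatenation here, an illustrative assumption)."""
    return np.concatenate([np.asarray(y, dtype=float) for y in partial_outputs])

def distribution_metrics(profile: np.ndarray) -> dict:
    """Statistical aggregation: mean, standard deviation, and higher-order moments."""
    return {
        "mean": float(np.mean(profile)),
        "std": float(np.std(profile)),
        "skew": float(stats.skew(profile)),
        "kurtosis": float(stats.kurtosis(profile)),
    }

def distribution_check(profile: np.ndarray, reference: np.ndarray,
                       history: list, base_alpha: float = 0.01) -> bool:
    """Distribution comparison with a crude adaptive threshold: the significance
    level tightens when recent runs have been marginal."""
    result = stats.ks_2samp(profile, reference)
    alpha = base_alpha * (0.5 if history and float(np.mean(history)) < 0.05 else 1.0)
    history.append(result.pvalue)
    return result.pvalue > alpha  # True: consistent with the benchmark distribution
```

In a deployment, `reference` would come from the predefined benchmarks or dynamically established norms described above, and the acceptance decision would itself be subject to the nodes' consensus rather than a single local check.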

Through its implementation, CDV offers a powerful solution to the challenges of verifying the integrity of distributed LLMs in Nesa's decentralized framework. By focusing on distributional characteristics rather than discrete output values, CDV not only elevates the verification process but also aligns with the goals of enhancing model security and maintaining stringent privacy standards.
