Visioning: Homomorphic Encryption

Homomorphic Encryption (HE) stands at the frontier of secure data processing, offering the strong capability to perform computations on encrypted data, denoted Enc(data), without decryption. This form of encryption defines operations ⊕ and ⊗ that correspond to addition and multiplication on encrypted data, ensuring that Enc(a) ⊕ Enc(b) = Enc(a + b) and Enc(a) ⊗ Enc(b) = Enc(a × b) for any data a and b. The appeal of HE lies in its ability to keep data in a secure, encrypted state, Enc(data), throughout the computation process, thus safeguarding the confidentiality of sensitive information.
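HE comes in flavors: partially homomorphic schemes support only one of the two operations above, while fully homomorphic encryption (FHE) supports both. The additive half, Enc(a) ⊕ Enc(b) = Enc(a + b), can be sketched with a toy Paillier cryptosystem. This is an illustration only, not part of Nesa's stack, and the fixed primes are far too small for real use:

```python
import math
import random

def keygen(p=61, q=53):
    # Toy Paillier keypair from tiny fixed primes; real deployments
    # use randomly generated primes of 1024+ bits each.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def he_add(pk, c1, c2):
    # Enc(a) ⊕ Enc(b): multiplying Paillier ciphertexts adds plaintexts.
    n, _ = pk
    return (c1 * c2) % (n * n)
```

With this scheme, `decrypt(pk, sk, he_add(pk, encrypt(pk, a), encrypt(pk, b)))` recovers `a + b` without either ciphertext ever being decrypted individually. Paillier provides only ⊕; supporting ⊗ between two ciphertexts as well requires a full FHE scheme, which is where most of the computational cost discussed below comes from.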

Applying HE within the context of LLMs in decentralized inference systems paints a visionary landscape. In such a setting, HE could preserve the privacy and security of data inputs and outputs, represented as Enc(x) for inputs and Enc(y) for outputs, allowing LLMs to perform complex computations without revealing the underlying user data. This capability is particularly crucial in decentralized systems, where data privacy and security are paramount, yet the collaborative nature of training and utilizing LLMs necessitates secure, distributed computation.

However, integrating HE with LLMs for decentralized inference remains embryonic and challenging. The foremost obstacle is the computational overhead introduced by HE operations, which, given the enormous size and computational requirements of LLMs, renders real-time or near-real-time inference infeasible. Operations on encrypted data, especially given the iterative and complex nature of LLMs, encapsulated by functions f_θ(Enc(x)) = Enc(y), demand significant computational resources. Moreover, adapting HE to the dynamic, iterative processes inherent in LLM training and inference within decentralized frameworks adds further complexity.
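To see why f_θ(Enc(x)) = Enc(y) is so costly, consider homomorphically evaluating even a single linear layer f(x) = w·x + b. Under an additively homomorphic scheme such as Paillier, ciphertext products add plaintexts and raising a ciphertext to a plaintext weight scales the plaintext, so a linear layer is feasible; but nonlinear activations and the billions of such operations inside an LLM have no comparably cheap encrypted analogue. A minimal sketch, assuming toy parameters and nonnegative integer weights for simplicity (not Nesa's implementation):

```python
import math
import random

# Toy Paillier setup (additively homomorphic); tiny fixed primes,
# for illustration only.
P, Q = 61, 53
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)

def enc(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def dec(c):
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def linear_layer(enc_x, w, b):
    # Homomorphically evaluate f(x) = w.x + b on encrypted inputs:
    # pow(c, w_i) scales the hidden plaintext by w_i, and multiplying
    # ciphertexts adds the scaled terms together.
    acc = enc(b)
    for c, wi in zip(enc_x, w):
        acc = (acc * pow(c, wi, N2)) % N2
    return acc

x = [3, 1, 4]        # client's private input, sent only as ciphertexts
w, b = [2, 5, 1], 7  # model parameters held by the inference node
enc_y = linear_layer([enc(v) for v in x], w, b)
print(dec(enc_y))    # 2*3 + 5*1 + 1*4 + 7 = 22
```

Each encrypted multiply-accumulate here is a modular exponentiation, orders of magnitude slower than a plaintext multiply-add, and this sketch sidesteps negative weights, quantization, and every nonlinearity; scaling this to a transformer's billions of parameters illustrates the overhead described above.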

Despite these obstacles, the aspiration to leverage HE for ensuring security and privacy in LLM applications within decentralized systems remains a potent driver of innovation. Acknowledging the current practical challenges of implementing HE for LLMs in decentralized settings, the discussion turns to pragmatic methodologies for secure data processing. This leads to the adoption of Split Learning as a tangible approach to achieving data privacy and security in decentralized inference systems.
