Users: Why Do We Need Private Inference?

Private inference is crucial for users who handle sensitive data that must not be exposed during the model inference process, even to the models themselves. This need is prevalent in several key industries:

  1. Finance: Financial institutions manage highly confidential data, such as personal financial records, investment details, and proprietary trading algorithms. These entities require private inference to ensure that data remains secure while using AI for fraud detection, risk assessment, and personalized banking services.

  2. Healthcare: In healthcare, patient data is both sensitive and heavily regulated. Healthcare providers and researchers use private inference to analyze medical records, diagnostic images, and genetic information to provide personalized treatment plans and conduct medical research without compromising patient confidentiality.

  3. Legal Sector: Law firms and legal departments use private inference to handle sensitive case documents, client records, and legal precedents. AI can help predict case outcomes, perform document review, or automate legal research while ensuring that the data does not leak or become accessible outside authorized channels.

  4. Public Sector: Government agencies often handle sensitive information related to national security, public records, and personal data of citizens. Private inference is employed to utilize AI for public safety applications, policy making, and service delivery without exposing the underlying data.

Providing private inference can be challenging for existing centralized platforms like ChatGPT due to several inherent limitations:

  1. Data Centralization: Centralized systems often collect and process data on central servers, which can create potential points of vulnerability where data might be exposed to unauthorized access or breaches.

  2. Transparency and Trust: Users must trust that the platform will handle their data securely and according to privacy agreements. In centralized models, it's difficult for users to verify that data handling protocols are followed without the ability to inspect the infrastructure or data flows.

  3. Scalability of Privacy: As the user base grows, maintaining strict data privacy at scale becomes more complex. Ensuring consistent enforcement of privacy practices across large volumes of data and inference requests is a substantial challenge.

  4. Regulatory Compliance: Different regions have varying regulations on data privacy (like GDPR in Europe or HIPAA in the U.S.), making it difficult for centralized platforms to uniformly apply the highest standard of data privacy across all jurisdictions.

To fill this gap, we designed Nesa, the first platform to provide private inference on a decentralized system.
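To make the idea of private inference concrete, the sketch below is a purely illustrative example and not the Nesa API or protocol: it uses the third-party python-paillier (`phe`) library with a made-up linear model and invented feature values to show the core property that an untrusted node can compute on encrypted inputs without ever seeing the plaintext. Nesa's actual privacy stack (TEEs, model verification, and encryption-based methods) is described in the Security and Privacy section of these docs.

```python
# Conceptual illustration only (not the Nesa API or protocol): a client keeps
# its data private by encrypting it with the Paillier additively homomorphic
# scheme (python-paillier, `pip install phe`) before sending it to an
# untrusted inference node. The node scores a public linear model directly on
# the ciphertexts and never sees the plaintext features.
from phe import paillier

# Client side: generate keys and encrypt the sensitive input features.
public_key, private_key = paillier.generate_paillier_keypair()
features = [72.0, 1.0, 310.5]                 # e.g. confidential financial figures (made up)
encrypted_features = [public_key.encrypt(x) for x in features]

# Node side: public model weights; all computation happens on ciphertexts.
weights = [0.4, -1.2, 0.003]
encrypted_score = sum(w * ex for w, ex in zip(weights, encrypted_features))

# Client side: only the private-key holder can read the result.
print("model score:", private_key.decrypt(encrypted_score))
```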
