Decentralized Inference


Nesa's decentralized inference process is the cornerstone of our autonomous AI oracle network, enabling the first trustless environment where AI computations are performed transparently and reliably on-chain.

This section outlines the design of Nesa's decentralized inference framework, which is composed of several core components: users who submit inference requests, chain contracts responsible for the verification and aggregation of results, and nodes that process these requests.

This framework leverages a two-phase, commit-reveal transaction structure to safeguard against dishonest behavior and free-riding. Nodes first commit to their computed results and only later reveal them, so no node can copy another's answer. This ensures that nodes are incentivized to perform their computations honestly and that users can trust the integrity of the inference results.
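To make the commit-reveal paradigm concrete, here is a minimal illustrative sketch (not Nesa's actual contract code; the function names and hashing scheme are assumptions). A node first publishes a hash binding it to its result, then reveals the result and nonce for verification:

```python
import hashlib
import secrets

def commit(result: bytes) -> tuple[str, bytes]:
    """Phase 1: publish a hash of the result plus a secret nonce.

    The nonce prevents other nodes from brute-forcing the committed
    result from its hash before the reveal phase.
    """
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + result).hexdigest()
    return digest, nonce

def reveal_valid(commitment: str, result: bytes, nonce: bytes) -> bool:
    """Phase 2: check that the revealed result matches the commitment."""
    return hashlib.sha256(nonce + result).hexdigest() == commitment

# A node commits to its inference output before seeing anyone else's:
c, n = commit(b"inference-output")
assert reveal_valid(c, b"inference-output", n)       # honest reveal passes
assert not reveal_valid(c, b"tampered-output", n)    # altered result fails
```

Because the commitment is published before any result is revealed, a free-riding node cannot wait for an honest node's answer and simply echo it.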

The system maintains a decentralized approach by employing smart contracts for the key processes of verification and aggregation, allowing for a scalable network that harnesses the collective computational power of its participants.
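One simple way such on-chain aggregation could work, sketched here purely for illustration (the majority-vote rule is an assumption, not Nesa's specified mechanism), is to accept a revealed result only when a majority of responding nodes agree on it:

```python
from collections import Counter

def aggregate(revealed_results: list[bytes]) -> bytes:
    """Accept the result reported by a strict majority of nodes.

    Raises ValueError when no result has majority support, in which
    case the inference request would need to be retried or escalated.
    """
    winner, votes = Counter(revealed_results).most_common(1)[0]
    if votes * 2 <= len(revealed_results):
        raise ValueError("no majority: result cannot be trusted")
    return winner

# Two of three nodes agree, so their answer is accepted:
assert aggregate([b"x", b"x", b"y"]) == b"x"
```

Real deployments typically combine a rule like this with stake-weighted voting and slashing for nodes whose reveals contradict the accepted result.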

  • Model Partitioning and Deep Network Sharding
  • Cache Optimization to Enhance Efficiency
  • BSNS with Parameter-efficient Fine-tuning via Adapters
  • Dynamic Sharding of Arbitrary Neural Networks
  • Enhanced MTPP Slicing of Topological Order
  • Swarm Topology