Introduction to Nesa

The Layer-1 for trusted AI on-chain.


Last updated 11 months ago

Nesa means miracle, in tribute to the golden age of AI that we are living in. Sufficiently advanced technology, like trusted AI, is often indistinguishable from magic.

Nesa is the lightweight Layer-1 executing critical AI inference on queries that require a high degree of privacy, security, and trust using advanced methods on-chain, including zero-knowledge machine learning (ZKML), split learning (SL), and more.

Nesa was created as an alternative to ChatGPT and today's other inference platforms, which are centralized and controlled by major players. These platforms offer no visibility into or accountability for the output of your critical transactions, and no privacy for your data and results. This is a major problem in AI.

Limitations in Centralized Architecture. While commonly used, centralized processing architectures pose significant risks related to data privacy, computational bottlenecks, deterministic output, and single points of failure. The prohibitive cost and scarcity of high-performance computing resources, such as advanced GPUs, prevent mass adoption of the only alternative to centralization: open-source, decentralized AI. These challenges further limit the ability to contribute to developing, training, fine-tuning, and executing AI models, severely hindering the adoption of state-of-the-art (SOTA) AI models by enterprises and developers, as well as by shared research efforts worldwide.

The challenges around security further impede collaborative AI efforts today. Techniques like continuous adaptation and domain adaptation offer partial solutions by enabling model sharing without direct data exchange. However, these methods are susceptible to backdoor attacks and often yield suboptimal model performance due to the limitations of semi-supervised or unsupervised fine-tuning. Collectively, these issues have real-world implications for businesses that require critical inference. In the financial domain, for example, where institutions analyze vast amounts of sensitive transactional data, traditional centralized systems fail to meet strict requirements for data privacy and security, while existing decentralized systems fail to deliver both confidentiality and verifiability.


Nesa's Private, Secure, Decentralized Solution. We have built the first decentralized query marketplace: a large AI model store for Web3, powered by a reward economy for AI developers, queriers, miners, and model reviewers. This is the first global repository for AI that is decentralized and on-chain. Note that our model store goes beyond large language models (LLMs) to include leading vision models (such as vision-language models) and other key modalities. Additionally, we support proprietary small models for businesses and individuals, for high security and privacy.

How inference works on Nesa is simple. Nesa’s Layer-1 trustlessly queries AI models off-chain while keeping their parameters and output secret. Our mining network reaches consensus and then broadcasts these trustworthy proofs on-chain via system oracles.

Nesa introduces the first model-agnostic hybrid sharding approach layered onto a hardware-based and software-based privacy co-optimization protocol. This solution distributes computational loads across multiple nodes in a decentralized inference network. Nesa's distributed inference protocol delivers privacy-preserving data processing while enabling the scalable, auditable execution of AI inference. The figure below highlights our key components.

This new protocol greatly lowers the barrier to entry by accommodating nodes with varying levels of computational capability. This is particularly pertinent given the high hardware requirements of competing systems, which restrict inference participation to entities with access to top-tier GPUs. By allowing even resource-limited nodes to contribute as validators, Nesa's protocol democratizes AI inference and makes network participation accessible to all.
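As a toy illustration of how heterogeneous nodes might share one model, the sketch below splits a network's layers among nodes in proportion to each node's compute budget. The function name and the proportional-split heuristic are assumptions for illustration, not Nesa's actual sharding algorithm.

```python
# Hypothetical capacity-aware sharding sketch (not Nesa's real algorithm):
# each node receives a contiguous block of layers sized in proportion to
# its compute budget, so even low-resource nodes host a small slice.

def assign_shards(num_layers: int, capacities: dict[str, int]) -> dict[str, list[int]]:
    total = sum(capacities.values())
    nodes = list(capacities)
    assignment: dict[str, list[int]] = {}
    start = 0
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:
            count = num_layers - start  # last node absorbs rounding remainder
        else:
            count = min(round(num_layers * capacities[node] / total), num_layers - start)
        assignment[node] = list(range(start, start + count))
        start += count
    return assignment

# A powerful GPU node takes most layers; a laptop and a phone still join.
shards = assign_shards(32, {"gpu-node": 24, "laptop": 6, "phone": 2})
print({node: len(layers) for node, layers in shards.items()})
# → {'gpu-node': 24, 'laptop': 6, 'phone': 2}
```

Even the phone-class node receives a shard, mirroring the accessibility goal described above.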

Innovations of Nesa. This book introduces two innovations distinct to Nesa. We begin with a detailed look at (1) our novel decentralized, distributed training and inference framework powered by model-agnostic sharding, and then zoom into (2) the leading methods in preserving its security and privacy via zero-knowledge machine learning (ZKML), consensus-based verification, and split learning.

In more granularity, this book will detail our decentralized architectural design, Nesa's cryptographic advancements in security and privacy, and the platform's tokenomics structure, as well as the broader implications of our project in reshaping the landscape of decentralized AI applications.

High-level overview of the Nesa network