Nesa's Utility Suite


Nesa extends its system with a hub of external AI tools that facilitate model querying, training, upload, and evolution (Figure 11.2). As the point of final validation for any model inference query, Nesa sits at the very bottom of the stack and can therefore layer an assortment of third-party AI services on top of it.

These services are executed serially, each consigned to a dedicated adapter within the system. The Utility Suite includes provisioned computational power (TPUs and GPUs), nearly all major publicly available open-source models, the major API-based LLMs, an assortment of Information Oracles, DAO tooling, and Relayers.
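The documentation does not publish an adapter interface, so the following is a minimal sketch of what "serially executed services, each consigned to a dedicated adapter" could look like. The names `UtilityAdapter`, `OracleAdapter`, and `run_pipeline` are illustrative assumptions, not part of any Nesa API.

```python
# Hypothetical sketch of the serial adapter pipeline described above.
from dataclasses import dataclass
from typing import Protocol


class UtilityAdapter(Protocol):
    """One third-party utility service, wrapped in a dedicated adapter."""

    name: str

    def run(self, payload: dict) -> dict:
        """Execute the service and return the (possibly enriched) payload."""
        ...


@dataclass
class OracleAdapter:
    """Example adapter: attaches off-chain knowledge to the query payload."""

    name: str = "information-oracle"

    def run(self, payload: dict) -> dict:
        payload["oracle_data"] = {"source": self.name}  # stand-in for a real oracle fetch
        return payload


def run_pipeline(adapters: list[UtilityAdapter], query: dict) -> dict:
    """Execute each adapter serially, threading the payload through in order."""
    for adapter in adapters:
        query = adapter.run(query)
    return query


# Usage: a query passes through each configured utility before reaching Nesa.
enriched = run_pipeline([OracleAdapter()], {"prompt": "classify this sequence"})
```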

Nesa will continue to grow its collection of AI-Web3 partner services and products that can be invoked during training, upload, orchestration, and inference before Nesa executes final query validation.

Some examples of Nesa's future Utility Suite include:

  1. Efficient sharing of computational resources on-chain, such as Render and Akash.

  2. Decentralized AI tooling and infrastructure services, such as Olas and Bittensor.

  3. Information oracles that relay off-chain knowledge, such as Chainlink and Band.

  4. DAO tooling and management systems, such as Gnosis and Aragon.

Figure 11.2: Nesa Utility Suite Architecture. Nesa connects a hub of utility services that facilitate model querying, training, upload, and evolution for AI model developers in the Nesa ecosystem. A utility service is initialized, run, and then securely hands off data to Nesa's Execution Layer to assist in a component of the preparation, orchestration, or computation of the request. Inference is then conducted by Nesa Core, and the results are sent to validator nodes for consensus before rollup to settlement.
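To make the hand-off sequence in the caption concrete, here is a minimal sketch of the request lifecycle it describes: utility output is handed to the Execution Layer, Nesa Core performs inference, validators reach consensus, and the result is rolled up to settlement. Every function name below is a hypothetical placeholder, not a real Nesa interface.

```python
# Hypothetical end-to-end flow matching the figure caption above.

def execution_layer_handoff(prepared: dict) -> dict:
    """Securely hand the utility service's output to Nesa's Execution Layer."""
    return {**prepared, "stage": "execution-layer"}


def nesa_core_infer(request: dict) -> dict:
    """Conduct the model inference inside Nesa Core."""
    return {**request, "result": "<inference output>"}


def validator_consensus(result: dict, validators: list[str]) -> bool:
    """Validator nodes vote on the inference result before settlement."""
    votes = [True for _ in validators]  # stand-in for real per-node verification
    return sum(votes) > len(validators) // 2


def settle(result: dict) -> None:
    """Roll the validated result up to settlement."""
    print("settled:", result["result"])


def handle_query(query: dict, validators: list[str]) -> None:
    prepared = execution_layer_handoff(query)    # utility service hands off data
    result = nesa_core_infer(prepared)           # inference by Nesa Core
    if validator_consensus(result, validators):  # consensus among validator nodes
        settle(result)                           # rollup to settlement


handle_query({"prompt": "classify this sequence"}, ["node-a", "node-b", "node-c"])
```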