Selecting an AI Kernel

Choosing the right AI Kernel is crucial for your task. This guide will help you browse the Gallery and select the kernel best suited to your needs.



The Gallery

The Gallery is the default view of the Nesa website. All AI Kernels supported within the Nesa ecosystem are listed here, grouped by model type, for you to choose from.

To navigate to the Gallery, click the Gallery icon in the left-hand navigation panel.


Model Types

AI Kernels are grouped by model type, allowing users to quickly pinpoint the kernel they are interested in. The number of available model types will grow over time as Nesa continues to add support for more models. A non-exhaustive screen capture is shown below.


AI Kernel Selection

After selecting the desired model type, users can search and sort within the available kernels.

Some kernels carry a top-pick badge: these are highlighted and recommended by the Nesa team based on factors such as overall performance.

[Testnet] Some model details, such as latency, likes, and cost, may not display consistently.

After searching and sorting, users can choose their target kernel for inference simply by clicking on it. Upon selection, they are taken to the Query page, where the chosen model is primed for inference execution.
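
For readers who prefer to see this flow in code, the sketch below mirrors the Gallery steps described above: filter the kernel list by model type, sort by a metric, and pick one. It is a conceptual illustration only; the kernel records and field names (name, model_type, latency_ms) are made-up sample data, not the Nesa platform's actual schema or API.

```python
# Conceptual illustration of the Gallery flow: filter by model type,
# sort by a metric, then select a kernel. The records and field names
# below are made-up sample data, not output from the Nesa platform.

sample_kernels = [
    {"name": "kernel-a", "model_type": "text-generation", "latency_ms": 420},
    {"name": "kernel-b", "model_type": "text-generation", "latency_ms": 310},
    {"name": "kernel-c", "model_type": "image-generation", "latency_ms": 900},
]

def select_kernel(kernels, model_type, sort_key="latency_ms"):
    """Filter kernels by model type, then return the best match by the given metric."""
    candidates = [k for k in kernels if k["model_type"] == model_type]
    if not candidates:
        raise ValueError(f"no kernels available for model type {model_type!r}")
    return min(candidates, key=lambda k: k[sort_key])

chosen = select_kernel(sample_kernels, "text-generation")
print(chosen["name"])  # kernel-b: the lowest-latency text-generation kernel
```

On the website, the same filter, sort, and select sequence happens entirely through the UI.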

Screenshots referenced above: the Gallery icon, a sample of available model types, the search and sort options, and an example AI Kernel.