With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 64

Chutes


ABOUT

What exactly does it do?

The Serverless AI Compute subnet allows users to deploy, run, and scale any AI model in seconds. Whether directly through the platform or via a simple API, this subnet offers a seamless, fast, and efficient solution for AI model management.

 

PURPOSE

What exactly is the 'product/build'?

Images

Images are Docker images on which all applications (called “chutes”) run within the platform. These images must meet specific requirements, including having a CUDA installation (preferably version 12.2-12.6) and Python 3.10+ installed.

 

Chutes

A “chute” is an application running on top of an image. Each chute is essentially a FastAPI application, designed to perform specific tasks.

 

Cords

A “cord” is a single function within the chute, analogous to a route and method in FastAPI. Each cord performs a specific action or process within the chute.
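The cord idea can be sketched in plain Python as a decorator that registers a function under a route-like path, the way FastAPI maps a function to a route and method. The class and method names below are illustrative stand-ins, not the real Chutes SDK.

```python
# Hypothetical sketch of a "cord": a decorator that registers a function
# as a named endpoint on a chute. Names are illustrative, not the Chutes SDK.

class Chute:
    """Minimal stand-in for a chute: a registry of cord functions."""

    def __init__(self, name: str):
        self.name = name
        self.cords = {}

    def cord(self, path: str):
        """Register the decorated function under a route-like path."""
        def decorator(fn):
            self.cords[path] = fn
            return fn
        return decorator

    def call(self, path: str, *args, **kwargs):
        """Dispatch a request to the cord registered at `path`."""
        return self.cords[path](*args, **kwargs)


chute = Chute("echo-demo")

@chute.cord("/echo")
def echo(text: str) -> str:
    return f"echo: {text}"

print(chute.call("/echo", "hello"))  # → echo: hello
```

Each cord stays an ordinary function, so it can also be called directly, which is what makes the FastAPI analogy apt.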

 

Graval

Graval is a middleware library that ensures the GPUs being used are authentic and meet certain performance standards. It runs within the chute to validate hardware claims and perform additional checks, such as VRAM capacity and device information verification.
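The flavor of check such middleware performs can be illustrated with a small sketch: compare a miner's claimed device information against what validation requires. The field names and thresholds below are assumptions for the example, not Graval's actual API.

```python
# Illustrative sketch of a hardware-claim check like those Graval performs:
# compare claimed device info against validation requirements.
# Field names and thresholds are assumptions, not Graval's real interface.

def validate_gpu_claim(claimed: dict, required: dict) -> list:
    """Return a list of human-readable problems; an empty list means the claim passes."""
    problems = []
    vram = claimed.get("vram_gb", 0)
    if vram < required["min_vram_gb"]:
        problems.append(
            f"insufficient VRAM: {vram} GB < required {required['min_vram_gb']} GB"
        )
    if claimed.get("device_name") not in required["allowed_devices"]:
        problems.append(f"unexpected device: {claimed.get('device_name')!r}")
    return problems


required = {"min_vram_gb": 40, "allowed_devices": {"NVIDIA H200", "NVIDIA A6000"}}
print(validate_gpu_claim({"device_name": "NVIDIA H200", "vram_gb": 141}, required))  # → []
```

The real library goes further, cryptographically verifying that claimed hardware actually performed the work, but the pass/fail shape of the check is the same.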

 

Registration

To use the platform, users must register with a Bittensor wallet and hotkey. Registration allows users to create API keys for secure access to the platform and services.

 

Creating API Keys

After registration, users can create API keys with different levels of access. These keys allow control over specific aspects of the platform, such as image management or chute deployment.
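Once created, a key is typically sent as a bearer token on each request. The endpoint URL and key value below are placeholders; consult the platform documentation for the real ones.

```python
# Sketch of authenticating an API request with a scoped key.
# The URL and key are placeholders, not real platform values.
import urllib.request

API_KEY = "cpk_example_key"  # created after registration; scope it narrowly

req = urllib.request.Request(
    "https://example.invalid/chutes/",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # → Bearer cpk_example_key
```

Using separately scoped keys (e.g. one for image management, one for chute deployment) limits the blast radius if a key leaks.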

 

Validators and Subnet Owners

Validators and subnet owners on Bittensor can link their keys to a chute account, granting them free access and developer roles. This process ensures that validators and subnet owners have special privileges within the platform.

 

Developer Role and Deposit

To create and deploy chutes, users must first deposit a refundable amount of TAO, which serves as a developer deposit. This helps prevent abuse on the platform. After seven days, users can request their deposit back.

 

Building an Image

To deploy an application, users must first build an image. The platform provides a base image to start with, including Python 3.12.7 and necessary CUDA packages. Users can customize the image to include their desired dependencies and applications.

 

Deploying a Chute

Once an image is built and ready, users can deploy it as a chute. Deployment allows applications to run in a scalable, distributed environment, with various configurations to ensure optimal performance (e.g., GPU count, VRAM requirements).
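A deployment's resource requirements can be pictured as a small declarative config (GPU count, minimum VRAM per GPU). The field names below are illustrative and may not match the exact Chutes SDK schema.

```python
# Hedged sketch of the resource configuration a chute deployment declares.
# Field names are illustrative, not the exact Chutes SDK schema.
from dataclasses import asdict, dataclass

@dataclass
class NodeSelector:
    gpu_count: int = 1
    min_vram_gb_per_gpu: int = 24

# Request two GPUs with at least 48 GB of VRAM each.
selector = NodeSelector(gpu_count=2, min_vram_gb_per_gpu=48)
print(asdict(selector))  # → {'gpu_count': 2, 'min_vram_gb_per_gpu': 48}
```

Declaring requirements up front lets the platform schedule the chute only onto miners whose verified hardware satisfies them.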

 

Custom Chutes

Users can build custom chutes tailored to their needs. Chutes are flexible and can support various functions, including complex processes and arbitrary web servers. Functions within the chutes are decorated with special annotations (cords) to define their behavior.

 

Local Testing

Before deploying an image or chute, users can test them locally. This process allows users to verify functionality and debug issues in a controlled environment before deployment.
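One simple way to make that local testing easy is to keep the core logic a pure function that can be called directly, without the platform. The toy function below is purely illustrative.

```python
# Sketch of testing cord logic locally before deployment: keep the core
# function pure so it can be exercised directly, outside the platform.

def summarize(text: str, max_words: int = 5) -> str:
    """Toy 'model' logic: truncate to the first few words."""
    words = text.split()
    out = " ".join(words[:max_words])
    return out + ("…" if len(words) > max_words else "")

# Call it directly, as a local test would, before wiring it to a cord.
assert summarize("one two three") == "one two three"
assert summarize("a b c d e f g") == "a b c d e…"
print("local checks passed")
```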

This platform provides users with the tools to build, deploy, and manage AI applications in a decentralized and scalable manner, allowing for flexibility and efficiency.


WHO

Team Info

Namoray is one of the premier developers on Bittensor. Other team members include:

Marcus Graichen – Co-Founder

Akinwunmi Aguda – Frontend Developer

Arpan Tripathi – AI Engineer

Nicholas Bateman – Lead AI Engineer

Christopher Subia-Waud – Lead Machine Learning Engineer

 

FUTURE

Roadmap

Achievements

  • Pioneering the future of decentralized AI with significant technological and infrastructure milestones already in place.
  • Hundreds of H200s and A6000s are live on the platform, processing billions of tokens daily through OpenRouter.
  • Text, image, and audio models for all major LLMs are fully integrated and operational.
  • Leading efforts in decentralized auditing.
  • More lines of code written than they can count (and trust them, they love coding as much as they love their meals!).
  • Fully integrated TAO payment system, with Product-Market Fit (PMF) successfully achieved.

 

Q1 2025 – Core Platform Evolution

  • Focused on major platform upgrades that drive decentralization, enhance agent capabilities, and scale infrastructure.
  • Launch of a comprehensive agent platform with advanced toolsets, integrating subnets for data and inference providers.
  • Decentralized model distribution via IPFS, minimizing reliance on centralized providers and enabling miner-based data mirroring.
  • Support for long-running jobs with bid-based pricing, advanced storage management, and validation systems for fine-tuning and large-scale operations.
  • Custom validation processes for Chutes using cord functions, facilitating adversarial verification across multiple miners.

 

Q2 2025 – Pretraining as a Service

  • Launching LLM pretraining services and targeted enterprise solutions.
  • Onboard enterprises with a streamlined process for custom deployments of standard chutes.
  • Activate revenue generation, flowing directly to miners.
  • Roll out extensive pretraining services for large language models.

 

H2 2025 – Long Jobs

  • Expanding support for long-term operations and advanced model training.
  • Introduce support for extended operations and complex computational tasks.
  • Implement fine-tuning capabilities for LLMs and image models.
  • Strengthen partnerships and finalize enterprise agreements.

 

MEDIA

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.

Subnet 64 Chutes: built for dTAO with optimized efficiency at every level, monetization included through micropayments in TAO, a host of groundbreaking security features for miners, a rewrite of the validation methodology to lower expenses, and a front end so smooth you don't need to know squat to kick the tires on all of Hugging Face.

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.

In this session, the team from Rayon Labs discuss the latest developments and updates within their ecosystem. The conversation delves into the performance and future improvements of the Gradients platform, focusing on its edge in machine learning and AI model training, where the platform consistently outperforms major competitors like Google and AWS in terms of cost, performance, and ease of use. They also explore the Squad platform, a tool designed for building and deploying AI agents, enabling users to create custom agents with little to no coding experience. The discussion touches on innovations like trusted execution environments (TEEs) to enhance security and privacy for AI computations. The team also highlights ongoing efforts to scale up their infrastructure, including integrating fiat payments into Chutes, a serverless AI compute platform, and expanding their use of X integrations. The session provides a deep dive into how these tools and technologies are reshaping decentralized AI and the future of machine learning.

In an earlier Novelty Search session from 2024, Namoray and the Rayon Labs team provide updates on SN19 and introduce two new subnets: Gradients and Chutes.

A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.

Recorded in June 2025, this episode of Hash Rate features an in-depth conversation between host Mark Jeffrey and Jon Durbin of Chutes, one of the leading subnets in the Bittensor ecosystem. They explore Chutes' rapid rise to prominence as a decentralized AI compute platform, capable of handling over 100 billion tokens daily—about one-third of Google's AI load from a year prior. Jon explains how Chutes simplifies AI deployment by abstracting away infrastructure challenges, discusses its revenue model powered by Bittensor emissions, and outlines the platform's long-term vision of delivering uncensored, privacy-protecting AI services globally. The episode also dives into broader Bittensor dynamics, including root emissions, subnet profitability, and the evolving interplay between crypto and AI innovation.

Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want to know the answers to.

Recorded in June 2025, this episode of Revenue Search dives into the business side of Chutes, Bittensor's leading AI subnet. Jon Durbin, the founder of Chutes, explains how the platform enables developers to deploy AI models without the heavy lifting of traditional infrastructure. With a focus on serverless, scalable GPU-based compute, Chutes simplifies AI deployment while offering pricing up to 20 times cheaper than traditional cloud providers. The episode covers Chutes' shift toward revenue generation—including new fiat payments, early monetization metrics, and plans to introduce privacy-guaranteeing Trusted Execution Environments (TEEs). Durbin also discusses future growth through enterprise adoption, agent platforms like Squad, and second-order AI apps—all aiming to make Chutes the "Linux of AI."

NEWS

Announcements
