With the number of new subnets being added, it can be hard to keep information current across all subnets, so some data may be slightly out of date from time to time.
The Serverless AI Compute subnet allows users to deploy, run, and scale any AI model in seconds. Whether directly through the platform or via a simple API, this subnet offers a seamless, fast, and efficient solution for AI model management.
Images
Images are Docker images on which all applications (called “chutes”) run within the platform. These images must meet specific requirements, including having a CUDA installation (preferably version 12.2-12.6) and Python 3.10+ installed.
Chutes
A “chute” is an application running on top of an image. Each chute is essentially a FastAPI application, designed to perform specific tasks.
Cords
A “cord” is a single function within the chute, analogous to a route and method in FastAPI. Each cord performs a specific action or process within the chute.
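To make the image/chute/cord relationship concrete, here is a minimal, self-contained Python sketch of the pattern. This is an illustration only, not the platform's actual SDK: the `Chute` class, `cord` decorator, and `invoke` helper below are assumed names invented for the example.

```python
# Illustrative sketch: a chute is an application, and each cord is a
# registered function exposed at a path, much like a FastAPI route.
# This mimics the concept in plain Python; it is NOT the real chutes SDK.

class Chute:
    """A minimal stand-in for a chute: an app holding named cords."""

    def __init__(self, name: str):
        self.name = name
        self.cords = {}

    def cord(self, path: str):
        """Decorator that registers a function as a cord at `path`."""
        def register(fn):
            self.cords[path] = fn
            return fn
        return register

    def invoke(self, path: str, *args, **kwargs):
        """Call the cord registered at `path`."""
        return self.cords[path](*args, **kwargs)


chute = Chute("echo-demo")

@chute.cord("/echo")
def echo(text: str) -> str:
    return text.upper()

print(chute.invoke("/echo", "hello"))  # prints "HELLO"
```

In the real platform each cord would be served over HTTP as a FastAPI route and method; the decorator-registry pattern above is just the core idea in miniature.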
Graval
Graval is a middleware library that ensures the GPUs being used are authentic and meet certain performance standards. It runs within the chute to validate hardware claims and perform additional checks, such as VRAM capacity and device information verification.
Registration
To use the platform, users must register with a Bittensor wallet and hotkey. Registration allows users to create API keys for secure access to the platform and services.
Creating API Keys
After registration, users can create API keys with different levels of access. These keys allow control over specific aspects of the platform, such as image management or chute deployment.
Validators and Subnet Owners
Validators and subnet owners on Bittensor can link their keys to a chute account, granting them free access and developer roles. This process ensures that validators and subnet owners have special privileges within the platform.
Developer Role and Deposit
To create and deploy chutes, users must first deposit a refundable amount of Tao, which serves as a developer deposit. This helps prevent abuse on the platform. After seven days, users can request their deposit back.
Building an Image
To deploy an application, users must first build an image. The platform provides a base image to start with, including Python 3.12.7 and necessary CUDA packages. Users can customize the image to include their desired dependencies and applications.
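The build step can be pictured as layering install commands on top of the provided base image. The following is a hypothetical sketch of that idea in plain Python, not the platform's real image API; the `ImageBuilder` class and the base-image name are assumptions made for illustration.

```python
# Hypothetical image-builder sketch (NOT the real chutes SDK): shows the
# idea of starting from a base image and layering dependencies onto it.

class ImageBuilder:
    def __init__(self, base: str):
        self.base = base
        self.steps = []

    def run(self, command: str) -> "ImageBuilder":
        """Append a build step, e.g. installing Python packages."""
        self.steps.append(command)
        return self  # chainable

    def render(self) -> str:
        """Render the accumulated steps as a Dockerfile-style recipe."""
        lines = [f"FROM {self.base}"] + [f"RUN {s}" for s in self.steps]
        return "\n".join(lines)


recipe = (
    ImageBuilder("base-cuda12.2-python3.12")  # assumed base-image name
    .run("pip install fastapi uvicorn")
    .run("pip install torch")
    .render()
)
print(recipe)
```

The rendered recipe corresponds to what a Dockerfile would express: the platform's base image first, then the user's own dependency layers on top.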
Deploying a Chute
Once an image is built and ready, users can deploy it as a chute. Deployment allows applications to run in a scalable, distributed environment, with various configurations to ensure optimal performance (e.g., GPU count, VRAM requirements).
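As a rough illustration of the kind of hardware configuration a deployment might specify, here is a sketch in Python. The `NodeSelector` and `Deployment` structures and all field names are assumptions for the example, not the platform's actual schema.

```python
# Illustrative deployment configuration for a chute (a sketch only; the
# real platform's fields and names may differ).
from dataclasses import dataclass


@dataclass
class NodeSelector:
    """Hardware requirements a host must satisfy to run the chute."""
    gpu_count: int = 1
    min_vram_gb_per_gpu: int = 24


@dataclass
class Deployment:
    chute_name: str
    image: str
    node_selector: NodeSelector


deploy = Deployment(
    chute_name="echo-demo",
    image="myuser/echo-demo:0.1",          # hypothetical image tag
    node_selector=NodeSelector(gpu_count=2, min_vram_gb_per_gpu=40),
)
print(deploy.node_selector.gpu_count)  # prints 2
```

Constraints like these let the scheduler match a chute only to machines that can actually serve it at the required performance level.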
Custom Chutes
Users can build custom chutes tailored to their needs. Chutes are flexible and can support various functions, including complex processes and arbitrary web servers. Functions within the chutes are decorated with special annotations (cords) to define their behavior.
Local Testing
Before deploying an image or chute, users can test them locally. This process allows users to verify functionality and debug issues in a controlled environment before deployment.
This platform provides users with the tools to build, deploy, and manage AI applications in a decentralized and scalable manner, allowing for flexibility and efficiency.
Namoray is one of the premier developers on Bittensor. Other team members include:
Marcus Graichen – Co-Founder
Akinwunmi Aguda – Frontend Developer
Arpan Tripathi – AI Engineer
Nicholas Bateman – Lead AI Engineer
Christopher Subia-Waud – Lead Machine Learning Engineer
Achievements
Q1 2025 – Core Platform Evolution
Q2 2025 – Pretraining as a Service
H2 2025 – Long Jobs
Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.
Subnet 64, Chutes: built for dTAO with optimized efficiency at every level, monetization included through micropayments in TAO, a host of groundbreaking security features for miners, a rewrite of the validation methodology to lower expenses, and a front end so smooth you don’t need to know squat to kick the tires on all of Hugging Face.
A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.
In this session, the team from Rayon Labs discusses the latest developments and updates within their ecosystem. The conversation covers the performance and future improvements of the Gradients platform, focusing on its edge in machine learning and AI model training, where it consistently outperforms major competitors like Google and AWS on cost, performance, and ease of use. They also explore Squad, a platform for building and deploying AI agents that lets users create custom agents with little to no coding experience. The discussion touches on innovations like Trusted Execution Environments (TEEs) to enhance security and privacy for AI computations. The team also highlights ongoing efforts to scale up their infrastructure, including integrating fiat payments into Chutes, a serverless AI compute platform, and expanding their use of X integrations. The session provides a deep dive into how these tools and technologies are reshaping decentralized AI and the future of machine learning.
In an earlier Novelty Search session from 2024, Namoray and the Rayon Labs team provide updates on SN19 and introduce two new subnets: Gradients and Chutes.
A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.
Recorded in June 2025, this episode of Hash Rate features an in-depth conversation between host Mark Jeffrey and Jon Durbin of Chutes, one of the leading subnets on Bittensor. They explore Chutes' rapid rise to prominence as a decentralized AI compute platform, capable of handling over 100 billion tokens daily (about one-third of Google's AI load from a year prior). Jon explains how Chutes simplifies AI deployment by abstracting away infrastructure challenges, discusses its revenue model powered by Bittensor emissions, and outlines the platform's long-term vision of delivering uncensored, privacy-protecting AI services globally. The episode also dives into broader Bittensor dynamics, including root emissions, subnet profitability, and the evolving interplay between crypto and AI innovation.
Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want to know the answers to.
Recorded in June 2025, this episode of Revenue Search dives into the business side of Chutes, Bittensor's leading AI subnet. Jon Durbin, the founder of Chutes, explains how the platform enables developers to deploy AI models without the heavy lifting of traditional infrastructure. With a focus on serverless, scalable GPU-based compute, Chutes simplifies AI deployment while offering pricing up to 20 times cheaper than traditional cloud providers. The episode covers Chutes' shift toward revenue generation, including new fiat payments, early monetization metrics, and plans to introduce privacy-guaranteeing Trusted Execution Environments (TEEs). Durbin also discusses future growth through enterprise adoption, agent platforms like Squad, and second-order AI apps, all aiming to make Chutes the "Linux of AI."
Complete trust in gradients and Bittensor will be achieved
The only ml platform where enterprise knows exactly how we train, with complete confidence it’s the best in the world
Combined with private training runs and we have the complete package
The best AutoML scripts in the world are about to go open source 👀
@gradients_ai 5.0 aims to deliver:
🔓 Open-source tournament winners every 2 weeks
🏢 Enterprise transparency that actually builds trust
🚀 Continued world-leading performance through competitive…
Long awaited @gradients_ai 5.0 detailed announcement Friday, July 4th.
This has been cooking for a couple of months now and is our biggest upgrade yet - by a long way.
Mass commercialization at its heart, hyped to get this out 🤙
Our experience with Nineteen has so far been very positive.
The Nuance subnet uses Nineteen's API to classify whether posts are nuanced or not, and to gauge the sentiment of responses.
The integration was easy since they made it so that validators can sign message with their…
NineteenAI | Decentralised Inference
Inference, Decentralised.
Nineteen.ai
Why focus on training one model when you can build incentives to train them all and consistently best in class?
#AutoML
@gradients_ai
Have I mentioned lately how much I love bittensor? Sure, we have some warts, but it can't be any more clear to me that decentralized, permissionless access to intelligence, compute, investment, etc., is the only way to a sustainable future.
Keep ahead of the Bittensor exponential development curve…
Subnet Alpha is an informational platform for Bittensor Subnets.
This site is not affiliated with the Opentensor Foundation or TaoStats.
The content provided on this website is for informational purposes only. We make no guarantees regarding the accuracy or currency of the information at any given time.
Subnet Alpha is created and maintained by The Realistic Trader. If you have any suggestions or encounter any issues, please contact us at [email protected].
Copyright 2024