Subnet 64

Chutes

Dashboard metrics: Emissions · Recycled · Recycled (24h) · Registration Cost · Active Validators · Active Miners · Active Dual Miners/Validators

ABOUT

What exactly does it do?

The Serverless AI Compute subnet allows users to deploy, run, and scale any AI model in seconds. Whether accessed directly through the platform or via a simple API, the subnet offers a seamless, fast, and efficient solution for AI model management.

 

PURPOSE

What exactly is the 'product/build'?

Images

Images are Docker images on which all applications (called “chutes”) run within the platform. These images must meet specific requirements, including having a CUDA installation (preferably version 12.2-12.6) and Python 3.10+ installed.
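
As a rough sanity check, those requirements can be probed from inside a candidate image with standard Python and PyTorch introspection; the script below is purely illustrative and is not part of the platform's tooling.

```python
# check_image.py: illustrative sanity check, not part of the platform's tooling.
# Verifies the stated image requirements: Python 3.10+ and a CUDA 12.x toolkit.
import sys

import torch  # assumes PyTorch is installed in the image being checked


def main() -> None:
    if sys.version_info < (3, 10):
        raise SystemExit(f"Python 3.10+ required, found {sys.version.split()[0]}")

    cuda_version = torch.version.cuda  # e.g. "12.2"; None for CPU-only builds
    if cuda_version is None:
        raise SystemExit("PyTorch was built without CUDA support")

    major, minor = (int(part) for part in cuda_version.split(".")[:2])
    if not (major == 12 and 2 <= minor <= 6):
        print(f"warning: CUDA {cuda_version} found; 12.2-12.6 is preferred")

    print(f"OK: Python {sys.version.split()[0]}, CUDA {cuda_version}")


if __name__ == "__main__":
    main()
```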

 

Chutes

A “chute” is an application running on top of an image. Each chute is essentially a FastAPI application, designed to perform specific tasks.

 

Cords

A “cord” is a single function within the chute, analogous to a route and method in FastAPI. Each cord performs a specific action or process within the chute.
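
Because a chute is essentially a FastAPI application and a cord maps to a route and method, a plain FastAPI app is a useful mental model. The sketch below is illustrative only and does not use the platform's SDK.

```python
# Plain FastAPI analogy: the app plays the role of a chute, and each decorated
# function plays the role of a cord (one route + one method). Illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="echo-chute")  # the "chute"


class EchoRequest(BaseModel):
    text: str


@app.post("/echo")  # the "cord": one function bound to a route and method
async def echo(req: EchoRequest) -> dict:
    # A real chute would typically load a model once and run inference here.
    return {"echo": req.text}
```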

 

Graval

Graval is a middleware library that ensures the GPUs being used are authentic and meet certain performance standards. It runs within the chute to validate hardware claims and perform additional checks, such as VRAM capacity and device information verification.
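
The kinds of hardware claims involved can be illustrated with ordinary PyTorch device introspection; the snippet below shows the sort of properties (device name, VRAM capacity) such verification examines, and is not Graval's actual API.

```python
# Illustrative only: the sort of device facts a GPU-verification layer inspects.
# This uses plain PyTorch introspection and is not Graval's API.
import torch


def describe_gpus() -> list[dict]:
    gpus = []
    for index in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(index)
        gpus.append(
            {
                "index": index,
                "name": props.name,                       # claimed device model
                "vram_gb": props.total_memory / 1024**3,  # VRAM capacity claim
                "multiprocessors": props.multi_processor_count,
            }
        )
    return gpus


if __name__ == "__main__":
    for gpu in describe_gpus():
        print(gpu)
```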

 

Registration

To use the platform, users must register with a Bittensor wallet and hotkey. Registration allows users to create API keys for secure access to the platform and services.

 

Creating API Keys

After registration, users can create API keys with different levels of access. These keys allow control over specific aspects of the platform, such as image management or chute deployment.
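
Once issued, a key is used like any bearer credential when calling the platform's API; in the sketch below the endpoint URL, header scheme, and environment variable name are illustrative assumptions rather than documented values.

```python
# Illustrative request using an API key as a bearer credential.
# The URL, header scheme, and env var name are placeholders, not documented values.
import os

import requests

API_KEY = os.environ["CHUTES_API_KEY"]  # assumed environment variable name

response = requests.post(
    "https://example.invalid/chutes/my-chute/echo",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "hello"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```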

 

Validators and Subnet Owners

Validators and subnet owners on Bittensor can link their keys to a Chutes account, granting them free access and developer roles. This process ensures that validators and subnet owners have special privileges within the platform.

 

Developer Role and Deposit

To create and deploy chutes, users must first deposit a refundable amount of Tao, which serves as a developer deposit. This helps prevent abuse on the platform. After seven days, users can request their deposit back.

 

Building an Image

To deploy an application, users must first build an image. The platform provides a base image to start with, including Python 3.12.7 and necessary CUDA packages. Users can customize the image to include their desired dependencies and applications.
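
A build definition might look roughly like the sketch below, assuming the SDK exposes an Image builder with chainable steps; the import path, method names, and base-image tag are assumptions rather than the library's documented API.

```python
# Hypothetical image definition: the import path, method names, and base tag
# are assumptions about the SDK's shape, not its documented API.
from chutes.image import Image  # assumed import path

image = (
    Image(username="myuser", name="my-llm-image", tag="0.0.1")
    .from_base("parachutes/base-python:3.12.7")  # assumed platform base image
    .run_command("pip install --no-cache-dir vllm transformers")
)
```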

 

Deploying a Chute

Once an image is built and ready, users can deploy it as a chute. Deployment allows applications to run in a scalable, distributed environment, with various configurations to ensure optimal performance (e.g., GPU count, VRAM requirements).
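
Hardware configuration such as GPU count and minimum VRAM is attached to the chute definition itself. The sketch below continues from the image sketch above and assumes classes named Chute and NodeSelector plus a cord decorator; every name and parameter should be read as an assumption about the SDK rather than its exact interface.

```python
# Hypothetical chute definition with hardware requirements attached.
# Class names, decorator arguments, and parameters are assumptions, not the
# SDK's verbatim API.
from chutes.chute import Chute, NodeSelector  # assumed import path

chute = Chute(
    username="myuser",
    name="my-llm-chute",
    image=image,  # the image built in the previous sketch
    node_selector=NodeSelector(
        gpu_count=1,             # how many GPUs each instance needs
        min_vram_gb_per_gpu=24,  # VRAM floor per GPU
    ),
)


@chute.cord(path="/generate", method="POST")  # one cord = one route + method
async def generate(prompt: str) -> dict:
    # A real chute would run model inference here.
    return {"completion": f"(model output for: {prompt})"}
```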

 

Custom Chutes

Users can build custom chutes tailored to their needs. Chutes are flexible and can support various functions, including complex processes and arbitrary web servers. Functions within the chutes are decorated with special annotations (cords) to define their behavior.

 

Local Testing

Before deploying an image or chute, users can test them locally. This process allows users to verify functionality and debug issues in a controlled environment before deployment.
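
For the FastAPI mental model above, a local test can be as simple as serving the app with uvicorn and calling it; whether the SDK ships its own local-run helper is not covered here, so the snippet below is only a generic pattern.

```python
# Generic local test of the FastAPI-analogy app defined earlier (saved as main.py).
# Start the server with:  uvicorn main:app --port 8000
# Then exercise the endpoint:
import requests

resp = requests.post("http://127.0.0.1:8000/echo", json={"text": "ping"}, timeout=10)
assert resp.status_code == 200
print(resp.json())  # expected: {"echo": "ping"}
```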

This platform provides users with the tools to build, deploy, and manage AI applications in a decentralized and scalable manner, allowing for flexibility and efficiency.


WHO

Team Info

Namoray – Chief Incentive Mechanism Officer

FUTURE

Roadmap

MEDIA

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews.

Subnet 64, Chutes, is built for dtao with optimized efficiency on every level: monetization through micropayments in TAO, a host of groundbreaking security features for miners, a rewrite of the validation methodology to lower expenses, and a front end so smooth you don’t need to know squat to kick the tires on all of Huggingface.

NEWS

Announcements

MORE INFO

Useful Links