With the number of new subnets being added, it can be hard to keep information up to date across all of them, so some data may be slightly out of date from time to time.

Subnet 56

Gradients

Live statistics (current values shown on the site): Emissions, Recycled, Recycled (24h), Registration Cost, Active Validators, Active Miners, Active Dual Miners/Validators

ABOUT

What exactly does it do?

With the Gradients subnet, they’ve created a platform that makes AI training accessible to everyone, leveraging the power of the Bittensor network. With just a few clicks, anyone, regardless of their AI training knowledge, can train models on Bittensor. Their platform offers a wide variety of models and datasets, allowing users to easily select and train their AI models, either through a simple user interface or programmatically via their API.

They provide intelligent AI tools that automatically optimise your training process, ensuring the best results with minimal effort. Users can monitor their training progress in great detail through platforms like Weights & Biases and, once ready, deploy their models on Hugging Face. With the ability to fine-tune models using custom data on demand, they’ve made it easier than ever to build, train, and deploy AI models, streamlining the entire process for developers and innovators.
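
To illustrate the programmatic route, here is a minimal sketch of what submitting a training job over an HTTP API could look like. The base URL, endpoint, field names, and authentication header are hypothetical placeholders invented for illustration, not Gradients’ confirmed API; consult their documentation for the actual interface.

import requests

API_BASE = "https://api.gradients.example"  # hypothetical base URL, for illustration only
API_KEY = "your-api-key"  # hypothetical credential

# Submit a fine-tuning job: choose a base model and a dataset, and let the
# platform's AI map the dataset columns automatically (field names are illustrative).
response = requests.post(
    f"{API_BASE}/v1/training-jobs",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "base_model": "meta-llama/Llama-3.1-8B",  # any small- to medium-sized Hugging Face model
        "dataset": "my-org/my-dataset",
        "column_mapping": "auto",  # or supply explicit input/output columns
    },
    timeout=30,
)
response.raise_for_status()
job = response.json()
print(f"Submitted job {job['id']}, status: {job['status']}")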

PURPOSE

What exactly is the 'product/build'?

They’ve built a decentralised platform that allows anyone in the world to train AI models on Bittensor, the largest decentralised AI network. With just a few clicks, users can begin training their models, even without any prior AI knowledge. The process is streamlined with a wide selection of models and datasets, enabling users to quickly select the best options for their tasks. Their intelligent AI automatically helps with column mapping for optimal results, or users can make their own selections.

Training can be initiated either via an easy-to-use interface or programmatically using their API. Throughout the process, users can monitor their model’s progress in detail, with advanced tools like Weights & Biases providing deeper insights. Once the model is ready, users can deploy and use it via platforms like Hugging Face. Additionally, the platform allows users to fine-tune models using custom datasets, all on-demand. This makes training and deploying AI models simpler, more efficient, and accessible to everyone.
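
As a companion to the submission sketch above, the following shows how monitoring and deployment might look programmatically: poll the job until it completes, then pull the finished model from Hugging Face. As before, the endpoint and response fields are hypothetical placeholders, not the platform’s documented API.

import time

import requests

API_BASE = "https://api.gradients.example"  # hypothetical base URL, for illustration only
HEADERS = {"Authorization": "Bearer your-api-key"}  # hypothetical credential

def wait_for_job(job_id: str, poll_seconds: int = 60) -> dict:
    # Poll the (illustrative) job endpoint until training finishes either way.
    while True:
        resp = requests.get(f"{API_BASE}/v1/training-jobs/{job_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)

job = wait_for_job("job-123")
if job["status"] == "completed":
    # A finished model could then be fetched from Hugging Face, e.g. with
    # huggingface_hub.snapshot_download(job["hf_repo"]).
    print(f"Model ready at: {job['hf_repo']}")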

WHO

Team Info

Namoray is one of the premier developers on Bittensor. Other team members include:

Marcus Graichen – Co-Founder

Akinwunmi Aguda – Frontend Developer

Arpan Tripathi – AI Engineer

Nicholas Bateman – Lead AI Engineer

Christopher Subia-Waud – Lead Machine Learning Engineer

Charlotte Hare – Recruitment

FUTURE

Roadmap

Q4 2024

They have reached major milestones in democratizing AI model training, solidifying their position as the leading platform for decentralized AI development.

  • Demonstrated superior zero-click training performance versus Together.ai, Google Vertex, and Hugging Face AutoML
  • First and only platform enabling decentralized model training on Bittensor
  • Enabled training of any small- to medium-sized LLM from Hugging Face
  • Achieved training across 118 trillion parameters, showcasing unmatched computational power
  • Successfully processed and trained on 2 billion rows of data
  • Set new benchmarks in model auditing and validation

Q1 2025

They are extending their capabilities to image model training while maintaining a focus on accessibility and high performance.

  • Launch of full-featured image model training tools
  • Advanced preprocessing pipeline tailored for image datasets
  • Specialized infrastructure optimized for efficient image model training

Q2 2025

They are introducing large-scale pretraining for LLMs, empowering organizations to build foundation models from scratch.

  • Deployment of full-scale pretraining for Large Language Models
  • Infrastructure expansion to accommodate larger-parameter text models
  • Improved training environment to support foundation model development

Q3 2025

They are bringing pretraining capabilities to vision models, enabling custom image model development from the ground up.

  • Launch of pretraining services for image-based models
  • Support for higher-parameter image model training
  • Dedicated infrastructure for advanced vision model creation

Q4 2025

They are expanding into multimodal training and integrating with major cloud providers to power enterprise-scale AI.

  • Support for training models combining text, image, and other modalities
  • Seamless integration with major cloud providers as a top-tier one-click training platform
  • Upgraded infrastructure tailored for enterprise-grade deployments

MEDIA

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.

The team at Rayon Labs have done it again with Gradients, led by wanderingweights, who joins the pod to discuss how he and his team have democratized AI model building with their “couple of clicks” no-code training subnet. This is one of the most groundbreaking projects on Bittensor: in only a few months on the network, it can already out-train the establishment.

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by subnets within Bittensor! Check out some of their other videos HERE.

In this session, the team from Rayon Labs discuss the latest developments and updates within their ecosystem. The conversation delves into the performance and future improvements of the Gradients platform, focusing on its edge in machine learning and AI model training, where it consistently outperforms major competitors like Google and AWS on cost, performance, and ease of use. They also explore the Squad platform, a tool for building and deploying AI agents that lets users create custom agents with little to no coding experience. The discussion touches on innovations like trusted execution environments (TEEs) to enhance security and privacy for AI computations. The team also highlights ongoing efforts to scale up their infrastructure, including integrating fiat payments into Chutes, a serverless AI compute platform, and expanding their use of X integrations. The session provides a deep dive into how these tools and technologies are reshaping decentralized AI and the future of machine learning.

In an earlier Novelty Search session from 2024, Namoray and the Rayon Labs team provide updates on SN19 and introduce two new subnets: Gradients and Chutes.

Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem, we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want answered.

Recorded in July 2025, this episode of Revenue Search features Chris (aka Wandering Weights), founder of Gradients (Subnet 56), a Bittensor subnet focused on decentralized, high-performance model training. Gradients lets users upload a dataset and select a model, then has Bittensor miners compete to produce the best-performing version, removing the need for manual hyperparameter tuning or large ML teams. The platform has already attracted 3,000 paying users, primarily hobbyists, and delivers results significantly cheaper and better than major incumbents like Google Cloud or AWS. With a new enterprise-ready version (5.0) launching, Gradients plans to address corporate data-privacy concerns by running training internally or through Chutes’ Trusted Execution Environments. All fiat revenue is used to buy and lock up the Alpha token, and the team now aims to shift focus from technical development to business growth and enterprise partnerships.

NEWS

Announcements
