With the number of new subnets being added, it can be hard to keep information up to date across them all, so some data may be slightly out of date from time to time.

Subnet 56

Gradients

Key statistics tracked for this subnet (live values shown on the site): Alpha Price, Market Cap, Neurons, Registration Cost, TAO Liquidity, Alpha in Pool, Total Alpha Supply, % Alpha Staked.

ABOUT

What exactly does it do?

With the Gradients subnet, they’ve created a platform that makes AI training accessible to everyone by leveraging the power of the Bittensor network. With just a few clicks, anyone, regardless of their AI training knowledge, can train models on Bittensor. The platform offers a wide variety of models and datasets, allowing users to easily select and train their AI models, either through a simple user interface or programmatically via their API.

They provide intelligent AI tools that automatically optimise your training process, ensuring the best results with minimal effort. Users can monitor their training progress in great detail through platforms like Weights & Biases and, once ready, deploy their models on Hugging Face. With the ability to fine-tune models using custom data on demand, they’ve made it easier than ever to build, train, and deploy AI models, streamlining the entire process for developers and innovators.

PURPOSE

What exactly is the 'product/build'?

They’ve built a decentralised platform that allows anyone in the world to train AI models on Bittensor, the largest decentralised AI network. With just a few clicks, users can begin training their models, even without any prior AI knowledge. The process is streamlined with a wide selection of models and datasets, enabling users to quickly select the best options for their tasks. Their intelligent AI automatically helps with column mapping for optimal results, or users can make their own selections.

Training can be initiated either via an easy-to-use interface or programmatically using their API. Throughout the process, users can monitor their model’s progress in detail, with advanced tools like Weights & Biases providing deeper insights. Once the model is ready, users can deploy and use it via platforms like Hugging Face. Additionally, the platform allows users to fine-tune models using custom datasets, all on-demand. This makes training and deploying AI models simpler, more efficient, and accessible to everyone.
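The flow described above (pick a model, pick a dataset, map columns or let the platform auto-map, then submit a job via the API) can be sketched as follows. This is an illustrative sketch only: the Gradients API itself is not documented here, so the `build_training_job` helper and every field name in it are assumptions, not the real interface.

```python
# Hypothetical sketch of assembling a training-job request.
# Field names and the helper itself are illustrative assumptions;
# consult the actual Gradients API docs for the real schema.
import json


def build_training_job(model_repo, dataset_repo, column_map=None):
    """Assemble a training-job request body.

    If column_map is None, the platform's intelligent auto-mapping
    would decide which dataset columns act as prompt/response (per
    the description above); otherwise the user's mapping is used.
    """
    job = {
        "model": model_repo,      # any small/medium LLM from Hugging Face
        "dataset": dataset_repo,  # Hugging Face dataset identifier
        "auto_map_columns": column_map is None,
    }
    if column_map is not None:
        job["column_map"] = column_map
    return job


job = build_training_job(
    "meta-llama/Llama-3.1-8B",
    "my-org/support-tickets",
    column_map={"prompt": "question", "response": "answer"},
)
print(json.dumps(job, indent=2))
```

A real client would then POST this body to the API and monitor progress (e.g. via a Weights & Biases run) before deploying the finished model to Hugging Face.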

WHO

Team Info

Wanderingweights – Founder

Besim – Lead AI dev

Samoline – Lead AI dev

Diagonalge – Senior AI dev

Fezicles – Community manager

FUTURE

Roadmap

Q4 2024

  • They have reached major milestones in democratizing AI model training and solidifying their position as the leading platform for decentralized AI development.
  • Demonstrated superior zero-click training performance versus Together.ai, Google Vertex, and Hugging Face AutoML
  • First and only platform enabling decentralized model training on Bittensor
  • Enabled training of any small- to medium-sized LLM from Hugging Face
  • Achieved training across 118 trillion parameters, showcasing unmatched computational power
  • Successfully processed and trained on 2 billion rows of data
  • Set new benchmarks in model auditing and validation

Q1 2025

  • They are extending their capabilities to image model training while maintaining a focus on accessibility and high performance.
  • Launch of full-featured image model training tools
  • Advanced preprocessing pipeline tailored for image datasets
  • Specialized infrastructure optimized for efficient image model training

Q2 2025

  • Introducing large-scale pretraining for LLMs, empowering organizations to build foundational models from scratch.
  • Deployment of full-scale pretraining for Large Language Models
  • Infrastructure expansion to accommodate larger parameter text models
  • Improved training environment to support foundation model development

Q3 2025

  • They are bringing pretraining capabilities to vision models, enabling custom image model development from the ground up.
  • Launch of pretraining services for image-based models
  • Support for higher-parameter image model training
  • Dedicated infrastructure for advanced vision model creation

Q4 2025

  • Expanding into multimodal training and integrating with major cloud providers to power enterprise-scale AI.
  • Support for training models combining text, image, and other modalities
  • Seamless integration with major cloud providers as a top-tier one-click training platform
  • Upgraded infrastructure tailored for enterprise-grade deployments

MEDIA

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.

The team at Rayon Labs have done it again with Gradients led by wanderingweights who joins the pod to discuss how he and his team have democratized AI model building with their “couple of clicks” no-code training subnet. This is one of the most groundbreaking projects on Bittensor that, in only a few months on the network, can already out-train the establishment.

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.

Recorded in August 2025: This session opens with a chat about location-independent mining before spotlighting Wandering Weights’ “Gradients” — an AutoML post-training platform on Bittensor that now supports Instruct, Diffusion, DPO (preference tuning), and GRPO (reward-function-driven optimization) with custom datasets, auto-captioning for images, and an API/UX for one-click jobs. The big update is Gradients 5.0: a shift to open source via rolling text/image tournaments where miners submit repos (not just models); validators provide fixed compute, results are compared head-to-head, and winning scripts earn emissions. The team shares benchmark results across ~180 runs, claiming Gradients outperforms Databricks, GCP, Hugging Face, and Together (and beats Civitai on tougher diffusion settings), and unveils “Gradients Instruct 8B,” a fine-tune of Qwen 3 Base reported to beat Qwen 3 Instruct on zero-shot math and instruction-following. They discuss pricing examples, revenue traction, privacy/containers for custom rewards, future video/vision tasks, and tighter integration with compute subnets to keep costs low while scaling.
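The Gradients 5.0 tournament described above (miners submit training scripts, validators run them on identical compute, results are compared head-to-head, and the winner earns emissions) can be sketched as a simple round-robin. This is a minimal sketch under assumptions: the actual scoring rule, tie-breaking, and emission logic are not specified in this document, and `round_robin_winner` is a hypothetical name.

```python
# Illustrative sketch of head-to-head tournament scoring.
# The real Gradients 5.0 rules are not public here; the pairwise
# "lower eval loss wins" rule below is an assumption.
from collections import defaultdict
from itertools import combinations


def round_robin_winner(results):
    """Pick a tournament winner from per-miner evaluation losses.

    `results` maps miner id -> eval loss achieved by that miner's
    submitted training script on validator-provided, fixed compute.
    Every pair of miners is compared head-to-head; the miner with
    the most pairwise wins takes the round (and, in the real
    system, the associated emissions).
    """
    wins = defaultdict(int)
    for a, b in combinations(results, 2):
        if results[a] < results[b]:
            wins[a] += 1
        elif results[b] < results[a]:
            wins[b] += 1
    return max(results, key=lambda m: wins[m])


print(round_robin_winner({"miner_a": 1.92, "miner_b": 1.87, "miner_c": 2.05}))
# → miner_b (lowest loss, so it wins every pairwise match-up)
```

Because every submission runs on the same fixed compute, differences in eval loss reflect the quality of the training script rather than the hardware behind it, which is the point of the repo-submission design.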

 

Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want to know the answers to.

Recorded in July 2025, this episode of Revenue Search features Chris (aka Wandering Weights), founder of Gradients (Subnet 56) — a Bittensor subnet focused on decentralized, high-performance model training. Gradients allows users to upload a dataset, select a model, and have Bittensor miners compete to produce the best-performing version — removing the need for manual hyperparameter tuning or large ML teams. The platform has already attracted 3,000 paying users, primarily hobbyists, and delivers results that are significantly cheaper and better than major incumbents like Google Cloud or AWS. With a new enterprise-ready version (5.0) launching, Gradients plans to address corporate data privacy concerns by running training internally or through Chutes’ Trusted Execution Environments. All fiat revenue is used to buy and lock up the Alpha token, and the team now aims to shift focus from technical development to business growth and enterprise partnerships.

NEWS

Announcements
