With the number of new subnets being added, it can be hard to keep information up to date across all of them, so data may be slightly out of date from time to time.
With the Gradients subnet, they’ve created a platform that makes AI training accessible to everyone, leveraging the power of the Bittensor network. With just a few clicks, anyone, regardless of their AI training knowledge, can train models on Bittensor. Their platform offers a wide variety of models and datasets, allowing users to easily select and train their AI models, either through a simple user interface or programmatically via their API.
They provide intelligent AI tools that automatically optimise your training process, ensuring the best results with minimal effort. Users can monitor their training progress in great detail through platforms like Weights & Biases and, once ready, deploy their models on Hugging Face. With the ability to fine-tune models using custom data on demand, they’ve made it easier than ever to build, train, and deploy AI models, streamlining the entire process for developers and innovators.
They’ve built a decentralised platform that allows anyone in the world to train AI models on Bittensor, the largest decentralised AI network. With just a few clicks, users can begin training their models, even without any prior AI knowledge. The process is streamlined with a wide selection of models and datasets, enabling users to quickly select the best options for their tasks. Their intelligent AI automatically helps with column mapping for optimal results, or users can make their own selections.
Training can be initiated either via an easy-to-use interface or programmatically using their API. Throughout the process, users can monitor their model’s progress in detail, with advanced tools like Weights & Biases providing deeper insights. Once the model is ready, users can deploy and use it via platforms like Hugging Face. Additionally, the platform allows users to fine-tune models using custom datasets, all on-demand. This makes training and deploying AI models simpler, more efficient, and accessible to everyone.
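As a rough illustration of the programmatic path, submitting a training job might look like the sketch below. The function name, endpoint shape, and every field name here are assumptions for illustration only, not Gradients' actual API schema.

```python
import json

def build_training_job(model_id, dataset_id, task="instruct"):
    """Assemble the JSON body for a hypothetical job-submission request.

    All keys here are illustrative assumptions, not the real API.
    """
    return {
        "model": model_id,          # base model to fine-tune
        "dataset": dataset_id,      # dataset to train on
        "task_type": task,          # e.g. instruct, dpo, grpo
        "column_mapping": "auto",   # let the platform infer dataset columns
    }

# Build and serialize a job request body
job = build_training_job("qwen3-8b-base", "my-chat-dataset")
body = json.dumps(job)
print(body)
```

In practice the same selections (model, dataset, task type, column mapping) are what the web interface walks users through, so the two paths are interchangeable.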
Wanderingweights – Founder
Besim – Lead AI dev
Samoline – Lead AI dev
Diagonalge – Senior AI dev
Fezicles – Community manager
Q4 2024
Q1 2025
Q2 2025
Q3 2025
Q4 2025
Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.
The team at Rayon Labs have done it again with Gradients led by wanderingweights who joins the pod to discuss how he and his team have democratized AI model building with their “couple of clicks” no-code training subnet. This is one of the most groundbreaking projects on Bittensor that, in only a few months on the network, can already out-train the establishment.
A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.
Recorded in August 2025: This session opens with a chat about location-independent mining before spotlighting Wandering Weights’ “Gradients” — an AutoML post-training platform on Bittensor that now supports Instruct, Diffusion, DPO (preference tuning), and GRPO (reward-function-driven optimization) with custom datasets, auto-captioning for images, and an API/UX for one-click jobs. The big update is Gradients 5.0: a shift to open source via rolling text/image tournaments where miners submit repos (not just models); validators provide fixed compute, results are compared head-to-head, and winning scripts earn emissions. The team shares benchmark results across ~180 runs claiming Gradients outperforms Databricks, GCP, Hugging Face, Together (and beats CivitAI on tougher diffusion settings), and unveils “Gradients Instruct 8B,” a fine-tune of Qwen 3 Base reported to beat Qwen 3 Instruct on zero-shot math and instruction-following. They discuss pricing examples, revenue traction, privacy/containers for custom rewards, future video/vision tasks, and tighter integration with compute subnets to keep costs low while scaling.
Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem, then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want answered.
Recorded in July 2025, this episode of Revenue Search features Chris (aka Wandering Weights), founder of Gradients (Subnet 56)—a Bittensor subnet focused on decentralized, high-performance model training. Gradients allows users to upload a dataset, select a model, and have Bittensor miners compete to produce the best-performing version—removing the need for manual hyperparameter tuning or large ML teams. The platform has already attracted 3,000 paying users, primarily hobbyists, and delivers results significantly cheaper and better than major incumbents like Google Cloud or AWS. With a new enterprise-ready version (5.0) launching, Gradients plans to address corporate data privacy concerns by running training internally or through Chutes’ Trusted Execution Environments. All fiat revenue is used to buy and lock up the Alpha token, and the team now aims to shift focus from technical development to business growth and enterprise partnerships.
2026 is already becoming Bittensor's Year of Integration.
There are no explicit incentives for teams to work together. It's just happening.
"survival of the fittest" -> "survival of the collaborative"
📢 103 TAO worth of alpha buyback and burn:
As promised, the collected tournament fees have been used to buy alpha and then burn it. 🔥
The extrinsic can be found on taostats:
Happy New Year from the Gradients team! 🤙
7227267-... · Extrinsic · taostats
Latest and greatest image generation models added to Gradients: Z-Image and Qwen Image 🤙
Choose between them or one of the other 36 image models to fine-tune to your style, brand or face on http://gradients.io - no code, just a few clicks and done
The important thing is that we do the right thing: exploits are signals that teach a mechanism, organism, or network how to harden.
Don't fight the exploits, learn from them.
March 30th SF.
Cya.
2/2
Our YaRN-extended Covenant-Chat (32k context) demonstrates what's possible when you combine extended context windows with optimized gradient-based training. Longer context means the model sees more relevant information during each training step, leading to stronger learning.
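For flavour, YaRN context extension is typically expressed as a RoPE-scaling entry in the model config (the style Hugging Face transformers configs use). The base context window below is an assumption, since the post only states the 32k target:

```python
# Sketch of a YaRN rope-scaling configuration in the style of a
# Hugging Face transformers model config. The original (pre-extension)
# context window is an assumed value for illustration.
original_ctx = 8192   # assumed base context window of the model
target_ctx = 32768    # the 32k window mentioned in the post

rope_scaling = {
    "rope_type": "yarn",
    # How far the rotary position embeddings are stretched
    "factor": target_ctx / original_ctx,
    "original_max_position_embeddings": original_ctx,
}
print(rope_scaling["factor"])  # → 4.0
```

The scaling factor is simply the ratio of the extended window to the original one; YaRN then interpolates the RoPE frequencies accordingly so the model generalizes beyond its trained length.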
1/2
Gradients takes the best decentralized, open-source base LLM from Templar Covenant and fine-tunes it into a chatbot assistant that can carry a multi-turn conversation and reasonably respond to user queries. Here's how we did it:
- Chat template integration and embedding update
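The chat-template step can be sketched as a ChatML-style formatter (the convention used by Qwen-family and many other open models). This is an illustration of the format, not the team's actual implementation, which would register the template on the tokenizer rather than building strings by hand:

```python
# Minimal sketch of ChatML-style chat templating: each turn is wrapped
# in <|im_start|>role ... <|im_end|> markers, and the prompt ends with
# an open assistant turn to cue the model to respond.
def apply_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the next response
    return "".join(parts)

prompt = apply_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
print(prompt)
```

The "embedding update" part of the step refers to resizing the model's token embeddings so the new special tokens (the turn markers) each get a trainable vector before fine-tuning begins.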