With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.
The mission of Ridges AI (previously known as Agentao) is to create a decentralized, self-sustaining marketplace of autonomous software engineering agents designed to solve real-world software challenges. By leveraging Bittensor, the team incentivizes SWE agents to tackle increasingly complex and general tasks, pushing the boundaries of AI-driven software development.
In recent years, the rapid advancement of language models has transformed the AI landscape. With the rise of autonomous software engineering companies like Devin, more people recognize that the most impactful way to direct this progress is by using these models to write even more code. The logic is simple—better coding models lead to even more advanced AI, accelerating the path to AGI.
However, the current system is misaligned. Most advancements come from large corporations and a select few startups, leaving individuals without incentives to contribute. While open-source initiatives provide an alternative, they lack financial compensation, making participation unsustainable for many. Ridges is changing this by designing an incentive structure that lets individuals contribute to cutting-edge AI development while being rewarded for their efforts.
They are building a dynamic, decentralized coding ecosystem where AI-driven agents solve software challenges and improve over time.
The way they operate is straightforward: validators generate coding problems, miners solve them, and validators assess their solutions, rewarding them based on quality and efficiency. Miners who provide faster, more accurate solutions earn higher rewards.
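The validator–miner loop described above could be sketched roughly as follows. This is an illustrative Python sketch, not Ridges' actual implementation: the problem generator, the 0.1-per-second latency penalty, and the miner names are all invented for demonstration.

```python
import random

def generate_problem(rng):
    """Validator side: draw a synthetic coding task (illustrative only)."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return {"prompt": f"return {a} + {b}", "expected": a + b}

def score_solution(problem, answer, seconds_taken):
    """Validator side: reward correctness, discounted by latency.

    The 0.1-per-second penalty is an arbitrary illustrative constant,
    not a documented subnet parameter.
    """
    if answer != problem["expected"]:
        return 0.0
    return max(0.0, 1.0 - 0.1 * seconds_taken)

def run_round(miners, rng):
    """One round: every miner answers the same problem and is scored."""
    problem = generate_problem(rng)
    return {name: score_solution(problem, solve(problem), seconds)
            for name, (solve, seconds) in miners.items()}

rng = random.Random(0)
miners = {
    "fast_and_right": (lambda p: p["expected"], 1.0),
    "slow_and_right": (lambda p: p["expected"], 5.0),
    "wrong":          (lambda p: -1, 0.5),
}
scores = run_round(miners, rng)
# Faster correct answers outrank slower ones; wrong answers earn nothing.
assert scores["fast_and_right"] > scores["slow_and_right"] > scores["wrong"] == 0.0
```

The key property the sketch captures is that rewards combine correctness with speed, so two correct miners can still be ranked against each other.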
As their subnet continues to run, it generates a growing dataset of problems and solutions. This dataset plays a crucial role in refining reward allocation, helping miners enhance their models, and enabling the creation of predictive models that estimate the difficulty and feasibility of real-world software issues.
Cerebro Model & Dataset
One of their most valuable outputs is the dataset generated by their subnet’s operation, known as Cerebro. This dataset provides key insights into how well language models can tackle coding tasks and how performance changes based on various parameters.
They are developing Cerebro, a model trained on this dataset, to answer fundamental questions such as how difficult a given task is and whether a model can feasibly solve it.
By addressing these questions, they are solving critical bottlenecks in current AI coding agents, which often excel in specific tasks but struggle to generalize. With a precise difficulty estimation, AI agents can better navigate challenges and avoid common pitfalls like ambiguous problem definitions, overly complex issues, or unclear dependencies.
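As a rough illustration of what difficulty estimation might look like, here is a toy feature-based scorer. The features, weights, and field names are entirely hypothetical; the source describes Cerebro only as a model trained on the subnet's problem/solution dataset.

```python
def estimate_difficulty(issue):
    """Toy difficulty score in [0, 1] built from surface features of an issue.

    Every feature and weight here is invented for illustration; a real
    model would be learned from the subnet's problem/solution dataset.
    """
    score = 0.0
    # More files touched usually means a broader, harder change.
    score += min(len(issue.get("files_touched", [])), 10) / 10 * 0.4
    # Longer specs tend to hide more requirements.
    score += min(issue.get("description_len", 0), 2000) / 2000 * 0.2
    # Ambiguous problem definitions are a common failure mode for agents.
    score += 0.4 if issue.get("has_ambiguous_spec") else 0.0
    return round(min(score, 1.0), 2)

easy = {"files_touched": ["a.py"], "description_len": 200,
        "has_ambiguous_spec": False}
hard = {"files_touched": [f"f{i}.py" for i in range(12)],
        "description_len": 2000, "has_ambiguous_spec": True}
assert estimate_difficulty(easy) < estimate_difficulty(hard)
```

An agent could use such a score to skip or flag tasks above a threshold instead of burning compute on problems it is unlikely to solve.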
Ultimately, the Cerebro dataset will support sharper reward allocation, stronger miner models, and reliable difficulty and feasibility estimation for real-world software issues.
Autonomous SWEs x Open Source
They aim to bridge Bittensor and open-source software development, making AI-driven coding more impactful. Not long after launch, they will expand their subnet to allow AI agents to submit Pull Requests (PRs) to open-source repositories, rewarding miners when their contributions are merged.
Their first AI-built project is @taogod_terminal, an autonomous Twitter agent that posts subnet updates in real-time. As a proof of concept, they will open-source this project shortly after launch and leverage their subnet’s agents to develop it further.
Path to Product
There is a massive demand for autonomous coding agents that save time and produce high-quality, functional code. As their AI agents reach state-of-the-art performance levels, they will launch an API allowing third parties to license these AI agents.
This will lead to an agent marketplace, where users can browse and purchase autonomous software engineers tailored to their needs. The subnet will serve both as a training ground for developing these AI agents and an evaluation platform for customers to assess their performance before making a selection.
Incentive Mechanism
Miners
Validators
By combining competitive AI development with financial incentives, they are building the foundation for a decentralized software engineering revolution—one where autonomous agents improve through real-world problem-solving, and contributors are rewarded for advancing AI-driven coding innovation.
Summary
They have created a decentralized mechanism that incentivizes the development of high-quality code patches for both open-source and private repositories within the Bittensor ecosystem. Their system brings together validators, who propose and assess tasks, and miners, who compete to deliver the best solutions. At the core of their subnet is Cerebro, a learning-based system that classifies task difficulty, supervises submitted solutions, and continuously refines the reward model to ensure fairness and effectiveness.
Their subnet progresses through multiple epochs, evolving from synthetic dataset collection (Epoch 1) to expanding across real-world GitHub issues (Epoch 2). It then introduces containerized agent marketplaces (Epoch 3) before reaching its final phase of fully autonomous local development capabilities (Epoch 4). By fostering innovation through incentivized problem-solving and direct GitHub integration, they are positioning themselves as a major force in the emerging SWE-agent market, driving decentralized collaboration and pushing the boundaries of software engineering.
Competing with large labs by training massive, end-to-end models isn’t a sustainable approach—it’s prohibitively expensive, and miners are bound to lose. Instead, they are adopting a strategy that leverages Bittensor’s incentive mechanisms for success:
Rather than constructing a single large model, they harness the power of many specialized agents—each focusing on a specific task and competing to be the best—enabling them to collectively outperform centralized labs at scale.
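One way to picture the many-specialized-agents approach is a simple confidence-based dispatcher. The agent names, task types, and confidence scores below are invented for illustration; the source only states that each agent focuses on a specific task and competes to be the best at it.

```python
def route(task, agents):
    """Dispatch a task to whichever specialist agent claims the highest
    confidence for that task type (all names are hypothetical)."""
    return max(agents, key=lambda a: a["confidence"].get(task["type"], 0.0))

agents = [
    {"name": "bugfix_agent",   "confidence": {"bugfix": 0.9, "feature": 0.2}},
    {"name": "refactor_agent", "confidence": {"refactor": 0.8}},
]

# Each task type lands on the specialist best suited to it.
assert route({"type": "bugfix"}, agents)["name"] == "bugfix_agent"
assert route({"type": "refactor"}, agents)["name"] == "refactor_agent"
```

The point of the design is that no single agent needs to be good at everything; competition per task type lets the ensemble improve piecewise.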
Roadmap
Here’s what they plan to release in the upcoming months to bring them closer to their vision.
Early Q2 2025
Late Q2 2025
Q3 2025
Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we're going to attract investment into this ecosystem, we need more people to understand it! That's why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions investors want answered.
Recorded in August 2025, this episode of Revenue Search features Shakeel (“Shak”) from Ridges, an AI software agent company aiming to revolutionize software development by replacing or drastically enhancing human coders. The discussion covers Ridges’ open-source, model-agnostic approach, which builds on top of any large language model to create end-to-end coding solutions, making it far cheaper and more adaptable than competitors like Claude Code. Shak outlines the massive $400B engineering market, Ridges’ rapid progress toward matching top competitors’ performance, and their go-to-market plans targeting both individual developers and enterprises. The conversation also reveals DSV’s recent $300K investment in Ridges, the company’s revenue potential, and strategies for reinvesting profits into product growth rather than short-term tokenomics. Throughout, Shak emphasizes lean operations, scalability, and the potential for Ridges to disrupt how software is built globally.
A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.
Recorded August 2025: Mark Jeffrey interviews Shak from Ridges (Subnet 62), a front-door Bittensor app aiming to replace “AI for engineers” tools with true end-to-end AI engineers. Ridges emphasizes a thick agent layer over cheaper models (e.g., DeepSeek) and composes with Chutes and Targon, yielding dramatic cost cuts while agents self-review and ship finished code—no coding required. Shak details a Google-login mining dashboard that pays top agent builders in fiat (winner-take-all style) and an upcoming IDE—Cursor-like but powered by Ridges agents—plus open-source, run-it-locally options for enterprises. They cover benchmarks, a weeks-away public product, and how Bittensor’s subsidies and composability funnel value back to TAO. Shak also shouts out other subnets (Ready AI, Score, BitMind, Bitcast).
In a live Revenue Search special, Shak (Ridges) explains how they’ll shift incentives from benchmarks to real user impact: the product itself will decide who earns emissions. Ridges V1 ships as a Cursor/VS Code extension on Oct 30, 2025, priced around $12/mo (with an opt-in data tier near $8).
Under the hood, validators still run SWEBench Polyglot, but an additional step silently swaps in challenger agents for a slice of users; miners get paid only if those users accept more suggestions, need fewer fixes, and stay engaged.
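The silent challenger swap could work along these lines, sketched here with hash-based user bucketing so each user consistently sees the same agent. The `challenger_share` knob and the engagement formula are assumptions; the source only says a slice of users is routed to challenger agents and that payouts depend on accepted suggestions, fewer fixes, and engagement.

```python
import hashlib

def pick_agent(user_id, champion, challenger, challenger_share=0.1):
    """Deterministically route a slice of users to the challenger agent.

    Hash-based bucketing means a given user always sees the same agent,
    which keeps the swap invisible and the comparison stable.
    `challenger_share` is an assumed knob, not a documented parameter.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return challenger if bucket < challenger_share * 100 else champion

def engagement_score(events):
    """Toy payout signal: suggestions accepted minus follow-up fixes needed."""
    return events["accepted"] - events["fixes"]

# Routing is deterministic per user, and both agents appear across a
# large enough population.
assert pick_agent("alice", "champ", "chal") == pick_agent("alice", "champ", "chal")
seen = {pick_agent(f"user{i}", "champ", "chal") for i in range(1000)}
assert seen == {"champ", "chal"}
```

A challenger miner would then earn emissions only if its cohort's engagement scores beat the champion's, which is what makes the mechanism hard to game with benchmark-only tuning.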
Recent mixed-set scores dropped from 88% to 17–18% when Polyglot was added, then rebounded to ~41% by Oct 6. That, Shak says, is evidence that iteration speed is their edge. A full platform rewrite lands this week (stability, parallel evals, dual sandboxes on device, limited internet access excluding benchmark content), and USD payouts are returning to attract company-scale competitors.
Goal: grow users fast and reach the point where revenue exceeds emissions (targeting January), both to disincentivize gaming and to potentially fund buybacks, while remaining far cheaper than rivals.
Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.
In this episode, Shak, creator of Bittensor’s Subnet 62 (Ridges), explains how it surged into the top 15 and why it could reshape AI-assisted coding. We dig into the architecture, growth strategy, incentives, and upcoming roadmap, plus what sets Ridges apart from other subnets.
🤖 Product solutions will now also show the agent that generated them
We're experimenting with ways to feed product usage back into evals in a way that is both transparent and hard to game. Stay tuned 👀
🚀 We're continuing to roll the product out to more users this week, and have a bunch of IM updates planned based on the feedback we've gotten so far (including limited web access for agents, better support for more languages, etc).
Here's what we've learned so far:
Why the…
📧 We're sending out more onboarding invites today, keep an eye out on your inbox
🚀 Rollout is going smoothly; we're fixing bugs as they come up and onboarding more users today and over the next week
We've learned a lot already and will share IM updates based on user feedback soon!
👀 What do you guys want to see Ridges agents do? We'll be posting a walkthrough of all our features, stuff built by agents, and more!