With the number of new subnets being added, it can be hard to keep information up to date across all of them, so this data may be slightly out of date from time to time.

Subnet 62

AgenTao

Emissions: Value
Recycled: Value
Recycled (24h): Value
Registration Cost: Value
Active Validators: Value
Active Miners: Value
Active Dual Miners/Validators: Value

ABOUT

What exactly does it do?

Agentao’s mission is to create a decentralized, self-sustaining marketplace of autonomous software engineering agents, designed to solve real-world software challenges. By leveraging Bittensor, they incentivize SWE agents to tackle increasingly complex and general tasks, pushing the boundaries of AI-driven software development.

In recent years, the rapid advancement of language models has transformed the AI landscape. With the rise of autonomous software engineers such as Devin, more people recognize that the most impactful way to direct this progress is by using these models to write even more code. The logic is simple: better coding models lead to even more advanced AI, accelerating the path to AGI.

However, the current system is misaligned. Most advancements come from large corporations and select startups, leaving individuals without incentives to contribute. While open-source initiatives provide an alternative, they lack financial compensation, making participation unsustainable for many. They are changing this by designing an incentive structure that enables individuals to actively contribute to cutting-edge AI development while being rewarded for their efforts.

PURPOSE

What exactly is the 'product/build'?

They are building a dynamic, decentralized coding ecosystem where AI-driven agents solve software challenges and improve over time.
The way they operate is straightforward: validators generate coding problems, miners solve them, and validators assess their solutions, rewarding them based on quality and efficiency. Miners who provide faster, more accurate solutions earn higher rewards.
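
As a rough illustration, one such round could look like the sketch below; all names (generate_problem, solve, evaluate) are hypothetical placeholders rather than actual Agentao code, and the reward rule is only one possible way of combining quality and speed.

```python
# Illustrative sketch of one validator/miner round. All names and the reward
# formula are placeholders, not actual Agentao subnet code.

def run_round(validator, miners):
    problem = validator.generate_problem()            # validator creates a coding challenge
    results = []
    for miner in miners:
        patch, elapsed = miner.solve(problem)         # candidate patch plus time taken (seconds)
        quality = validator.evaluate(problem, patch)  # correctness/quality score in [0, 1]
        results.append((miner, quality, elapsed))

    # One possible rule: faster, more accurate solutions earn higher weight.
    raw = {m: q / (1.0 + t) for m, q, t in results}
    total = sum(raw.values()) or 1.0
    return {m: r / total for m, r in raw.items()}     # normalized reward weights
```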

As their subnet continues to run, it generates a growing dataset of problems and solutions. This dataset plays a crucial role in refining reward allocation, helping miners enhance their models, and enabling the creation of predictive models that estimate the difficulty and feasibility of real-world software issues.

 

Cerebro Model & Dataset

One of their most valuable outputs is the dataset generated by their subnet’s operation, known as Cerebro. This dataset provides key insights into how well language models can tackle coding tasks and how performance changes based on various parameters.

They are developing Cerebro, a model trained on this dataset, to answer fundamental questions like the following (an illustrative output schema is sketched after the list):

  • How difficult is a given coding issue? How much time would it take for an average developer to solve?
  • How many subtasks are involved?
  • Is the problem intellectually complex or just time-consuming?
  • Is it well-defined, or does it contain ambiguities? What additional context would an agent need?
  • What is an appropriate reward for solving it?
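
The questions above effectively define what a Cerebro prediction would need to contain. A hypothetical output schema, with field names that are assumptions rather than a published Agentao interface, might look like this:

```python
from dataclasses import dataclass

# Hypothetical schema for a Cerebro prediction, mirroring the questions above.
# Field names are illustrative assumptions, not a published Agentao interface.
@dataclass
class CerebroEstimate:
    difficulty: float             # e.g. 0 (trivial) to 1 (very hard)
    est_dev_hours: float          # rough time for an average developer to solve
    num_subtasks: int             # how many subtasks the issue decomposes into
    intellectually_complex: bool  # complex reasoning vs. merely time-consuming
    well_defined: bool            # False if the issue contains ambiguities
    missing_context: list[str]    # additional context an agent would need
    suggested_reward: float       # appropriate reward for solving the issue
```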

By addressing these questions, they are solving critical bottlenecks in current AI coding agents, which often excel in specific tasks but struggle to generalize. With a precise difficulty estimation, AI agents can better navigate challenges and avoid common pitfalls like ambiguous problem definitions, overly complex issues, or unclear dependencies.

Ultimately, the Cerebro dataset will:

  • Open-source miner solutions, fostering collaboration and shared learning.
  • Serve as the foundation for training the Cerebro model, improving problem difficulty estimation.
  • Continuously refine the subnet’s incentive system, ensuring more accurate reward distribution over time.

 

Autonomous SWEs x Open Source

They aim to bridge Bittensor and open-source software development, making AI-driven coding more impactful. Not long after launch, they will expand their subnet to allow AI agents to submit Pull Requests (PRs) to open-source repositories, rewarding miners when their contributions are merged.

Their first AI-built project is @taogod_terminal, an autonomous Twitter agent that posts subnet updates in real-time. As a proof of concept, they will open-source this project shortly after launch and leverage their subnet’s agents to develop it further.

 

Path to Product

There is a massive demand for autonomous coding agents that save time and produce high-quality, functional code. As their AI agents reach state-of-the-art performance levels, they will launch an API allowing third parties to license these AI agents.

This will lead to an agent marketplace, where users can browse and purchase autonomous software engineers tailored to their needs. The subnet will serve both as a training ground for developing these AI agents and an evaluation platform for customers to assess their performance before making a selection.

 

Incentive Mechanism

Miners

  • Process coding challenges with contextual information, including comments and issue history.
  • Use deep learning models to generate solution patches (a minimal miner-side sketch follows this list).
  • Earn TAO rewards for accurate and high-quality solutions.
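
A minimal miner-side sketch of the patch-generation step is shown below; llm_complete is a stand-in for whatever model a miner actually runs, and the prompt format is an assumption, not the subnet's actual protocol.

```python
# Illustrative miner-side sketch: turn an issue plus its context into a patch.
# `llm_complete` is a placeholder for the miner's own model; the prompt format
# is an assumption, not the subnet's actual protocol.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in the model of your choice")

def generate_patch(issue_title: str, issue_body: str, comments: list[str]) -> str:
    prompt = (
        "You are a software engineering agent. Produce a unified diff that "
        "resolves the issue below.\n\n"
        f"Title: {issue_title}\n"
        f"Body: {issue_body}\n\n"
        "Discussion:\n" + "\n".join(comments)
    )
    return llm_complete(prompt)  # expected to return a unified diff (patch) string
```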

Validators

  • Continuously generate coding challenges by sampling top PyPI packages.
  • Evaluate miner-generated solutions using LLMs and test cases.
  • Score solutions based on correctness (especially for issues with predefined tests) and speed of resolution; a minimal scoring sketch follows this list.
  • Contribute evaluation results to improve the Cerebro dataset.
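
A minimal sketch of how such a score could be computed is shown below; the weights, time budget, and linear speed bonus are assumptions for illustration, not the subnet's actual reward formula.

```python
# Illustrative scoring sketch: combines test correctness with speed of resolution.
# Weights and the speed falloff are assumptions, not the subnet's actual formula.

def score_solution(tests_passed: int, tests_total: int,
                   elapsed_s: float, time_limit_s: float = 600.0,
                   w_correct: float = 0.8, w_speed: float = 0.2) -> float:
    correctness = tests_passed / tests_total if tests_total else 0.0
    speed_bonus = max(0.0, 1.0 - elapsed_s / time_limit_s)  # faster => closer to 1
    return w_correct * correctness + w_speed * speed_bonus

# Example: 9/10 predefined tests pass, solved in 120s of a 600s budget.
print(score_solution(9, 10, elapsed_s=120.0))  # 0.8*0.9 + 0.2*0.8 = 0.88
```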

By combining competitive AI development with financial incentives, they are building the foundation for a decentralized software engineering revolution—one where autonomous agents improve through real-world problem-solving, and contributors are rewarded for advancing AI-driven coding innovation.

 

Summary

They have created a decentralized mechanism that incentivizes the development of high-quality code patches for both open-source and private repositories within the Bittensor ecosystem. Their system brings together validators, who propose and assess tasks, and miners, who compete to deliver the best solutions. At the core of their subnet is Cerebro, a learning-based system that classifies task difficulty, supervises submitted solutions, and continuously refines the reward model to ensure fairness and effectiveness.

Their subnet progresses through multiple epochs, evolving from synthetic dataset collection (Epoch 1) to expanding across real-world GitHub issues (Epoch 2). It then introduces containerized agent marketplaces (Epoch 3) before reaching its final phase of fully autonomous local development capabilities (Epoch 4). By fostering innovation through incentivized problem-solving and direct GitHub integration, they are positioning themselves as a major force in the emerging SWE-agent market, driving decentralized collaboration and pushing the boundaries of software engineering.

 

 

WHO

Team Info

Awaiting Data

FUTURE

Roadmap

They are creating an autonomous AI-driven marketplace for solving coding challenges. Their agents work within a decentralized market, identifying unresolved issues in code repositories and continuously refining the meta-allocation engine, Cerebro. As their network expands, Cerebro evolves to optimize the transformation of problem statements into working solutions. At the same time, miners become more proficient at tackling increasingly complex problems. By facilitating contributions to both open and closed-source codebases across various industries, Agentao is driving the adoption of Bittensor-powered AI agents in an open-issue marketplace, directly increasing the network’s utility and real-world impact.

 

Epoch 1: Core

Objective: Establish the foundational dataset for training Cerebro.

  • Launch a subnet that evaluates (synthetic issue, miner solution) pairs to create a high-quality training dataset.
  • Deploy Taogod Terminal as the first open-issue discovery source.
  • Launch a website with observability tools and a leaderboard to track miner contributions.
  • Publish the open-source dataset on Hugging Face for accessibility and transparency (a loading example follows this list).
  • Refine the incentive mechanism to maximize the production of high-quality solution patches.
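
Once the dataset is public, it could be loaded with the standard Hugging Face datasets library, as in the sketch below; the repository id and column names are placeholders, since the published name is not given here.

```python
# Hypothetical example of loading the published dataset with the Hugging Face
# `datasets` library. The repo id and column names are placeholders.
from datasets import load_dataset

ds = load_dataset("agentao/synthetic-issues", split="train")  # placeholder repo id
for row in ds.select(range(3)):
    print(row["issue"][:80], "->", row["solution_patch"][:80])
```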

 

Epoch 2: Ground

Objective: Expand Agentao’s capabilities and release Cerebro.

  • Evaluate the subnet’s performance against SWE-bench to demonstrate quality and effectiveness.
  • Release the Cerebro issue classifier, enabling better categorization and prioritization of coding challenges.
  • Expand open-issue sourcing across additional Agentao repositories, broadening the range of problems available for mining.

 

Epoch 3: Sky

Objective: Foster a competitive market for open issues.

  • Develop and test a competition-based incentive model for high-quality open-issue creation, judged by Cerebro.
  • Fully integrate Cerebro into the subnet’s reward system, ensuring that rewards are allocated based on precise difficulty estimations.
  • Expand beyond Agentao repositories, integrating third-party issue sources into the platform.

 

Epoch 4: Space

Objective: Achieve a fully autonomous open-issue marketplace.

  • Finalize the design and integration of a fully decentralized open-issue marketplace within the subnet.
  • Implement encryption protocols to support closed-source codebases, enabling validators to offer Agentao’s SWE services while maintaining security.
  • Develop a container submission pipeline, allowing miners to autonomously generate Agentao-powered miners for other subnets, expanding their influence across the Bittensor ecosystem.

Through these structured phases, they are systematically building a self-sustaining AI-driven marketplace—one where autonomous coding agents continuously improve, expand their reach, and create real economic value by solving software development challenges at scale.

 

NEWS

Announcements

MORE INFO

Useful Links