With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 80

Agent Builder

Key on-chain metrics tracked for this subnet: Emissions, Recycled, Recycled (24h), Registration Cost, Active Validators, Active Miners, and Active Dual Miners/Validators.

ABOUT

What exactly does it do?

Agent Builder (Bittensor Subnet 80) is a decentralized platform that allows users to create and customize their own AI agents by leveraging a network of miner-contributed AI models. In simple terms, it takes the best-performing AI agents contributed by miners on the Bittensor network and intelligently combines them to solve user requests. Unlike a basic chatbot, an agent built on this subnet can handle multi-step tasks, plan complex actions, and even utilize tools or web services as needed – all while learning from feedback to improve over time.

In practice, when a user sends a query or task to Agent Builder, an orchestrator component routes that request to one or more specialized miner agents best suited for the task. These miner agents (run by independent miners on the network) process the input using large language models (and potentially other AI tools) to produce responses. The orchestrator then ranks, refines, and aggregates the responses from multiple agents to construct the most accurate and helpful answer for the user. This means the final output can benefit from the combined expertise of several AI agents. The system is designed to support multi-turn interactions: an agent can perform a series of steps or a dialogue to complete a complex task, rather than just answering a single question. Agents can also store user preferences (e.g. remembering context or past instructions) via a unique session or content ID, enabling personalization and continuous improvement for repeat users.
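
To make the flow concrete, here is a minimal sketch of that fan-out-and-select pattern in Python. The miner addresses, payload fields, and the toy score_response() heuristic are illustrative assumptions; in the live subnet, miner endpoints are discovered through the Bittensor metagraph and scoring is performed by validators.

```python
# Hypothetical fan-out: send a user query to several miner /complete endpoints
# in parallel and keep the best-scored reply. Miner URLs, payload fields, and
# score_response() are illustrative assumptions only.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

MINER_ENDPOINTS = [
    "http://miner-a:8000",  # assumed addresses; real ones come from the metagraph
    "http://miner-b:8000",
]

def query_miner(base_url: str, prompt: str, timeout: float = 30.0) -> tuple[dict, float]:
    """Call one miner's /complete endpoint; return (JSON payload, latency in seconds)."""
    start = time.monotonic()
    resp = requests.post(f"{base_url}/complete", json={"prompt": prompt}, timeout=timeout)
    resp.raise_for_status()
    return resp.json(), time.monotonic() - start

def score_response(payload: dict, latency_s: float) -> float:
    """Toy score rewarding longer answers and lower latency (stand-in for validator scoring)."""
    return len(str(payload.get("response", "")).split()) / (1.0 + latency_s)

def orchestrate(prompt: str) -> str:
    """Query all miners in parallel and return the highest-scoring answer."""
    candidates = []
    with ThreadPoolExecutor(max_workers=len(MINER_ENDPOINTS)) as pool:
        futures = [pool.submit(query_miner, url, prompt) for url in MINER_ENDPOINTS]
        for future in as_completed(futures):
            try:
                payload, latency = future.result()
            except requests.RequestException:
                continue  # unreachable or failing miners are simply skipped
            candidates.append((score_response(payload, latency), payload))
    if not candidates:
        raise RuntimeError("no miner returned a usable response")
    return max(candidates, key=lambda c: c[0])[1].get("response", "")
```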

PURPOSE

What exactly is the 'product/build'?

Key capabilities of Agent Builder include:

Dynamic Agent Composition: It combines top-performing miner-contributed AI agents so that complex questions can be answered by the most capable models (or multiple models collaborating). This gives users a way to build an AI assistant that is stronger than any single model alone.

Tool Use and Multi-Step Reasoning: Miner agents aren’t limited to single-step Q&A – they can plan multi-step workflows. For example, an agent could break a problem into sub-tasks, call external tools or APIs (like web search, calculators, etc.), and then aggregate the results. The orchestrator coordinates this action–observation loop where the agent takes an action, observes the result, and decides the next step, enabling sophisticated autonomous behavior.
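
The action-observation loop can be pictured as a small control loop. The sketch below is a generic illustration, not Subnet 80 code: the TOOLS registry and the plan_next_step() policy are stand-ins for what would normally be an LLM-driven planner with real tools such as web search.

```python
# Hypothetical action-observation loop for a single agent.
# TOOLS and plan_next_step() are illustrative stand-ins.

def calculator(expression: str) -> str:
    """Tiny example tool: evaluate an arithmetic expression (demo only, not safe for untrusted input)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def plan_next_step(task: str, history: list[dict]) -> dict:
    """Stand-in policy (normally an LLM call): run one calculator step, then finish.
    Returns {"action": name, "input": ...} or {"final": answer}."""
    if not history:
        return {"action": "calculator", "input": task}
    return {"final": f"The result is {history[-1]['observation']}."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[dict] = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)
        if "final" in step:                                    # the agent decides it is done
            return step["final"]
        observation = TOOLS[step["action"]](step["input"])     # take an action, observe the result
        history.append({**step, "observation": observation})   # remember it for the next decision
    return "Stopped: step budget exhausted."

print(run_agent("12 * (3 + 4)"))   # -> The result is 84.
```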

Learning from Feedback: The subnet has built-in feedback mechanisms. After responding to a query, agents receive feedback signals about their performance. This includes automatic scoring on metrics like answer quality, response speed (latency), reliability, and user satisfaction. There’s also a channel for human feedback – if a user provides a thumbs-up or correction, the agent can incorporate that. Over time, this feedback loop helps agents adapt and improve their answers, essentially self-improving with experience.
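
As a hedged illustration of how an agent might fold these signals into its state, the snippet below keeps a simple running quality estimate; the field names and the exponential-moving-average update are assumptions rather than the subnet's actual mechanism.

```python
# Hypothetical feedback accumulator an agent could keep between rounds.
from dataclasses import dataclass

@dataclass
class FeedbackState:
    quality: float = 0.5      # running estimate in [0, 1]
    alpha: float = 0.2        # smoothing factor for the moving average

    def on_feedback(self, score: float) -> None:
        """Fold a validator score (assumed to lie in [0, 1]) into the running estimate."""
        self.quality = (1 - self.alpha) * self.quality + self.alpha * score

    def on_human_feedback(self, thumbs_up: bool) -> None:
        """Treat explicit user feedback as a strong 0/1 signal."""
        self.on_feedback(1.0 if thumbs_up else 0.0)

state = FeedbackState()
state.on_feedback(0.8)         # e.g. a validator scored the last answer 0.8
state.on_human_feedback(True)  # the user gave a thumbs-up
print(round(state.quality, 3))
```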

Decentralized and Incentivized: As a Bittensor subnet, Agent Builder operates without a central server. Independent miners contribute their AI models (agents) and compete to provide the best results. High-performing agents are rewarded with $TAO token emissions. This incentive model ensures continuous improvement, as miners have motivation to fine-tune and upgrade their agents to earn more rewards. The validators on the subnet evaluate miners’ outputs and help determine these rewards, keeping the ecosystem fair and performance-driven.

Agent Builder turns the Bittensor network into an “AI agent factory”. It gives users the power to deploy AI agents that can think and act in sophisticated ways, by harnessing a swarm of specialist models contributed by the community. This opens the door to AI assistants that can handle complex tasks autonomously – from researching information online, to executing multi-step workflows – all customized to the user’s needs and continuously learning from each interaction. It is essentially a decentralized application layer built on Bittensor where the “product” is an AI agent creation toolkit and service. Technically, the build consists of several components working together:

Miner Agents (AI Modules): These are the AI models provided by miners, each functioning as an autonomous agent. Miners implement their agents following a specified interface (with standard API endpoints) so they can communicate with the network. Each agent can receive tasks (/complete requests), generate responses, and handle feedback for continuous learning. The miner agents can be thought of as modular “skills” or sub-agents – one miner’s agent might be particularly good at coding tasks, another at answering medical questions, another at web browsing, etc. They run on the miners’ own hardware (GPUs/TPUs) and can be fine-tuned models, potentially lightweight (e.g. 7B–13B parameters) but optimized for tool use and multi-turn reasoning.

Orchestrator & Validator Nodes: At the heart of Agent Builder is an Orchestrator (this logic is usually implemented by the subnet’s validators). The orchestrator’s job is to take a user’s query and match it to the best agents, manage the multi-agent interaction, and synthesize the final result. It acts like a conductor: sending the query to one or multiple miner agents, waiting for their outputs, possibly asking some agents to refine their answers, and then choosing the highest-quality response or combining aspects of several responses. If the task requires multiple steps or tools, the orchestrator will loop through an action plan: it may call an agent to perform step 1, pass the result to another agent for step 2 (/refine), and so forth, until the task is complete. Throughout this process, the orchestrator and validators measure each agent’s performance on metrics like accuracy and speed.
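
Here is a hypothetical sketch of that step-chaining, with each intermediate result passed to the next agent's /refine call; the payload fields and the hard-coded plan are assumptions, not the subnet's actual protocol.

```python
# Hypothetical multi-step orchestration: pass each intermediate result to the
# next agent's /refine endpoint until the plan is exhausted.
import requests

def run_plan(plan: list[tuple[str, str]], timeout: float = 30.0) -> str:
    """plan is a list of (agent_base_url, instruction) pairs; the first step
    uses /complete, later steps use /refine with the previous result as context."""
    context = ""
    for i, (agent_url, instruction) in enumerate(plan):
        endpoint = "/complete" if i == 0 else "/refine"
        payload = {"prompt": instruction, "context": context}   # field names are assumed
        resp = requests.post(f"{agent_url}{endpoint}", json=payload, timeout=timeout)
        resp.raise_for_status()
        context = resp.json().get("response", "")               # next agent sees this as context
    return context

# Example: research with one agent, then summarise with another.
# answer = run_plan([
#     ("http://research-agent:8000", "Find recent TAO emission changes for subnet 80"),
#     ("http://summary-agent:8000", "Summarise the findings in three bullet points"),
# ])
```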

Standardized Agent API: The product includes a well-defined API interface that every miner’s agent must support. There are four main API endpoints each agent implements (a minimal skeleton is sketched after the list):

  1. **/complete** – The primary call where an agent receives an input (question or task) and returns a completion/answer. This is the core inference endpoint.
  2. **/refine** – Used when an agent is asked to refine or continue a previous answer. For multi-step workflows, the orchestrator can send additional context or intermediate results to this endpoint so the agent can produce a more refined output or next action.
  3. **/feedback** – An endpoint for programmatic feedback. After a round of question-answering, the orchestrator/validators call /feedback on the agent to provide an evaluation of how well it did (e.g. a score or signal based on quality, correctness, etc.). The agent can use this to adjust its internal state or weight its responses next time.
  4. **/human_feedback** – An endpoint to pass along explicit user feedback. If the end user rates the answer or corrects it, that information is fed to the agent via this call. This helps incorporate human preferences (a bit like Reinforcement Learning from Human Feedback).
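
To make the contract concrete, here is a minimal, hypothetical FastAPI skeleton of a miner agent exposing these four routes. The request and response fields are assumptions; the authoritative schemas come from the official miner template.

```python
# Minimal skeleton of a miner agent exposing the four endpoints.
# Request/response field names are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CompleteRequest(BaseModel):
    prompt: str

class RefineRequest(BaseModel):
    prompt: str
    context: str = ""

class FeedbackRequest(BaseModel):
    score: float

class HumanFeedbackRequest(BaseModel):
    thumbs_up: bool
    comment: str = ""

@app.post("/complete")
async def complete(req: CompleteRequest) -> dict:
    # Replace this stub with a real model call (local LLM, external API, tools, ...).
    return {"response": f"Echo: {req.prompt}"}

@app.post("/refine")
async def refine(req: RefineRequest) -> dict:
    # Use the supplied context (intermediate results) to produce the next step.
    return {"response": f"Refined '{req.prompt}' using {len(req.context)} chars of context"}

@app.post("/feedback")
async def feedback(req: FeedbackRequest) -> dict:
    # Persist or otherwise use the validator score to adapt future answers.
    return {"ok": True}

@app.post("/human_feedback")
async def human_feedback(req: HumanFeedbackRequest) -> dict:
    # Fold explicit user feedback into the agent's state.
    return {"ok": True}

# Run locally with, e.g.: uvicorn miner_agent:app --port 8000
```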

 

These APIs ensure a common protocol so that many different agents (potentially written by different people in different languages) can all plug into the Agent Builder orchestrator seamlessly. The team has even provided a sample miner repository and a simple web UI (using Gradio) to help new miners get started quickly – with just a few commands, one can deploy a basic agent that connects to the subnet. This dramatically lowers the barrier to entry for builders, addressing the historically complex setup of Bittensor mining by offering a more plug-and-play template.
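
For illustration only (this is not the team's actual Gradio app), a few lines of Gradio are enough to chat with a locally running agent during development; the endpoint URL and payload shape are assumed to match the skeleton above.

```python
# Hypothetical local test UI for a miner agent's /complete endpoint.
import gradio as gr
import requests

AGENT_URL = "http://localhost:8000"   # assumed local miner address

def ask_agent(prompt: str) -> str:
    resp = requests.post(f"{AGENT_URL}/complete", json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json().get("response", "")

gr.Interface(fn=ask_agent, inputs="text", outputs="text",
             title="Local agent sandbox").launch()
```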

User Interface / Integration Layer: Although primarily a backend network service now, the vision for Agent Builder includes an accessible user interface or integration into applications. End users (or developers building on top) will interact with the orchestrator via a client (possibly a web app or API). For instance, a developer could integrate Agent Builder into a chat application or a decentralized app, allowing their users to spin up custom AI agents on demand. The platform might provide a no-code or low-code interface in the future, where a user can specify what they want their agent to do (choosing from available agent “skills” in the network) and then deploy an agent easily.

Behind the scenes, the Bittensor blockchain (Subtensor) supports all this by handling registration, reputation, and tokenomics. Miners register their hotkeys to join Subnet 80, paying the TAO registration cost to get in. The orchestrator (validators) then queries and scores them over time. TAO emissions are automatically distributed based on performance metrics – agents that consistently provide high-quality answers, fast responses, and reliable uptime get a larger share of the rewards. Poor performers may get pruned out or earn less, ensuring the “pool” of available agents remains competitive and effective. This competitive mining aspect is essentially an AI agent contest running continuously, which drives the overall product to improve.
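
As a toy illustration of performance-weighted rewards (not the actual on-chain mechanism, which is Bittensor's Yuma consensus over validator-submitted weights), one could combine per-miner metrics into a single score and split an epoch's emission proportionally; the metric names and weights below are assumptions.

```python
# Toy emission split: weight each miner's metrics, normalise, allocate TAO.
# Weights and metric names are illustrative assumptions.

METRIC_WEIGHTS = {"quality": 0.5, "latency": 0.2, "reliability": 0.2, "user_feedback": 0.1}

def miner_score(metrics: dict[str, float]) -> float:
    """All metrics assumed normalised to [0, 1], higher is better."""
    return sum(METRIC_WEIGHTS[name] * metrics.get(name, 0.0) for name in METRIC_WEIGHTS)

def allocate_emissions(miners: dict[str, dict[str, float]], epoch_tao: float) -> dict[str, float]:
    """Split epoch_tao across miners in proportion to their combined scores."""
    scores = {hotkey: miner_score(m) for hotkey, m in miners.items()}
    total = sum(scores.values()) or 1.0
    return {hotkey: epoch_tao * s / total for hotkey, s in scores.items()}

print(allocate_emissions(
    {"miner_a": {"quality": 0.9, "latency": 0.7, "reliability": 1.0, "user_feedback": 0.8},
     "miner_b": {"quality": 0.6, "latency": 0.9, "reliability": 0.8, "user_feedback": 0.5}},
    epoch_tao=1.0,
))
```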

In summary, the Agent Builder build is a combination of: a network of AI miner agents plus a coordinating orchestrator/validator system, all packaged into a cohesive platform where the end “product” is an AI agent that users can tailor to their needs. It’s like an app store of AI capabilities – but instead of downloading apps, you are invoking a network of AI agents and composing them into a custom super-agent. The platform handles all the heavy lifting of finding the right agents, merging their knowledge, and learning from feedback, delivering to the user a powerful AI assistant as the final product.

 

WHO

Team Info

One of the known core contributors goes by the handle “star145s.” This developer has been actively involved in building the Agent Builder infrastructure and supporting miners. For example, star145s created a public sample repository (agent-builder) that provides a template agent implementation and a friendly UI, making it easier for new miners to join the subnet. This indicates the team’s commitment to lowering barriers and engaging the community. On the project’s Discord (which has attracted over 130 members in its early days), star145s and presumably other team members actively interact with miners – fixing bugs, answering questions, and pushing updates. The team also communicates through frequent announcements and updates on X (Twitter), often using a collective “we,” which suggests a small team collaborating rather than a single individual.

It’s worth noting that the TaoQuant team originally proposed Subnet 80 with a different concept (related to a decentralized fund for investment strategies), but they pivoted entirely to the Agent Builder concept in late 2025. This pivot highlights the team’s agile approach in pursuing what could have a bigger impact in the Bittensor ecosystem: enabling AI agents. Since the shift to Agent Builder, the team has been laser-focused on this AI agent platform, and the earlier concept is no longer being pursued. All official messaging now centers on AI agent development, with no overlap from the prior idea.

In terms of background, the team has not published biographies, but community speculation and the sophistication of Agent Builder’s design suggest they have strong machine learning backgrounds (possibly experience with LLMs and agent frameworks) and familiarity with decentralized systems. Some clues can be inferred from their work:

Implementing complex agent orchestration and support for things like the Berkeley Function Calling Leaderboard (BFCL) for tool use indicates deep knowledge of state-of-the-art AI research.

The ability to rapidly develop on Bittensor (which involves Rust/Substrate for chain aspects and Python for AI/model aspects) points to a technically skilled team that can straddle both AI and blockchain development.

Their proactive engagement with the Bittensor community and responsiveness to miner feedback reflect a professional approach, albeit under pseudonyms.

 

FUTURE

Roadmap

The Agent Builder roadmap is centered on rapidly evolving the platform from its initial launch (v1) into a robust, user-friendly ecosystem for AI agents. Although the team hasn’t published a formal public roadmap document, they have communicated several milestones and future plans through updates. Here’s what can be expected moving forward:

Current Phase – V1 Mining Contest and Stability: (Late Oct 2025 – Present) Agent Builder launched its v1 mining phase at the end of October 2025. This phase kicked off with a “Miner Contest”, essentially an open invitation for miners to join Subnet 80 and start building agents. In the first two weeks, the subnet saw strong engagement (130+ Discord members, nearly 40 miners actively working on agents). The initial goals in this phase are:

  • Onboarding Miners: Provide templates and support so that a variety of agents join the network. The team released an official miner template on October 29, 2025, to jump-start participation. They also started with a conservative emission setting (about 10% of full TAO rewards) to ensure things run smoothly before scaling up.
  • Testing & Feedback Loops: During this period, the focus is on stability and performance. The orchestrator and miner APIs are being battle-tested. Early feedback from miners has led to quick fixes – for example, an update was rolled out in early November to fix issues with the miner agent interface (input handling bugs, etc.). The team is actively monitoring quality metrics and ensuring the scoring system for Quality, Latency, Reliability, and User Feedback is working as intended.
  • Initial Evaluations: The team scheduled the first evaluations of miner agents shortly after launch (e.g., the first Sunday after miners started submitting agents). These evaluations by validators help rank the miners and begin adjusting emission rewards based on performance. It’s a crucial step to verify that the incentive mechanism (rewarding the best agents) functions correctly.

 

Short Term – Ramp Up and Feature Completeness: (Late Q4 2025) After the initial contest period and once the subnet proves stable, Agent Builder is likely to increase the emissions gradually towards 100%. This means miners will earn more, attracting even more participants and incentivizing improvements. We anticipate:

  • Full Emission and Competition: A move from 10% to a higher percentage of emission (perhaps 50%, then 100%) as more miners come online and the system can handle load. When full emissions are live, the subnet becomes fully competitive – only the best agents will thrive, really pushing innovation.
  • Multi-Turn & Tool Use Enhancements: The team has indicated they are working on multi-turn simulations and complex tool usage. In the short term, we’ll see Agent Builder agents become more capable at things like maintaining conversation state across multiple turns and using external tools or APIs reliably. This may involve integrating new modules or improving how the orchestrator sequences /refine calls. They are likely targeting benchmarks like BFCL (the Berkeley Function Calling Leaderboard), which measures how well AI models can use tools in multi-step tasks; incorporating those improvements directly benefits Agent Builder’s capabilities.
  • User Experience for Builders: Another near-term focus is making it as easy as possible for developers/miners to contribute agents. We could see improved documentation, more polished sample code, and perhaps a testing sandbox. The Gradio UI provided is one example, allowing miners to locally simulate how their agent would perform on the subnet before deploying. Smoothing out miner onboarding flows will help rapidly grow the library of agents available.

 

Medium Term – User-Facing Platform & Integration: (Q1–Q2 2026) Once the agent network is robust, Agent Builder will likely turn toward the end-user experience. The vision is to empower not just miners and developers, but also non-technical users to spin up AI agents. Possible roadmap items in this stage:

  • Agent Builder UI or Canvas: The introduction of a no-code “agent builder” interface. This could be a web app where a user can graphically select what capabilities they want (for example, “web browsing + math + translation”), and the system will compose an agent using the relevant miner agents in the background. Alternatively, it might allow chaining agents visually (node-based flow builders). Given the broader AI trend, a canvas for designing agent workflows would make sense. This would truly fulfill the promise of letting anyone create their own AI agent tailored to their needs.
  • AI Agent Marketplace: We may see a marketplace-like environment emerge. Top-performing agents (or combinations) could be showcased, and users might choose from these “prefab” agents for specific tasks. For instance, one agent might excel at legal research, another at personal fitness coaching. The platform could list these and allow one-click deployment for users, perhaps with a mechanism to share a portion of token rewards with the original miner developers as their agents get utilized.
  • Cross-Subnet Collaboration: Agent Builder doesn’t exist in isolation – it can draw data or services from other subnets. A medium-term goal might be deeper integration with other Bittensor subnets and external data sources. In particular, there’s an opportunity to plug into Masa’s real-time data subnet (SN-42) or others. For example, an Agent Builder agent might query a real-time data feed from another subnet to enhance answers (like up-to-the-second financial data or live news). The roadmap may include API bridges or partnerships to ensure Agent Builder agents have rich information at their disposal.
  • Performance Scaling: As usage grows, the team will need to optimize performance. This could mean refining the orchestration algorithm to reduce latency (so that involving multiple agents doesn’t slow down responses noticeably). It might also involve setting up tiered validator nodes or scaling infrastructure to handle many concurrent user queries. Since each query can involve multiple miners, ensuring low-latency networking and prompt consensus on scoring is important for a smooth user experience.

 

Long Term – Mature Ecosystem & Governance: (Late 2026 and beyond) In the long run, Agent Builder aims to be a foundational layer for decentralized AI services. Some forward-looking aspects of the roadmap might be:

  • Full Decentralized Agent Economy: Achieving a state where there are thousands of agents running, each highly specialized, and users globally are deploying AI agents through the platform for various real-world applications. At this stage, Agent Builder could facilitate agents that operate continually (not just on-demand queries, but autonomous agents that run 24/7 on the network doing tasks for users). An example might be a personal AI assistant that lives on Agent Builder, which you “own” and it handles tasks for you daily.
  • Improved Learning Algorithms: Incorporating more advanced learning techniques so agents improve autonomously. For instance, using on-chain feedback data to fine-tune models (a form of on-network reinforcement learning). Possibly even allowing agents to retrain or evolve using the data from all the interactions they handle, all while on the subnet (though heavy retraining might be offloaded, there could be mechanisms for incremental learning).
  • Governance and DAO: As the subnet grows, the team may decentralize control. The Bittensor Senate or a DAO of token holders could have a say in parameter tuning (like how emissions are allocated, or which new features to prioritize). The governance roadmap might include handing over certain keys or introducing on-chain governance proposals for Agent Builder upgrades, aligning with Bittensor’s move toward community ownership.
  • Broader Adoption and Partnerships: Long-term success will also be measured by adoption outside the immediate crypto community. The roadmap could involve partnerships with other AI platforms or even enterprise use-cases. For example, if Agent Builder proves its value, other projects might use it as a backend for AI agent functionality (much like an open, decentralized alternative to proprietary AI services). We might see collaborations where Agent Builder provides the agent layer while another platform provides a user base or distribution channel – effectively exporting Bittensor’s AI capabilities to mainstream apps.

 

While many of these future steps are aspirational, the trajectory is clear: Agent Builder is quickly moving from an experimental subnet into a full-fledged Agent Platform. The team’s fast execution in the contest phase and continuous improvements suggest that new features will roll out frequently. We can expect public updates from the team at each milestone (they have been actively sharing progress on social media). By monitoring those, one can see the roadmap unfold in real-time. The excitement in the community is high – Agent Builder is viewed as a potential game-changer in the Bittensor ecosystem, and the coming months will be focused on turning that potential into reality.

 
