With the number of new subnets being added, it can be hard to keep information current across all of them, so data may occasionally be slightly out of date.

Subnet 45

SWE-Rizzo

Emissions: –
Recycled: –
Recycled (24h): –
Registration Cost: –
Active Validators: –
Active Miners: –
Active Dual Miners/Validators: –

ABOUT

What exactly does it do?

SWE-Rizzo is a specialized Bittensor subnetwork focused on AI-driven software engineering, particularly automated code generation, bug fixing, and code completion. It leverages a decentralized network of miners (AI model servers) and validators to provide coding assistance and software development tools powered by open-source large language models. In the Bittensor ecosystem, SWE-Rizzo serves as the leading coding-focused subnet, optimizing prompt engineering for code tasks, automating software debugging, and accelerating development with high-performance AI models. The project’s core mission is to create robust, scalable tools that improve the software engineering process – from code generation and QA to testing and maintenance – using collective intelligence.

By tackling a “hard real-world problem” (reducing software bugs and automating fixes), SWE-Rizzo aims to deliver practical downstream products and also integrate with other subnets for a stronger overall network. Notably, this subnet competes on Princeton University’s SWE-Bench – a software engineering benchmark – alongside models from OpenAI, Anthropic, Meta, etc., to validate its performance on code tasks. Top-performing solutions from SWE-Rizzo’s miners are even published on the official SWE-Bench leaderboard, demonstrating real-world efficacy and measuring the subnet against industry-leading AI coding models. In essence, SWE-Rizzo’s role is to “redefine software engineering with decentralized intelligence”, providing an open, community-driven platform for AI-assisted coding and problem-solving.

PURPOSE

What exactly is the 'product/build'?

SWE-Rizzo operates through a network of miners and validators collaborating via Bittensor’s consensus protocol. Miners in this subnet run specialized AI pipelines that can take a software repository and a bug/issue as input and generate a patch or code fix as output. Each miner hosts their pipeline behind a Bittensor synapse (an interface exposing the service) so that validators can query and execute the miner’s code. Validators are responsible for selecting tasks, invoking miners’ pipelines, and evaluating the results to determine rewards. The workflow is as follows (automated by the subnet’s incentive program):

  • A validator grabs a new task from the SWE-Bench dataset – e.g. a description of a bug or issue in a specific code repository.
  • The validator then fetches the miner’s code from its synapse – the miner provides a Python script that knows how to check out the repository, read the issue, and attempt a fix.
  • The miner’s code is executed inside an isolated Docker container along with the target repository and issue details, ensuring a sandboxed, reproducible environment for running the patch-generation pipeline.
  • The patch output produced by the miner’s pipeline is then automatically evaluated using SWE-Bench’s evaluation methods (e.g. running test cases or diff checks defined by the benchmark).
  • Based on the evaluation results (e.g. did the patch successfully fix the bug and meet quality criteria), the miner is rewarded in proportion to their performance.
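The five steps above can be sketched as a simple Python loop. All function names here are hypothetical stand-ins for the subnet's real components, which live in the open-source repository:

```python
import random

# Hypothetical stand-ins for SWE-Bench tasks (real tasks reference actual repos).
SWE_BENCH_TASKS = [
    {"repo": "example/webapp", "issue": "Fix off-by-one error in pagination"},
    {"repo": "example/parser", "issue": "Handle empty input without crashing"},
]

def fetch_miner_pipeline(miner_uid):
    """Stand-in for pulling the miner's patch-generation script via its synapse."""
    return lambda task: f"--- patch for {task['repo']}: {task['issue']}"

def run_in_sandbox(pipeline, task):
    """Stand-in for executing the miner's code in an isolated Docker container."""
    return pipeline(task)

def evaluate_patch(patch, task):
    """Stand-in for SWE-Bench's automated checks (test runs, diff validation)."""
    return random.random()  # real scoring runs the benchmark's test suite

def validator_round(miner_uids):
    task = random.choice(SWE_BENCH_TASKS)          # 1. pick a benchmark task
    scores = {}
    for uid in miner_uids:
        pipeline = fetch_miner_pipeline(uid)        # 2. fetch the miner's code
        patch = run_in_sandbox(pipeline, task)      # 3. sandboxed execution
        scores[uid] = evaluate_patch(patch, task)   # 4. benchmark evaluation
    return scores                                   # 5. scores drive rewards

print(validator_round([1, 2, 3]))
```

The real validator additionally sets on-chain weights from these scores; this sketch only illustrates the task/execute/evaluate shape of a single round.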

 

This incentive mechanism means miners compete to write the most effective bug-fixing AI pipelines, since only high-performing patches earn strong rewards. Validators play a crucial role in measuring performance and ensuring fairness: they conduct offline benchmarking and real-time tests to assess each miner’s code quality, correctness, and efficiency. The subnet’s consensus (now under Bittensor’s dTAO system) allocates token rewards based on these evaluations, so miners are directly incentivized to improve their models and solutions. Notably, miners must adhere to the required interface (their synapse script must accept a repo and issue and return a patch) and likely need to frequently update their models/code as the benchmark tasks evolve. This competitive dynamic, combined with decentralized validation by multiple independent validators, helps ensure that the subnet continually produces higher-quality automated code solutions over time. In summary, miners contribute AI-driven code fixes and validators rank and reward them using a rigorous benchmark-driven scoring system, aligning the network on solving real software bugs for token incentives.

 

Products, Services, and Applications Built on SWE-Rizzo

One distinguishing goal of Subnet 45 (SWE-Rizzo) is to enable tangible products and tools on top of its mining/validation network, not just theoretical AI services. Over time the team has developed “Gen42”, a suite of user-facing applications that showcase the subnet’s capabilities. Key products and interfaces include:

  • Gen42 Chat Application: An interactive web app for code-based Q&A and troubleshooting. This chat interface allows developers to “converse with your code,” meaning you can ask questions about a codebase or request fixes and get responses generated by the subnet’s AI agents. It’s essentially an AI coding assistant chat bot powered by the best-performing miners (think of it as a decentralized version of ChatGPT specialized for programming help).
  • Code Completion API: An OpenAI-compatible REST API that developers can integrate into their development environment or tools. For instance, the team has made it compatible with the VS Code extension Continue.dev, so developers can get auto-completions and suggestions from SWE-Rizzo’s models directly in their IDE. This API follows the OpenAI schema, making it easy to plug into existing editor plugins and workflows.
  • Command-Line Interface (CLI) Tool: Often referred to as the SWE CLI, this is a command-line utility that lets users invoke the subnet’s software engineering pipelines on their own code projects. Developers can use it to run automated fixes or analyses on a repository straight from the terminal. This CLI is expected to be useful not only to end-users but also other Bittensor subnets – for example, a different subnet could call SWE-Rizzo’s CLI to auto-fix code as part of a larger pipeline, reinforcing cross-subnet collaboration.
  • Gen42 Web Portal (Gen42.ai): A website serving as the home for these services, including account management and a dashboard. Gen42 offers a subscription-based model for access, with a free trial available. Through this portal, users can try out demos, sign up for the coding assistant service, and monitor performance. (Gen42 is described as the first direct-to-consumer product born from this subnet’s technology.)
  • Integration Plugins: The team emphasizes integration with existing developer platforms. Gen42 is being integrated with popular IDEs and Git platforms. For instance, there are plans (and ongoing work) to integrate AI assistance into GitLab and other version control systems, so that SWE-Rizzo’s AI can automatically assist with merge requests or code reviews in enterprise workflows. The Gen42 assistant also supports multi-language coding and can plug into various IDEs via extensions (as noted in their roadmap).
  • SWE-Bench Leaderboard & Analytics: While not a traditional product, the subnet heavily uses the SWE-Bench benchmarking system. The top solutions from miners are published on SWE-Bench’s public leaderboard, effectively turning the subnet’s progress into an externally visible “product” – one can see how SWE-Rizzo’s best AI models rank against other state-of-the-art code models globally. This transparency helps drive improvements and also showcases the subnet’s capability to the broader AI community.
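Because the Code Completion API follows the OpenAI schema, calling it looks like building a standard chat-completion request. The base URL and model name below are placeholders, not confirmed endpoints; the real values would come from your Gen42 account and documentation:

```python
import json

# Assumption: the service exposes an OpenAI-compatible base URL like this.
BASE_URL = "https://api.gen42.ai/v1"  # placeholder, not a confirmed URL

def build_completion_request(prompt, model="swe-rizzo-code"):
    """Build an OpenAI-schema chat-completion payload (model name illustrative)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature suits deterministic code tasks
    }

payload = build_completion_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (including the Continue.dev extension mentioned above) can be pointed at such a base URL by swapping it in place of the default OpenAI endpoint.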

 

Overall, SWE-Rizzo has evolved beyond just an infrastructure network; it underpins a full suite of software engineering AI services. From an AI support chatbot for developers, to IDE integration, to automated code review and fixes, the products stemming from Subnet 45 illustrate its practical impact. These applications are continually refined and expanded – for example, Gen42 has real-time suggestions, multi-language support, and is working toward deeper enterprise features according to the team’s announcements. By focusing on real use-cases (like assisting IT support via the earlier MSPTech application on a sister subnet, and now coding assistance via Gen42), Team Rizzo is ensuring that the subnet’s AI innovations directly benefit end users in the software industry.

 

Technical Architecture and Infrastructure

SWE-Rizzo’s architecture combines Bittensor’s decentralized blockchain protocol with a specialized AI model pipeline for code. Here’s a breakdown of key technical components:

Open-Source AI Models: At the core, miners run large language models (LLMs) specialized for coding tasks. The project explicitly uses open-source LLMs (likely code-focused models such as CodeLlama, StarCoder, etc.) rather than proprietary models. This openness aligns with Bittensor’s ethos and allows any miner to deploy a model that fits the subnet’s criteria. Many models are fine-tuned to perform code generation and debugging based on the SWE-Bench tasks. The subnet does not fixate on one model; instead miners are free to improve their pipelines with any model or algorithm that yields better bug fixes. Miners often fine-tune models on code and problem-solution data to optimize for the SWE-Bench evaluations.

Miner Synapse & Execution Sandbox: Each miner exposes a synapse – essentially an RPC endpoint on Bittensor – implementing the required interface for this subnet (accepting a repository + issue, returning a patch). When validators call this synapse, the miner’s code (the AI pipeline plus any scripting around it) is packaged and run inside a Docker container. This containerization is crucial: it provides a consistent runtime where the code can safely clone the target repository, apply fixes, and run tests without affecting the host or other network operations. The use of Docker ensures every miner’s solution is evaluated in an identical environment, which improves fairness and reproducibility of results.
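A validator might assemble the sandboxed run roughly like this. The flags and paths are illustrative of common container-isolation practice, not the subnet's actual configuration:

```python
import shlex

def build_sandbox_command(image, repo_url, issue_file):
    """Construct a docker run command that isolates a miner's pipeline.

    Resource caps keep every miner's run comparable; the read-only mount
    hands the task description into the container. Entry point path is
    hypothetical.
    """
    return [
        "docker", "run", "--rm",
        "--memory", "8g",                       # cap memory for fair comparison
        "--cpus", "4",                          # cap CPU likewise
        "-e", f"REPO_URL={repo_url}",           # repo to clone inside the sandbox
        "-v", f"{issue_file}:/task/issue.json:ro",
        image,
        "python", "/pipeline/run.py",           # miner's entrypoint (hypothetical)
    ]

cmd = build_sandbox_command(
    "swe-rizzo/miner:latest",                   # illustrative image name
    "https://github.com/example/repo",
    "/tmp/issue.json",
)
print(shlex.join(cmd))
```

Note the command is only constructed here, not executed; running it would require Docker and the miner's actual image.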

Benchmark-Driven Validation: The subnet’s validation logic is tightly coupled with Princeton’s SWE-Bench framework. SWE-Bench provides a suite of software engineering tasks (like fixing a known bug) along with evaluation metrics (did the patch fix the bug? does it pass all tests? etc.). Validators essentially serve as benchmark oracles – they feed tasks to miners and use SWE-Bench’s automated tests to score the responses. This is a departure from Bittensor’s original generic “prompt/response” validation; here the correctness can be measured objectively by running code. It represents an on-chain adaptation of a traditional leaderboard evaluation. Validators likely still produce “trust scores” or weights for miners (as in Bittensor’s mechanism), but those are directly informed by these benchmark results rather than just subjective scoring.
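Turning objective benchmark results into on-chain weights can be as simple as proportional normalization. This is an illustration of the idea, not the subnet's actual incentive code, which is defined by its repository and Bittensor's Yuma consensus:

```python
def scores_to_weights(pass_rates):
    """Normalize raw SWE-Bench pass rates into weights that sum to 1.

    A plain proportional scheme for illustration; the real logic may add
    smoothing, burn mechanisms, or winner-take-most dynamics.
    """
    total = sum(pass_rates.values())
    if total == 0:
        # No miner solved anything: fall back to equal weights.
        n = len(pass_rates)
        return {uid: 1.0 / n for uid in pass_rates}
    return {uid: rate / total for uid, rate in pass_rates.items()}

weights = scores_to_weights({"miner_a": 0.60, "miner_b": 0.25, "miner_c": 0.15})
print(weights)
```

Whatever the exact scheme, the key property is the one described above: weights derive from objective test outcomes rather than subjective scoring.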

Bittensor Blockchain Integration: Subnet 45 operates within the Bittensor network (Opentensor), meaning it is registered on the Bittensor substrate chain and follows all on-chain governance and tokenomic rules. Each subnet has a unique network UID (netuid); in this case SWE-Rizzo is identified as netuid 45 on-chain (often referred to as SN45). The Bittensor chain handles staking and token emission for the subnet: miners and validators stake the native token TAO (or a derivative “subnet token” after dTAO) and earn rewards in it for their contributions. The consensus model is often described as Proof-of-Intelligence – participants are rewarded for producing useful AI work. In practice, SWE-Rizzo’s blockchain logic allocates daily TAO emissions to validators and miners according to the SWE-Bench-based performance signals (ensuring that better code solutions translate to higher on-chain rewards). Pallet modules on the Subtensor chain manage these reward distributions and enforce that the subnet meets certain milestones (e.g., maintaining a minimum level of activity or performance) to continue receiving global TAO emissions.

Software Stack: The implementation of SWE-Rizzo is primarily in Python, which is natural given the AI/ML focus and the need for scripting git operations. The subnet’s open-source code repository (originally a fork of Bittensor’s Subnet-1 “Prompting” code) is about 73% Python, with some Jupyter Notebooks (for development and evaluation), shell scripts for automation, and Dockerfiles for the container setup. The miners likely utilize machine learning libraries (PyTorch, HuggingFace Transformers, etc.) inside their Docker environments to run the code models, although specific libraries can vary by miner. Communication between validators and miners happens through Bittensor’s networking protocol (axon/dendrite endpoints, with RPC-style calls under the hood), so the whole subnet forms a peer-to-peer network of nodes exchanging tasks and results.

Inter-subnet Connectivity: SWE-Rizzo is designed not to exist in isolation but to integrate with other subnets. Team Rizzo has emphasized cross-subnet collaboration as a pillar of their approach. Concretely, Subnet 45 (SWE-Rizzo) works in tandem with Subnet 20 (BitAgent) for orchestrating tasks and with Subnet 54 (an “image-to-code” subnet) for complementary capabilities. For example, BitAgent (SN20) focuses on tool automation and agent workflows – it might route a coding-related task to SWE-Rizzo’s Gen42 for code assistance, then use the result in a larger automation script. Similarly, an image analysis subnet could call on SWE-Rizzo if it needs to generate code as part of its pipeline. This modular design of subnets, each providing a distinct AI service (code, image, language, etc.), is enabled by standard interfaces and the shared TAO economy. SWE-Rizzo’s Gen42 API is part of this vision, effectively making the subnet’s functionality accessible to other programs and subnets via API calls.

Security and Reliability: The Team Rizzo operators run a top-tier validator node (“Rizzo Validator”) with bare-metal servers, ensuring high uptime and reliable performance for the subnet. They even partnered to obtain a $25M on-chain insurance coverage via Nexus Mutual for their validator’s stakers, reflecting a strong emphasis on security. On the subnet level, containerization and open-source scrutiny of miner code provide security – any malicious or low-quality miner code would fail the SWE-Bench tests and not be rewarded, naturally ejecting it over time. The code execution is sandboxed to prevent harm. Moreover, by focusing on relatively smaller, open models (often ≤7B or 13B parameters for practicality), the subnet ensures miners can run on commodity GPUs, which decentralizes participation (this approach was explicitly taken in their prior Subnet 20 as well, using Apache-2 licensed models under 8B parameters, and carries over into SWE-Rizzo’s philosophy).

In summary, SWE-Rizzo’s technical backbone is a marriage of blockchain (for coordination and incentives) and AI infrastructure (for model training and inference). It stands on the shoulders of Bittensor’s network protocol but extends it with a custom evaluation system (SWE-Bench integration, Dockerized execution) tailor-made for the software engineering domain.


 

WHO

Team Info

Team Rizzo is the driving force behind Subnet 45 (SWE-Rizzo). It’s a well-known group in the Bittensor community, also responsible for Subnet 20 (BitAgent) and the reputable Rizzo Validator node. The team is co-founded by the pseudonymous Frank Rizzo, who serves as the public face and business lead (the name borrows a well-known character’s, used here as the co-founder’s alias). Frank Rizzo is active on social media (Twitter/X handle @FrankRizz07) and frequently shares updates and thought leadership about Bittensor and the team’s subnets. The other primary figure is the operator of the Rizzo Validator (Twitter @RizzoValidator), suggesting a technical lead focused on the infrastructure side. These two handles were explicitly listed as the development team for Gen42, highlighting their presence on public forums.

Internally, Team Rizzo is quite robust for a subnet team. As of early 2025, they had 13 core members: 6 developers/engineers, 3 validator operators, 2 operations experts, and 2 communications/multimedia specialists. This mix of engineering talent and business/ops support is somewhat unique in Bittensor, as many subnets are run by small developer-focused teams. Frank Rizzo has emphasized the importance of having both technical excellence and business acumen – the team strives for a balance, ensuring they not only build great AI models but also market their products and engage the community.

On the development front, the open-source contributions for SWE-Rizzo can be found on GitHub. The subnet’s code repository (initially under the name Gen42) was forked from Bittensor’s original prompting subnet and then extended. Contributors from Team Rizzo (possibly under handles like “brokespace” or others in the organization) actively maintain this codebase. They also produce documentation and quickstart guides for new miners and validators – for example, a Validator Quickstart Guide and Miner Quickstart Guide are referenced in their docs to help community members join the subnet. The team engages with the community via the Bittensor Discord and subreddit, often answering questions and sharing progress. In fact, Team Rizzo is frequently cited by community members as a reliable and innovative group; users on Reddit often recommend checking out subnets 20 and 45 by Team Rizzo for their strong utility and track record.

Beyond the core team, there are likely numerous community miners contributing to SWE-Rizzo. These miners may not be official team members, but they develop pipelines to compete on the subnet. Team Rizzo fosters this broader community by providing leaderboards and publishing top miner achievements (getting one’s solution onto the Princeton SWE-Bench leaderboard is a form of public recognition). This open-competition approach means the “team” in effect includes all miners working on improving AI for code; Team Rizzo coordinates and sets the vision, but many independent contributors participate in the subnet’s success.

Team Rizzo’s strong presence and consistent performance have earned them respect in the Bittensor ecosystem. Even Bittensor’s founder Jacob Steeves acknowledged them as “one of – if not the – most consistent and performant validators on Bittensor… adding value not just to stakers, but the subnets they validate in”. This reputation underscores that SWE-Rizzo is in capable hands, with a team committed to long-term innovation and community trust.

 

FUTURE

Roadmap

SWE-Rizzo’s journey is an evolving one, with clear milestones already achieved and ambitious plans ahead. Here’s a structured look at its historical context, current status, and future roadmap:

Origins and Historical Context: The concept for an AI coding assistant subnet emerged from Team Rizzo’s earlier work on AI agents. Initially, the project was codenamed “Gen42”, reflecting its focus on code generation and possibly a nod to “the answer to life, the universe, and everything” (42) in software form. In mid-2024, Team Rizzo began developing Gen42 as a separate subnet, forking the base code from Bittensor’s Subnet-1 (Prompting) to customize it for software engineering tasks. Early on, this initiative was associated with Subnet UID 42 on test networks or documentation – in fact, external sources referred to “Subnet 42: Gen42, an open-source AI coding assistant” when describing the new Bittensor subnets in late 2024. This indicates that Gen42 was one of the new wave of subnets around that time. However, due to various on-chain updates and perhaps re-registration under the dynamic TAO (dTAO) system, the subnet eventually went live on mainnet with NetUID 45. (It appears the network UID 42 might have been a temporary slot or an initial registration that was later superseded by UID 45 for mainnet launch around August 18, 2024.) To avoid confusion and better brand the project, the team officially renamed Gen42 to “SWE-Rizzo.” The rename, announced on social media in late 2024, was meant to reflect a broader vision of building the leading AI-powered software engineering subnet (SWE stands for Software Engineering) under the Rizzo team’s umbrella. Essentially, SWE-Rizzo (Subnet 45) is the direct evolution of the Gen42 idea, inheriting its code and purpose but with a clearer name and mandate.

Milestones Achieved: Since launching, Subnet 45 (SWE-Rizzo) has hit several key milestones:

  • Mainnet Launch & Benchmark Integration (2024): The subnet was registered and began producing blocks in August 2024. Validators and miners successfully integrated the SWE-Bench dataset into the network’s operations, meaning real benchmark tasks were being used for rewards – a first-of-its-kind setup on Bittensor. Early miners started to solve tasks, and the best solutions were submitted to the Princeton SWE-Bench leaderboard, immediately placing the subnet’s AI on the map next to corporate labs. This proved the concept: decentralized miners could tackle complex coding tasks at a competitive level.
  • Gen42 Product Launch (late 2024): The team rolled out Gen42.ai, the front-end platform for users to interact with the subnet’s AI. By the end of 2024, Gen42 Chat, API, and CLI were in a functional state. A subscription model was established, and a free beta or demo period gathered feedback from early adopters. This marked a shift from purely internal development to customer-facing service – fulfilling Team Rizzo’s philosophy that subnets need tangible products and revenue for long-term survival.
  • Integration with Bittensor “Interact” (Q1 2025): The team achieved an “Interact integration”, meaning Gen42’s capabilities were plugged into the Bittensor Interact platform (an interface where different subnets and agents can interact). This likely allowed other AI agents on Bittensor to call Gen42 for coding tasks seamlessly. It demonstrates a working collaboration between SN45 and SN20 (and possibly others), where, for example, BitAgent’s MSPTech agent could use Gen42 to solve a coding issue during an IT support ticket.
  • Community & Performance Growth: Through late 2024 and into 2025, SWE-Rizzo grew to have dozens of miners and a strong validator set. The subnet’s rank within Bittensor climbed (as of early 2025 it was often among the top subnets by stake and activity). The team kept the community engaged with frequent updates: they attended the Endgame conference (Austin, 2023) to showcase their work, published strategy blogs (e.g., on Medium about dTAO strategy), and even secured the aforementioned Nexus Mutual insurance partnership to protect stakeholders – a milestone that builds trust for anyone delegating TAO to their validator.
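Concretely, each SWE-Bench instance ships with two test lists: FAIL_TO_PASS (tests the ground-truth fix made pass) and PASS_TO_PASS (tests that must keep passing). Below is a hedged sketch of how a validator might score a miner's patch from those fields; the field names follow the public dataset, while the scoring rule is our illustrative reading of SWE-Bench's "resolved" criterion, not the subnet's published reward code:

```python
# A minimal SWE-Bench-style task record (instance_id is a made-up example).
task = {
    "instance_id": "example__repo-1234",
    "FAIL_TO_PASS": ["tests/test_bug.py::test_regression"],
    "PASS_TO_PASS": ["tests/test_core.py::test_ok"],
}

def score(task: dict, test_results: dict[str, bool]) -> float:
    """1.0 if the miner's patch resolves the issue without breaking anything,
    else 0.0 -- an instance counts as resolved only if every FAIL_TO_PASS
    test now passes AND every PASS_TO_PASS test still passes."""
    fixed = all(test_results.get(t, False) for t in task["FAIL_TO_PASS"])
    intact = all(test_results.get(t, False) for t in task["PASS_TO_PASS"])
    return 1.0 if (fixed and intact) else 0.0

print(score(task, {"tests/test_bug.py::test_regression": True,
                   "tests/test_core.py::test_ok": True}))   # 1.0
```

Missing test results default to failure, so a patch that crashes the test harness scores zero rather than slipping through.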

 

Current Focus (2025): Presently, SWE-Rizzo is in an active growth and refinement phase. According to Team Rizzo’s updates, the subnet is focused on:

  • Advancing on SWE-Bench Leaderboards: Continually improving the models to climb towards the top rankings on the external benchmark. This involves iterating on model fine-tuning, incorporating larger or better architectures as they become available open-source, and encouraging miners to experiment. The team explicitly aims for “industry-leading validation” results – trying to outperform even some closed-source models in specific coding tasks.
  • Enhancing the AI Models: Ongoing R&D is happening to optimize prompt engineering and fine-tuning techniques for code generation. They are likely exploring improvements like better few-shot prompt strategies, chain-of-thought for coding, or integrating retrieval (e.g., pulling relevant code context) to boost the AI’s effectiveness. The miners are also encouraged to update their pipelines with any new research findings (for example, if a new version of Code Llama or StarCoder is released, miners can adapt it to the subnet).
  • Enterprise and Developer Integrations: A major part of the roadmap is to make the AI assistant useful in real developer workflows. Upcoming integrations mentioned include GitLab (so that Gen42 can work on merge requests or issues within a GitLab repository) and broader IDE support (plug-ins for JetBrains IDEs, VS Code, etc.). The team is exploring enterprise partnerships – possibly getting pilot programs with software companies similar to how their Subnet 20 partnered with MSPs. The idea is to have Gen42 act as an “AI pair programmer” or automated code reviewer in professional environments.
  • Cross-Subnet Synergy: They continue to collaborate with Subnet 20 (BitAgent) on joint use-cases. For instance, SN20’s GoGoAgent or MSPTech (which automate IT support tasks) can hand off programming challenges to SN45’s Gen42, creating a multi-agent system. Additionally, integration with Subnet 54 (image-to-code) could allow solving tasks like generating code from a screenshot or diagram. This synergy is actively being improved to demonstrate the power of a network of AI services rather than isolated silos.
  • Community & Miner Incentives: With the shift to dTAO (dynamic TAO rewards) across Bittensor, Team Rizzo is ensuring that SWE-Rizzo meets all the milestone requirements to keep emissions flowing. They are adjusting the incentive mechanisms as needed – for example, fine-tuning the reward function or task selection process to make sure miners are fairly paid and motivated to tackle the hardest problems. They’ve indicated a willingness to “work lessons learned to refine our incentive mechanism” based on previous subnet versions. The community is regularly updated on progress through forums and SWE-Bench tracking dashboards.
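As an illustration of the kind of incentive tuning involved, here is one plausible way to turn raw per-miner scores into normalized on-chain weights: a low-temperature softmax that concentrates emissions on the best performers. The subnet's actual reward curve is not documented here, so treat this purely as a sketch of the design space:

```python
import math

def miner_weights(scores: dict[int, float], temperature: float = 0.1) -> dict[int, float]:
    """Map raw task scores (e.g. resolve rates) to weights summing to 1.
    A lower temperature sharpens the distribution, paying top miners
    disproportionately; a higher one spreads rewards more evenly."""
    exps = {uid: math.exp(s / temperature) for uid, s in scores.items()}
    total = sum(exps.values())
    return {uid: e / total for uid, e in exps.items()}

# Hypothetical miner UIDs and scores.
w = miner_weights({7: 0.42, 11: 0.35, 23: 0.10})
print({uid: round(x, 3) for uid, x in w.items()})
```

Adjusting the temperature (or swapping in a different normalization) is exactly the kind of lever a team pulls when "fine-tuning the reward function" to keep miners motivated on the hardest problems.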

 

Future Plans: Looking further ahead, Team Rizzo envisions SWE-Rizzo as a long-term platform for AI-driven software development. Their roadmap (as communicated in Q&A and marketing materials) includes:

  • Achieving Top Benchmark Status: They want Subnet 45 to rank at or near the top of SWE-Bench, effectively proving that a decentralized network of small models can rival big-tech AI on coding tasks. This could involve incorporating new model innovations (the team will surely watch things like GPT-4 Code Interpreter capabilities and try to narrow the gap with open models).
  • Expanding the Scope of Tasks: While the initial focus is on bug-fixing and code completion, the subnet could expand to other software engineering tasks – e.g. automated code documentation, refactoring large codebases, security vulnerability patching, etc. The “hard real world problem” ethos means they will continuously identify pain points in software development to tackle next. The SWE-Bench itself may evolve or new benchmarks (like code security or performance optimization challenges) could be adopted.
  • Greater Productization and Revenue Streams: Following the mantra that subnets need sustainable revenue, Team Rizzo will likely introduce premium features for Gen42, enterprise licensing, or partnerships to monetize the service (beyond just token rewards). They’ve already piloted a subscription model, but we might see enterprise tiers or self-hosted versions for companies, etc. Success stories from the MSPTech pilot (Subnet 20) could translate into Gen42 finding commercial use in tech companies or open-source projects as a dependable AI coder. Ensuring monthly recurring revenue (MRR) is a stated long-term goal to complement token incentives.
  • Continuous Decentralization and Scale: As more developers find out about Gen42, the team expects more miners to join and more users to query the network. The architecture will need to scale – possibly involving optimization for lower latency (so that IDE suggestions feel instant) and support for consumer GPUs so that anyone with a decent GPU can contribute a miner. They’ve already emphasized models that run on consumer-grade hardware, and moving forward they may adopt techniques like model distillation or quantization to keep the barrier to entry low while scaling out the number of miner nodes.
  • Governance and Open Collaboration: The subnet will also mature in terms of governance. Team Rizzo, being a large stakeholder, will guide the direction, but they are likely to involve the community in decisions (for instance, if the community wants to target a new benchmark or alter the reward formula, those could be discussed on forums). As dTAO evolves, subnets might compete for delegations; Team Rizzo’s plan is clearly to remain at the forefront of innovation to attract support, which in turn funds further R&D. They’ve shown a proactive approach in marketing and transparency, which should continue into the future.
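Back-of-envelope arithmetic shows why quantization matters for the consumer-GPU goal. The 20% overhead factor below is a rough allowance for activations and KV cache, not a measurement:

```python
def vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough weight-memory estimate for an LLM: params x (bits / 8) bytes,
    plus ~20% headroom. Back-of-envelope only."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30 * overhead

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{vram_gb(7, bits):.1f} GiB")
```

At 16-bit precision a 7B model needs roughly 15 GiB just for weights, out of reach of most consumer cards, while 4-bit quantization brings it under 4 GiB, which is why low-bit formats keep the miner barrier to entry low.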

 

In conclusion, Subnet 45 “SWE-Rizzo” has rapidly grown from an idea to a leading AI coding network in less than a year. It has its roots in earlier subnets (borrowing code from the original Prompting subnet and inspiration from BitAgent SN20) and has been rebranded and refined to zero in on software engineering automation. The roadmap ahead is about deepening the technology and broadening the impact: the team aims for technical excellence (best-in-class AI coding assistants) and real-world adoption (making Gen42 a ubiquitous developer tool). With a strong foundation and a clear strategic vision, SWE-Rizzo is poised to play a significant role in both the Bittensor ecosystem and the wider AI-for-code landscape, pushing decentralized AI to tackle ever more complex programming challenges.

 

NEWS

Announcements

MORE INFO

Useful Links