With the number of new subnets being added, it can be hard to keep information current across all of them, so some data here may be slightly out of date from time to time.
SWE-Rizzo is a specialized Bittensor subnetwork focused on AI-driven software engineering, particularly automated code generation, bug fixing, and code completion. It leverages a decentralized network of miners (AI model servers) and validators to provide coding assistance and software development tools powered by open-source large language models. In the Bittensor ecosystem, SWE-Rizzo serves as the leading coding-focused subnet, optimizing prompt engineering for code tasks, automating software debugging, and accelerating development with high-performance AI models. The project’s core mission is to create robust, scalable tools that improve the software engineering process – from code generation and QA to testing and maintenance – using collective intelligence.
By tackling a “hard real-world problem” (reducing software bugs and automating fixes), SWE-Rizzo aims to deliver practical downstream products and also integrate with other subnets for a stronger overall network. Notably, this subnet competes on Princeton University’s SWE-Bench – a software engineering benchmark – alongside models from OpenAI, Anthropic, Meta, etc., to validate its performance on code tasks. Top-performing solutions from SWE-Rizzo’s miners are even published on the official SWE-Bench leaderboard, demonstrating real-world efficacy and measuring the subnet against industry-leading AI coding models. In essence, SWE-Rizzo’s role is to “redefine software engineering with decentralized intelligence”, providing an open, community-driven platform for AI-assisted coding and problem-solving.
SWE-Rizzo operates through a network of miners and validators collaborating via Bittensor’s consensus protocol. Miners in this subnet run specialized AI pipelines that can take a software repository and a bug/issue as input and generate a patch or code fix as output. Each miner hosts their pipeline behind a Bittensor synapse (an interface exposing the service) so that validators can query and execute the miner’s code. Validators are responsible for selecting tasks, invoking miners’ pipelines, and evaluating the results to determine rewards. The workflow is as follows (automated by the subnet’s incentive program):
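The repo-in / patch-out contract described above can be sketched as a pair of plain data types plus a solve function. All names here (`RepairTask`, `RepairResult`, `solve`) are illustrative assumptions, not the subnet's actual synapse schema:

```python
from dataclasses import dataclass

# Hypothetical shapes for the repo-in / patch-out contract.
# Field names are illustrative, not the subnet's actual wire format.
@dataclass
class RepairTask:
    repo_url: str    # repository the issue lives in
    issue_text: str  # natural-language bug report

@dataclass
class RepairResult:
    patch: str  # unified diff the validator will apply and test

def solve(task: RepairTask) -> RepairResult:
    """Placeholder for a miner pipeline: clone the repo, run the model, emit a diff."""
    # A real miner would invoke its LLM pipeline here; we return a stub diff.
    diff = f"--- a/fix\n+++ b/fix\n# patch for: {task.issue_text[:40]}"
    return RepairResult(patch=diff)
```

A validator would call `solve` through the miner's synapse endpoint and then evaluate the returned diff, rather than invoking it locally as shown here.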
This incentive mechanism means miners compete to write the most effective bug-fixing AI pipelines, since only high-performing patches earn strong rewards. Validators play a crucial role in measuring performance and ensuring fairness: they conduct offline benchmarking and real-time tests to assess each miner’s code quality, correctness, and efficiency. The subnet’s consensus (now under Bittensor’s dTAO system) allocates token rewards based on these evaluations, so miners are directly incentivized to improve their models and solutions. Notably, miners must adhere to the required interface (their synapse script must accept a repo and issue and return a patch) and likely need to frequently update their models/code as the benchmark tasks evolve. This competitive dynamic, combined with decentralized validation by multiple independent validators, helps ensure that the subnet continually produces higher-quality automated code solutions over time. In summary, miners contribute AI-driven code fixes and validators rank and reward them using a rigorous benchmark-driven scoring system, aligning the network on solving real software bugs for token incentives.
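As a toy illustration of the ranking step, a validator that has collected one benchmark score per miner could normalize those scores into reward weights as below. Bittensor's actual Yuma consensus aggregates weights across many validators and is considerably more involved; this only sketches the idea that better patches earn a larger share:

```python
def scores_to_weights(scores: dict[str, float]) -> dict[str, float]:
    """Normalize raw benchmark scores into weights summing to 1.0.

    A sketch of the per-validator ranking step; the on-chain consensus
    that combines weights from many validators is more involved.
    """
    if not scores:
        return {}
    total = sum(scores.values())
    if total == 0:
        # No miner scored anything: fall back to a uniform split.
        n = len(scores)
        return {m: 1.0 / n for m in scores}
    return {m: s / total for m, s in scores.items()}
```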
Products, Services, and Applications Built on SWE-Rizzo
One distinguishing goal of Subnet 45 (SWE-Rizzo) is to enable tangible products and tools on top of its mining/validation network, not just theoretical AI services. Over time the team has developed “Gen42”, a suite of user-facing applications that showcase the subnet’s capabilities. Key products and interfaces include:
Overall, SWE-Rizzo has evolved beyond just an infrastructure network; it underpins a full suite of software engineering AI services. From an AI support chatbot for developers, to IDE integration, to automated code review and fixes, the products stemming from Subnet 45 illustrate its practical impact. These applications are continually refined and expanded – for example, Gen42 offers real-time suggestions and multi-language support, and the team has announced work toward deeper enterprise features. By focusing on real use-cases (like assisting IT support via the earlier MSPTech application on a sister subnet, and now coding assistance via Gen42), Team Rizzo is ensuring that the subnet’s AI innovations directly benefit end users in the software industry.
Technical Architecture and Infrastructure
SWE-Rizzo’s architecture combines Bittensor’s decentralized blockchain protocol with a specialized AI model pipeline for code. Here’s a breakdown of key technical components:
Open-Source AI Models: At the core, miners run large language models (LLMs) specialized for coding tasks. The project explicitly uses open-source LLMs (likely code-focused models such as CodeLlama, StarCoder, etc.) rather than proprietary models. This openness aligns with Bittensor’s ethos and allows any miner to deploy a model that fits the subnet’s criteria. Many models are fine-tuned to perform code generation and debugging based on the SWE-Bench tasks. The subnet does not fixate on one model; instead miners are free to improve their pipelines with any model or algorithm that yields better bug fixes. Miners often fine-tune models on code and problem-solution data to optimize for the SWE-Bench evaluations.
Miner Synapse & Execution Sandbox: Each miner exposes a synapse – essentially an RPC endpoint on Bittensor – implementing the required interface for this subnet (accepting a repository + issue, returning a patch). When validators call this synapse, the miner’s code (the AI pipeline plus any scripting around it) is packaged and run inside a Docker container. This containerization is crucial: it provides a consistent runtime where the code can safely clone the target repository, apply fixes, and run tests without affecting the host or other network operations. The use of Docker ensures every miner’s solution is evaluated in an identical environment, which improves fairness and reproducibility of results.
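A minimal sketch of how a validator might wrap a miner's pipeline in a container follows. The image name, mount path, entrypoint script, and resource caps are all assumptions for illustration, not the subnet's actual harness:

```python
import subprocess

def sandbox_cmd(image: str, repo_url: str, issue_file: str) -> list[str]:
    """Build a `docker run` command that isolates one pipeline evaluation.

    The flags cap resources per run and mount the task input read-only;
    image name and paths are illustrative assumptions.
    """
    return [
        "docker", "run", "--rm",
        "--memory", "8g", "--cpus", "4",           # cap resources per evaluation
        "-v", f"{issue_file}:/task/issue.txt:ro",  # task input, read-only
        image,
        "python", "run_pipeline.py", repo_url, "/task/issue.txt",
    ]

def run_sandboxed(image: str, repo_url: str, issue_file: str) -> str:
    """Run the containerized pipeline and capture its output (the patch)."""
    proc = subprocess.run(
        sandbox_cmd(image, repo_url, issue_file),
        capture_output=True, text=True, timeout=600,
    )
    return proc.stdout  # the pipeline is assumed to print its diff to stdout
```

Because every miner's code runs against the same image constraints, results are comparable across miners, which is the fairness property the paragraph above describes.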
Benchmark-Driven Validation: The subnet’s validation logic is tightly coupled with Princeton’s SWE-Bench framework. SWE-Bench provides a suite of software engineering tasks (like fixing a known bug) along with evaluation metrics (did the patch fix the bug? does it pass all tests? etc.). Validators essentially serve as benchmark oracles – they feed tasks to miners and use SWE-Bench’s automated tests to score the responses. This is a departure from Bittensor’s original generic “prompt/response” validation; here the correctness can be measured objectively by running code. It represents an on-chain adaptation of a traditional leaderboard evaluation. Validators likely still produce “trust scores” or weights for miners (as in Bittensor’s mechanism), but those are directly informed by these benchmark results rather than just subjective scoring.
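In that spirit, a toy scoring rule might look like the following. The real SWE-Bench harness applies the diff inside a container and reruns the repository's test suite; this only sketches the pass/fail logic a validator could reduce those results to:

```python
def score_patch(tests_passed: int, tests_total: int, resolves_issue: bool) -> float:
    """Toy scoring rule in the spirit of SWE-Bench.

    A patch only counts if the originally failing test now passes
    (resolves_issue), and regressions in the rest of the suite reduce
    the score. The actual harness and metrics are more detailed.
    """
    if tests_total == 0:
        return 0.0
    if not resolves_issue:
        return 0.0  # didn't fix the reported bug
    return tests_passed / tests_total  # penalize broken existing tests
```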
Bittensor Blockchain Integration: Subnet 45 operates within the Bittensor network (Opentensor), meaning it is registered on the Bittensor substrate chain and follows all on-chain governance and tokenomic rules. Each subnet has a unique network UID (netuid); SWE-Rizzo is identified as netuid 45 on-chain (often referred to as SN45). The Bittensor chain handles staking and token emission for the subnet: miners and validators stake the native token TAO (or a derivative “subnet token” after dTAO) and earn rewards in it for their contributions. The consensus model is often described as Proof-of-Intelligence – participants are rewarded for producing useful AI work. In practice, SWE-Rizzo’s blockchain logic allocates daily TAO emissions to validators and miners according to the SWE-Bench-based performance signals (ensuring that better code solutions translate to higher on-chain rewards). Pallet modules on the Subtensor chain manage these reward distributions and enforce that the subnet meets certain milestones (e.g., maintaining a minimum level of activity or performance) to continue receiving global TAO emissions.
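The proportional payout described above reduces to simple arithmetic. The validator share and per-miner split below are illustrative placeholders, not the chain's actual emission schedule:

```python
def split_emission(daily_tao: float, miner_weights: dict[str, float],
                   validator_share: float = 0.4) -> dict[str, float]:
    """Divide a day's emission between validators and miners.

    A fixed fraction goes to validators; the remainder is split among
    miners in proportion to their performance weights. Both the share
    and the weights here are illustrative, not on-chain values.
    """
    miner_pool = daily_tao * (1.0 - validator_share)
    return {m: miner_pool * w for m, w in miner_weights.items()}
```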
Software Stack: The implementation of SWE-Rizzo is primarily in Python, which is natural given the AI/ML focus and the need for scripting git operations. The subnet’s open-source code repository (originally a fork of Bittensor’s Subnet-1 “Prompting” code) is about 73% Python, with some Jupyter Notebooks (for development and evaluation), shell scripts for automation, and Dockerfiles for the container setup. The miners likely utilize machine learning libraries (PyTorch, HuggingFace Transformers, etc.) inside their Docker environments to run the code models, although specific libraries can vary by miner. Communication between validators and miners happens through Bittensor’s networking protocol (RPC-style calls between each node’s axon server and dendrite client), so the whole subnet forms a peer-to-peer network of nodes exchanging tasks and results.
Inter-subnet Connectivity: SWE-Rizzo is designed not to exist in isolation but to integrate with other subnets. Team Rizzo has emphasized cross-subnet collaboration as a pillar of their approach. Concretely, Subnet 45 (SWE-Rizzo) works in tandem with Subnet 20 (BitAgent) for orchestrating tasks and with Subnet 54 (an “image-to-code” subnet) for complementary capabilities. For example, BitAgent (SN20) focuses on tool automation and agent workflows – it might route a coding-related task to SWE-Rizzo’s Gen42 for code assistance, then use the result in a larger automation script. Similarly, an image analysis subnet could call on SWE-Rizzo if it needs to generate code as part of its pipeline. This modular design of subnets, each providing a distinct AI service (code, image, language, etc.), is enabled by standard interfaces and the shared TAO economy. SWE-Rizzo’s Gen42 API is part of this vision, effectively making the subnet’s functionality accessible to other programs and subnets via API calls.
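A toy dispatcher in the spirit of this modular design might map task types to the netuids named above. The routing table mirrors the text (SN20 agents, SN45 code, SN54 image-to-code), but the dispatch function itself is a hypothetical sketch, not an actual cross-subnet API:

```python
def route_task(task_kind: str) -> int:
    """Map a task type to the netuid that serves it.

    The netuid mapping follows the subnets named in the text;
    the router itself is illustrative.
    """
    routes = {
        "agent_workflow": 20,  # BitAgent: tool automation and agent workflows
        "code_fix": 45,        # SWE-Rizzo: repo + issue -> patch
        "image_to_code": 54,   # image-to-code subnet
    }
    return routes.get(task_kind, 45)  # default to the coding subnet
```

In practice such routing would happen through each subnet's public API (e.g. Gen42's), with the caller paying in or staking TAO as the shared economy requires.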
Security and Reliability: Team Rizzo operates a top-tier validator node (“Rizzo Validator”) on bare-metal servers, ensuring high uptime and reliable performance for the subnet. They even partnered to obtain $25M of on-chain insurance coverage via Nexus Mutual for their validator’s stakers, reflecting a strong emphasis on security. On the subnet level, containerization and open-source scrutiny of miner code provide security – any malicious or low-quality miner code would fail the SWE-Bench tests and earn no rewards, naturally ejecting it over time. Code execution is sandboxed to prevent harm. Moreover, by focusing on relatively small, open models (often ≤7B or 13B parameters for practicality), the subnet ensures miners can run on commodity GPUs, which decentralizes participation (this approach was explicitly taken in their prior Subnet 20 as well, using Apache-2.0-licensed models under 8B parameters, and carries over into SWE-Rizzo’s philosophy).
In summary, SWE-Rizzo’s technical backbone is a marriage of blockchain (for coordination and incentives) and AI infrastructure (for model training and inference). It stands on the shoulders of Bittensor’s network protocol but extends it with a custom evaluation system (SWE-Bench integration, Dockerized execution) tailor-made for the software engineering domain.
Rizzo Founders
roguetensor – Co-founder & CTO: Tech visionary blending AI, Computer Vision, and Robotics to bring groundbreaking products to life. A catalyst for innovation and execution, bridging bold ideas with real-world applications.
frankrizz07 – Co-founder & CEO: Serial entrepreneur dedicated to solving complex, real-world challenges for executives—specializing in automation, recruitment/retention, and operational efficiency.
Rizzo Validator Team
rysjol – Data & Software Engineer: Experienced in network and system administration. Enthusiastic about geography, history, music, and sports.
solros3 – Technical Creative & VFX Artist: Combines a background in software engineering with VFX artistry, specializing in particle/fluid simulations and creative tech.
gregbeard – Full-Stack Developer & Systems Engineer: A versatile computer science expert, bringing robust skills in full-stack web development and system design.
Rizzo Subnet Development Team
slaive – Full-Stack Developer & Technologist: Fearless innovator with a knack for engineering, AI models, and the occasional high-risk experiment.
vectorforge – Subnet Developer, SN20 Specialist: Bringing strong experience in generative AI, data science, and backend systems to the forefront of SN20 development.
thunderheavyindustry – Application Developer & Polymath: Rooted in mathematics and philosophy, with talents in music and cuisine. Focused on integrating subnet outputs (SN20/SN45) into broader ecosystems.
brokespace – Cybersecurity & AI/ML Specialist: Focused on the intersection of cybersecurity, AI, and machine learning, especially in the context of large language models.
canti_dev – Generative AI Engineer: Veteran developer building end-to-end applications with emphasis on rapid prototyping and market-oriented solutions.
Rizzo Communications & Support Team
msptech.aiops – Operations & Strategy Leader: Bringing deep experience in team leadership and business growth across IT, logistics, manufacturing, and retail industries.
deakins02 – Digital Creative & Videographer: Award-winning visual storyteller with expertise in cinematography, animation, and digital editing.
taospark – Communications Strategist & AI Enthusiast: Computer scientist driven by a passion for automation, AI, and uniting teams to turn bold visions into tangible outcomes.
SWE-Rizzo’s journey is an evolving one, with clear milestones already achieved and ambitious plans ahead. Here’s a structured look at its historical context, current status, and future roadmap:
Origins and Historical Context: The concept for an AI coding assistant subnet emerged from Team Rizzo’s earlier work on AI agents. Initially, the project was codenamed “Gen42”, reflecting its focus on code generation and possibly a nod to “the answer to life, the universe, and everything” (42) in software form. In mid-2024, Team Rizzo began developing Gen42 as a separate subnet, forking the base code from Bittensor’s Subnet-1 (Prompting) to customize it for software engineering tasks. Early on, this initiative was associated with Subnet UID 42 on test networks or documentation – in fact, external sources referred to “Subnet 42: Gen42, an open-source AI coding assistant” when describing the new Bittensor subnets in late 2024. This indicates that Gen42 was one of the new wave of subnets around that time. However, due to various on-chain updates and perhaps re-registration under the dynamic TAO (dTAO) system, the subnet eventually went live on mainnet with NetUID 45. (It appears the network UID 42 might have been a temporary slot or an initial registration that was later superseded by UID 45 for mainnet launch around August 18, 2024.) To avoid confusion and better brand the project, the team officially renamed Gen42 to “SWE-Rizzo.” The rename, announced on social media in late 2024, was meant to reflect a broader vision of building the leading AI-powered software engineering subnet (SWE stands for Software Engineering) under the Rizzo team’s umbrella. Essentially, SWE-Rizzo (Subnet 42/45) is the direct evolution of the Gen42 idea, inheriting its code and purpose but with a clearer name and mandate.
Milestones Achieved: Since launching, Subnet 45 (SWE-Rizzo) has hit several key milestones:
Current Focus (2025): Presently, SWE-Rizzo is in an active growth and refinement phase. According to Team Rizzo’s updates, the subnet is focused on:
Future Plans: Looking further ahead, Team Rizzo envisions SWE-Rizzo as a long-term platform for AI-driven software development. Their roadmap (as communicated in Q&A and marketing materials) includes:
In conclusion, Subnet 45 “SWE-Rizzo” has rapidly grown from an idea to a leading AI coding network in less than a year. It has its roots in earlier subnets (borrowing code from the original Prompting subnet and inspiration from BitAgent SN20) and has been rebranded and refined to zero in on software engineering automation. The roadmap ahead is about deepening the technology and broadening the impact: the team aims for technical excellence (best-in-class AI coding assistants) and real-world adoption (making Gen42 a ubiquitous developer tool). With a strong foundation and a clear strategic vision, SWE-Rizzo is poised to play a significant role in both the Bittensor ecosystem and the wider AI-for-code landscape, pushing decentralized AI to tackle ever more complex programming challenges.
Berkeley Function Calling Leaderboard (BFCL) Updated! ..and guess what!?
BitAgent SN20 has the #1 8B Agentic Function / Tool Calling Model in the World! 🏆
What does this mean??
It means two things:
-MSPTech.ai has an upgraded tool calling model for automation in IT Service…
Rizzo Validator has converted 0.765 BTC ($90k+ USD) into 210 TAO. We then distributed them to our top 20 largest sn14 delegates.
As sn14 has modified their incentive mechanism, this will mark our last BTC distribution from sn14.
I want to thank everyone who stakes with us on…
We are now at >0.75 BTC accumulated in our SN14 Rizzo buyback pot!!!🪙
Don't forget to stake with Rizzo Validator in order to get the buyback!
Track the pot here: https://rizzo.network/btc-mining-taohash-rewards/ and stake with us today!
#bittensor $tao #bitcoin
Absolutely incredible!🔥This is an exciting time to be alive for AI companies and researchers. Grats to the @OpenAI team!
Next, let's see models with these reasoning capabilities use tool calling with the innovations coming out of Bittensor to solve new real world problems!
1/N I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).
Who's going to be the first miner to submit something to Subnet 45 using Grok 4? ⛏️
It will be interesting to see how it competes against the solutions built on other platforms. Go miners go! We have to know! #bittensor #Grok4 $tao #swebench