With the number of new subnets being added, it can be hard to keep information up to date across all subnets, so some data may be slightly out of date from time to time.
DSperse (formerly known as Omron) is a decentralized framework that enables verifiable AI inference across a network of nodes. In simple terms, it lets AI models run on distributed miners while producing cryptographic proofs (using zero-knowledge technology) that the computation was done correctly. This means users can get AI-generated results (e.g. answers, predictions, data analyses) with mathematical guarantees of origin and correctness – rather than having to trust any single server or model operator (subnet2.inferencelabs.com).
DSperse essentially provides a “marketplace for inference and compute verification” within the Bittensor ecosystem, ensuring every AI output can be trusted as provably valid and originating from a specific model. It achieves this by leveraging the world’s largest decentralized zero-knowledge proving cluster, which verifies computations without revealing underlying data, preserving privacy while assuring integrity. In summary, DSperse distributes AI tasks across many nodes and uses cryptographic proofs to verify the results, thereby building trust at scale in AI outputs. Organizations can rely on these proofs (termed Proof-of-Inference) to confidently use AI results across different platforms and even different blockchains, knowing the outputs have been validated by the network and not tampered with.
DSperse is delivered as a Bittensor subnet (Subnet-2) – essentially a specialized blockchain network (and software stack) dedicated to running and verifying AI inference tasks. The product comprises: (1) an open-source miner and validator node software (the Subnet-2 codebase) that anyone can run, and (2) a web-based interface (Subnet2 marketplace) where users can submit tasks and retrieve results with proofs. Under the hood, DSperse’s architecture uses a novel slice-based verification approach: models are automatically analyzed and split into “slices” for parallel processing. Each slice of the AI model is compiled into a zero-knowledge circuit and proven separately, focusing only on the critical parts of the computation – this targeted verification drastically reduces cost compared to proving an entire model end-to-end. The system then stitches together these slice proofs to ensure the whole model inference was performed correctly.
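The slice-based verification idea above can be illustrated with a toy sketch. This is not the actual DSperse implementation: the commitments, proof objects, and "slices" below are hypothetical stand-ins that only show the stitching logic — each slice is proven separately, and the chain is valid when every slice's input commitment matches the previous slice's output commitment.

```python
from dataclasses import dataclass
from hashlib import sha256
from typing import Callable, List

@dataclass
class SliceProof:
    """Hypothetical stand-in for a per-slice zero-knowledge proof."""
    input_commitment: str   # commitment to the slice's input tensor
    output_commitment: str  # commitment to the slice's output tensor
    proof_blob: bytes       # opaque ZK proof data (mocked here)

def commit(values: List[float]) -> str:
    # Toy commitment: hash of the serialized values (a real system
    # would use a cryptographic commitment inside the circuit).
    return sha256(repr(values).encode()).hexdigest()

def prove_slice(layer: Callable[[List[float]], List[float]],
                x: List[float]):
    """Run one model slice and emit a (mock) proof binding input to output."""
    y = layer(x)
    return y, SliceProof(commit(x), commit(y), b"zk-proof-placeholder")

def verify_chain(proofs: List[SliceProof]) -> bool:
    """Stitch slice proofs: each slice's input must match the prior output."""
    return all(p.input_commitment == q.output_commitment
               for q, p in zip(proofs, proofs[1:]))

# Two toy "slices" of a model: a scaling layer, then an offset layer.
slices = [lambda x: [2 * v for v in x], lambda x: [v + 1 for v in x]]

x = [1.0, 2.0]
proofs = []
for layer in slices:
    x, p = prove_slice(layer, x)
    proofs.append(p)

assert verify_chain(proofs)  # the stitched chain is consistent
```

The point of the sketch is that each slice can be proven independently (and in parallel, by different miners), while the commitment chain still binds the slices into one end-to-end inference.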
In the Subnet-2 network there are two main participant roles, miners and validators:
Miners: These nodes perform the actual AI computations. A validator sends an input (question or data) to miners, and a miner will run the AI model on that input, but in a verifiable mode. The miner uses a custom AI model that has been converted into a zero-knowledge circuit, generates the model’s prediction along with a cryptographic proof that this prediction was computed correctly by the intended model, then returns both the output and proof to the validator.
Validators: These nodes coordinate and verify the work. A validator gathers requests (from users or other systems) and packages them as inference tasks to send out to miners in the subnet. When a miner returns an answer, the validator checks the attached zero-knowledge proof to confirm the result truly came from the specified model and was processed faithfully. Validators then score the miner on performance metrics (for example, how fast it responded and how compact the proof is) and distribute the verified result back to the requester. Validators thereby act as gateways that keep miners honest and ensure the network’s outputs are always auditably correct.
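The miner/validator round trip described above can be sketched as follows. This is a minimal mock, not the Subnet-2 protocol: `MODEL_ID`, the toy "model," and the scoring weights are all hypothetical, and the mock "proof" is just a hash (a real miner would produce a zero-knowledge proof that the validator checks against a verifying key, without re-running the model).

```python
import time
from hashlib import sha256

MODEL_ID = "toy-model-v1"  # hypothetical registered model identifier

def mock_prove(model_id: str, x, y) -> bytes:
    # Stand-in for ZK proof generation binding (model, input, output).
    return sha256(f"{model_id}|{x}|{y}".encode()).digest()

def miner_handle(model_id: str, x):
    """Miner: run the model in verifiable mode, return output + proof."""
    y = sum(x)  # toy "model"
    return y, mock_prove(model_id, x, y)

def validator_round(x):
    """Validator: dispatch a task, verify the proof, score the miner."""
    start = time.monotonic()
    y, proof = miner_handle(MODEL_ID, x)
    latency = time.monotonic() - start
    # Here we re-derive the mock proof; real verification checks the ZK
    # proof against the circuit's verifying key instead of recomputing.
    ok = proof == mock_prove(MODEL_ID, x, y)
    # Illustrative score: valid proofs rewarded, faster responses and
    # smaller proofs score higher (actual metrics/weights differ).
    score = 1.0 / (1e-6 + latency) / max(len(proof), 1) if ok else 0.0
    return y, ok, score
```

A single round, e.g. `validator_round([1, 2, 3])`, returns the verified output, a validity flag, and the miner's score for that task.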
This whole product is often described as the Proof-of-Inference system for Bittensor: every important AI computation comes with a proof, so anyone can verify it independently. The technical stack is proving-system agnostic, meaning DSperse isn’t tied to one cryptographic library – it currently supports multiple zero-knowledge proof frameworks (such as EZKL and Circom using Groth16 proving systems) and can integrate new ones as they emerge. Inference Labs has also built tooling (like JSTprove) to help convert machine learning models (e.g. ONNX models) into verifiable circuits more easily, which pairs with DSperse to handle the distributed proving of those circuits. Overall, the DSperse product includes the blockchain subnet, the node software (miners/validators), and a user-facing API/portal that together allow anyone to request AI computations with mathematical guarantees – forming a unified network for massively parallel, provably-correct AI processing.
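A proving-system-agnostic stack like the one described can be pictured as a small backend registry behind a common interface. The interface and class names below are illustrative, not the actual DSperse or JSTprove API; the Groth16 backend here is a mock that only demonstrates how a new prover could slot in.

```python
from abc import ABC, abstractmethod

class ProvingBackend(ABC):
    """Hypothetical minimal interface a proving system must implement."""

    @abstractmethod
    def compile(self, onnx_path: str) -> object:
        """Convert an ML model (e.g. ONNX) into a verifiable circuit."""

    @abstractmethod
    def prove(self, circuit: object, inputs: dict) -> bytes:
        """Run the circuit on inputs and produce a proof."""

    @abstractmethod
    def verify(self, circuit: object, inputs: dict, proof: bytes) -> bool:
        """Check a proof without re-running the model."""

class MockGroth16Backend(ProvingBackend):
    # Mock: a real backend would call into e.g. Circom/EZKL tooling.
    def compile(self, onnx_path):
        return {"circuit_for": onnx_path}

    def prove(self, circuit, inputs):
        return repr((circuit, inputs)).encode()

    def verify(self, circuit, inputs, proof):
        return proof == repr((circuit, inputs)).encode()

# Registry: new proving systems are added without changing callers.
BACKENDS = {"groth16": MockGroth16Backend()}

be = BACKENDS["groth16"]
circuit = be.compile("model.onnx")
proof = be.prove(circuit, {"x": [1, 2]})
assert be.verify(circuit, {"x": [1, 2]}, proof)
```

The design choice this illustrates: because callers only depend on the abstract interface, swapping in a faster or GPU-optimized prover is a registry entry, not a rewrite.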
Inference Labs, the company behind DSperse (Subnet-2), was founded in 2023 by Colin Gagich and Ron (Ronald) Chan. Colin Gagich is the CEO and co-founder, known for emphasizing the importance of keeping AI decentralized and auditable (“Decentralized AI isn’t just our goal, it’s a critical necessity for the future” he says). Ron Chan (Co-founder) has overseen much of the product development – he announced the open-sourcing of DSperse’s code in 2025, highlighting that the team isn’t just publishing research but also “building the foundational tools for the next generation of autonomic systems”. Inference Labs has a strong research-oriented team: the core DSperse framework was detailed in an academic paper with authors Dan Ivanov, Tristan Freiberg, Shirin Shahabi, Jonathan Gold, and Haruna Isah (all using @inferencelabs.com emails). This indicates expertise spanning cryptography and machine learning within the team. The company is based in Hamilton, Ontario, and has grown with support from notable Web3 investors (e.g. Digital Asset Capital Management, Delphi, Mechanism Capital) to expand its proof-of-inference technology. Other team members (as seen through public research and posts) include engineers and scientists focusing on zero-knowledge proofs (for example, Inference Labs worked closely with Dr. Jason Morton of ZKonduit/EZKL on integrating zkML tooling). In summary, DSperse is built by a multidisciplinary team of AI researchers, cryptographers, and blockchain developers, led by co-founders who are pushing for “auditable autonomy” in AI systems.
Colin Gagich – Co-Founder
Ronald Chan – Co-Founder
Eric Lesiuta – Software Engineer
Spencer Graham – Software Developer
Will P – Software Developer
Ehsan Meamari – Researcher
Julia Théberge – Executive Assistant
Shawn Knapczyk – Communities Manager
Ivan Anishchuk – Crypto Researcher
Jonathan Gold – Software Engineer
DSperse is evolving from a single Bittensor subnet into a full-fledged framework for decentralized, verifiable AI, and the roadmap reflects this broad vision. After ending 2025 with the DSperse rebrand and the core system going live on Subnet-2, the team is now focused on expanding and hardening the network’s capabilities. Key roadmap themes include:
Broader Model Support & Efficiency: Future DSperse updates (referred to as Subnet-2 Version 2) will expand the range of AI models that can be deployed and proven on the network. The incentive structure is being tuned to favor smaller, faster, and high-quality models – effectively encouraging participants to use efficient model architectures that yield quick proofs without sacrificing accuracy. This goes hand-in-hand with introducing output-based scoring, meaning miners will be rewarded not just for producing a proof quickly, but for the quality of their model’s output in solving the given task. By pushing for ultra-efficient circuits and better results, DSperse aims to scale up to more complex AI tasks while keeping verification practical.
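The output-based scoring idea above can be sketched as a toy composite function. The weights and metrics below are illustrative assumptions, not Subnet-2's actual incentive parameters: an invalid proof earns nothing, and otherwise output quality dominates, with speed and proof compactness as secondary factors.

```python
def score_response(valid: bool, latency_s: float, proof_bytes: int,
                   output_quality: float) -> float:
    """Toy composite score in [0, 1].

    valid          -- did the zero-knowledge proof verify?
    latency_s      -- miner response time in seconds
    proof_bytes    -- size of the returned proof
    output_quality -- task-specific quality of the model output, in [0, 1]

    Weights (0.6 / 0.25 / 0.15) are illustrative only.
    """
    if not valid:
        return 0.0  # no valid proof, no reward
    speed = 1.0 / (1.0 + latency_s)             # faster -> closer to 1
    compact = 1.0 / (1.0 + proof_bytes / 1024)  # smaller proof -> closer to 1
    return 0.6 * output_quality + 0.25 * speed + 0.15 * compact
```

Under this shape, a miner with a small, fast, high-quality model outscores one that merely proves quickly with a poor answer — the incentive tuning the roadmap describes.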
Integration of New Proof Systems: The DSperse framework will remain proof-system agnostic and is continually integrating the latest advancements in zero-knowledge machine learning. Inference Labs has already tested multiple ZK backends (from early prototypes like JOLT zkVM to current systems like Groth16 via Circom/EZKL). As new proving technologies (faster or more GPU-optimized provers, etc.) emerge, DSperse’s modular architecture will incorporate them – ensuring the network stays at the cutting edge of zkML performance improvements. This flexibility is a core part of making the network “future-proof” for scaling verifiable AI.
External Connectivity & Use-Cases: Although born within Bittensor, DSperse is designed to serve “beyond” a single network. A major roadmap goal is enabling cross-network interoperability – for example, allowing an AI inference to be initiated on one blockchain and its proof verified on another, trustlessly. Already, DSperse’s Proof-of-Inference is being used in partnerships outside the core Bittensor realm: e.g. the DeFi protocol Benqi uses it to verify AI-driven decisions in finance, and TestMachine uses it to audit social media data processing. Going forward, we can expect more integrations across industries (from finance to healthcare and gaming), so that any system requiring provable AI outputs can plug into DSperse. The Subnet-2 portal is also integrating more specialized Bittensor subnets (like Infinite Games, Dippy, EdgeGaming) as “service providers,” suggesting a growing marketplace of AI services all secured by DSperse’s verification layer.
Network and Economic Upgrades: On the Bittensor side, DSperse will adapt to upcoming changes such as the planned dTAO tokenomic updates (a “post-dTAO environment”). The team has indicated that the role of validators may evolve under new economic designs – possibly shifting how consensus or rewards work once Bittensor’s token model changes. Enhancements like Yuma consensus (already used to make validators check each other’s honesty) will continue to be refined. Moreover, the project has deployed an Ethereum smart contract for managing liquid restaking tokens (as part of its early DeFi use-case), and those deposits will become usable when DSperse’s next version goes live. This hints that bridging traditional DeFi with DSperse’s AI verification remains on the roadmap, marrying on-chain financial logic with off-chain AI proofs.
In essence, the roadmap for DSperse is about scaling up and widening out: scaling up the technical performance (faster proofs, bigger models, more nodes) and widening out the adoption (more subnets, blockchains, and real-world applications connected). It is moving from “just a subnet” to a “powerful framework” for decentralized, trustable AI services. Each iteration (and funding milestone) brings DSperse closer to its end-goal: an auditable AI network where any autonomous system or agent can be audited in real-time by cryptography, across industries and networks, ensuring AI remains verifiable, private, and fair by design.
Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews.
Inference Labs has developed Subnet 2 Omron to offer cryptographically verified proof-of-inference. Their initial focus is on Actively Validated Services (AVS) and Liquid Restaking Tokens (LRTs). Colin Gagich, co-founder of Inference Labs, guides us through the subnet and its essential role in ensuring the authenticity of inference.
1/ We’re launching 3 hackathons with @endgame_summit
Inference Labs is powering cutting-edge challenges in verifiable AI inference, pushing the boundaries of zero-knowledge proofs, privacy, and decentralized AI.
And now, we're rewarding you for doing the same. 👇
We’re proud to announce a strategic partnership with @lagrangedev.
Inference Labs is thrilled to integrate Lagrange’s new cutting edge DeepProve library into our proving system agnostic stack to provide even faster verification for critical applications. 👇
What does $TAO's Omron (SN2) do? This is how we get trustless AI, where we don’t have to blindly trust OpenAI, Google, or any single entity. Instead, we rely on cryptographic proofs. That's @omron_ai. Right now, AI models are everywhere. Open-source models, proprietary black-box systems,…
🥩 Omron Miners & Validators 🥩
Version 7.1.3 has deployed!
In this update we have pushed the following changes 🛠️
Join the Omron Subnet 2 chat in the Bittensor Discord to learn more and contribute.
Introducing the Inference Labs Commercial Accelerator Program.
This new initiative is designed to align decentralized AI talent with real-world industry applications.
Season One with @ezklxyz starts now:
Omron Accelerator – Accelerating zkML using the world's largest decentralized zkML proving network (accelerate.omron.ai)
🚀 Excited to announce a new competition with @inference_labs: Help us optimize EZKL to run zero-knowledge proofs efficiently on Apple Silicon! We're launching this on Subnet 2 with significant rewards for performance improvements.