With the number of new subnets being added, it can be hard to maintain up-to-date information across all of them, so some details may occasionally be out of date.
Bittensor’s Subnet 4, known as Targon, is an integral component of the Bittensor network, designed to facilitate a decentralized marketplace for a specific category of digital commodities related to artificial intelligence (AI).
This subnet enhances AI systems’ ability to process and generate information across various data types and formats, leading to a deeper understanding of context and relationships and thereby improving human-AI interactions. Multimodal AI systems in this setup become more resilient and reliable by leveraging data from multiple sources, which helps them handle inconsistencies and errors more effectively, ultimately enhancing output and performance.
Multimodal AI is an advanced form of artificial intelligence that integrates multiple types or modes of data to achieve more accurate assessments, insightful conclusions, and precise predictions. The primary distinction between multimodal AI and traditional single-modal AI lies in the diversity of data they utilize. Single-modal AI typically operates with a single source or type of data, whereas multimodal AI processes data from various sources, such as video, images, speech, sound, and text. This enables a more comprehensive and nuanced understanding of environments or situations.
Multimodal AI systems are generally composed of three key components: data acquisition, multimodal fusion, and decision-making. These systems have a wide range of applications across different industries, including manufacturing process optimization, product quality improvement, healthcare, finance, and entertainment.
In many real-world scenarios, multimodal AI outperforms single-modal AI, representing a new frontier in cognitive AI. By combining the strengths of multiple inputs, multimodal AI excels in solving complex tasks and synthesizing data from diverse sources, resulting in more intelligent and dynamic predictions.
The primary objective of Targon is to establish a decentralized, incentive-driven marketplace that specializes in a particular AI-related digital commodity. By creating a competitive environment, Targon aims to harness the collective capabilities of miners and validators to produce high-quality AI services. This approach not only democratizes access to AI development but also ensures that the resulting services are refined through continuous evaluation and feedback. The incentive mechanisms are structured to reward participants based on their contributions, promoting sustained engagement and fostering a culture of excellence within the subnet.
Technical Architecture
Targon’s architecture is built upon the foundational principles of Bittensor’s subnet design, incorporating key components that facilitate its operation:
Incentive Mechanism: At the heart of Targon lies its unique incentive mechanism, which delineates the specific tasks that miners and validators are required to perform. This mechanism is maintained off-chain by the subnet’s creators and defines the interfaces through which participants interact with the subnet. For instance, miners may be tasked with generating AI models or processing data, while validators assess these outputs to ensure they meet predefined quality standards.
Miners: These participants are responsible for executing the tasks outlined in the incentive mechanism. Their work is central to the production of the subnet’s digital commodity, and they are rewarded based on the quality and efficiency of their contributions.
Validators: Validators play a critical role in maintaining the integrity of the subnet by evaluating the work produced by miners. They independently assess the outputs against the standards set forth in the incentive mechanism, providing scores that influence the distribution of rewards.
Yuma Consensus: This on-chain algorithm processes the evaluations provided by validators to determine the allocation of emissions (in the form of TAO tokens) to miners, validators, and subnet creators. The consensus mechanism ensures that rewards are distributed fairly, reflecting the performance and contributions of each participant.
The interplay between these components ensures that Targon operates as a self-regulating ecosystem, promoting the continuous improvement of its AI services through decentralized collaboration and competition.
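To make the validator-scoring flow above concrete, here is a minimal illustrative sketch of stake-weighted consensus scoring and proportional reward allocation. This is a simplified toy model written for this article, not the actual on-chain Yuma Consensus implementation; all function names and numbers are hypothetical.

```python
# Toy sketch of stake-weighted consensus scoring (illustrative only,
# not the real Yuma Consensus algorithm).

def stake_weighted_median(scores, stakes):
    """Median of validator scores, where each vote counts by stake."""
    pairs = sorted(zip(scores, stakes))
    total = sum(stakes)
    cumulative = 0.0
    for score, stake in pairs:
        cumulative += stake
        if cumulative >= total / 2:
            return score
    return pairs[-1][0]

def consensus_emissions(validator_scores, validator_stakes, emission_pool):
    """validator_scores[v][m] = validator v's score for miner m.
    Returns each miner's share of the emission pool."""
    num_miners = len(validator_scores[0])
    consensus = [
        stake_weighted_median([v[m] for v in validator_scores], validator_stakes)
        for m in range(num_miners)
    ]
    total = sum(consensus) or 1.0
    return [emission_pool * c / total for c in consensus]

# Three validators (stakes 100, 50, 10) score two miners; the low-stake
# outlier validator barely moves the consensus.
scores = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
rewards = consensus_emissions(scores, [100.0, 50.0, 10.0], emission_pool=1.0)
```

The key property the sketch demonstrates is that a validator with little stake cannot unilaterally shift the consensus score, which is what discourages collusion and score manipulation.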
Subnet Liquidity Reserves and Tokenomics
In alignment with Bittensor’s Dynamic TAO framework, Targon functions as an automated market maker (AMM) with two primary liquidity reserves:
TAO Reserves: This reserve comprises the TAO tokens staked into the subnet, representing the collective investment of participants in Targon’s ecosystem.
Alpha (α) Reserves: Specific to Targon, the α token serves as the subnet’s native currency. Participants can acquire α tokens by staking TAO into the subnet’s reserve, facilitating a fluid exchange between the two currencies.
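The exchange between the two reserves can be sketched as a constant-product AMM, which is the standard pool design such systems are based on. The code below is an assumption-laden illustration (the `SubnetPool` class and the x·y=k invariant are modeling choices for this sketch; the actual Dynamic TAO mechanism may differ in detail).

```python
# Hedged sketch of a two-reserve subnet pool using constant-product
# (x * y = k) pricing. Illustrative only; not Bittensor's actual code.

class SubnetPool:
    def __init__(self, tao_reserve, alpha_reserve):
        self.tao = tao_reserve      # TAO staked into the subnet
        self.alpha = alpha_reserve  # subnet's native alpha tokens

    def price(self):
        """Spot price of alpha in TAO: the ratio of the two reserves."""
        return self.tao / self.alpha

    def stake_tao(self, tao_in):
        """Stake TAO into the pool and receive alpha out, preserving
        the invariant k = tao * alpha."""
        k = self.tao * self.alpha
        self.tao += tao_in
        new_alpha = k / self.tao
        alpha_out = self.alpha - new_alpha
        self.alpha = new_alpha
        return alpha_out

pool = SubnetPool(tao_reserve=1000.0, alpha_reserve=2000.0)
alpha_received = pool.stake_tao(100.0)  # staking TAO raises alpha's price
```

Note how staking TAO into the pool both issues alpha to the staker and raises alpha's spot price, since the TAO reserve grows while the alpha reserve shrinks.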
Emission and Reward Distribution
Targon’s emission model is designed to foster growth while maintaining economic stability within the subnet. TAO and α tokens are emitted per block, with α emissions allocated between the subnet’s reserve and outstanding α tokens held by participants. This allocation supports the liquidity of the subnet and provides incentives for miners, validators, and subnet creators. The Yuma Consensus algorithm plays a pivotal role in this process, evaluating participant performance to ensure that rewards are distributed equitably, reflecting the value each contributor brings to the subnet.
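As a rough illustration of how a block's alpha emission might be divided among participants, consider the sketch below. The percentages are illustrative assumptions for this article, not Targon's actual on-chain parameters.

```python
# Hypothetical per-block alpha emission split. The owner_cut value and
# the even miner/validator split are assumptions for illustration.

def split_block_emission(alpha_per_block, owner_cut=0.18):
    """Divide one block's alpha emission among the subnet creator,
    miners, and validators."""
    owner = alpha_per_block * owner_cut
    remainder = alpha_per_block - owner
    miners = remainder / 2       # assumed even split of the remainder
    validators = remainder / 2
    return {"owner": owner, "miners": miners, "validators": validators}

shares = split_block_emission(1.0)
```

The shares then flow to individual miners and validators in proportion to their consensus scores, so the split above is the top of a two-stage distribution.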
Community Engagement and Development
Targon thrives on active community engagement, with participants ranging from individual developers to organizations specializing in AI. The subnet’s open and decentralized nature encourages collaboration, knowledge sharing, and collective problem-solving. Regular updates, transparent governance, and open-source contributions are hallmarks of Targon’s development approach, fostering a vibrant ecosystem where innovation can flourish.
The key products from Manifold Labs include:
Targon Virtual Machine (TVM): A secure, confidential computing platform that enables AI model training, inference, and validation using hardware like CPUs and GPUs. It incorporates both TEEs and NVIDIA confidential compute for ensuring security and privacy during AI workloads. TVM’s primary function is to offer decentralized computing power while guaranteeing the confidentiality of the computations, providing a platform where miners can contribute to AI tasks securely without risk of data leakage.
Sybil: A hybrid search engine designed to let users query decentralized AI models and access various machine learning outputs. The system is model-agnostic, allowing users to switch the model they are working with, and improves performance through faster page loads.
Txyz: A blockchain monitoring tool modeled after the Bloomberg Terminal, allowing users to monitor the Bittensor blockchain ecosystem. It enables users to track transactions, wallets, and various network activities in real time, and includes native wallet integration.
Targon Hub: A platform to run confidential AI models and securely perform inference with strong privacy guarantees. Built on the TVM, it supports end-to-end confidential workflows, including the training, fine-tuning, and deployment of AI models, ensuring that data remains encrypted and protected from malicious actors.
Targon is developed by Manifold Labs, a team specializing in multimodal artificial intelligence (AI) systems. Multimodal AI integrates various data types—such as text, images, and audio—to enhance the processing and generation of information, leading to a more comprehensive understanding and improved human-AI interactions.
The Manifold team comprises professionals with diverse expertise in AI development, software engineering, and robotics.
Robert Myers – Founder and CEO
James Woodham – Co-Founder
Joshua Brown – Lead Software Engineer
Ahmed Darwich – Software Engineer
Jonathan Guyton – Robotics Engineer
Manifold Labs’ roadmap is structured around improving the scalability, privacy, and flexibility of decentralized AI computations. In the next phase, their focus will be on:
TVM Expansion: Enhancing the Targon Virtual Machine to support cross-data center training, enabling efficient, large-scale model training across multiple locations. This will include integrating additional confidential computing technologies, refining the system, and adding more functionality for miners to optimize their hardware.
Sybil and Txyz Improvements: Continuing to develop Sybil as a versatile search engine and further expanding Txyz for more comprehensive blockchain monitoring. This includes a brand new redesign and feature updates to enhance user experience.
Confidential Computing Infrastructure: Launching and refining the confidential compute infrastructure for TVM, aiming to handle terabyte-scale data sets and AI model training across distributed data centers while keeping data encrypted and secure.
Monetization: Rolling out paid AI services on platforms like OpenRouter, using confidential compute as a unique selling point to offer secure and private AI inference. The team plans to develop exclusive AI models that will only be available on their platform, creating a unique ecosystem within Bittensor’s decentralized network.
Long-Term Vision: In the long term, Manifold Labs plans to create a sustainable decentralized AI ecosystem capable of providing secure AI computation and verification services, while continuing to expand their partnerships and integration with industry leaders in both AI and blockchain spaces.
Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.
In this video, Keith interviews Carro from Manifold Labs! They’re revolutionizing search and beyond using Bittensor with Sybil.
A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.
In this session, Rob from Manifold Labs provides an extensive update on the team’s recent progress and upcoming developments. He introduces their latest innovations, including the Targon Virtual Machine (TVM), a secure and confidential computing platform designed to handle AI workloads while maintaining data privacy. The discussion covers key products like Sybil, a hybrid search engine for decentralized AI models, and Txyz, a blockchain monitoring tool tailored to the Bittensor network. Rob also dives into the transition to confidential computing on Subnet 4, emphasizing how this will revolutionize AI training and inference processes. Additionally, the team shares exciting advancements in model training, data storage, and the integration of cutting-edge technologies like NVIDIA confidential compute and TEEs. The session highlights Manifold Labs’ vision to compete with major AI research labs and to create a decentralized, secure, and scalable infrastructure for AI computation.
This video from the Novelty Search live sessions, held in late 2024, provides a deep dive into the latest developments within Subnet 4, spearheaded by Manifold Labs. The team discusses new features in their protocol, particularly the introduction of Targon HUB, a tool designed to facilitate decentralized model leasing and inference, where users can rent AI models from validators. They also explain their innovative approach to scaling and optimizing GPU usage, enabling validators to run multiple models more efficiently. The session highlights key updates, including the release of the Epistula protocol, which addresses previous issues with the Axon interface, making interactions with miners more efficient and open-source. There’s also a conversation on how validators can now lease models, the potential of integrating with external platforms like Hugging Face, and how they’re shaping the future of decentralized AI with open-source tools. Additionally, the discussion touches on the challenges around economic security and how staked tokens across different subnets are distributed, with a focus on organic token scoring and incentivizing active participation in the Bittensor ecosystem.
A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.
In this early 2025 session, Mark Jeffrey interviews James Woodman, the co-founder of Targon. The conversation begins with a recap of the Bittensor Endgame conference, marking a pivotal moment for the community, before diving into James’ journey into the Bittensor ecosystem. He discusses his previous role at Open Tensor and how he helped launch Targon, a platform offering decentralized AI compute services, positioning Targon as a potential competitor to AWS and OpenAI. James explains Targon’s vision to disrupt the traditional AI infrastructure by providing more affordable, decentralized compute resources through the Bittensor network. He highlights the platform’s cost-efficiency (85% cheaper than AWS), its focus on creating a marketplace for AI compute, and how Targon plans to scale by fostering talent from across the globe. The episode touches on Targon’s efforts to shift from relying on speculative subsidies to generating organic revenue, as well as the importance of incentivizing collaboration across subnets within the Bittensor ecosystem.
Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want to know the answers to.
Recorded in July 2025, this episode of Revenue Search features James Woodman from Targon (Subnet 4), one of the earliest and most active compute subnets on Bittensor. James explains how Targon aggregates over $70 million worth of NVIDIA-certified hardware—including H200s and L40s—and provides enterprise-grade AI inference via trusted execution environments, ensuring data privacy and compliance. Targon already serves paying customers, especially Character AI-type applications and privacy-sensitive startups, generating around $100K/month in real-world revenue, which is fully committed to Alpha token buybacks. The team is laser-focused on scaling revenue to match miner emissions ($4M/month), making the subnet financially sustainable. James emphasizes that their long-term success hinges on balancing supply and demand while differentiating their compute quality and uptime guarantees—positioning Targon as a serious competitor to centralized providers like CoreWeave and AWS.
Recorded October 2025: this Novelty Search session features Rob and Josh from Manifold unveiling Targon’s self-serve compute platform and the evolution of Subnet 4. The team has shifted from early inference/verification experiments to a security-first stack built on confidential virtual machines using CPU TEEs (Intel TDX/AMD SEV) and NVIDIA attestation (Hopper now, Blackwell incoming), so customers can run containers and serverless Python functions on encrypted VMs across CPUs and H200 GPUs with options for 1/2/4/8 cards, forthcoming GPU virtualization (fractional/turbo VRAM), and soon RDMA clusters and multi-node network volumes. They emphasize verifiability (remote attestation tying workloads to specific hardware, with double-checks like nvidia-smi/NVCC in the report), stability, and price discovery via an interruptible compute order book that matches datacenter supply to demand, with futures/derivatives planned to bring transparency to long-term pricing. They have partnered closely with Intel on next-gen TDX features, signed their first 12-month enterprise deal, process millions of daily requests for partners like Dippy, and now offer both CPU and GPU rentals and a serverless SDK at targon.com, with up to $100k in credits for eligible teams; roadmap items include VM rentals, broader virtualization, and more regions. Much of the talk contrasts Targon’s permissionless marketplace against centralized clouds (aiming for better price, uptime orchestration, and privacy), explores use cases from A/B testing to large-scale training, and sketches longer-term ideas such as end-to-end private inference and even anti-cheat via attestations. They also cover payments (TAO/fiat today, with stablecoin plans) and how other Bittensor subnets can use Targon for cheaper, verifiable validation/inference, and they invite stress-testing (with bug bounties), closing on the thesis that decentralized, attested compute is the foundation for a transparent, enterprise-ready AI stack.
Recorded in November 2025, this Hash Rate episode features Targon (Subnet 4) co-founder James Woodman recapping a $10.5M raise led by OSS Capital, with angels including Ram Shriram (an early Google investor) and Shopify’s Tobias Lütke, while stressing that it hasn’t changed their mission: ship great infrastructure for Bittensor first. He outlines Targon’s Trusted Execution Environment stack (TVM), now running in production and, since October 27, required for Subnet 4 validators to cryptographically attest real hardware (Intel/AMD/NVIDIA) and thus kill weight-copying; subnet owners can mandate TVM to protect their own networks, too. Ridges is their top “colleague-customer,” and Targon builds features on demand, but their go-to-market has shifted away from Fortune 500 compute brokering (SOC 2 and data-center keycard hurdles) to doubling down on the Bittensor home economy: helping subnets standardize, attest, and source the lowest-cost GPUs. On the surprise Net $TAO Flow change, Woodman argues it’s healthy: it forces a sustainable, inward GDP in which revenue subnets thrive, then reinvest and grant into long-horizon “research” subnets (e.g., frontier training), so both speculation and science can coexist. He predicts centralized AI’s capital binge will hit distress, driving stranded compute onto open networks like Bittensor, provided the community “chops wood” and keeps leakage to zero. Governance today is necessarily guided (not yet fully decentralized) but should broaden; meanwhile, he urges collaboration over infighting (“we’re colleagues”), invites subnet owners to adopt TVM to end weight copying, and points builders, miners, and validators to targon.com to get started.
We are excited to be working with our friends at @Bitcast_network to Scale Confidential Decentralized Computing for Enterprise ☁️
Be on the lookout for Targon related content from Bitcast Miners coming soon ⚡️

Targon has solved confidential decentralized computing for enterprise.
The Bitcast network will be diving into how the @TargonCompute infrastructure is built and usage is scaling up.
One of the longest-standing subnets in the ecosystem - you know this one will be huge!
Deploy and Develop your OpenClaw Agents using Targon Rentals Template, Available Now ☁️
→ Head to Targon Rentals
→ Create New Rental
→ Select OpenClaw Template
Enjoy Single Click OpenClaw Deployments powered by Targon Virtual Machine⚡️
Ready to start building? Head to→ https://targon.com/rentals/create
or Deploy Directly using CLI with [curl -fsSL https://targon.com/openclaw.sh | bash]
Targon allows single command setup, making OpenClaw Deployments efficient and lightning fast ⚡️
You can now Spin up a Minecraft Server using Targon Rentals Template ☁️
→ Head to Targon Rentals
→ Create New Rental
→ Select Minecraft Template
Enjoy mining powered by Targon Virtual Machine ⚡️
Ready to start building? Head to →
Targon
Scale with Secure GPU & CPU Rentals on a Lightning-Fast Cloud for Training and Deployment
targon.com
Today, we are excited to announce Targon Alpha Discounts ☁️
→ Get a 10% Discount on Credits when you Pay with any Subnet's Alpha ⚡️
✱ Announcing The Subnet Signal Newsletter
→ A weekly look at what's moving across @bittensor from the team behind @targoncompute
Today we are discussing @Openclaw integration with @bittensor and where utilization is trending. Tap in below ↓
https://manifoldlabs.substack.com/p/bittensors-latest-miners-arent-human-246?r=6mm7v8&utm_campaign=post&utm_medium=web&triedRedirect=true
We just repurchased 350 TAO of SN4 alpha using organic revenues from @TargonCompute and @SybilChat platforms