With the number of new subnets being added, it can be hard to keep information up to date across all of them, so some data may be slightly out of date from time to time.

Subnet 03

Templar

Subnet statistics: Alpha Price · Market Cap · Neurons · Registration Cost · TAO Liquidity · Alpha in Pool · Total Alpha Supply · % Alpha Staked

ABOUT

What exactly does it do?

Templar is a decentralized training framework that enables large-scale AI model training across heterogeneous compute resources distributed over the internet. By leveraging a carefully designed incentive mechanism, it connects diverse computational nodes, allowing contributors (miners) to participate in collaborative training while ensuring quality and integrity through a trustless validation process. This innovative approach, known as Incentivized Wide-Internet Training, rewards miners in $TAO tokens for contributing computational power and high-quality data, fostering an open and democratized AI development ecosystem. Unlike traditional cloud-based training, which relies on centralized infrastructure and controlled datasets, Templar ensures privacy, scalability, and resilience by distributing training across a decentralized network.

This model eliminates single points of failure, making AI training more accessible, secure, and efficient. Participants are incentivized to provide high-quality contributions, as rewards are directly tied to the accuracy and usefulness of their updates. The framework supports heterogeneous hardware environments, allowing a wide range of contributors to participate, from individual developers to large-scale compute providers. By integrating blockchain incentives with decentralized AI training, Templar paves the way for a future where advanced machine learning models can be developed collaboratively without reliance on centralized tech monopolies. This ensures that AI remains open, scalable, and privacy-focused while reducing the costs traditionally associated with model training.

Through its decentralized architecture, Templar is redefining the landscape of AI and blockchain integration, unlocking new opportunities for researchers, engineers, and businesses seeking an efficient, community-driven approach to artificial intelligence.

PURPOSE

What exactly is the 'product/build'?

This system involves miners around the world contributing computational resources to train machine learning models. Miners receive slices of a larger dataset, train the model on their portion, and then send the resulting updates (gradients) back to the platform. Validators evaluate these gradients to determine how much each miner’s effort improves the global model.
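
A minimal sketch of that miner loop, assuming a PyTorch model; the helper names (get_global_state, get_assigned_shard, submit_gradients) are hypothetical placeholders, not Templar’s actual API:

```python
import torch

def run_miner_window(model, loss_fn, uid, window):
    # Sync with the latest global model state for this training window.
    model.load_state_dict(get_global_state(window))          # hypothetical helper
    model.zero_grad()
    # Train on the miner's assigned slice, accumulating gradients locally.
    for batch, targets in get_assigned_shard(uid, window):   # hypothetical helper
        loss = loss_fn(model(batch), targets)
        loss.backward()                                      # grads accumulate in .grad
    grads = {name: p.grad.clone() for name, p in model.named_parameters()}
    submit_gradients(uid, window, grads)                     # upload before the deadline
```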

Templar integrates a validation mechanism to ensure that the gradients are accurate and useful. Validators assess the quality of the gradients submitted by miners by checking if these gradients improve the model more than the existing training state. If a miner’s contribution is deemed valid, they are rewarded based on the improvement they brought to the model’s accuracy.

A significant aspect of Templar’s design is its incentive system. Miners are not paid just for participating; they are rewarded based on how much their contribution (the gradient they compute) reduces the model’s loss. This incentivizes miners to push the limits of optimization and performance.
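
A toy illustration of that scoring idea, assuming the reward is proportional to the loss reduction a gradient produces when applied to a copy of the current model; the learning rate and the clamp at zero are assumptions, not Templar’s documented formula:

```python
import copy
import torch

def score_gradient(model, loss_fn, batch, targets, miner_grads, lr=1e-4):
    with torch.no_grad():
        loss_before = loss_fn(model(batch), targets).item()
        trial = copy.deepcopy(model)            # never mutate the real model
        for name, param in trial.named_parameters():
            param -= lr * miner_grads[name]     # one SGD-style trial step
        loss_after = loss_fn(trial(batch), targets).item()
    # A useful gradient lowers the loss; a useless or hostile one does not.
    return max(0.0, loss_before - loss_after)
```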

Templar is also noteworthy for using R2 buckets (cloud storage) for communication between miners and validators. These storage solutions are crucial for ensuring that large datasets and gradients are transferred efficiently and timestamped, adding another layer of security and verification to the training process. The incentive structure and decentralization are what differentiate Templar from traditional AI training setups, which usually involve centralized, expensive, and tightly controlled data centers.
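
Because R2 exposes an S3-compatible API, the transfer layer can be sketched with boto3; the endpoint, bucket name, and key layout below are illustrative placeholders, not Templar’s actual configuration:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account_id>.r2.cloudflarestorage.com",  # placeholder
    aws_access_key_id="<key_id>",
    aws_secret_access_key="<secret>",
)

def upload_gradients(uid: int, window: int, payload: bytes) -> None:
    # The key encodes who produced the gradient and for which window.
    s3.put_object(Bucket="templar-gradients", Key=f"{window}/{uid}.pt", Body=payload)

def download_gradients(uid: int, window: int) -> bytes:
    obj = s3.get_object(Bucket="templar-gradients", Key=f"{window}/{uid}.pt")
    # obj["LastModified"] gives a server-side timestamp that validators
    # can check against the window deadline.
    return obj["Body"].read()
```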

One of Templar’s challenges is the coordination of these distributed resources, ensuring that miners’ contributions are synchronized and measured properly. To solve this, the platform enforces strict deadlines for miners to submit their gradients, ensuring the training process is timely and consistent.

 

Key Features

Decentralized Training: Utilizes computational resources across the internet to enable large-scale model training.

Incentive-Driven: Implements a reward system that encourages miners to contribute high-quality updates.

Heterogeneous Compute: Supports various hardware configurations to ensure broad participation.

Scalable Architecture: Designed to efficiently train large models across a distributed network.

Fair Participation: Includes mechanisms to prevent manipulation and ensure honest contributions.

 

Technical Architecture

Templar’s architecture is meticulously designed to coordinate computational workloads across a decentralized network, ensuring efficient and secure AI model training. The system is structured around two pivotal roles: Miners and Validators.

Miners are the backbone of the training process. They are responsible for synchronizing their models with the latest global state, acquiring specific subsets of data deterministically assigned based on unique identifiers and training windows, and performing local training to compute gradients. These gradients are then accumulated over multiple batches within the training window before submission. The deterministic assignment ensures that each miner processes a unique yet consistent portion of the dataset, promoting diversity and comprehensiveness in training.
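
One way to implement that deterministic assignment, assuming a SHA-256-seeded RNG; any scheme both sides agree on would do, so treat this as a sketch rather than Templar’s actual seeding code:

```python
import hashlib
import numpy as np

def assigned_indices(uid: int, window: int, dataset_size: int, samples: int):
    # Hash (uid, window) into a seed that miner and validator both derive,
    # so the validator can reconstruct exactly the same data subset.
    digest = hashlib.sha256(f"{uid}:{window}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.choice(dataset_size, size=samples, replace=False)
```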

Validators, on the other hand, play a crucial role in maintaining the integrity and quality of the training process. They synchronize their models with the latest global state, select miners for evaluation, and retrieve the same data subsets assigned to those miners using the deterministic seeding mechanism. Validators then gather the compressed gradients submitted by miners, decompress them, and apply them to their local models to evaluate their effectiveness. This rigorous evaluation ensures that only beneficial updates are incorporated into the global model, maintaining the overall quality and reliability of the training process.
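
The compression step might look like the standard top-k sparsification used in distributed training; Templar’s exact codec (for example, with quantization layered on top) is not specified here, so this is a generic sketch:

```python
import math
import torch

def compress_topk(grad: torch.Tensor, k: int):
    # Keep only the k largest-magnitude entries; ship (indices, values, shape).
    flat = grad.flatten()
    _, idx = flat.abs().topk(k)
    return idx, flat[idx], grad.shape

def decompress_topk(idx, values, shape, dtype=torch.float32):
    # Rebuild a dense gradient: zeros everywhere except the kept entries.
    flat = torch.zeros(math.prod(shape), dtype=dtype)
    flat[idx] = values
    return flat.view(shape)
```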

The collaboration between miners and validators is orchestrated through a series of synchronized windows, each comprising phases of training, submission, evaluation, and integration. This structured approach ensures that the decentralized training process remains organized, efficient, and conducive to producing high-quality AI models.
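
The synchronization itself can be pinned to chain state, for example by deriving the window number from the block height so every participant flips phases at the same moment; the constant and callback names below are illustrative assumptions:

```python
import time

BLOCKS_PER_WINDOW = 100              # illustrative, not Templar's actual value

def current_window(block_height: int) -> int:
    return block_height // BLOCKS_PER_WINDOW

def follow_windows(node, get_block_height):
    last_seen = -1
    while True:
        window = current_window(get_block_height())
        if window > last_seen:
            # Miner: train and submit; validator: fetch and evaluate.
            node.on_new_window(window)   # hypothetical callback
            last_seen = window
        time.sleep(1)                    # poll the chain for the next window
```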

 

Product Implementation and Features

Templar distinguishes itself through several key features that collectively enhance its functionality and appeal:

Decentralized Training: By utilizing computational resources spread across the internet, Templar enables large-scale model training without the need for centralized infrastructure. This approach not only democratizes access to AI training but also enhances the system’s resilience and scalability.

Incentive-Driven Participation: Templar implements a reward system that encourages miners to contribute high-quality updates. By linking rewards directly to the accuracy and usefulness of their contributions, Templar ensures that participants are motivated to perform genuine and effective training.

Support for Heterogeneous Compute Resources: The framework is designed to accommodate various hardware configurations, allowing a wide range of contributors, from individual developers with personal computers to large-scale data centers, to participate effectively.

Scalable Architecture: Templar’s architecture is built to efficiently train large models across a distributed network, ensuring that the system can scale seamlessly as more participants join the network.

Fair Participation Mechanisms: To maintain the integrity of the training process, Templar includes mechanisms to prevent manipulation and ensure honest contributions. This includes rigorous validation processes and a reward system that penalizes malicious behavior.

By integrating these features, Templar not only facilitates efficient and decentralized AI training but also creates an ecosystem that is inclusive, secure, and conducive to innovation.

 

WHO

Team Info

The Templar team is led by Distributed, the subnet owner, who has been instrumental in building and leading the project. He is supported by Joel, Evan, Noah, and several other key community contributors. The team has worked through significant challenges and transformations while building this decentralized AI training platform.

Initially, the team tried to establish a more centralized structure, but a critical turning point came when Const posted about Distributed’s struggle, leading to a more open and collaborative approach. This pivotal moment turned Templar into a community-driven project in which miners became owners and stakeholders of the platform rather than mere participants. The move from a centralized, company-driven model to a community-driven effort was a game-changer for Templar.

The miners have played a central role in Templar’s development. Distributed has described miners as the backbone of Templar, often pushing the system to its limits and uncovering exploits. This community involvement has helped Templar evolve and refine its design. Working with miners also brings challenges, however, such as dealing with the various exploits miners use to manipulate the system, including gradient spoofing and bucket copying. These issues have been part of Templar’s learning curve, leading to improvements in the protocol’s robustness and security.

A crucial aspect of the team’s progress has been learning through 200+ experimental runs, each uncovering new challenges and insights. Through these experiments, they have learned how to run a distributed training process that balances robustness to adversarial behavior with effective incentivization.

 

FUTURE

Roadmap

Templar has made significant progress in refining its decentralized AI training protocol. The platform is now running over 200 GPUs, which are used to train smaller models. The system is reaching a level of stability, with reproducible loss curves and effective validation. A major achievement has been the ability to create stable runs in which miners’ contributions align with the overall training goals of the subnet. This stability has resulted in fewer errors and a smoother overall training process.

The network is now focusing on optimizing efficiency by improving bandwidth usage and further enhancing the validator-miner synchronization. One of the key challenges being tackled right now is ensuring that the decentralized model can scale efficiently as the size of the models being trained increases.

Next Steps (Immediate Goals):
In the short term, Templar is aiming for two key improvements:

  1. Scalability: The next goal is to scale the model to 1.2B parameters. Once this scale is achieved, the team believes it will be easier to scale up further to 70B parameters.
  2. Asynchronous Training: Currently, miners and validators train in lockstep. Templar plans to transition to asynchronous training, allowing miners to continue their work without waiting for validation results from every other miner. This will drastically improve training speed and reduce idle time, making decentralized training far more efficient (see the sketch after this list).
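
Since asynchronous training is still a planned change, the following is purely conceptual: it only illustrates miners moving on to the next window without blocking on validation, and every name in it is hypothetical:

```python
import asyncio

async def miner_loop(uid, model, windows):
    for window in windows:
        # Heavy local compute runs in a worker thread.
        grads = await asyncio.to_thread(train_on_window, model, uid, window)
        # Fire-and-forget upload: the miner does not wait for validators
        # before starting the next window, eliminating the idle time
        # described above.
        asyncio.create_task(submit_gradients(uid, window, grads))

async def submit_gradients(uid, window, grads):
    ...  # upload to storage; validators score it whenever they get to it
```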

 

Long-Term Vision:

The ultimate aim for Templar is to build a platform that can handle trillion-parameter models. Templar is striving to be the world’s leading decentralized platform for AI training, allowing anyone with computational power to participate in training large-scale AI models.

Another long-term vision is to make the models being trained co-owned by the decentralized community. The platform is designed to ensure that everyone can own a piece of the model, contributing to it, benefiting from it, and ensuring its ethical use. The idea of shared ownership of AI models is a radical departure from the traditional approach, where models are owned by companies or governments.

Templar’s focus on open-source technology and decentralization positions it as a future leader in a democratized AI ecosystem, offering individuals the chance to be part of something that historically only large corporations or state-backed entities could achieve.

 

MEDIA

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.

In this session, the team behind Templar discusses their work on decentralized AI training using the Bittensor network. They explore how the platform allows global miners to contribute computational power for training large-scale AI models in a permissionless, incentivized environment. The conversation covers the platform’s innovative approach to model training, where miners submit gradients and are rewarded based on performance. The team shares insights into the technical challenges and lessons learned, such as issues with synchronization and exploits from miners. They also highlight the future roadmap, which includes scaling to larger models and improving efficiency through asynchronous training. Templar’s long-term vision is to revolutionize AI model ownership by allowing decentralized communities to co-own and collaborate on the development of the world’s largest AI models.

A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.

In this May 2025 episode of Hash Rate, Mark Jeffrey speaks with Sam Dare, the founder of Templar Subnet 3, about the progress and challenges of the decentralized training network. Sam discusses the launch of dTAO and its impact on the Bittensor ecosystem, emphasizing the importance of aligning incentives for subnet owners and miners. He shares insights into how Templar operates, highlighting the complexities of decentralized machine learning training and the need for high-powered compute resources like H100s. Sam explains that Templar is focused on collaborative training, where miners contribute to reducing the loss in AI models by working together on a decentralized platform. They also touch on the future of monetization, with Templar planning to offer pre-training services for niche foundation models. Sam’s journey into the world of blockchain, from nightclub promoter to DeFi CTO, adds a personal touch to the conversation, showcasing his transition from traditional tech roles to building cutting-edge decentralized infrastructure.

Recorded August 2025: This Tao Stats Novelty Search session wrangles a lively panel to unveil Covenant’s three-part push across Bittensor—Templar (pre-training), Basilica (compute), and Grail (post-training/RL). The team spotlights a big research leap: Sparse-LoCo, a decentralized training optimizer that combines top-k compression with 2-bit quantization to slash communication while improving accuracy, enabling a permissionless 70B-parameter run. Basilica is positioned as a compute network that will evolve beyond “rentals” into value-added services like verifiable inference and hardware-efficiency tricks to cut the “Jensen tax.” Grail targets single- then multi-turn RL, plus a fast hidden-state “fingerprint” to verify miners’ outputs and model usage. Together—with a coming rebrand to Covenant.ai—the trio aims to turn open research into production pipelines while keeping incentives aligned and results shareable.

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.

In this episode, Sam—lead of Bittensor’s Templar (subnet 3) and Basilica (subnet 39)—walks through how Templar became a top distributed training platform, including the recent CCLoco implementation. He then outlines his next goal: building a permissionless, community-owned compute cloud on Basilica. We cover the technical design, challenges, and what this means for the broader Bittensor ecosystem.

Recorded in November 2025, this episode covers Covenant’s three-subnet “frontier lab” — Templar (decentralized pre-training), Grail (post-training/evals), and Basilica (compute) — and Sam Dare’s goal to democratize frontier-model creation by turning “the internet into the data center.” Templar’s latest 72B run shows decentralized training can approach centralized results (Sam pegs it at roughly 60% of top SOTA today), while Grail refines capabilities via RL-style post-training and Basilica rethinks compute incentives to source cheaper capacity and share revenue rather than overpay for idle GPUs. Sam outlines a shift from pure research to research-plus-product: training/fine-tune APIs, enterprise sales already in motion, and a longer-term aim to release a competitive base model and vertically integrated stack. He says 100% of training fees will buy back Covenant tokens, hints at unifying token economics across the three “orders,” and frames TAO-flow strategy around real demand (paid training) instead of blanket emission cuts.

NEWS

Announcements
