With the number of new subnets being added, it can be hard to keep information up to date across all of them, so some data may be slightly out of date from time to time.

Subnet 29

AI-ASSeSS

Live network statistics (values displayed dynamically): Emissions, Recycled, Recycled (24h), Registration Cost, Active Validators, Active Miners, Active Dual Miners/Validators

ABOUT

What exactly does it do?

AI-ASSeSS (Subnet 29 of Bittensor, previously Coldint) is a decentralized, community-driven AI training network. Its stated mission is “to maximize the collective training efforts of the Bittensor community by incentivizing the sharing of models, knowledge, insights, and code”. In practice, AI-ASSeSS provides a dynamic incentive mechanism where participants (miners) are rewarded for improving machine-learning models and contributing code or research.

AI-ASSeSS's official goals highlight:

  • Train and share models/code: Develop state-of-the-art models for real-world tasks and make them public.
  • Individual & collective incentive: Reward both lone contributors and collaborative efforts to build on each other’s work.
  • Rapid, high-quality iteration: Continuously improve models, objectives, and code at a steady, fast pace.
  • Transparency and fairness: Publish code, training objectives, and rewards, and curtail gaming strategies.

 

AI-ASSeSS's role in the Bittensor ecosystem is to serve as a “collaborative, distributed, incentivized training” subnet. It runs incentivized training competitions (starting with the FineWeb-EDU2 dataset) in which miners compete to produce the best models, and it actively encourages code contributions (via a “Hall of Fame” bounty program) to refine its training and validation pipeline.

 

 

PURPOSE

What exactly is the 'product/build'?

Subnet 29 was created in response to frustrations miners experienced on other subnets. Miners are highly motivated to examine validator and weighting logic, which can sometimes overshadow a subnet's core focus on training. That behavior is harmless as long as the subnet's goals align with its validation logic; when the two diverge, miners may stop pursuing the subnet's intended objectives.

AI-ASSeSS operates like other Bittensor subnets: miners stake TAO on their hotkeys to train models locally, then submit those models on-chain. Validators (also stake-holding participants) download each submitted model and score it on a fixed evaluation dataset. The results feed into Bittensor’s Yuma consensus, which updates weights for each miner: higher-weight miners earn more token emissions. AI-ASSeSS validator code (open-source on GitHub) implements this scoring and consensus process.
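The scoring-and-weighting loop can be sketched in a few lines. This is an illustrative reconstruction, not the actual coldint_validator code; in particular, the tie-handling rule (no winner on a tied sample) is an assumption:

```python
from typing import Dict, List

def score_models(losses: Dict[str, List[float]]) -> Dict[str, float]:
    """losses maps a miner UID to its per-sample evaluation losses.
    A miner "wins" a sample when it holds the strictly lowest loss;
    normalized win counts stand in for on-chain weights."""
    uids = list(losses)
    n_samples = len(next(iter(losses.values())))
    wins = {uid: 0 for uid in uids}
    for i in range(n_samples):
        best = min(uids, key=lambda u: losses[u][i])
        # a tie yields no winner in this sketch
        if sum(losses[u][i] == losses[best][i] for u in uids) == 1:
            wins[best] += 1
    total = sum(wins.values()) or 1
    return {uid: wins[uid] / total for uid in uids}

weights = score_models({
    "miner_a": [2.1, 1.9, 2.5],
    "miner_b": [2.0, 2.2, 2.4],
})
# miner_a wins sample 1; miner_b wins samples 0 and 2
```

Note that emissions track relative wins, so even a model that is best on only a minority of samples still earns a proportional share.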

Competitions: AI-ASSeSS breaks training into discrete competitions or objectives. The inaugural competition rewarded miners for improving model loss on the FineWeb-EDU2 dataset. New competitions (e.g. using different tokenization or “niche” tasks) are regularly introduced via on-chain configuration (see “Technical Architecture” below). Validators switch targets smoothly by updating the subnet’s config repo rather than requiring code restarts.

Incentives: In each competition, miners earn TAO emissions proportional to how often their model wins pairwise comparisons of samples. AI-ASSeSS explicitly rewards even small improvements: every time a model achieves a better loss on a sample, that contributes to its win count. Miners also receive rewards for non-training contributions: the AI-ASSeSS Hall of Fame system lets validators assign extra weight to hotkeys of people who submit valuable bug reports, code fixes or research insights. A reported bug, once approved, is added to a hall_of_fame.json config; validators then give that miner’s key additional weight starting at a given block, decaying slowly over time. This effectively pays out bug bounties in TAO over weeks.
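A minimal sketch of such a decaying bounty weight, assuming an exponential decay with a roughly one-week half-life at 12 s per block; the actual schedule and parameters are the subnet's own and may differ:

```python
def hall_of_fame_weight(initial: float, start_block: int, current_block: int,
                        half_life_blocks: int = 50_400) -> float:
    """Bounty weight that activates at start_block and halves every
    half_life_blocks (~1 week at 12 s/block -- an assumed schedule)."""
    if current_block < start_block:
        return 0.0  # bounty has not started paying out yet
    elapsed = current_block - start_block
    return initial * 0.5 ** (elapsed / half_life_blocks)
```

Under these assumptions a bounty with initial weight 0.1 would still be paying 0.05 one week after activation, stretching the payout over several weeks.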

Anti-Gaming Measures: AI-ASSeSS's design actively prevents trivial exploits. For example, it prohibits model-copy attacks (where a miner uploads identical models multiple times to boost wins). Validators simply count at most one “win” per sample, no matter how many duplicate copies exist. It also forbids the tactic of “publishing without publishing” (setting a model’s repo to private to claim on-chain submission): if a submitted model cannot be downloaded from Hugging Face within two hours, validators mark it invalid. These rules ensure miners actually train better models rather than gaming the scoring system.
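The duplicate rule can be illustrated by deduplicating byte-identical submissions before scoring, keeping only the earliest copy. The hashing approach and the submission tuple shape here are illustrative assumptions, not the validator's actual mechanism:

```python
import hashlib

def dedupe_submissions(submissions):
    """submissions: iterable of (uid, block, model_bytes) tuples.
    Keeps only the earliest copy of any byte-identical model, so
    duplicate uploads cannot multiply a model's win count."""
    seen = set()
    kept = []
    for uid, block, blob in sorted(submissions, key=lambda s: s[1]):
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:   # later byte-identical copies are dropped
            seen.add(digest)
            kept.append(uid)
    return kept

kept = dedupe_submissions([
    ("uid_a", 100, b"model-X"),
    ("uid_b", 120, b"model-X"),  # duplicate of uid_a, submitted later
    ("uid_c", 110, b"model-Y"),
])
```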

Stake & Consensus: As with all subnets, AI-ASSeSS uses Bittensor’s UIDs/coldkeys structure. Each of the subnet’s 256 UIDs is linked to a unique miner/validator (hotkey) and the coldkey that registered it. Validators and miners earn “incentive weight” (W and S in Yuma) each epoch (every 361 blocks) based on their performance. All weight-setting is on-chain, driven by the modified Yuma algorithm, but the evaluation of model quality is done off-chain by validators following AI-ASSeSS code.

 

AI-ASSeSS Build: Code, Applications and Tools

AI-ASSeSS's “product” is primarily its training platform and codebase, which it provides openly to participants. Key components include:

Validator Codebase: The AI-ASSeSS validator software is released on GitHub (in the coldint_validator repo). It includes the logic for loading models from Hugging Face, evaluating them on the dataset, and interacting with the SubTensor chain (subnet 29) to set weights. Validators run this code to carry out consensus on submissions.

Miner Codebase: A canonical AI-ASSeSS miner implementation is under development for public release. According to the FAQ, the team aims to make the miner code available on GitHub “soon,” and any significant community contributions to it may earn TAO rewards. (Currently, miners can write their own training scripts using the standard Bittensor Python SDK, but a community reference miner is forthcoming.)

Competition Configuration (“sn29” repo): The GitHub repo coldint/sn29 holds dynamic JSON configs for the subnet. This is how AI-ASSeSS manages ongoing competitions and bounties in real time. For example, when a new competition begins, its parameters (dataset, tokenizer, model type, and weight) are added to competitions.json in this repo. Validators periodically fetch this file to know which tasks to evaluate. Likewise, approved bug-bounty recipients are listed in hall_of_fame.json here, instructing validators to credit those hotkeys. This design lets Coldint change contests or weights without re-deploying software.
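A sketch of how a validator might load and sanity-check such a config; the JSON schema and field names below are hypothetical, chosen only to illustrate the pattern, and are not the actual sn29 file format:

```python
import json

# Hypothetical schema for illustration -- not the actual sn29 format.
COMPETITIONS_JSON = """
{
  "c01": {"dataset": "fineweb-edu2", "tokenizer": "llama",  "weight": 0.9},
  "c02": {"dataset": "fineweb-edu2", "tokenizer": "custom", "weight": 0.1}
}
"""

def load_competitions(raw: str) -> dict:
    """Parse the competitions config and check that the weight
    fractions across competitions sum to 1."""
    comps = json.loads(raw)
    total = sum(c["weight"] for c in comps.values())
    if abs(total - 1.0) > 1e-6:
        raise ValueError("competition weight fractions should sum to 1")
    return comps

comps = load_competitions(COMPETITIONS_JSON)
```

Because the config is data rather than code, adding a competition is a one-line JSON change that running validators pick up on their next fetch.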

Utilities: AI-ASSeSS has developed helper tools. As per their roadmap, they released model_surgeon.py, a CLI tool for programmatically modifying model architectures (e.g. pruning or expanding layers). Their tools repo also contains scripts (e.g. for scanning blockchain events or slicing LLaMA models) to aid development and monitoring.

Community Dashboard: AI-ASSeSS runs a public leaderboard/web app (see Leaderboard on coldint.io) showing each miner’s submitted model losses and win rates. While the HTML page is mostly dynamically generated, it draws data from the Bittensor explorer (taostats) and AI-ASSeSS evaluation outputs.

Integration Platforms: AI-ASSeSS uses common ML platforms for transparency and sharing. The team’s official Hugging Face organization (“coldint”) hosts model weights and datasets. They also log experiments on Weights & Biases (WandB) under an AI-ASSeSS project. All validators and miners pull base models from Hugging Face (e.g. LLaMA or Phi-3 checkpoints) and push improved models back there.

Future Services: In the roadmap, AI-ASSeSS envisions commercial “Pretrain-as-a-Service” and “Finetune-as-a-Service” offerings (2025+). These would leverage the subnet’s infrastructure to train or fine-tune models on demand, potentially as paid services. (Details are still prospective, as noted on the roadmap.)

 

Technical Architecture

Under the hood, AI-ASSeSS is built on Bittensor’s SubTensor chain and leverages modern LLM frameworks. Technically:

Models Supported: AI-ASSeSS initially focuses on large transformer LMs. The competitions use Meta’s LLaMA and Microsoft’s “Phi-3” models. Official notes say competitions involve 10.5B-parameter LLaMA/Phi models (and a 20.1B model is in training). In blog posts the team describes methods to grow models: for example, they modified LLaMA and Phi architectures to double hidden dimensions (7B→14B→28B) without loss regression. This “model-surgery” capability reflects a technical architecture that allows progressive scaling.
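The core idea behind function-preserving widening can be demonstrated in a few lines of NumPy: double a hidden layer, but zero-initialize the new units' outgoing weights, so the enlarged network computes exactly the same function while gaining trainable capacity. This is a simplified two-layer illustration, not the team's model_surgeon.py:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1 = rng.normal(size=(d_hidden, d_in))   # first layer
W2 = rng.normal(size=(d_out, d_hidden))  # second layer
x = rng.normal(size=d_in)

def relu(v):
    return np.maximum(v, 0)

# Double the hidden dimension: the new rows of W1 are fresh (trainable)
# weights, but their outgoing columns in W2 start at zero, so the
# widened network's output is identical to the original's.
W1_big = np.vstack([W1, rng.normal(size=(d_hidden, d_in))])
W2_big = np.hstack([W2, np.zeros((d_out, d_hidden))])

y_small = W2 @ relu(W1 @ x)
y_big = W2_big @ relu(W1_big @ x)
assert np.allclose(y_small, y_big)  # function preserved after widening
```

Subsequent training can then move the zero columns away from zero, letting the larger model improve beyond its smaller parent without a loss regression at the moment of surgery.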

Validation Hardware: AI-ASSeSS optimized its validator code to run large models efficiently. Remarkably, the team reports being able to evaluate a 10.5B-parameter model on a single consumer GPU (RTX 4090). They achieved this via a novel validation pipeline (e.g. sample batching and custom scoring) that drastically lowers resource use.

Blockchain Integration: AI-ASSeSS's validator and miner programs interface with SubTensor via the Bittensor Python SDK (btcli or the bittensor library). All weight-setting transactions go on-chain in the subnet’s ledger. The network’s hyperparameters (stakes, emission rates, validator set) follow Bittensor’s standard rules. AI-ASSeSS's dynamic competition and bounty settings, however, are loaded by validators from the sn29 GitHub repo, so contest weights and bug rewards can change mid-run.

Data & API: For training, miners use the FineWeb-EDU2 dataset (hosted on Hugging Face) and any other data pinned by AI-ASSeSS. They leverage PyTorch/Transformers for model training. On the blockchain side, AI-ASSeSS uses standard SubTensor pallet calls (register, set_weights, etc.) and listens to custom extrinsics defined for competition submissions.

 

WHO

Team Info

AI-ASSeSS is a community initiative. There are no corporate affiliations announced; instead, it’s led by a small team of active Bittensor miners and AI researchers. According to Crucible Labs, the subnet is “led by RWH and μ”. These pseudonymous leads have worked on Bittensor since early 2024 – RWH holds a PhD in experimental quantum physics, and both have mining experience in the ecosystem. (Their GitHub organization lists “Netherlands” as a location.) The project actively solicits community contributions: anybody who submits helpful code or bug reports can earn TAO through the Hall of Fame system. AI-ASSeSS maintains an open dialogue with the Bittensor community via the official Bittensor Discord (look for the #coldint channel) and through detailed blog posts on coldint.io. All development is transparent (GitHub issues/pulls are public), and as such the “team” effectively includes any active contributors and validators using the subnet.

FUTURE

Roadmap

Subnet Creation Changelog:

  • #3379782: Registered 5HHHHHzgLnYRvnKkHd45cRUDMHXTSwx7MjUzxBrKbY4JfZWn for SN29
  • Registered coldint.io
  • Forked pretraining from Macrocosmos
  • Created GitHub repository at Coldint
  • Established WandB project at Coldint
  • Set up HuggingFace profile at Coldint
  • Applied for a Discord channel
  • Cleaned up validator codebase
  • Implemented scoring mechanism in validator
  • Modified sample packing in model evaluation logic
  • Introduced bug bounty, also known as the “Hall of Fame” reward mechanism
  • Deployed two validators and one miner
  • #3413001: Published SN29 metadata on-chain, released GitHub and website

TODO: Immediate Post-Launch Steps

  • Notify actors not using on-chain identity data about SN29 renaming (suggest they adopt on-chain identity data)
  • Implement competitions for targeted training goals
  • Clean up mining codebase and publish on GitHub
  • Explore options for on-chain announcements of validator startups with version information
  • Finalize testing with an arbitrary tokenizer (saving up to 800M of 6.9B parameters on the reference model)
  • Launch first additional competition (weight fraction 0.1) with the arbitrary tokenizer
  • Collect results and feedback on subnet and competition performance
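The parameter saving from a smaller tokenizer is straightforward arithmetic on embedding sizes. The figures below are hypothetical, chosen only to show how a reduced vocabulary can save on the order of the 800M parameters cited above; the reference model's actual dimensions may differ:

```python
# Back-of-envelope: shrinking the tokenizer vocabulary shrinks the
# (untied) embedding matrices -- input embedding plus output head.
d_model = 4096                           # assumed hidden size
old_vocab, new_vocab = 128_000, 32_000   # assumed vocabulary sizes

saved = 2 * (old_vocab - new_vocab) * d_model
print(f"~{saved / 1e9:.2f}B parameters saved")  # ~0.79B, roughly the 800M cited
```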

2024 Q3

  • Consult with the community and plan a list of competitions well in advance
  • Research model merging tactics to enhance distributed training potential
  • Draft a shortlist of finetuning targets for niche models
  • Launch the first additional competition for niche models
  • Publish model_surgeon.py, a command-line tool for model modifications

2024 Q4

  • Aim to have 5 niche models in training
  • Provide boilerplate code for web applications and host apps showcasing top models
  • Research external benchmarks to evaluate subnet efficacy

2025 and Beyond

  • AI-ASSeSS's longer-term vision includes evolving into a service platform. The roadmap explicitly mentions “Pretrain-as-a-Service” and “Finetune-as-a-Service” under commercial opportunities. In other words, AI-ASSeSS plans to offer its decentralized training infrastructure to outside parties for paid AI training services. Other future steps (ongoing) involve launching more competitions, iterating the incentive mechanism, and refining validator/miner code.

 


MEDIA

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by subnets within Bittensor! Check out some of their other videos on the Tao Stats channel.

In this session, the Coldint team delves into their goals of pushing the boundaries of collaborative, distributed model training and research. Their subnet emerged from the foundation of Subnet 9 (pretraining), which they felt was limited due to its static nature and lack of incentives for continuous improvement. The team aims to transform the way models are trained, evaluated, and shared within the Bittensor community, providing incentives for exchanging not just models, but also knowledge, insights, and code. Their ultimate mission is to drive the evolution of collective training efforts, creating a dynamic environment where innovation thrives and contributes to the growth of the network.

NEWS

Announcements
