With the number of new subnets being added, it can be hard to keep information current across all of them, so some of the data here may be slightly out of date from time to time.
AI-ASSeSS (Subnet 29 of Bittensor, previously Coldint) is a decentralized, community-driven AI training network. Its stated mission is “to maximize the collective training efforts of the Bittensor community by incentivizing the sharing of models, knowledge, insights, and code”. In practice, AI-ASSeSS provides a dynamic incentive mechanism where participants (miners) are rewarded for improving machine-learning models and contributing code or research.
AI-ASSeSS's official goals highlight its role in the Bittensor ecosystem: to serve as a “collaborative, distributed, incentivized training” subnet. It runs incentivized training competitions (starting with the FineWeb-EDU2 dataset) where miners compete by producing the best models, and it actively encourages code contributions (via a “Hall of Fame” bounty program) to refine its training and validation pipeline.
Subnet 29 was created in response to the frustration miners experienced on other subnets. Miners are highly motivated to scrutinize validator and weighting logic, and that scrutiny can overshadow a subnet's core focus on training. This dynamic works well as long as the subnet's goals align with its validation logic; where the two diverge, miners drift away from pursuing the subnet's intended objectives.
AI-ASSeSS operates like other Bittensor subnets: miners stake TAO on their hotkeys to train models locally, then submit those models on-chain. Validators (also stake-holding participants) download each submitted model and score it on a fixed evaluation dataset. The results feed into Bittensor’s Yuma consensus, which updates weights for each miner: higher-weight miners earn more token emissions. The AI-ASSeSS validator code (open-source on GitHub) implements this scoring and consensus process.
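In outline, each validation pass looks something like the sketch below. All helper names here are hypothetical; the real logic lives in the coldint_validator repo, and the win-based scoring is detailed under “Incentives” below.

```python
# High-level sketch of the validation flow described above. The helper
# functions (fetch_submissions, download_model, evaluate_wins) are
# hypothetical; the actual implementation is in coldint_validator.

def run_validation_pass(subtensor, eval_dataset, netuid=29):
    submissions = fetch_submissions(subtensor, netuid)  # on-chain model pointers
    wins = {}
    for sub in submissions:
        model = download_model(sub.hf_repo)             # pull weights from Hugging Face
        wins[sub.uid] = evaluate_wins(model, eval_dataset)
    # Normalize win counts into weights; Yuma consensus then combines
    # the weights set by all validators into per-miner emissions.
    total = sum(wins.values()) or 1
    return {uid: w / total for uid, w in wins.items()}
```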
Competitions: AI-ASSeSS breaks training into discrete competitions or objectives. The inaugural competition rewarded miners for improving model loss on the FineWeb-EDU2 dataset. New competitions (e.g. using different tokenization or “niche” tasks) are introduced regularly via on-chain configuration (see “Technical Architecture” below). Validators switch targets smoothly when the subnet’s config repo is updated, with no code restarts required.
Incentives: In each competition, miners earn TAO emissions proportional to how often their model wins pairwise comparisons of samples. AI-ASSeSS explicitly rewards even small improvements: every time a model achieves a better loss on a sample, that contributes to its win count. Miners also receive rewards for non-training contributions: the AI-ASSeSS Hall of Fame system lets validators assign extra weight to hotkeys of people who submit valuable bug reports, code fixes or research insights. A reported bug, once approved, is added to a hall_of_fame.json config; validators then give that miner’s key additional weight starting at a given block, decaying slowly over time. This effectively pays out bug bounties in TAO over weeks.
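The exact decay schedule for Hall of Fame rewards is not spelled out publicly. As a rough sketch, assuming an exponential decay from the entry's start block (the field names mirror the hall_of_fame.json description above, but the half-life value is invented):

```python
# Hypothetical sketch of Hall of Fame bounty weighting. The start-block
# and slow-decay behavior come from the description above; the exact
# schedule (here, exponential with a configurable half-life) is assumed.

def hall_of_fame_weight(entry, current_block, half_life_blocks=50_400):
    """Extra weight for a bounty hotkey, decaying over roughly weeks.

    entry: dict like {"hotkey": "...", "start_block": 3_500_000, "weight": 0.02}
    50_400 blocks is about one week at 12-second blocks.
    """
    elapsed = current_block - entry["start_block"]
    if elapsed < 0:
        return 0.0  # bounty not yet active
    return entry["weight"] * 0.5 ** (elapsed / half_life_blocks)
```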
Anti-Gaming Measures: AI-ASSeSS’s design actively prevents trivial exploits. For example, it prohibits model-copy attacks (where a miner uploads identical models multiple times to boost wins). Validators simply count at most one “win” per sample, no matter how many duplicate copies exist. It also forbids the tactic of “publishing without publishing” (setting a model’s repo to private to claim on-chain submission): if a submitted model cannot be downloaded from Hugging Face within two hours, validators mark it invalid. These rules ensure miners actually train better models rather than gaming the scoring system.
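Combining the pairwise scoring from “Incentives” with the one-win-per-sample rule, a minimal sketch of per-sample win counting could look like the following. Breaking ties by earliest submission block is an assumed mechanism for neutralizing copies; the validator repo may deduplicate differently.

```python
from collections import defaultdict

# Sketch: count pairwise wins per evaluation sample, crediting at most
# one win per sample even if several submissions share identical weights.

def count_wins(losses_by_model, submit_block):
    """losses_by_model: {uid: [per-sample losses]}; submit_block: {uid: block}."""
    wins = defaultdict(int)
    n_samples = len(next(iter(losses_by_model.values())))
    for i in range(n_samples):
        # Lowest loss wins the sample; ties (e.g. duplicate uploads of the
        # same weights) are broken by earliest submission, so copying an
        # existing model can never earn extra wins.
        best = min(losses_by_model,
                   key=lambda u: (losses_by_model[u][i], submit_block[u]))
        wins[best] += 1
    return wins
```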
Stake & Consensus: As with all subnets, AI-ASSeSS uses Bittensor’s UID/hotkey/coldkey structure. Each of the subnet’s 256 UIDs is linked to a unique miner or validator hotkey and the coldkey that registered it. Each epoch (every 361 blocks), miner incentive is computed in Yuma from the weights (W) validators set, combined with validator stake (S). All weight-setting is on-chain, driven by the modified Yuma algorithm, but the evaluation of model quality is done off-chain by validators running the AI-ASSeSS code.
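On the on-chain side, the bittensor Python SDK's set_weights call is how a validator commits its scores each epoch. A minimal sketch (the wallet name, network, UIDs, and weight values are placeholders):

```python
import bittensor as bt

# Sketch: push normalized miner weights for subnet 29 on-chain.
# Validators run something like this once per epoch (361 blocks);
# Yuma consensus then combines all validators' submitted weights.

wallet = bt.wallet(name="validator", hotkey="default")
subtensor = bt.subtensor(network="finney")

uids = [12, 47, 101]       # example miner UIDs
weights = [0.5, 0.3, 0.2]  # normalized incentive weights from evaluation

subtensor.set_weights(
    wallet=wallet,
    netuid=29,
    uids=uids,
    weights=weights,
)
```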
AI-ASSeSS Build: Code, Applications and Tools
AI-ASSeSS’s “product” is primarily its training platform and codebase, which it provides openly to participants. Key components include:
Validator Codebase: The AI-ASSeSS validator software is released on GitHub (in the coldint_validator repo). It includes the logic for loading models from Hugging Face, evaluating them on the dataset, and interacting with the SubTensor chain (subnet 29) to set weights. Validators run this code to carry out consensus on submissions.
Miner Codebase: A canonical AI-ASSeSS miner implementation is under development for public release. According to the FAQ, the team aims to make the miner code available on GitHub “soon,” and any significant community contributions to it may earn TAO rewards. (Currently, miners can write their own training scripts using the standard Bittensor Python SDK, but a community reference miner is forthcoming.)
Competition Configuration (“sn29” repo): The GitHub repo coldint/sn29 holds dynamic JSON configs for the subnet. This is how AI-ASSeSS manages ongoing competitions and bounties in real time. For example, when a new competition begins, its parameters (dataset, tokenizer, model type, and weight) are added to competitions.json in this repo. Validators periodically fetch this file to learn which tasks to evaluate. Likewise, approved bug-bounty recipients are listed in hall_of_fame.json, instructing validators to credit those hotkeys. This design lets Coldint change contests or weights without redeploying software.
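A validator-side poll of those configs might look like the sketch below. The filenames come from the description above; the raw-URL pattern and the branch name are assumptions.

```python
import requests

# Sketch: poll the coldint/sn29 config repo for the active competitions
# and Hall of Fame entries. The raw-URL pattern and "main" branch are
# assumptions; the filenames come from the description above.

BASE = "https://raw.githubusercontent.com/coldint/sn29/main"

def fetch_config(name):
    resp = requests.get(f"{BASE}/{name}", timeout=10)
    resp.raise_for_status()
    return resp.json()

competitions = fetch_config("competitions.json")  # dataset, tokenizer, weight per contest
hall_of_fame = fetch_config("hall_of_fame.json")  # bounty hotkeys + start blocks
```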
Utilities: AI-ASSeSS has developed helper tools. As per their roadmap, they released model_surgeon.py, a CLI tool for programmatically modifying model architectures (e.g. pruning or expanding layers). Their tools repo also contains scripts (e.g. for scanning blockchain events or slicing LLaMA models) to aid development and monitoring.
Community Dashboard: AI-ASSeSS runs a public leaderboard/web app (see Leaderboard on coldint.io) showing each miner’s submitted model losses and win rates. While the HTML page is mostly dynamically generated, it draws data from the Bittensor explorer (taostats) and AI-ASSeSS evaluation outputs.
Integration Platforms: AI-ASSeSS uses common ML platforms for transparency and sharing. The team’s official Hugging Face organization (“coldint”) hosts model weights and datasets. They also log experiments on Weights & Biases (WandB) under an AI-ASSeSS project. All validators and miners pull base models from Hugging Face (e.g. LLaMA or Phi-3 checkpoints) and push improved models back there.
Future Services: In the roadmap, AI-ASSeSS envisions commercial “Pretrain-as-a-Service” and “Finetune-as-a-Service” offerings (2025+). These would leverage the subnet’s infrastructure to train or fine-tune models on demand, potentially as paid services. (Details are still prospective, as noted on the roadmap.)
Technical Architecture
Under the hood, AI-ASSeSS is built on Bittensor’s SubTensor chain and leverages modern LLM frameworks. Technically:
Models Supported: AI-ASSeSS initially focuses on large transformer LMs. The competitions use Meta’s LLaMA and Microsoft’s Phi-3 models. Official notes say competitions involve 10.5B-parameter LLaMA/Phi models (and a 20.1B model is in training). In blog posts the team describes methods to grow models: for example, they modified LLaMA and Phi architectures to double hidden dimensions (7B→14B→28B) without loss regression. This “model-surgery” capability lets the architecture scale progressively.
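model_surgeon.py itself is not reproduced here, but function-preserving widening in the Net2Net style is one standard way to double a hidden dimension without a loss regression; whether Coldint uses exactly this recipe is not stated. A sketch for a pair of linear layers:

```python
import torch
import torch.nn as nn

# Illustration only: double a linear layer's output width by duplicating
# its neurons and halving the downstream weights that read from them, so
# the widened network computes the same function before further training.
# This is the classic Net2Net recipe, not necessarily Coldint's method.

def widen_pair(up: nn.Linear, down: nn.Linear):
    wide_up = nn.Linear(up.in_features, up.out_features * 2)
    wide_down = nn.Linear(down.in_features * 2, down.out_features)
    with torch.no_grad():
        wide_up.weight.copy_(torch.cat([up.weight, up.weight], dim=0))
        wide_up.bias.copy_(torch.cat([up.bias, up.bias], dim=0))
        # Each duplicated unit now contributes twice, so halve its
        # outgoing weights to preserve the layer pair's output.
        wide_down.weight.copy_(torch.cat([down.weight, down.weight], dim=1) / 2)
        wide_down.bias.copy_(down.bias)
    return wide_up, wide_down
```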
Validation Hardware: AI-ASSeSS optimized its validator code to run large models efficiently. Remarkably, the team reports being able to evaluate a 10.5B-parameter model on a single consumer GPU (RTX 4090). They achieved this via a novel validation pipeline (e.g. sample batching and custom scoring) that drastically lowers resource use.
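The pipeline's details are not public, but the basic shape of batched per-sample scoring, with the model loaded in half precision to fit a 24 GB card (an assumption about their approach), might look like:

```python
import torch
import torch.nn.functional as F

# Sketch: per-sample language-model loss, computed in small batches so a
# ~10B-parameter model fits on one 24 GB GPU. Loading the model in
# bfloat16 is an assumption; the actual pipeline may use other tricks.

@torch.no_grad()
def per_sample_losses(model, batches):
    """batches: iterable of LongTensors of shape (batch, seq_len)."""
    losses = []
    for input_ids in batches:
        input_ids = input_ids.to(model.device)
        logits = model(input_ids).logits[:, :-1]  # predict the next token
        targets = input_ids[:, 1:]
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)).float(),
            targets.reshape(-1),
            reduction="none",
        ).view(targets.shape).mean(dim=1)          # one loss per sample
        losses.extend(loss.tolist())
    return losses
```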
Blockchain Integration: AI-ASSeSS validator and miner programs interface with SubTensor via the Bittensor Python SDK (btcli or the bittensor library). All weight-setting transactions go on-chain in the subnet’s ledger. The network’s hyperparameters (stakes, emission rates, validator set) follow Bittensor’s standard rules. AI-ASSeSS’s dynamic competition and bounty settings, however, are loaded by validators from the sn29 GitHub repo (which is updated on-chain) so that contest weights and bug rewards can change mid-run.
Data & API: For training, miners use the FineWeb-EDU2 dataset (hosted on Hugging Face) and any other data pinned by AI-ASSeSS. They leverage PyTorch/Transformers for model training. On the blockchain side, AI-ASSeSS uses standard SubTensor pallet calls (register, set_weights, etc.) and listens to custom extrinsics defined for competition submissions.
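A sketch of streaming that dataset with the datasets library follows. The exact dataset ID for “FineWeb-EDU2” is an assumption; in practice the competitions.json config pins the dataset for the current contest.

```python
from datasets import load_dataset

# Sketch: stream the FineWeb-Edu corpus for training. The dataset ID and
# config name are assumptions; check the sn29 competitions.json for the
# exact dataset pinned to the current competition.

ds = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",   # a published sample config; full dumps are larger
    split="train",
    streaming=True,       # avoid downloading the full corpus up front
)

for example in ds.take(3):
    print(example["text"][:200])
```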
AI-ASSeSS is a community initiative. There are no corporate affiliations announced; instead, it’s led by a small team of active Bittensor miners and AI researchers. According to Crucible Labs, the subnet is “led by RWH and μ”. These pseudonymous leads have worked on Bittensor since early 2024 – RWH holds a PhD in experimental quantum physics, and both have mining experience in the ecosystem. (Their GitHub organization lists “Netherlands” as a location.) The project actively solicits community contributions: anybody who submits helpful code or bug reports can earn TAO through the Hall of Fame system. AI-ASSeSS maintains an open dialogue with the Bittensor community via the official Bittensor Discord (look for the #coldint channel) and through detailed blog posts on coldint.io. All development is transparent (GitHub issues/pulls are public), and as such the “team” effectively includes any active contributors and validators using the subnet.
Subnet Creation Changelog:
TODO: Immediate Post-Launch Steps
2024 Q3
2024 Q4
2025 and Beyond
A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by subnets within Bittensor! Check out some of their other videos as well.
In this session, the Coldint team delves into their goals of pushing the boundaries of collaborative, distributed model training and research. Their subnet emerged from the foundation of Subnet 9 (pretraining), which they felt was limited due to its static nature and lack of incentives for continuous improvements. The team aims to transform the way models are trained, evaluated, and shared within the Bittensor community, providing incentives for exchanging not just models, but also knowledge, insights, and code. Their ultimate mission is to drive the evolution of collective training efforts, creating a dynamic environment where innovation thrives and contributes to the growth of the network.
🚨 SN29 returns - AI ASSeSS is here 🚨
After a year of mining highs and lows, Discord battles, and dTAO chaos, Subnet 29 is relaunching with a clear mission:
🎯 Build the first Bittensor subnet focused on AI Agent Safety & Security
Not theoretical. Not vibes. Actual…
Why now?
The team behind SN29 has been around since buffer overflows, SQL injections, Web2, Web3, and broken smart contracts. We’ve seen every hype cycle come with 10 years of security issues.
Agentic AI is next - and it’s going to be messy.
So we’re not waiting. We’re…
This is the kind of thing we’ve wanted to do for a long time.
Bounty-driven. Open. Relevant. Built for the next era of digital risk.
We’re not sharing the full mechanics yet (to keep the clones at bay), but code is coming soon.
📢 If you’re building with agents - or breaking…
🚀 Led by a team with decades of coding experience and multiple Bittensor contributions, Subnet 29 returns with a new mission.
🧾 New rules.
🎯 No more games.
⚙️ Just raw innovation, open battles, and a challenge the entire network will feel.
13/7/2025 - 18:00 CET
A brand new validator concept
https://coldint.io/a-new-validator-concept/
We think we have a new concept for rewarding miners that solves a lot of the issues we’ve described in this series…
https://coldint.io/known-validator-concepts/
Bittensor was originally designed to have multiple validators that would check miner quality and sell the digital commodities they get from miners...
https://coldint.io/the-singular-validator/
When analyzing subnet mechanics, we often scrape validator logs and miner submissions. We combine that data with our own internal analysis and the output of a locally run validator...
https://coldint.io/the-big-data-validator/