With the number of new subnets being added, it can be hard to keep information current across all of them, so some details here may be slightly out of date.
Distil SN97 is a Bittensor subnet dedicated to knowledge distillation of a large AI model. In particular, it aims to compress the 35B-parameter Qwen3.5-A3B model into a much smaller one. Large LLMs like Qwen3.5-35B require enormous hardware (≈67 GB of GPU memory) to run, which is impractical for most users. Distil solves this by turning model distillation into an open competition: miners train student models (≤5.25B parameters) to imitate the teacher’s behavior, and validators score them. The goal is to “copy the reasoning” of the teacher rather than its weights. Practically, miners upload their compressed model to HuggingFace and submit its link on-chain. Validators then quantitatively compare the student against the teacher using KL divergence. Lower KL means better fidelity and higher rewards. In effect, Distil produces a continuously improving small model that anyone can run on consumer hardware.
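To make the scoring criterion concrete, here is a minimal sketch (PyTorch, illustrative only, not the subnet’s actual code) of the mean per-token forward KL between teacher and student next-token distributions, the quantity validators drive toward zero:

```python
import torch
import torch.nn.functional as F

def forward_kl(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> torch.Tensor:
    """Mean per-token KL(teacher || student) over one sequence.

    Both tensors have shape (seq_len, vocab_size). Lower is better:
    zero would mean the student reproduces the teacher's distribution exactly.
    """
    t = F.log_softmax(teacher_logits, dim=-1)
    s = F.log_softmax(student_logits, dim=-1)
    # KL(p || q) = sum_i p_i * (log p_i - log q_i), averaged over positions
    return (t.exp() * (t - s)).sum(dim=-1).mean()
```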
Miner-Validator Incentive Loop
Each epoch (roughly every 10 minutes) follows a cycle of submission, evaluation, and rewarding: miners upload candidate student models to HuggingFace and commit the links on-chain; validators fetch each new candidate and score it against the Qwen teacher by KL divergence on a GPU node; and the best-scoring miner receives the epoch’s full emission, with a candidate that beats the reigning King becoming the new King.
In summary: miners contribute by training and uploading candidate models, while validators do the heavy evaluation on GPUs. The final output is a continuously refined compact Qwen model. Each time a new model beats the King’s score, it becomes the live public model. The current top model (the “King”) is available for anyone to query or download (e.g. via chat.arbos.life).
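A minimal sketch of this loop follows; every name in it (chain, gpu_node, and their methods) is a hypothetical stand-in, with the real orchestration living in scripts/remote_validator.py:

```python
def run_epoch(chain, gpu_node):
    """Winner-take-all epoch loop (all helpers here are hypothetical)."""
    king = chain.current_king()                   # reigning champion model link
    best, best_kl = king, gpu_node.evaluate_kl(king)
    for candidate in chain.new_commitments():     # HuggingFace links committed this epoch
        kl = gpu_node.evaluate_kl(candidate)      # mean KL vs. the Qwen teacher
        if kl < best_kl:                          # lower KL = closer to the teacher
            # (the real scheme also requires a significant paired t-test win)
            best, best_kl = candidate, kl
    chain.set_weights(winner=best)                # winner receives the full emission
```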
Final Output and Users
Distil’s output is a high-quality, small LLM distilled from Qwen3.5. This model is publicly available: the winning student is hosted so developers or end-users can query or fine-tune it for free. The primary users are AI developers, researchers, and applications that need near-state-of-the-art LLM reasoning without the cost of huge models. Because the distilled model runs on consumer-grade GPUs with fast response times, it effectively democratizes advanced AI. Anyone can interact with the King model via the web chat interface, and the model weights are published to HuggingFace. The subnet’s alpha token (SN97) is earned by miners for improving the model; validators simply earn the standard Bittensor consensus reward. In effect, Distil crowdsources what the AI industry typically pays hundreds of millions to achieve: producing a compact model that “behaves like” a 35B-parameter baseline.
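Assuming the weights are published like any other HuggingFace checkpoint, loading the current winner is a few lines of Transformers boilerplate (the repo id below is a placeholder; look up the real one via the dashboard or /api/commitments):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "example-miner/distil-king"  # placeholder, not a real repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

prompt = tok("Explain KL divergence in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```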
Uniqueness within Bittensor
Distil is unique as the first open, incentive-driven model-distillation subnet on Bittensor. Its focus on compressing a frontier LLM, and its winner-take-all reward (all emissions to a single miner per epoch), set it apart from other subnets, which pursue different tasks or reward curves. Its evaluation scheme (sparse top-128 KL, paired t-test) is likewise specialized. Notably, Distil was deployed entirely autonomously by an AI agent (“Arbos”), which scripted its launch and operation without human-coded smart contracts. The product itself is a decentralized AI service: as TAO.media explains, querying the King model is “the end product of a decentralized AI agent… publishing the winner’s output for free”. In summary, Distil delivers a continuously improving compact model in real time that anyone can use or build on, exemplifying Bittensor’s vision of decentralized, open AI markets.
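One plausible reading of that evaluation scheme is sketched below; the renormalization over the teacher’s top-128 tokens and the exact t-test convention are assumptions, not details confirmed by the source:

```python
import torch
import torch.nn.functional as F
from scipy.stats import ttest_rel

def sparse_topk_kl(teacher_logits, student_logits, k: int = 128):
    """Per-token KL restricted to the teacher's top-k tokens (illustrative)."""
    top_vals, top_idx = teacher_logits.topk(k, dim=-1)
    t = F.log_softmax(top_vals, dim=-1)                            # teacher, renormalized over top-k
    s = F.log_softmax(student_logits.gather(-1, top_idx), dim=-1)  # student on the same tokens
    return (t.exp() * (t - s)).sum(dim=-1)                         # one KL value per position

def challenger_wins(challenger_kls, king_kls, alpha: float = 0.05) -> bool:
    """Dethrone only if the challenger's per-prompt KLs are significantly lower."""
    stat, p = ttest_rel(challenger_kls, king_kls)
    return stat < 0 and p < alpha
```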
Availability and Launch
Distil SN97 is live on the Bittensor Finney mainnet. It was autonomously registered and deployed around March 2026 and has been operating continuously since then (netuid 97). The subnet quickly rose to #3 in the network by emissions payout. Currently the subnet distributes roughly 2,952 τ/day to miners (all of which goes to the epoch winner). The subnet’s alpha token (SN97) is tradable: as of April 2026 it trades around $12.50 (all-time high ~$18.31 on Apr 15, 2026).
Architecture and Data Flow
Distil runs atop Bittensor’s blockchain (Finney, netuid=97), with custom miner/validator logic implemented in off-chain software. The validation cluster is split into two parts for security: a secure “distil” server holds the wallet keys, chain interface, dashboard, and decision logic, while a separate GPU node performs model inference (teacher/student forward passes and KL computations). During each epoch, the server reads on-chain state (new commitments, current King, etc.), then dispatches evaluation tasks to the GPU node and collects results. After scoring, the server sets weights on-chain. By design, the GPU node never holds wallet keys or direct chain access, so even if compromised it cannot steal funds. All miner submissions and model scores flow through the Bittensor protocol: miners issue `commit` transactions with their model link, and validators issue `set_weights` transactions for the winner each round. The subnet also publishes data via a small web backend: for example, there are REST endpoints like GET /api/commitments (listing all model links and block numbers), /api/scores (live KL and disqualification info), and /api/price (token price and emission).
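For orientation, here is what those two transactions might look like with the standard bittensor Python SDK; this is a sketch only (the model link and UID are placeholders), and the repo’s miner.py and remote_validator.py are the authoritative versions:

```python
import bittensor as bt

NETUID = 97
subtensor = bt.subtensor(network="finney")

# Miner side: commit a HuggingFace model link on-chain.
miner_wallet = bt.wallet(name="miner", hotkey="default")
subtensor.commit(miner_wallet, NETUID, "hf.co/example-miner/my-student-model")

# Validator side: assign full weight to the epoch winner (winner-take-all).
winner_uid = 42  # hypothetical UID of this epoch's winning miner
validator_wallet = bt.wallet(name="validator", hotkey="default")
subtensor.set_weights(wallet=validator_wallet, netuid=NETUID,
                      uids=[winner_uid], weights=[1.0])
```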
GitHub Repository
Distil’s code is open-sourced under the MIT license on GitHub (user unarbos, repo distil). It includes all tools needed for miners and validators. For miners, there is miner.py (a one-shot commit script) and check scripts (check_model.py, test_miner.py) that can locally validate a model before committing. For validators, the scripts/ directory contains the evaluation components: pod_eval_vllm.py (runs teacher/student inference on a GPU), remote_validator.py (orchestrates the king-of-the-hill loop), cosine_similarity_check.py (detects near-duplicate models), and a run_validator.sh wrapper. The repo also provides an example training script at examples/distil_kl_train.py (contributed by Wing Lian, aka @winglian), which trains a student model to match the teacher using forward KL. The README documents usage and requirements for every script. In short, the repository contains the full protocol stack: miner tools, validator engine, and supporting utilities.
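Before committing, a miner can approximate part of what check_model.py verifies with a quick local check. The sketch below tests only the one rule the text states (the 5.25B-parameter cap) and is not the repo’s actual script:

```python
from transformers import AutoModelForCausalLM

PARAM_CAP = 5.25e9  # student-size limit stated by the subnet

def precheck(repo_id: str) -> None:
    """Load the candidate and confirm it is under the parameter cap."""
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
    n_params = sum(p.numel() for p in model.parameters())
    assert n_params <= PARAM_CAP, f"{n_params / 1e9:.2f}B params exceeds the 5.25B cap"
    print(f"OK: {n_params / 1e9:.2f}B parameters, under the cap")

precheck("example-miner/my-student-model")  # placeholder repo id
```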
Metrics and Statistics
Distil has quickly accumulated many participants. On-chain data shows hundreds of committed models, with a single miner receiving the reward each epoch. According to TaoStats, it distributes ~3,000 τ/day across epochs, placing it in the top 3 subnets by emissions. The SN97 token currently has on the order of $1–2 million market cap (given a ~$12–13 price and ~115k circulating supply). Real-time stats can be viewed on tracking sites (e.g. TAOPULSE or TaoStats). The public API endpoints expose precise network state, so one can query current miners, ranks, and recent winning models programmatically (e.g. /api/commitments and /api/scores from the Distil web API).
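Programmatic access is plain HTTP. The sketch below assumes the API is served from the dashboard host (distil.arbos.life) and returns JSON, neither of which is spelled out here:

```python
import requests

BASE = "https://distil.arbos.life"  # assumed host; adjust if the API lives elsewhere

commitments = requests.get(f"{BASE}/api/commitments", timeout=10).json()
scores = requests.get(f"{BASE}/api/scores", timeout=10).json()

print(f"{len(commitments)} committed models on record")
# The response schema is undocumented here, so inspect before relying on fields:
print(scores if isinstance(scores, dict) else scores[:3])
```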
Integrations and Services
Distil integrates with external AI infrastructure. Miners use the HuggingFace Hub to store model weights; the protocol assumes all student models are public on HuggingFace. Validators use the vLLM library and HuggingFace Transformers to serve the Qwen teacher and student models during evaluation. Per pod_eval_vllm.py, generating teacher logits with vLLM is 5–10× faster than naive inference. A chat interface at chat.arbos.life serves the latest King model, letting anyone interact with it. On the blockchain side, Distil uses the standard Bittensor token (TAO) and economic rules: its emission rate follows Bittensor’s “Taoflow” mechanism (as of Nov 2025). Other platforms (TaoMarketCap, CryptoRank, TAO.app) also list Distil’s data and trade info.
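As a sketch of how vLLM can emit a sparse teacher signal, the snippet below uses vLLM’s prompt_logprobs option to pull top-128 logprobs per prompt token. The model id is a placeholder for the actual Qwen teacher, and pod_eval_vllm.py remains the authoritative implementation:

```python
from vllm import LLM, SamplingParams

teacher = LLM(model="Qwen/placeholder-teacher")  # hypothetical repo id
params = SamplingParams(max_tokens=1, prompt_logprobs=128)  # top-128 per prompt token

outputs = teacher.generate(["The capital of France is"], params)
# prompt_logprobs[i] maps token_id -> Logprob at position i (None at position 0)
for pos, lp in enumerate(outputs[0].prompt_logprobs):
    if lp is not None:
        top3 = {tid: round(l.logprob, 2) for tid, l in list(lp.items())[:3]}
        print(pos, top3)
```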
Planned Development
As of mid-2026, Distil’s core functionality is fully operational. No official roadmap has been published by the subnet owners. Most ongoing work consists of community-driven improvements: for example, contributors have already optimized evaluation with sparse KL (top-128) and may adjust prompt selection or efficiency over time. The subnet will also participate in standard Bittensor events: the next token halving (approx. Dec 2026) will cut its emission rate in half automatically. Otherwise, it is likely to remain an open competition that anyone can join as a miner or validator. Future enhancements might include new evaluation features or user interfaces, but the basic goal remains the same. No major changes are planned beyond iterative optimizations.
End Users and Customers
Distil’s “customers” are really the global AI community. Any developer or organization that needs a strong but efficient LLM benefits from the model it produces. For example, anyone building a chatbot, summarization tool, or AI assistant can use the distilled Qwen model at no cost. The emission mechanism also means that SN97 miners effectively invest in improving the model in order to earn more tokens, aligning incentives with product quality. Validators and stakers do not directly pay but can support the subnet to earn standard TAO rewards. There are no hidden access controls: the model and its chat interface are open to all. In practice, end users include machine learning researchers and companies who prefer an open-source, capable model. As one analysis noted, “Distil produces a continuously improving compressed AI model” that anyone can query. This open model output is what ultimately delivers the subnet’s value to users.
Team and Contributors
The Distil subnet was uniquely launched by an AI agent nicknamed “Arbos” under the guidance of Bittensor’s founder. Jacob “Const” Steeves (co-creator of Bittensor) is credited with setting up Arbos and bootstrapping the subnet. Thus, while there is no traditional “team,” the subnet owner/maintainer entity is Arbos (an AI agent acting in Bittensor’s ecosystem). The GitHub organization is unarbos. Beyond Arbos, one notable human contributor is Wing Lian (GitHub: caseus, Twitter @winglian). He contributed key code (e.g. `distil_kl_train.py`) and ideas (proposing the sparse-KL evaluation trick) at launch. Community members like “ghost-94” and “sampleratez” appear as top miners on the leaderboard, but they are users/miners rather than core developers. There are no public investors specific to Distil; development has instead been community-driven on the open repo. Project news has circulated through Bittensor’s official channels (the Bittensor Discord and X) and through newsletters like TAO.media, but no separate Distil blog or Medium is known. In summary, the subnet is managed by the Arbos team (and by extension the OpenTensor/Bittensor foundation) with contributions from the community. The launch date was March 2026, and the community of miners/validators has grown steadily since then.
Milestones and Timeline
Distil’s launch was itself the major milestone: in March 2026 an autonomous agent brought the subnet online on Finney. Within weeks it attracted miners and validators; by early April it was already #3 in network emission rank. Technical improvements were quickly added (for example, switching to sparse top-128 token KL calculation for efficiency, as suggested by a community contributor). Media coverage followed: TAO.media published a detailed “inside Distil” article on April 29, 2026, and other outlets (SubnetAIQ, CryptoRank) reported on its performance in early May. No specific future roadmap dates have been announced. Upcoming Bittensor-wide events will affect Distil: for example, the TAO token halving scheduled around December 2026 will halve Distil’s emissions. Beyond that, any updates will come via community proposals on GitHub or Bittensor community meetings.
Past Updates
Since launch, development has been rapid and open-ended. Almost immediately, the evaluation code gained safeguards (duplicate detection, commit integrity, etc.) and the dashboard and API were deployed. Community members contributed tools (the KL-training script by Wing Lian). As of mid-2026, Distil operates in a stable mode: models are continuously evaluated and posted, and the King model is live.
Long-term Vision
The long-term vision of Distil is to continuously produce a high-quality, open-source AI model that anyone can use or improve. Each epoch’s best model (the King) becomes freely available, embodying the idea of a self-improving model marketplace. As one write-up put it, querying Distil’s King model is “the end product of a decentralized AI agent building a Bittensor subnet… and publishing the winner’s output for free”. In a fully realized vision, this could extend to more complex tasks: for instance, similarly governed competitions using newer teacher models, or on-chain fine-tuning. But concretely, the goal is to have an ever-improving distilled Qwen3.5 model served on the web: a continuously updated “AI for the people” powered by the network’s incentives.
Recent News
Recent announcements about Distil have come mostly from community channels. The TAO.media article (Apr 29, 2026) highlighted that no human coded it after launch. SubnetAIQ’s blog (May 3, 2026) emphasized Distil reaching top-3 status. Ongoing updates (e.g. leaderboard changes, new model commits) are visible on the Distil dashboard (distil.arbos.life), and discussion takes place in general Bittensor forums and Discord. No private roadmap is known; most changes have been made via transparent GitHub collaboration and community calls. Users can watch for new pull requests or Discord announcements for any future plans.