With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 120

Affine

Alpha Price: Value
Market Cap: Value
Neurons: Value
Registration Cost: Value
TAO Liquidity: Value
Alpha in Pool: Value
Total Alpha Supply: Value
% Alpha Staked: Value

ABOUT

What exactly does it do?

Affine (Bittensor Subnet-120) serves as an infrastructure layer that connects and coordinates multiple AI subnets to enable seamless interoperability and scalable inference across the Bittensor network. In essence, Affine provides a decentralized reinforcement learning (RL) environment where AI models are continuously refined through competition. It incentivizes miners (contributors) to train and submit improved models for complex tasks (such as program synthesis and code generation), and rewards those who genuinely advance the performance frontier. By acting as a higher-order coordinator rather than a single specialized subnet, Affine prevents fragmentation of the ecosystem – for example, it bridges models from other subnets (like Chutes on Subnet-64, which hosts model inference) with new training tasks, ensuring that advances in one area can be composed and utilized network-wide. In summary, Affine’s role is to “commoditize reasoning” and aggregate collective intelligence: it transforms the problem of improving AI models into an open, incentive-driven competition, thereby continuously pushing the boundaries of what the network’s AI can do.

PURPOSE

What exactly is the 'product/build'?

Affine is delivered as an open-source protocol and platform (code-named “Anima Machina”) that implements this competitive RL environment. Technically, the product consists of a network of validators and miners running the Affine software on Subnet-120, along with supporting infrastructure for model deployment and evaluation:

Incentive Mechanism: Affine’s core innovation is a winner-takes-all RL incentive mechanism. Affine validators continually evaluate models submitted by miners on a suite of RL environments (tasks) and identify the model on the Pareto frontier – i.e., a model that no competitor beats across the full suite of tasks. Only genuine improvements can overtake the leading model, since the network is designed to be sybil-proof, decoy-proof, copy-proof, and overfitting-proof (miners cannot win by using multiple fake identities, hiding decoy models, copying someone else’s model, or overfitting to one test). This ensures that only real performance gains are rewarded.
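The Pareto-frontier idea above can be sketched in a few lines. This is an illustrative toy, not Affine’s actual validator code: the model names and score vectors are hypothetical, and note that a model strong on only one task still sits on a pure Pareto frontier – which is exactly why Affine layers overfitting-proofing on top of the dominance check.

```python
# Toy sketch: finding non-dominated models across a suite of task scores.
# Names and numbers are hypothetical, for illustration only.

def dominates(a, b):
    """True if score vector a is at least as good as b on every task
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(scores):
    """Return the names of models whose score vectors no other model dominates."""
    return [
        name for name, vec in scores.items()
        if not any(dominates(other_vec, vec)
                   for other_name, other_vec in scores.items()
                   if other_name != name)
    ]

scores = {
    "baseline": (0.61, 0.55, 0.70),  # (code-gen, abduction, reasoning)
    "miner_a":  (0.64, 0.58, 0.72),  # beats the baseline on every task
    "miner_b":  (0.90, 0.20, 0.10),  # strong on one task only; survives a
                                     # pure dominance check, hence the need
                                     # for overfitting-proofing on top
}
print(pareto_frontier(scores))  # → ['miner_a', 'miner_b']
```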

Model Submission & Hosting: Miners don’t directly broadcast large models on-chain; instead, Affine leverages Bittensor’s Subnet-64 (Chutes) as a model hosting and inference subnet. When a miner fine-tunes a model that they believe improves on the current best, they submit it via Chutes, where it is deployed for load-balanced public inference access. In other words, any submitted model becomes publicly available through Chutes’ API for evaluation and use. Affine’s validators then use these deployed models to run standardized RL evaluations.
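Once a model is deployed through Chutes, anyone – validators included – can query it over a public API. The sketch below shows the shape of such a request, assuming an OpenAI-compatible chat-completions interface; the base URL, model slug, and header layout here are placeholder assumptions, so consult the Chutes documentation for the real endpoint.

```python
import json

# Hedged sketch: assembling an inference request to a miner's model hosted
# on Chutes. The URL, model slug, and key below are placeholders.

CHUTES_BASE = "https://llm.chutes.ai/v1"  # hypothetical endpoint

def build_chat_request(model_slug, prompt, api_key):
    """Assemble an OpenAI-style chat-completion request for a hosted model."""
    return {
        "url": f"{CHUTES_BASE}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model_slug,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request(
    "miner/improved-coder",               # hypothetical model slug
    "Write a function that reverses a list.",
    "CHUTES_API_KEY",                     # placeholder, not a real key
)
print(req["url"])
```

A validator would POST this payload (e.g. with `requests.post`) and score the returned completion inside one of the RL environments described below.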

Evaluation Environments: Affine comes with a set of verifiable RL environments (for tasks like code generation, program reasoning, etc.) packaged as Docker images. The project developed a lightweight container orchestration system called “Affinetes” to manage these environments, allowing validators to easily spin up identical evaluation tasks locally or remotely. This ensures a sandboxed, reproducible testing setup for every model submission – eliminating the need for custom sandbox management and making the evaluations consistent and fair across participants.
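In the spirit of what “Affinetes” automates, spinning up one of these Docker-packaged environments reduces to launching a container with a pinned image and a fixed seed, so every validator runs the identical task. The image name, environment variables, and flags below are illustrative assumptions, not Affine’s actual configuration.

```python
import subprocess

# Hedged sketch: launching a sandboxed, reproducible evaluation container.
# Image name and env-var names are hypothetical.

def eval_container_cmd(env_image, model_endpoint, seed=0):
    """Build a `docker run` command for one reproducible evaluation episode."""
    return [
        "docker", "run", "--rm",
        "-e", f"MODEL_ENDPOINT={model_endpoint}",  # model under test
        "-e", f"SEED={seed}",                      # fixed seed -> identical tasks
        env_image,
    ]

def run_eval(env_image, model_endpoint, seed=0):
    """Run the environment container and return its stdout (e.g. a score)."""
    result = subprocess.run(
        eval_container_cmd(env_image, model_endpoint, seed),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

cmd = eval_container_cmd("affine/env-codegen:latest",
                         "https://llm.example/v1", seed=42)
print(" ".join(cmd))
```

Because the image is pinned and the seed is explicit, any two validators issuing this command evaluate a submission under the same conditions – the reproducibility property the paragraph above describes.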

Continuous Open Improvement: Every time a model wins in this environment, it doesn’t remain proprietary – all models and the data generated from evaluations are open-source. In fact, the design forces openness: the top-performing model is effectively shared as the new baseline, and other miners are forced to download, copy, and build upon it to compete. This creates a “living” marketplace of models where each contribution raises the bar. Importantly, datasets and results from the RL competitions are public and every submitted model can be fine-tuned by others. This means Affine not only yields continuously improving AI models, but also produces openly available research artifacts (models and data) for the community. The entire build is geared towards transparency and collaboration – an engineer described it as “a direct market for engineers to upload open models that advance the frontier on RL tasks – and get paid for it,” secured by the Bittensor blockchain and instant crypto payouts.

In practical terms, the Affine software includes command-line tools (af CLI) for miners and validators, smart-contract-like logic in the Bittensor substrate for tracking contributions, and integration with external services (e.g. requiring a Chutes API key to deploy models). By combining blockchain incentives with containerized RL benchmarks, the Affine product build creates a self-sustaining cycle: it continuously aggregates small improvements from a distributed group of participants into a rapidly evolving collective AI model, all while ensuring the integrity of the competition and availability of the best models to end-users.


WHO

Team Info

Affine was founded and is led by Const (Jacob Steeves), a co-founder of the Bittensor project itself. Const brought his vision of “mining for intelligence” into Affine, aiming to apply crypto-economic principles to reinforcement learning. Development of Affine is coordinated under the Affine Foundation (the open-source team behind the project), and it draws talent from around the world. In particular, the project has strong participation from the Chinese AI developer community – in fact, Affine is cited as one of the largest and most competitive Bittensor subnets built by a Chinese development team. This blend of Bittensor’s original leadership and new contributors has made Affine a flagship subnet.

Beyond the founder, specific team member details are not extensively public (consistent with Bittensor’s semi-anonymous, decentralized ethos), but the available information points to a small, highly skilled team of researchers and engineers. The project has been actively hiring for roles such as Research Engineer (ML/RL), Protocol Engineer (Incentive Design & Validation), and Senior Software Architect, indicating a focus on deep RL expertise, mechanism design, and robust infrastructure development. The team’s vision is clearly articulated in its communications and job postings: they see Affine as a way to “reshape how AI is trained, evaluated, and aligned in a decentralized ecosystem.” With Const (Jacob Steeves) at the helm and a cadre of reinforcement learning specialists and blockchain engineers contributing (many under pseudonyms or as community contributors), Affine’s team is pushing the frontier of decentralized AI. The collaborative nature of the project – open-source code, a public Discord for coordination, and competitive leaderboards on the blockchain – means that, in a sense, the “team” includes all miner-engineers participating in refining the models, under the guidance of the core developers.

FUTURE

Roadmap

As of now, Affine’s detailed roadmap has not been publicly released. Community members looking for an official roadmap have noted that, unlike some other subnets (e.g. Chutes), Affine’s future milestones are not clearly published, and much of what’s expected comes from the project’s stated vision and ongoing progress. However, the overarching trajectory of Affine can be inferred from its goals and recent developments:

Current Status (2025): Affine launched its incentivized RL competition platform in 2025 and is already operational, with miners earning “thousands of dollars per day” for model improvements according to the team’s posts. The initial focus has been on tasks like code generation (program synthesis/abduction), which have immediate utility and measurable benchmarks. Affine has successfully integrated with Subnet-64 (Chutes) to make the best models easily accessible, and this inference-as-a-service capability is expected to remain a cornerstone.

Ecosystem Integration: In late 2025, Affine also joined Project Rubicon – a Bittensor initiative to bridge subnet tokens into the broader Web3 ecosystem. Affine’s subnet token (often called SN120 or α120) was included in Rubicon’s first cohort of 17 subnets to have liquid staking and trading on Ethereum’s Base chain. This means Affine’s token is now represented as an ERC-20 (via xAlpha liquid staking assets) and can tap into DeFi liquidity. This step was not a direct product feature for the AI subnet itself, but it signals a commitment to supporting Affine’s growth and sustainability (by giving its stakeholders access to liquidity and broader markets). It’s a foundation for attracting more participants and investment into the subnet as the project matures.

Near-Term Focus: In the absence of a published roadmap, the team’s likely near-term focus is refining the RL environments and scaling participation. The job listings and community posts suggest active work on improving the validation mechanisms and safety – for example, developing better evaluation metrics, adversarial testing, and alignment checks to ensure models truly improve in a robust way. We can expect the introduction of additional RL tasks or environments as the project progresses, potentially expanding beyond coding tasks to other domains of “reasoning”. Each new environment would increase the breadth of skills the collective model can learn, moving Affine closer to its vision of general problem-solving ability.

Long-Term Vision: Affine’s ultimate goal is to “break the intelligence sound barrier” by commoditizing reasoning. In practical terms, this hints at a long-term roadmap where the subnet continually ramps up the complexity of challenges it tackles – evolving from coding tasks to more general reasoning, multi-step problem solving, and agentic AI behavior. The phrasing suggests the team aspires to reach levels of AI capability that are currently out of reach, by harnessing the aggregated contributions of many. While no dates or specific milestones are given, the measurable indicator of success would be the quality of the best model produced by Affine over time. The project will likely consider itself on track if each iteration yields a smarter model than the last, possibly approaching human-level reasoning in certain domains.

In summary, Affine’s roadmap is more vision-driven than date-driven at this stage. The project will continue to:

  • Expand and harden its platform, making sure the incentive mechanisms remain fair and ungameable as participation grows.
  • Incorporate more tasks and possibly integrate outputs from other subnets (for instance, leveraging code-generation from Ridges or data from other specialized subnets) to broaden the AI’s capabilities.
  • Foster its community of miner-researchers, since the pace of progress depends on attracting skilled contributors – the inclusion in initiatives like Rubicon and outreach to developer communities (e.g. in China) support this growth.


NEWS

Announcements
