With the number of new subnets being added, it can be hard to keep information current across all of them, so some details here may be slightly out of date from time to time.
BrainPlay (Subnet 117) is a specialized subnet on the Bittensor decentralized AI network that benchmarks AI models through competitive gameplay. Instead of evaluating models with obscure metrics, BrainPlay makes performance visual and intuitive – AI models compete in games (starting with the word-association game Codenames) so that people can watch how well each model plays and directly judge which AI is better. This approach provides human-comprehensible benchmarks for reasoning and intelligence, rather than abstract scores. By observing AIs play engaging games, even non-experts can grasp how different models perform, making AI evaluation more transparent, accessible, and fun.
In practice, BrainPlay sets up AI-versus-AI matches in a game environment. For example, in a Codenames match two teams of AI “miners” (models) compete, with each team composed of two models collaborating to win. The game outcomes translate into on-chain performance scores: the winning team’s models earn higher scores/rewards (proportional to their staked tokens) while the losers get less or none. This competitive framework motivates the models to perform optimally and provides a clear metric of AI quality. In summary, BrainPlay turns AI benchmarking into a “sport” – models play games, and their wins/losses serve as an easy-to-understand measure of their reasoning abilities.
BrainPlay’s product is essentially a decentralized gaming arena for AI models built on Bittensor. Technically, it consists of a custom Bittensor subnet (netuid 117) running the game logic and scoring system, plus accompanying tools and interfaces. Key components of the build include:
Game Engine for AI: BrainPlay has implemented the game Codenames as its first evaluation environment. The code defines how two teams of AI agents play the game and how winners are determined, and more games are planned to diversify the benchmarks. Each game is structured so that AI “miners” (the models connected to the subnet) form teams and play according to set rules, yielding a win/lose result that can be recorded on the blockchain. In Codenames, for instance, two teams of two models compete, and the winning team’s models are granted a reward score. This outcome feeds into each model’s reputation and ranking in the subnet.
On-Chain Scoring & Rewards: The subnet’s consensus mechanism is adapted to reward models based on game performance. When a model wins games, it earns Alpha tokens (the native token of BrainPlay) or scoring credits proportional to its contribution and stake. For example, if the “red” team wins, the two red-team models might each get a score of 1.0, while the two blue-team (loser) models get 0. These scores are adjusted by how much stake each model has, ensuring a fair but incentive-aligned reward system. All of this happens transparently on-chain – the gameplay results and rewards are recorded on the Bittensor blockchain for verifiability.
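To make the reward mechanics concrete, here is a minimal Python sketch of stake-adjusted match scoring under the scheme described above. The names (MatchResult, score_match, the hotkey labels) are hypothetical, and normalizing by the match’s total stake is an assumption – the actual subnet code defines its own formula.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    winners: list[str]  # hotkeys of the two winning-team miners
    losers: list[str]   # hotkeys of the two losing-team miners

def score_match(result: MatchResult, stakes: dict[str, float]) -> dict[str, float]:
    """Give each winner a base score of 1.0 and each loser 0.0, then
    weight winners by their share of the total stake in the match
    (assumed normalization, for illustration only)."""
    total_stake = sum(stakes[hk] for hk in result.winners + result.losers) or 1.0
    scores = {hk: 0.0 for hk in result.losers}
    for hk in result.winners:
        scores[hk] = 1.0 * (stakes[hk] / total_stake)
    return scores

# Example: the red team wins; scores reflect both the win and relative stake.
stakes = {"red_a": 100.0, "red_b": 50.0, "blue_a": 80.0, "blue_b": 70.0}
result = MatchResult(winners=["red_a", "red_b"], losers=["blue_a", "blue_b"])
print(score_match(result, stakes))  # red_a and red_b > 0; blue_a and blue_b = 0
```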
Infrastructure & Mining Clients: The BrainPlay subnet ships with the software needed to run a validator node and miner clients (the AI model participants). The validator node runs on standard hardware (no special GPU required) and maintains the subnet’s blockchain consensus, including game coordination. The miner clients are the AI models that connect to play games. BrainPlay supports two miner modes: running a local Large Language Model (LLM), which requires a GPU, or using an API-based model (e.g., calling OpenAI or Anthropic APIs), which offloads compute and thus needs no local GPU. This flexible design lowers the barrier to entry – participants without powerful hardware can still compete by using API-accessible AI models.
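To illustrate the two miner modes, the sketch below dispatches a game prompt either to a locally hosted model or to a hosted API. This is a hypothetical outline rather than the actual miner client – the MINER_MODE variable, model choices, and function name are invented – though the transformers and openai calls follow those libraries’ real interfaces.

```python
import os

def make_move(prompt: str) -> str:
    """Return the miner's next move for a game prompt, using either a
    local LLM (GPU recommended) or a hosted API (no local GPU needed)."""
    if os.environ.get("MINER_MODE", "api") == "local":
        # Local mode: run an open-weights model on this machine.
        from transformers import pipeline
        generate = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
        return generate(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"]
    # API mode: delegate inference to a hosted provider.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```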
Web Interface & “Watch Games”: The team provides a front-end (the play.shiftlayer.ai site) where users can observe games in action and access information. The website highlights the concept of “competitive reasoning” and even allows one to watch live or recorded AI-vs-AI games (for example, to see how an AI gives clues in Codenames). This viewer component is part of making the benchmarks public and engaging. It turns model evaluation into a spectator experience, reinforcing transparency.
Public API & Alpha Token Utility: BrainPlay has rolled out an API (v1.0) that lets external developers query the top-performing AI model on the subnet. In other words, the best model’s reasoning capabilities can be accessed via a simple API call – for example, to get that model’s answer or action for a given input. This API is metered using BrainPlay’s native Alpha token as a credit system. During the initial phase, usage is free or uses test credits (as the full token integration is being finalized), but eventually Alpha tokens will be required to query the models. This design gives the token real utility: as more people or applications use the BrainPlay API (e.g. for robotics simulations or AI services), they will need Alpha tokens, driving demand.
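A call to this API might look like the sketch below. The endpoint URL, payload shape, and response field are assumptions made for illustration – the real path, schema, and authentication scheme come from BrainPlay’s official documentation.

```python
import requests

API_URL = "https://api.shiftlayer.ai/v1/query"  # hypothetical endpoint, not confirmed

def query_top_model(prompt: str, api_key: str) -> str:
    """Ask the subnet's current top-performing model for an answer.
    Each call is metered in credits (eventually Alpha tokens)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},  # assumed payload shape
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # assumed response field

print(query_top_model("Give a one-word Codenames clue for: ocean, wave, salt", "my-key"))
```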
Datasets and Training Material: A valuable by-product of BrainPlay’s competitive games is data – transcripts of games, strategies, and interactions between humans and AIs. The project curates and offers several open datasets derived from these activities. For example, they have compiled a “Codenames AI Training Dataset” containing thousands of Codenames matches between humans and LLMs. They also provide an “LLM Reasoning Benchmark” dataset with logical reasoning tasks, logs from robot simulation reasoning, multi-agent debate transcripts, and records of human-vs-AI competitions. These datasets (some of which are made available for purchase) are meant to help researchers train better reasoning models and to document how AI progresses through gameplay. In essence, BrainPlay is not only benchmarking AI but also generating new training data for the community, closing the loop to improve models.
In summary, the BrainPlay build consists of the Bittensor subnet implementation (code on GitHub), the game/benchmark logic (currently focusing on Codenames), the Alpha token economy within that subnet, and user-facing services like the viewing platform and API. It’s a full-stack product where the blockchain backend, the AI game engine, and the community interface all come together to evaluate and improve AI models in a novel way.
BrainPlay is developed and operated by the ShiftLayer team. ShiftLayer is a blockchain+AI startup dedicated to creating an open ecosystem for testing and validating AI models in a decentralized, community-driven manner. The team’s philosophy is to remove closed-door evaluation of AI and replace it with transparent benchmarks that anyone can participate in or observe. They are the ones behind Subnet 117 (BrainPlay) and are also working on other Bittensor subnets/projects under the ShiftLayer umbrella (e.g. Taoproxynet and OpenFundNet as mentioned on their site).
In terms of team members, ShiftLayer’s public presence indicates a mix of AI researchers and blockchain developers. Notably, Vasyl Hlushchak serves as Chief Technology Officer (CTO) of ShiftLayer. Hlushchak has a background as a senior blockchain developer and describes himself as “building at the edge of AI & decentralization,” reflecting the expertise driving BrainPlay’s technical development. The company’s GitHub is under the handle shiftlayer-llc, suggesting a registered LLC, and its official Twitter (X) account @ShiftLayer_Ai frequently updates the community on progress. While specific names beyond the CTO are not widely published, the core team is relatively small and highly focused on this niche of AI benchmarking. They collaborate with the broader Bittensor community and likely have contributors from both the AI research side and the crypto/blockchain side.
The ShiftLayer team’s mission is clearly reflected in BrainPlay: they value transparency, fairness, and community involvement in AI evaluation. All their projects (BrainPlay included) aim to let the community see how AI models perform and even contribute to improving them. This ethos suggests the team is not just building a product, but also trying to foster an open research community around it. As BrainPlay grows, the team often invites beta-testers, miners (AI model operators), and even players to engage with the platform. In social media posts, they emphasize that the team is working tirelessly and will keep the community updated on new features (for example, hints of “background wizardry” being finalized, as they teased on X).
BrainPlay is an evolving project, and the team has outlined several future developments and milestones on its roadmap:
Expansion of Games: Having started with Codenames as the first benchmark game, the next step is to introduce more games and tasks that challenge AI models in different ways. The roadmap includes adding other interactive games (potentially board games, logic puzzles, or team strategy games) to broaden the scope of AI evaluation. Each new game will provide a fresh angle on reasoning or collaboration, making the benchmark more comprehensive. Status: Codenames is live, and additional games are listed as “coming soon.”
AI vs Human Competitions & Challenges: The team intends to host and facilitate more public competitions. In August 2025, BrainPlay was involved in the IEEE Codenames AI Challenge – an event exploring how well AI agents can play Codenames, including in mixed human-AI settings. Going forward, we can expect regular tournaments or challenges where AI models from BrainPlay compete against each other or even against human players in real time. These events not only stress-test the models but also draw community interest and feedback. The roadmap likely includes making such competitions a recurring part of BrainPlay (for example, seasonal “AI Games” or collaboration with academic conferences).
Integration with Robotics: A unique future direction is bridging BrainPlay’s AI agents into the physical world. The project is researching integrating reasoning agents into robotic systems – effectively using the best-performing BrainPlay AI models to control robots in simulation and reality. This means the benchmarks move beyond screen-based games to tasks like navigating a robot or performing real-world missions that require reasoning. The team has already begun this phase: by late 2025 they acquired four physical robots as testbeds for BrainPlay’s LLM agents (signaling this “Phase 2” of the project). The goal is to demonstrate that an AI agent proven in games like Codenames can also exhibit strong performance in guiding robots through complex, dynamic tasks. The roadmap includes developing robotic simulations first, then real-life robot trials, effectively extending BrainPlay into an embodied AI benchmark.
Token Utility and Mainnet Rollout: Another roadmap item is full deployment of the Alpha token economy. Currently, BrainPlay operates with its Alpha subnet token in a limited capacity (internal rewards and test credits for the API). The team plans to enable full mainnet utility of Alpha tokens for external users – meaning the public API will require Alpha tokens for access once everything is stable. This transition is expected to coincide with broader availability of the BrainPlay API and perhaps listing of the Alpha token on subnet exchanges. A successful integration would see the Alpha token become a functional currency for AI benchmarking as people pay to query models or purchase datasets, thereby completing the economic loop. Timeline: Before this, the API use remains in a free/credit phase; after mainnet utility is live, real token transactions will gate usage.
Community and Open-Source Contributions: As part of the roadmap, ShiftLayer emphasizes growing the community around BrainPlay. This involves open-sourcing the code (already on GitHub) and encouraging developers to contribute new games or improvements. Documentation and knowledge bases (like the subnet docs repository) are being maintained to help others run miners or build on top of BrainPlay. We can expect ongoing improvements in ease of deployment (perhaps Docker images or one-click setups for miners), and educational content so that more AI enthusiasts can join Subnet 117. The roadmap likely includes outreach through blog posts, technical papers, or conference presentations to share results from BrainPlay and attract collaborators.
Rebranding and Identity: It’s worth noting that Subnet 117 was previously referred to as “RNA” in the Bittensor ecosystem but was renamed BrainPlay by October 2025 to better reflect its focus on brain-like reasoning play and benchmarks. The rebranding was part of clarifying the project’s identity, and the name BrainPlay will be used consistently going forward. No further branding changes are expected, as BrainPlay now has recognition in the community.
Timeline Highlights: BrainPlay launched in mid-2025 (approximately) with its v1.0 Codenames game. By late 2025, it had rebranded from RNA to BrainPlay and introduced an API and initial dataset offerings. In 2026, one can anticipate new games added to the subnet, pilot experiments with robots running BrainPlay AI, and possibly the first instances of AI agents achieving human-level performance in certain games (for example, if a model consistently beats human players). The team’s ultimate roadmap goal is to establish BrainPlay as a standard for evaluating AI reasoning, much as ImageNet was for vision – except done in an open, decentralized, and continuously evolving manner.
Overall, BrainPlay’s roadmap is ambitious: it starts with gamified AI benchmarking and stretches into real-world AI applications, all while building a community and economy around the process. The project will continue to update its milestones on official channels as each stage is achieved, keeping stakeholders informed as BrainPlay pioneers new ways to measure and improve machine intelligence.
Recent Updates: Alongside the roadmap, the team has been posting frequent progress updates on X under the banner of “a November to Remember.” Highlights from those posts include:
Homepage teaser: The BrainPlay homepage was quietly updated, with the team hinting that a seemingly small change “means something so much bigger” and inviting the community to guess at its significance.
Usage milestones: Over 21,000 games have now been played and completed on the subnet, and the team has shared seven-day statistics on games played on Subnet 117.
v1.5.2 release: Version 1.5.2 is live, bringing major improvements in concurrency and miner fairness, among other system-level changes.
v1.5.1 release and sub-subnets: Version 1.5.1 enabled the sub-subnet feature, switching the subnet to multi-mechanism consensus. Per the associated GitHub pull request (“feat v1.5.1: mechanism emission split feature for competition weights” by mrtao613), this integrates a mechanism emission split into set_weights to support competition-based weight distribution – a major step, in the team’s words, toward multi-competition scalability and smarter emission handling across the subnet.
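Going by that pull request’s description, the emission split plausibly works by scaling each competition’s normalized miner weights by that competition’s emission share before the combined weights are set on-chain. The sketch below is a guess at that logic, with hypothetical competition names and shares; the authoritative implementation lives in the BrainPlay repository.

```python
def split_weights(
    competition_shares: dict[str, float],
    per_competition_weights: dict[str, dict[str, float]],
) -> dict[str, float]:
    """Combine per-competition miner weights into one global vector:
    normalize weights within each competition, then scale them by that
    competition's emission share and sum across competitions."""
    global_weights: dict[str, float] = {}
    for comp, share in competition_shares.items():
        weights = per_competition_weights.get(comp, {})
        total = sum(weights.values()) or 1.0
        for miner, w in weights.items():
            global_weights[miner] = global_weights.get(miner, 0.0) + share * (w / total)
    return global_weights

# Hypothetical example: 70% of emissions to Codenames, 30% to a second competition.
shares = {"codenames": 0.7, "competition_b": 0.3}
weights = {
    "codenames": {"miner_1": 2.0, "miner_2": 1.0},
    "competition_b": {"miner_2": 1.0, "miner_3": 1.0},
}
print(split_weights(shares, weights))  # miner_2 earns from both competitions
```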