With the number of new subnets being added, it can be hard to keep information current across all of them, so some details here may be slightly out of date from time to time.
FakeNews is a specialized decentralized AI subnet dedicated to combating misinformation. Its primary purpose is to provide real-time fake news detection and advanced fact-checking for news content. The project aims to achieve higher accuracy than traditional fact-checking solutions, even on rapid, real-time verifications. In essence, FakeNews strives to become “the world’s most trusted source for news and fact verification,” enabling users to rely on timely and accurate information with the support of a decentralized AI community.
This subnet was launched in early 2025, making it the first new subnet introduced after Bittensor’s Dynamic TAO (dTAO) upgrade. By addressing the fast spread of misinformation online, FakeNews seeks to fill the gap between the speed of false news propagation and the slower pace of human fact-checking, using AI and a distributed network to flag or verify news in real time.
Architecture & Roles
FakeNews is implemented as a Bittensor subnet with the typical miner–validator architecture. Validators on Subnet 66 generate challenges (tasks) and verify miners’ answers, while miners use AI models to solve those tasks and produce responses. Every minute, validators create a new task by pulling a real news article (from major trustworthy publications) and using a large language model (LLM) to produce two variants: (1) a paraphrased version of the article that preserves all factual details, and (2) a fake news version that alters facts. These variants are labeled (with “0” denoting a reliable article and “1” denoting fake news). The validator thus presents miners with content that may be real or fake and asks them to evaluate it.
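The per-minute generation loop described above can be sketched as follows. This is an illustrative outline, not the subnet's actual code: `llm_rewrite` is a hypothetical placeholder for the real LLM call, and the article feed is assumed.

```python
import random

def llm_rewrite(text: str, mode: str) -> str:
    """Placeholder for an LLM call: 'paraphrase' preserves all factual
    details, 'fake' deliberately alters facts."""
    return f"[{mode}] {text}"

def build_task(article: str) -> list[tuple[str, int]]:
    """Produce labeled (content, label) pairs from one real article:
    0 = reliable article, 1 = fake news."""
    paraphrase = llm_rewrite(article, "paraphrase")
    fake = llm_rewrite(article, "fake")
    variants = [(paraphrase, 0), (fake, 1)]
    random.shuffle(variants)  # miners must not infer labels from ordering
    return variants
```

The validator keeps the labels to itself and sends only the content to miners, which is what lets it grade their answers later.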
Miner Evaluation Process
Miners receive the original article text (initially, for the baseline phase) along with the altered versions, and must assign each version a score from 0 to 1 indicating truthfulness. In practice, miners attempt to classify each piece as true (0) or fake (1); their continuous scores are normalized to binary outputs (scores ≤0.5 map to 0 = true, and >0.5 map to 1 = fake). The baseline implementation provided by the FakeNews team uses large language models to perform this scoring automatically. In other words, miners can run an LLM (via the OpenAI API or similar) that reads an article and its variant, then judges whether the content is factual or fabricated. As the network evolves, miners are free to improve on this baseline by training more specialized models or ensembles to increase accuracy.
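The normalization rule stated above is simple thresholding, which might look like this (a minimal sketch; the function name is illustrative, not from the subnet's codebase):

```python
def normalize_score(score: float) -> int:
    """Map a continuous truthfulness score in [0, 1] to a binary label:
    scores <= 0.5 -> 0 (true/reliable), scores > 0.5 -> 1 (fake)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    return 0 if score <= 0.5 else 1
```

Note that 0.5 itself maps to "true" under this convention, so a maximally uncertain miner is effectively forced to commit to one side.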
Task Design
The FakeNews subnet introduces an evolving set of tasks for miners, each increasing in complexity to progressively push the AI’s fact-checking capabilities:

Task #1 (Basic Comparison): The miner receives an original news article and a pair of other articles that could be either real or fake (e.g. one real + one fake, two fakes, or two reals), and must identify which content is fake and which is trustworthy.

Task #2 (Advanced Comparison): Raises the difficulty by using multiple rewritten versions of the original (for example, a weakly-worded fake, a strongly-worded fake, a weak paraphrase, and a strong paraphrase) to make the distinction less obvious.

Task #3 (Reference Identification): Requires miners not just to label content, but also to find 2–3 reliable external sources that back up the original article’s facts.

Task #4 (Blind Verification): Removes the original article from the prompt entirely, forcing miners to judge authenticity without a direct reference.

Task #5 (Fact-Level Checks): Miners receive a list of individual statements (facts extracted from articles) and must decide which are true or false, providing credible references for each.

Task #6 (Comprehensive Fact Analysis): Presents a full article and expects the miner to break it down into claims, verify each claim’s accuracy with evidence, and then produce an overall credibility score for the entire piece.

This staged approach, from simple true/false classification to full document analysis, is designed to progressively train and test the network’s AI on all aspects of fact-checking.
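The task tiers can be summarized compactly. The schema below is purely illustrative (field names and the `TaskSpec` type are assumptions, not the subnet's actual data model), but it captures which inputs and outputs each tier involves per the description above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskSpec:
    number: int
    name: str
    original_provided: bool    # is the source/reference article in the prompt?
    references_required: bool  # must the miner cite external sources?

# Hypothetical summary of the six task tiers described in the text.
TASKS = [
    TaskSpec(1, "Basic Comparison", True, False),
    TaskSpec(2, "Advanced Comparison", True, False),
    TaskSpec(3, "Reference Identification", True, True),
    TaskSpec(4, "Blind Verification", False, False),
    TaskSpec(5, "Fact-Level Checks", False, True),
    TaskSpec(6, "Comprehensive Fact Analysis", True, True),
]
```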
Mining & Validation Mechanics
Under the hood, validators continuously feed tasks into the network and evaluate miner responses. A validator compares a miner’s submitted answers (0/1 judgments) against the known truth labels it generated for the task (it knows which version was the real article and which was fake). Miners that correctly classify the content receive credit, while incorrect answers lower their performance score. The system keeps a history of each miner’s recent answers — by default, validators store the last 250 responses per miner (which equates to 500 evaluated items, since each task involves two pieces: the paraphrase and the fake). This history is used to calculate an accuracy score over a long window (e.g. last 300 results ≈ 150 tasks) and a short window (e.g. last 20 results ≈ 10 tasks). The miner’s overall performance score is a weighted combination of these long-term and short-term accuracy metrics. By blending long and short horizons (with equal weight α = 0.5 for each in the current design), the protocol ensures that new miners can prove themselves quickly (a newcomer who starts answering correctly will soon gain a high short-term accuracy) while also rewarding sustained high performance over time. This evaluation mechanism is intended to prevent exploitation (e.g. miners randomly guessing answers won’t maintain high accuracy) and to create healthy competition where miners consistently need to perform well to earn rewards.
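The blended long/short-window scoring described above can be sketched as follows. The window sizes, history limit, and equal weighting (α = 0.5) come from the text; the class structure and names are illustrative assumptions:

```python
from collections import deque

HISTORY_LIMIT = 500  # last 250 tasks x 2 evaluated items per task
LONG_WINDOW = 300    # ~150 tasks
SHORT_WINDOW = 20    # ~10 tasks
ALPHA = 0.5          # equal weight for long- and short-term accuracy

class MinerRecord:
    """Tracks one miner's recent answers and computes its blended score."""

    def __init__(self):
        # 1 for a correct classification, 0 for incorrect; deque drops
        # the oldest entry once HISTORY_LIMIT is reached.
        self.results = deque(maxlen=HISTORY_LIMIT)

    def add(self, correct: bool) -> None:
        self.results.append(1 if correct else 0)

    @staticmethod
    def _accuracy(window) -> float:
        return sum(window) / len(window) if window else 0.0

    def score(self) -> float:
        recent = list(self.results)
        long_acc = self._accuracy(recent[-LONG_WINDOW:])
        short_acc = self._accuracy(recent[-SHORT_WINDOW:])
        return ALPHA * long_acc + (1 - ALPHA) * short_acc
```

Because the short window reacts within ~10 tasks while the long window needs ~150, a new miner answering correctly sees half of its score rise almost immediately, while a random guesser hovers near 0.5 accuracy on both windows.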
In summary, FakeNews’s technical implementation leverages the Bittensor framework: a network of AI-powered miner nodes working on fact-checking tasks generated and overseen by validator nodes. It uses large language models for both task generation (simulating fake news) and task solving (detecting fakes) in its initial setup. Through a sequence of increasingly complex tasks and a robust scoring system, the subnet trains a distributed AI ensemble to rapidly distinguish truth from falsehood.
Use Cases and Current Applications
As of 2025, Subnet 66 FakeNews is actively running on Bittensor’s mainnet, serving as a proof-of-concept for decentralized, AI-driven fact checking. In practice today, the subnet’s validators are continuously ingesting real news content and generating fake-news challenges, and the miners are constantly analyzing these to discern truth from falsehood in real time. This means FakeNews is essentially operating as a live misinformation filter on whatever data sources it’s connected to (likely a stream of news articles from various reputable outlets). The immediate “users” of the subnet are the miners and validators themselves, who use the tasks and responses to train and hone AI models for fake news detection. For example, a miner’s node might receive a breaking news article about a political event along with a subtly manipulated version of that story; the miner’s AI will evaluate both and attempt to flag the fake one. The collective outcome is a network that is constantly verifying information – a process that could be seen as an automated, ongoing fact-checking feed.
While the outputs of FakeNews (e.g. which articles were deemed credible or not) are not yet publicly available in a user-friendly way, the project has plans to make its capabilities accessible. The primary envisioned application is a forthcoming FakeNews web application (slated for Phase 2) which will allow end-users to interact with the subnet. Through this web app, journalists, researchers, or even the general public could input a news article or claim and receive an AI-generated fact-check or credibility score, along with references if applicable. This essentially turns FakeNews into a decentralized fact-checking service that anyone can use to vet information on the fly. Such an application could be used by content platforms to flag potentially false news before it spreads, or by individuals to verify stories they come across on social media. Another use case is integration with browsers or messaging apps as a plugin or bot that automatically checks forwarded news links against the FakeNews subnet’s analysis.
During the earlier Testnet phase, FakeNews was used in a more limited, experimental capacity – the team invited a small set of miners to run nodes and solve tasks, mainly to test system performance and accuracy. This could be considered a pilot application where the AI models were stressed under near-real conditions. Any insights from that period were used to tune the network for the mainnet launch. Now on mainnet, although still early, FakeNews is effectively “in production” within the Bittensor ecosystem. Miners are earning real TAO rewards for their fact-checking work, and the subnet token is being traded (meaning there is real value tied to the service being provided). This indicates that the network has stakeholders betting on its success as a useful application.
In terms of live deployments, aside from the network itself, we expect to see the FakeNews web portal (or API) go live as Phase 2 progresses. This will be the first direct deployment for end-users, showcasing the subnet’s capabilities on live data. One can imagine a scenario where during a major news event, FakeNews could be used to rapidly verify reports from various sources, helping to combat rumors or fake updates in real time. Another potential application is collaboration with fact-checking organizations: for instance, an organization could use FakeNews as an automated assistant, filtering large volumes of user-submitted claims and highlighting those likely false for human fact-checkers to review further.
To summarize, FakeNews is currently being used as an AI misinformation watchdog within the Bittensor network, constantly training on and evaluating news content. Its forthcoming applications aim to bring this power to users and platforms outside of Bittensor, allowing real-time fact-checking at scale – something increasingly crucial given the speed at which “fake news” can spread. As development continues, we can expect to see FakeNews integrated into tools for journalists, social media moderation, and public information services, providing a decentralized defense against misinformation in everyday information consumption.
The FakeNews subnet is developed and maintained by an independent team within the Bittensor community, currently known by the moniker “Gutenberg Team.” This name, referencing the inventor of the printing press (Johannes Gutenberg), aligns with the project’s focus on news and information. The project’s code repository is hosted under the GitHub organization gutenberg-team, indicating the team’s chosen identity. The subnet owner’s on-chain account (which created Subnet 66) is identified by the address 5Grk5jAY…413MWik5, as recorded on the Bittensor chain. However, the individual identities of the developers or project leads have not been publicly disclosed. Like many Bittensor subnets, the contributors often go by pseudonyms or simply let the project’s name represent the collective effort. The FakeNews team established the subnet in January–February 2025 (the project’s official Twitter was created in Jan 2025 and the subnet was registered on Feb 19, 2025). They have since been active in onboarding miners and validators and sharing updates through community channels. Any specific project leads or developers are not explicitly named in public documentation, suggesting the team is keeping a low personal profile and allowing “FakeNews” (and the Gutenberg Team label) to speak for itself. As development progresses, the team’s contributions are visible via code commits on GitHub and posts from the official FakeNews subnet social media, rather than through individual biographies.
The FakeNews project has a clear three-phase roadmap to evolve its capabilities and ecosystem. Each phase builds on the previous, gradually expanding the system’s functionality and reach:
Phase 1 – Foundation & Launch
The initial phase focused on building the core foundations for FakeNews. First, the team set out to establish a comprehensive “FakeNews” dataset. This involves collecting a large corpus of reliable news articles in real time, which will serve as ground truth for training and validating the models. Alongside data gathering, Phase 1 included a Testnet launch and miner onboarding: an early version of the subnet was deployed in a controlled test environment to recruit miners and refine the system under real-world conditions. This allowed the team to gather feedback and improve the subnet’s performance before public release. A baseline model was then introduced to miners, achieving roughly 82% accuracy in fake news detection during this phase. With these pieces in place, Phase 1 culminated in the Version 1.0 deployment to Bittensor mainnet. In this step, Subnet 66 (often dubbed “SN66”) was officially launched on the main network, opening participation to a broader community of miners, validators, and delegators. By the end of Phase 1, the subnet had a working system with a moderate accuracy baseline and the infrastructure for further development.
Phase 2 – Expansion & User Interface
In the second phase, the FakeNews team plans to expand the task complexity and scope of the subnet. This means introducing more challenging fact-checking tasks and diversifying the dataset, so miners have to handle a wider variety of misinformation scenarios (for example, deeper analysis of sources, more subtle fake vs real distinctions, etc.). The goal is to continually improve accuracy and ensure miners can tackle all incoming task types effectively as the difficulty ramps up. A major milestone in Phase 2 is the launch of a web application (Web App) for FakeNews. This web interface will allow the wider community to engage with the subnet’s AI directly – for instance, users could submit news snippets or links and receive a fact-check or credibility analysis in real time. Essentially, the web app will turn the subnet into a publicly usable fake news detection service, not just a backend network. To support this, real-time request handling by miners will be enabled, connecting user queries to the decentralized AI validators/miners. Additionally, the team intends to release an improved baseline model as Phase 2 progresses. This new baseline would likely incorporate lessons from Phase 1 and handle the broadened task set with higher accuracy, serving as a stronger starting point for miners. By the end of Phase 2, FakeNews should have an interactive platform and a robust system capable of addressing a broad spectrum of misinformation challenges.
Phase 3 – Full-Scale Fact-Checking Solution
The final phase aims to mature FakeNews into a comprehensive, production-grade fact-checking system. The subnet will integrate advanced features to cover the broadest possible range of news and fact-checking needs. This might include more sophisticated AI techniques for cross-referencing information, handling multimedia misinformation (if within scope), or collaboration with external fact-checking services. Essentially, Phase 3 is about polishing the network to achieve the highest accuracy in the industry for automated fact verification. Alongside backend improvements, the user-facing application will be further enhanced, delivering an improved UX and new features in the FakeNews web app or related tools. The team also plans to extend language support during this phase. Whereas earlier phases likely focus on English-language news, Phase 3 will introduce support for multiple languages (French, Italian, German, and Spanish are explicitly mentioned). This reflects the global nature of misinformation and ensures the subnet can serve fact-checking needs across different regions and languages. By the conclusion of Phase 3, FakeNews is expected to operate as a full-scale, multilingual fact-checking network with a high degree of trust and accuracy, potentially ready for mainstream adoption or integration into media platforms.
Subnet Alpha is an informational platform for Bittensor Subnets.
This site is not affiliated with the Opentensor Foundation or TaoStats.
The content provided on this website is for informational purposes only. We make no guarantees regarding the accuracy or currency of the information at any given time.
Subnet Alpha is created and maintained by The Realistic Trader. If you have any suggestions or encounter any issues, please contact us at [email protected].
Copyright 2024