With the number of new subnets being added, it can be hard to keep information current across all of them, so this data may occasionally be slightly out of date.

Subnet 34

BitMind

Stats dashboard: Emissions, Recycled, Recycled (24h), Registration Cost, Active Validators, Active Miners, Active Dual Miners/Validators.

ABOUT

What exactly does it do?

The BitMind Subnet is a cutting-edge solution within the Bittensor network, utilizing advanced generative and discriminative AI models to detect AI-generated images. It operates on a decentralized, incentive-based framework designed to enhance trustworthiness while driving ongoing technological progress. With the rapid increase in high-quality synthetic media produced by generative AI models, distinguishing between real and artificial content has become increasingly challenging. The BitMind Subnet tackles this issue by offering powerful detection mechanisms to help preserve the integrity of digital media.

They are expanding their accessibility by developing an intuitive API and user interface, making it easier for users to integrate and leverage their detection technologies. Their platform continuously incorporates the latest AI research and advancements, ensuring it stays ahead of evolving generative techniques.


PURPOSE

What exactly is the 'product/build'?

BitMind focuses on creating an AI-powered deepfake detection system that uses machine learning models to detect content that has been manipulated or generated by AI. The core product is a platform for miners, validators, and users, where images, videos, and audio are analyzed for authenticity and classified as either “real” or “AI-generated.” The system improves continuously through global participation, with miners contributing processing power to enhance the accuracy of the detection models.
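
To make the classification task concrete, here is a minimal sketch of the per-item inference a miner might perform. Everything here is illustrative: the model file, input size, and threshold are assumptions, not BitMind's actual implementation.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical binary detector assumed to output a single logit
# for P(AI-generated); BitMind's production models will differ.
model = torch.load("detector.pt")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(path: str, threshold: float = 0.5) -> str:
    """Label a single image as 'real' or 'AI-generated'."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_fake = torch.sigmoid(model(x)).item()
    return "AI-generated" if prob_fake >= threshold else "real"
```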

The platform uses a “Content Aware Model Orchestration” (CAMO) system, which aggregates multiple models to outperform current state-of-the-art methods from academic research. This approach aims for higher precision in identifying AI-generated content, even in complex scenarios such as face swaps, video manipulation, and audio alterations.
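
BitMind has not published CAMO's internals; conceptually, though, an orchestration layer routes content to specialist detectors and combines their scores. A hedged sketch of that idea, where all names and the weighting scheme are assumptions:

```python
from typing import Callable, Dict

# A detector maps raw media bytes to P(AI-generated).
Detector = Callable[[bytes], float]

def camo_score(media: bytes,
               detectors: Dict[str, Detector],
               weights: Dict[str, float]) -> float:
    """Combine specialist scores into one ensemble probability.

    A content-aware router could set the weights per item (e.g.,
    upweight a face-swap specialist when a face is detected);
    fixed weights are used here purely for illustration.
    """
    total = sum(weights[name] for name in detectors)
    return sum(weights[name] * detect(media)
               for name, detect in detectors.items()) / total
```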

The detection service works by creating a dynamic validation environment in which validators send images or videos to miners, who predict whether each item is real or AI-generated. Miners are rewarded based on their historical prediction performance. The BitMind team is also focused on scaling detection capabilities to handle various forms of content, including semi-synthetic and synthetic media.
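
A minimal sketch of that loop, under stated assumptions: challenges are drawn 50/50 from real and synthetic pools (matching the balanced selection described below), and each miner's reward weight is an exponential moving average of past correctness. The actual incentive mechanism may differ.

```python
import random

def make_challenge(real_pool: list, synth_pool: list):
    """Draw one media item together with its hidden ground-truth label."""
    if random.random() < 0.5:
        return random.choice(real_pool), 0  # 0 = real
    return random.choice(synth_pool), 1     # 1 = AI-generated

def update_score(prev: float, prediction: int, label: int,
                 alpha: float = 0.1) -> float:
    """EMA over correctness, so recent rounds dominate a miner's weight."""
    correct = 1.0 if prediction == label else 0.0
    return (1 - alpha) * prev + alpha * correct
```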

Miners: Responsible for running binary classifiers that distinguish between authentic and AI-generated content.

Foundation Model: Drawing from insights in the 2024 CVPR paper Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection, their primary model employs Neighborhood Pixel Relationships (NPR) to identify the artifacts that generator up-sampling layers leave in AI-generated images (see the sketch after this list).

Research Integration: They continuously update their detection models and methods based on the latest academic research, providing their community with resources like training codes and model weights.

Validators: Tasked with testing miners by presenting them with a carefully balanced selection of real and synthetic images sourced from a wide array of inputs.

Resource Expansion: They are dedicated to enhancing the validators’ effectiveness by expanding the diversity and size of the image pool, ensuring robust testing and validation processes.
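
To illustrate the up-sampling intuition behind the foundation model: nearest-neighbour up-sampling copies each source pixel into a small block, so relationships between neighbouring pixels betray a generator's up-sampling layers. The sketch below is a crude stand-in for the paper's NPR formulation, not the exact method:

```python
import torch
import torch.nn.functional as F

def npr_residual(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Crude approximation of Neighborhood Pixel Relationships.

    Down- then up-sampling the image and subtracting the result
    from the original surfaces the local pixel correlations that
    generator up-sampling layers imprint on synthetic images.
    x: (N, C, H, W) batch in [0, 1]. The residual would then be
    fed to a small CNN classifier, as in the paper's setup.
    """
    down = F.interpolate(x, scale_factor=1 / factor, mode="nearest")
    up = F.interpolate(down, scale_factor=float(factor), mode="nearest")
    return x - up
```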


WHO

Team Info

The team is passionate about creating the most reliable and scalable content detection system available. Their focus is on addressing the growing problem of misinformation and the erosion of trust in media by providing accurate AI-generated content detection tools.

The team is also expanding the platform’s capabilities through a network of miners who contribute their computational power to improve the system’s accuracy. The collective effort from miners has led to notable improvements in detection performance, with the system outperforming existing models in some cases by 20%. This decentralized model not only improves accuracy but also ensures that the detection system continuously evolves as more data and new techniques are incorporated.

Ken Miyachi – Co-Founder

Ammar Mohiyadeen – Marketing Manager

Dylan Uys – AI

Benjamin Liang – AI

Andrew Liang – AI


FUTURE

Roadmap

Short-term Goals:

  • A paper detailing the technology and results of their deepfake detection models will be released soon and opened to community feedback.
  • The platform is expanding the modalities it can process (e.g., video, images, and audio), and the team plans to incorporate more sophisticated detection features, such as multi-class classification and localized heatmap predictions that highlight AI-manipulated regions in media (a rough sketch of the heatmap idea follows this list).
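
As a rough sketch of the localized-heatmap idea (patch scoring is one common way to localize manipulation; whether BitMind will use patch scoring or dense segmentation is not stated):

```python
import torch

def manipulation_heatmap(image: torch.Tensor, detector,
                         patch: int = 32) -> torch.Tensor:
    """Score non-overlapping patches to localize likely manipulation.

    image: (C, H, W) tensor; detector maps a (1, C, patch, patch)
    crop to P(AI-generated). Returns a coarse (H//patch, W//patch)
    heatmap where high values mark suspect regions.
    """
    _, H, W = image.shape
    rows, cols = H // patch, W // patch
    heat = torch.zeros(rows, cols)
    for r in range(rows):
        for c in range(cols):
            crop = image[:, r * patch:(r + 1) * patch,
                            c * patch:(c + 1) * patch]
            heat[r, c] = float(detector(crop.unsqueeze(0)))
    return heat
```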


Medium-term Goals:

  • Full implementation of the deepfake detection network with improved performance, scalability, and more complex detection tasks such as face swaps and audio manipulations.
  • Introduction of semi-synthetic content detection, covering real media that has been subtly altered by AI (e.g., inpainting or style transfers).
  • Expanding the dataset through community-driven data contributions via incentivized tasks, allowing continuous improvement in the system’s performance and its ability to handle emerging types of media manipulation.
  • Launching consumer applications that leverage the deepfake detection system to build trust among users and help them evaluate media authenticity in real time.


Long-term Goals:

  • Achieve a highly scalable and universally trusted deepfake detection system that can handle new AI models and manipulations without needing to retrain its models.
  • Establish BitMind as the global leader in AI-generated content detection, making the technology available to industries like news, entertainment, and social media platforms.
  • Grow the community and infrastructure around BitMind, reaching millions of users and miners, ensuring that the detection system evolves with the constantly changing landscape of AI-generated media.



MEDIA

A big thank you to Tao Stats for producing these insightful videos in the Novelty Search series. We appreciate the opportunity to dive deep into the groundbreaking work being done by Subnets within Bittensor! Check out some of their other videos HERE.

In this session, the BitMind team discusses Subnet 34, a groundbreaking platform focused on detecting AI-generated content across various media types, such as images, videos, and audio. The team highlights the growing challenge of identifying deepfakes and manipulated media, which is eroding trust in online information. They explain the technology behind their deepfake detection system, which outperforms current academic models, and the decentralized network of miners and validators that contributes to its continuous improvement. The team also explores the roadmap for enhancing detection capabilities, incorporating new AI models, and expanding the platform’s use in real-world applications. Ultimately, BitMind aims to provide a reliable and authoritative solution for content verification, combating misinformation, and building trust in digital media.

A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.

This session, recorded in late 2024, features Ken Miyachi of Bittensor’s Subnet 34, which focuses on deepfake detection using AI. Ken discusses how his subnet leverages Bittensor’s decentralized framework to create an open and competitive environment for improving AI models that classify whether digital content (currently images, but soon to include video and audio) is AI-generated or real. The conversation delves into the rapid growth of AI-generated media, with deepfakes becoming increasingly realistic and posing risks of misinformation and media manipulation. Ken highlights the importance of transparent, decentralized solutions for deepfake detection and explains how users can interact with the technology through a browser extension, website, and community-driven platforms. Additionally, he explores the unique aspects of Bittensor’s economic and incentive structures, which encourage miners and validators to improve detection models through open market competition, helping to tackle the growing challenges of generative AI.

Novelty Search is great, but for most investors trying to understand Bittensor, the technical depth is a wall, not a bridge. If we’re going to attract investment into this ecosystem then we need more people to understand it! That’s why Siam Kidd and Mark Creaser from DSV Fund have launched Revenue Search, where they ask the simple questions that investors want to know the answers to.

In this session, recorded in June 2025, Ken, the founder of the BitMind subnet, joins the hosts of Revenue Search to discuss his innovative work in deepfake detection using computer vision technology. Ken explains how BitMind focuses on detecting AI-generated or manipulated images and videos. With deepfake technology becoming a significant threat, especially in the context of misinformation, BitMind aims to offer solutions that help verify the authenticity of media content. Ken shares insights into the development of this technology and its potential applications in both consumer and enterprise markets. The conversation also delves into the challenges of AI verification, including the competition between generative AI models and detection technologies, as well as BitMind’s upcoming plans to monetize their services through a subscription model via a mobile app. Additionally, Ken talks about the potential partnerships that could accelerate BitMind’s growth, including possible integrations with large social media platforms like X and Meta. This session offers a fascinating glimpse into the future of AI verification and the role of decentralized networks in tackling real-world problems like misinformation.


NEWS

Announcements
