With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 85

Vidaio

Emissions: Value
Recycled: Value
Recycled (24h): Value
Registration Cost: Value
Active Validators: Value
Active Miners: Value
Active Dual Miners/Validators: Value

ABOUT

What exactly does it do?

Vidaio is an open-source video processing subnet focused on AI-driven video upscaling (and soon compression/streaming). Its mission is to “democratise video enhancement through decentralisation, artificial intelligence, and blockchain technology”, providing creators and businesses with scalable, affordable, high-quality video processing. The Vidaio team emphasizes a decentralized, community-led approach: the project’s code is fully open-source and built atop Bittensor’s merit-based model. By leveraging Bittensor, Vidaio can tap into a distributed pool of GPUs and reward contributors via on-chain incentives, ensuring continual improvement of its AI models.

Vidaio uses a number of deep learning models to enhance low-resolution videos by predicting and generating high-resolution details. Unlike traditional upscaling methods, which rely on interpolation, its AI-based techniques analyze patterns, textures, and edges to reconstruct sharper and more realistic frames. Because it runs on a distributed network, Vidaio can scale with demand, making it a viable solution for businesses of all sizes, and it eliminates any single point of control, enhancing security and censorship resistance.

PURPOSE

What exactly is the 'product/build'?

Vidaio follows the standard Bittensor subnet design with miners and validators cooperating via two “synapses” (query types).

  • Miners: Run AI upscaling models on video data. Miners “enhance video quality using AI-driven upscaling techniques.” They may fine-tune open-source super-resolution models or develop new ones, and they handle upscaling requests from validators and end-users. In practice, miners take low-resolution video chunks (from synthetic or organic queries) and output high-resolution versions.
  • Validators: Check that miners’ outputs meet quality standards. Validators feed controlled test videos to miners and “ensure miners deliver consistent, high-quality results by evaluating performance”. In a Synthetic Query, validators downscale a known high-resolution clip, send it to miners to upscale, then score the result using perceptual metrics (Vidaio uses VMAF and PIE-APP for this). In an Organic Query, real user videos are split into chunks and queued to miners; once upscaled, the pieces are reassembled and returned to the user. Validators then use a no-reference quality metric (Vidaio uses ClipIQA+ for organic jobs) to assess the output’s perceptual quality.
  • Incentive Mechanism: Vidaio’s reward system is built on video-quality metrics. Validators periodically submit weight vectors reflecting each miner’s scores on the subnet. The Bittensor blockchain aggregates these vectors and applies the Yuma consensus algorithm, which combines each miner’s weighted score with its staked TAO to compute token emissions. Thus, miners earn more TAO by consistently producing high-VMAF/PIE-APP scores, and validators are themselves rewarded for honest scoring. This merit-based model (unique to each subnet) drives the network to optimize for perceptual video quality. (In benchmark tests, Vidaio’s upscaler outperformed leading centralized solutions – e.g. its ClipIQA+ score was 0.4697 vs. 0.4658 for Topaz – demonstrating strong perceptual performance.)
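As a rough illustration of the synthetic-query loop described above, the sketch below simulates the downscale → upscale → score → normalize flow. The metric and all names here are toy stand-ins for VMAF/PIE-APP, not Vidaio's actual code:

```python
def downscale(frames, factor=2):
    """Toy stand-in for producing a low-resolution copy: keep every Nth pixel."""
    return [frame[::factor] for frame in frames]

def toy_quality_score(reference, candidate):
    """Toy full-reference metric standing in for VMAF/PIE-APP:
    1.0 for a perfect match, lower as pixel values diverge."""
    diffs = [abs(r - c)
             for ref_frame, cand_frame in zip(reference, candidate)
             for r, c in zip(ref_frame, cand_frame)]
    return 1.0 / (1.0 + sum(diffs) / max(len(diffs), 1))

def score_miners(reference_frames, miner_outputs):
    """Score each miner's upscaled output against the held-back reference,
    then normalize the scores into a weight vector that sums to 1."""
    scores = {uid: toy_quality_score(reference_frames, out)
              for uid, out in miner_outputs.items()}
    total = sum(scores.values())
    return {uid: s / total for uid, s in scores.items()}
```

In this toy setup the validator keeps the original high-resolution frames, sends `downscale(frames)` to miners, and compares what comes back; a miner that reproduces the reference exactly earns a larger share of the weight vector than one that returns a noisy result.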

 

Products & Services

Vidaio’s primary offering is a video upscaling service accessible via its web portal. Users can create an account, upload a low-resolution video, and choose a target resolution (e.g. 4K) for AI enhancement. The interface shows the uploaded file, chosen quality, and expected processing time, and has an “Upscale Video” button to submit the job.
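Behind an upload like this, the organic pipeline splits the video into chunks, farms them out to miners, and reassembles the upscaled pieces in order. A minimal sketch of that split/reassemble step (illustrative only, not Vidaio's actual code):

```python
def split_into_chunks(frames, chunk_size):
    """Split a frame sequence into fixed-size chunks, tagging each with its
    starting index so the upscaled pieces can be reassembled in order."""
    return [(i, frames[i:i + chunk_size])
            for i in range(0, len(frames), chunk_size)]

def reassemble(upscaled_chunks):
    """Reorder chunks by index (miners may finish out of order) and
    concatenate them back into one frame sequence."""
    ordered = sorted(upscaled_chunks, key=lambda pair: pair[0])
    return [frame for _, chunk in ordered for frame in chunk]
```

Tagging each chunk with its index is what lets results arrive asynchronously from many miners and still be stitched back into the original order.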

Vidaio’s codebase is fully open source. The GitHub repository (vidaio-subnet/vidaio-subnet) contains the core subnet code and setup guides for running miners or validators, and the team plans to release a developer API/SDK so other apps can integrate Vidaio’s processing.

 

Technical Architecture

Vidaio’s tech stack combines decentralized blockchain components with AI/ML video pipelines. Core elements include:

  • Bittensor Integration: The subnet runs on the Bittensor chain. Both miners and validators use the Bittensor Python SDK/CLI to register, stake, and communicate on-chain. Validators regularly send weight vectors to the chain, and the Bittensor network’s Yuma consensus module computes emissions. (This is standard for all Bittensor subnets.) Vidaio follows the dynamic TAO model, meaning TAO tokens staked into Vidaio (via the Bittensor wallet) generate dTAO reflecting Vidaio’s value and share in emissions.
  • AI/ML Models: Vidaio’s upscaling engine uses state-of-the-art super-resolution models. The code references known open-source frameworks (e.g. Video2X, ESRGAN variants) and implements novel enhancements. Training and inference are done in Python (likely using PyTorch or TensorFlow libraries, given the requirements). The repository’s Docker setup suggests containerized deployment for reproducibility. For quality evaluation, Vidaio employs multiple metrics (VMAF, PIE-APP, TOPIQ, LPIPS, ClipIQA+, etc.) as shown in their documentation.
  • Infrastructure: Although decentralized, typical miner nodes are GPU-equipped servers/instances running Vidaio’s Python mining software. The GitHub structure (folders like vidaio_subnet_core, services, docker) indicates a modular backend. The public web service (vidaio.io) likely runs on cloud servers using this core code plus a web frontend (React/Vue, since a frontend dev is on the team).
  • Interoperability & Tooling: Vidaio interoperates seamlessly with the Bittensor network: it observes the metagraph of peers and follows the Bittensor protocol. It uses standard Bittensor wallets (coldkey/hotkey pairs) for TAO. Tools like Docker, the Bittensor CLI, and monitoring dashboards are part of the stack. The GitHub repo includes setup guides for validators and miners, as well as an “Incentive Mechanism Guide” (linked in the docs) to explain the scoring algorithm.
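The weight-aggregation step described under Bittensor Integration can be approximated as a stake-weighted combination of validator vote vectors. The sketch below is a deliberately simplified stand-in for Yuma consensus, which additionally clips outlier votes before combining them:

```python
def aggregate_weights(validator_votes):
    """Combine per-validator weight vectors into a single emission split.

    validator_votes: list of (stake, {miner_uid: weight}) pairs.
    Each validator's vector is scaled by its stake, summed per miner,
    and renormalized so the result sums to 1 -- a simplified stand-in
    for Yuma consensus, omitting its outlier-clipping step."""
    combined = {}
    for stake, weights in validator_votes:
        for uid, w in weights.items():
            combined[uid] = combined.get(uid, 0.0) + stake * w
    total = sum(combined.values())
    return {uid: v / total for uid, v in combined.items()}
```

The stake scaling is what makes a heavily staked validator's scoring count for more: a miner favored by a validator with 3x the stake ends up with a larger emission share even if a smaller validator disagrees.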

 

WHO

Team Info

Vidaio is developed by a multi-disciplinary team of industry professionals (names as listed on the website). Key members include:

Gareth Howells – Director (20+ years in video industry, product leader).

Ahmad Ayad – Machine Learning Engineer (AI/ML specialist).

Gopi Jayaraman – Video Technology Expert (AV/multimedia veteran).

Medfil D – Subnet Developer (backend & AI/ML engineer).

Chinaza – UI/UX Designer.

Akinwunmi Aguda – Frontend Developer.

Marcus “mogmachine” Graichen – Angel Advisor (crypto investor and Bittensor community figure).

Many of the above contributors have public profiles (e.g. Marcus Graichen is known as “mogmachine” on Bittensor forums/X), and the GitHub repo shows activity by the Vidaio org. The project maintains an active presence on social media and forums: the official X (Twitter) handle is @vidaio_τ (declared as “open-source, decentralized video processing, subnet 85 on Bittensor” in the bio), and a community Discord is open via the invite on their site. All code and documentation are published on GitHub (confirmed by project links on their Medium blog), and the team engages with the Bittensor community for announcements and support.

 

FUTURE

Roadmap

Vidaio’s roadmap lays out a clear timeline from Q1–Q4 2025 and beyond. The Phase 1 Upscaling Synapse launched in early 2025, delivering the initial product. Benchmark results already show its base model exceeding leading proprietary upscalers. In the short term, Vidaio is implementing Phase 2 (AI-powered video compression, Q2 2025) and Phase 3 (transcode optimization, Q3). The upcoming Phase 4 (on-demand streaming) and Phase 5 (live streaming) target Q4 2025, aiming to enable decentralized streaming features. By 2026, they plan to roll out Phase 6 (public API for external integration). The team regularly publishes updates (via X/Discord) on milestone progress. In sum, Vidaio has a clear, technology-driven roadmap and is on track: the active subnet (SN85) is fully functional for upscaling today, with successive features being developed on schedule.

Tangible product milestones on the roadmap include:

  • AI Video Compression (Q2 2025): A model to compress videos with minimal quality loss (reducing bitrate/file size).
  • Transcode Optimization Synapse (Q3 2025): GPU-powered transcoding for format/device compatibility, scored by speed and quality.
  • On-Demand Streaming Architecture (Q4 2025): Decentralized infrastructure (with P2P storage) for low-latency video streaming.
  • Live Streaming Support (2026): Real-time upscaling/transcoding of live video feeds with adaptive bitrate.
  • Public API (2026+): RESTful endpoints for uploading, processing, and retrieving videos, enabling third-party integration.
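Since the public API is still on the roadmap, no real endpoints exist yet; the sketch below only illustrates what third-party integration against a REST API of this shape might look like. Every path, parameter, and name here is hypothetical:

```python
from urllib.parse import urljoin

class VidaioClient:
    """Illustrative client for the *planned* public API.
    All endpoint paths below are hypothetical -- the API is unreleased."""

    def __init__(self, base_url, api_key):
        # Normalize the base so urljoin treats it as a directory root.
        self.base_url = base_url.rstrip("/") + "/"
        self.api_key = api_key

    def upload_url(self):
        # Hypothetical endpoint for submitting a source video.
        return urljoin(self.base_url, "v1/videos")

    def job_url(self, job_id):
        # Hypothetical endpoint for polling a processing job.
        return urljoin(self.base_url, f"v1/jobs/{job_id}")
```

A third-party app would POST a video to the upload endpoint, receive a job ID, and poll the job endpoint until the upscaled result is ready; the actual contract will be whatever Vidaio publishes with the Phase 6 release.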

 

All major features and timelines are documented on the site and GitHub. The initial Upscaling Synapse (Phase 1) is complete – the subnet launched in early 2025 with a live upscaling service – and the project is now building the next phases (compression, transcoding, etc.). This makes Vidaio one of the few subnets with a running product at launch.

 

NEWS

Announcements

MORE INFO

Useful Links