With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 99 – Neza

Emissions: Value
Recycled: Value
Recycled (24h): Value
Registration Cost: Value
Active Validators: Value
Active Miners: Value
Active Dual Miners/Validators: Value

ABOUT

What exactly does it do?

Neza (Subnet 99) is a fully decentralized, AI-powered video generation network built on the Bittensor platform. Validators define video-generation “workflows” (tasks), which are then executed by miners running video-generation models. Neza “shards” advanced open-source video models (e.g. WAN2.1) across its peer-to-peer network, with each miner and validator running its own full copy of the model. When a workflow is submitted (either as a synthetic benchmark or a user prompt), miners process it via ComfyUI to generate a video. Validators then use the ImageBind model to evaluate each generated video’s quality, speed, and reliability, and distribute rewards accordingly.
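
To make that lifecycle concrete, here is a minimal sketch of what a workflow task might carry through the system. The type and field names are illustrative assumptions, not identifiers from the Neza codebase:

```python
# Illustrative only: these types are assumptions, not Neza's actual schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
from uuid import uuid4


class TaskSource(Enum):
    SYNTHETIC = "synthetic"  # validator-defined benchmark task
    ORGANIC = "organic"      # real user prompt


@dataclass
class WorkflowTask:
    prompt: str                          # text (or image reference) driving generation
    source: TaskSource
    task_id: str = field(default_factory=lambda: str(uuid4()))
    video_url: Optional[str] = None      # filled in once the miner uploads its output
    quality: Optional[float] = None      # filled in by the validator's verifier
    fps: Optional[float] = None          # throughput measured by the validator


# A synthetic benchmark and an organic user request look identical to miners:
benchmark = WorkflowTask("a red fox running through snow", TaskSource.SYNTHETIC)
user_job = WorkflowTask("product demo of a ceramic mug", TaskSource.ORGANIC)
```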

Decentralized video generation: Every miner runs an AI video model locally through ComfyUI (a visual workflow tool). Validators dispatch tasks and use ImageBind to assess outputs. This ensures there is “no central authority” – the network relies purely on validators’ and miners’ hardware.
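
Since miners drive generation through ComfyUI, a miner’s submission step might look roughly like the sketch below. It uses ComfyUI’s standard local HTTP endpoint (POST /prompt on port 8188); the workflow graph itself is a placeholder, since Neza’s actual workflow JSON is not reproduced here:

```python
# Minimal sketch of queueing a workflow on a locally running ComfyUI
# instance. The workflow graph is a placeholder; a real graph is an exported
# ComfyUI node/edge JSON for the video model being served.
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address


def submit_workflow(workflow_graph: dict) -> str:
    """Queue a workflow and return the prompt_id ComfyUI assigns to it."""
    payload = json.dumps({"prompt": workflow_graph}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]
```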

Quality and speed incentives: Miners are rewarded based on the quality of their videos (measured against benchmarks via ImageBind) and their processing speed (frames per second). Higher quality and faster output earn greater TAO rewards.
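
Neza’s exact incentive weights are not published here, so the following is only a schematic of how quality and speed could be folded into a single score; the 0.8/0.2 split and the fps cap are invented for illustration:

```python
# Schematic scoring only: the weights and normalization are assumptions,
# not Neza's published incentive formula.
def miner_score(quality: float, fps: float,
                w_quality: float = 0.8, fps_cap: float = 24.0) -> float:
    """Combine a [0, 1] quality score with throughput into one [0, 1] score."""
    speed = min(fps, fps_cap) / fps_cap   # normalize fps into [0, 1]
    return w_quality * quality + (1.0 - w_quality) * speed


# A high-quality but slow miner vs. a fast but mediocre one:
print(miner_score(quality=0.95, fps=6.0))   # ~0.81
print(miner_score(quality=0.70, fps=24.0))  # 0.76
```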

P2P content access: Generated videos are uploaded to public cloud storage for users or other validators to retrieve. Over time, Neza aims to operate as a public “playground” API for on-demand text-to-video and related AI media generation.
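
The storage provider is not named here, so this upload sketch assumes an S3-compatible bucket accessed via boto3; the bucket name and key layout are invented:

```python
# Hypothetical publishing step: assumes S3-compatible storage via boto3.
# The bucket name and key scheme are invented for illustration.
import boto3

s3 = boto3.client("s3")


def publish_video(local_path: str, task_id: str) -> str:
    """Upload a finished video and return a publicly retrievable URL."""
    key = f"videos/{task_id}.mp4"
    s3.upload_file(local_path, "neza-public-videos", key,
                   ExtraArgs={"ContentType": "video/mp4"})
    return f"https://neza-public-videos.s3.amazonaws.com/{key}"
```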

In summary, Neza enables creators to generate high-quality videos on demand by tapping into a distributed network of AI models. Its combination of decentralized task execution, open-source video models, and automated quality scoring provides a scalable infrastructure for AI video production.

PURPOSE

What exactly is the 'product/build'?

Neza’s product is essentially an open-source Bittensor subnet for AI-driven video creation. It consists of:

Software Stack: The code (on GitHub) defines two roles – miner and validator. Miners run video-generation workflows via ComfyUI (an open-source visual workflow tool for AI models). Validators publish tasks and then use a built-in VideoVerifier (based on ImageBind) to score video outputs.
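
The VideoVerifier’s internals are not reproduced in this document; the sketch below only illustrates the general idea behind ImageBind-style verification, scoring a video by the similarity between its embedding and the prompt’s embedding in a shared space. The placeholder tensors stand in for real ImageBind embeddings:

```python
# Conceptual sketch of ImageBind-style scoring: both the prompt and the video
# are mapped into one shared embedding space, and quality is read off their
# cosine similarity. The tensors below are placeholders, not real embeddings.
import torch
import torch.nn.functional as F


def verify(prompt_embedding: torch.Tensor, video_embedding: torch.Tensor) -> float:
    """Cosine similarity in the shared space, mapped to a [0, 1] quality score."""
    sim = F.cosine_similarity(prompt_embedding, video_embedding, dim=-1)
    return float((sim + 1.0) / 2.0)  # [-1, 1] -> [0, 1]


# With real embeddings, a faithful video should score near the top:
prompt_emb = torch.randn(1024)                    # stand-in for embed_text(...)
video_emb = prompt_emb + 0.1 * torch.randn(1024)  # stand-in for embed_video(...)
print(verify(prompt_emb, video_emb))
```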

AI Models: Neza shards state-of-the-art video generation models (e.g. WAN2.1) across its network. Each node has a copy of the model and processes inputs independently.

Workflow Mechanism: Two workflow types are supported: synthetic (validator-defined benchmark tasks to test miners) and organic (real user requests). Validators queue these workflows and dispatch them to miners in real time. After miners produce videos, validators immediately evaluate quality via ImageBind and score the miners on quality, speed, and response time.
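
A rough sketch of that queue-and-dispatch pattern follows; every name in it is an assumption drawn from the description above, not from Neza’s code:

```python
# Illustrative validator loop: queue synthetic and organic workflows, dispatch
# them to miners, then score the results on quality and throughput.
import queue
import time

task_queue: queue.Queue = queue.Queue()  # holds synthetic and organic workflows


def validator_loop(miners, verify, poll_interval: float = 1.0) -> None:
    while True:
        try:
            task = task_queue.get(timeout=poll_interval)
        except queue.Empty:
            continue  # nothing queued; keep polling
        miner = miners.pick()                     # e.g. round-robin selection
        start = time.monotonic()
        video = miner.run(task)                   # miner executes the workflow
        elapsed = time.monotonic() - start
        quality = verify(task.prompt, video)      # ImageBind-style scoring
        fps = video.frame_count / elapsed         # throughput component
        miners.record_score(miner, quality, fps)  # feeds on-chain weight setting
```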

Decentralized Architecture: By design, Neza uses no central server or API. Each validator/miner pair communicates peer-to-peer over the Bittensor blockchain. The network’s state (such as hyperparameters and rewards) is recorded on Bittensor’s chain, leveraging TAO tokens for incentives.
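
Because the subnet’s state lives on the Bittensor chain, it can be inspected with the standard bittensor Python SDK. A minimal sketch, assuming the SDK’s usual subtensor/metagraph interface:

```python
# Minimal sketch using the bittensor Python SDK to inspect subnet 99's
# on-chain state (registered neurons and their stake).
import bittensor as bt

subtensor = bt.subtensor(network="finney")   # Bittensor mainnet
metagraph = subtensor.metagraph(netuid=99)   # Neza's subnet

print(f"neurons registered: {len(metagraph.uids)}")
for uid, hotkey, stake in zip(metagraph.uids, metagraph.hotkeys, metagraph.S):
    print(uid, hotkey, float(stake))
```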

Open APIs and Tools: The project provides a user-facing Playground for prompt-to-video generation and an API for developers. It also plans integrations (mentioned in the roadmap) for further tools like post-processing (filters, subtitles, audio) and business integrations.
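
The public API’s shape is not documented on this page, so the client below is entirely hypothetical; the endpoint URL, request fields, and response format are all invented to illustrate what an on-demand text-to-video call could look like:

```python
# Entirely hypothetical client: the endpoint URL, request fields, and
# response shape are invented for illustration and are NOT Neza's real API.
import requests


def generate_video(prompt: str, api_key: str) -> str:
    resp = requests.post(
        "https://api.example.com/v1/generate",           # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "duration_seconds": 5},  # invented fields
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["video_url"]  # hypothetical response field
```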

In essence, the Neza “product” is the combined software infrastructure and community that together enable decentralized video synthesis. It’s a complete pipeline: from text/image prompt → distributed model execution → video output → quality assessment → reward distribution. The GitHub repository and its documentation guide users through running miner and validator nodes (with recommended hardware). As an open-source project under the MIT license, Neza’s code and models can be inspected, forked, or improved by the community.

WHO

Team Info

No public information has been released about the individual team members or the organization behind Neza. The GitHub repository (subnet99/Neza) was created on July 11, 2025, by a user with the handle “hunters-xx”, who appears to be the primary committer. However, no real names or affiliations are listed in the code or on the official website, and no press announcements naming founders have been found. In short, the developers’ identities are not disclosed in available public sources.

FUTURE

Roadmap

Q3 2025 – Foundation & Launch: Define video scoring criteria (semantic relevance, visual quality, temporal consistency); implement competitive incentives for model improvement; launch a public Playground UI for prompt-to-video generation; and provide an open API for developers/integrators.

Q4 2025 – Monetization & Rights: Add features for commercial use (e.g. watermark removal, licensing flow) and enable creators to earn revenue from their generated videos. Also launch a public content “plaza” to showcase high-quality videos.

Q1 2026 – Model Expansion & Post-Production Tools: Integrate additional diverse video-generation models for broader content types. Introduce post-processing (trimming, style filters, subtitles, audio) and apply video outputs to real-world business use-cases (e.g. e-commerce product demos, explainer clips).

Q2 2026 – Creator Ecosystem & Platform Growth: Build community features such as creator tiers, dynamic revenue-sharing (based on video performance), leaderboards, and incentive campaigns to encourage participation. Expand integrations with external platforms (e.g. e-commerce, education, content marketing). Streamline payments and licensing (via Stripe, PayPal, USDT, and Bittensor’s TAO token) to enable seamless global/Web3 transactions.

NEWS

Announcements