With the number of new subnets being added, it can be hard to keep information current across all of them, so data may occasionally be slightly out of date.

Subnet 110

Green Compute

Alpha Price: Value
Market Cap: Value
Neurons: Value
Registration Cost: Value
TAO Liquidity: Value
Alpha in Pool: Value
Total Alpha Supply: Value
% Alpha Staked: Value

ABOUT

What exactly does it do?

Core Mission and Purpose:

Green Compute is Bittensor subnet 110 – an inference marketplace where only verifiably green compute is rewarded. It targets enterprise-grade AI inference tasks, offering an OpenAI-compatible API for chat completions and other model queries. The core thesis is to recycle otherwise-wasted renewable energy into AI compute. In practice, Green Compute turns surplus power (biogas from farms, excess solar/wind, etc.) into usable GPU compute for inference, giving site owners (like dairy farms or solar installations) far higher returns than selling power to the grid. This also fits a market need: unlike spot GPU rentals, enterprises often need long-term, large-scale clusters with predictable performance. Green Compute aims to provide identical multi-GPU rigs (e.g. clusters of 4090/5090 GPUs) and human support to meet those enterprise demands.

Miner/Validator Incentive Loop:

Participation on SN110 follows Bittensor’s proof-of-work-market model. GPU operators register as miners and must first prove their power source is green. Once verified (e.g. via hardware location proofs or oracles attesting to biogas/solar/wind use), miners run inference workloads on the network’s models. Validators (the subnet’s scoring nodes) then evaluate each miner’s performance. Per Bittensor’s rules, validators send inference queries to miners and score their answers, and these scores determine each miner’s share of the subnet’s new-token emissions. In Green Compute specifically, a portion of the enterprise revenue is converted into the subnet’s ALPHA tokens. In short, miners contribute raw GPU compute (serving model inference) and are scored by validators for output quality; they earn ALPHA (and TAO) proportional to performance, while validators and stakers earn by securing the network and locking tokens, respectively.
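The scoring-to-emissions step of this loop can be sketched as follows. This is a simplified illustration, not the subnet's actual reward code: real Bittensor subnets aggregate weights from many validators via Yuma consensus, whereas here a single validator's quality scores are simply normalized into emission shares.

```python
def emission_shares(scores):
    """Normalize validator-assigned quality scores into each miner's
    share of new-token emissions (simplified single-validator sketch)."""
    total = sum(scores.values())
    if total == 0:
        # No miner produced useful output; nobody earns this round.
        return {miner: 0.0 for miner in scores}
    return {miner: score / total for miner, score in scores.items()}

# Hypothetical scores one validator assigned after querying three miners
scores = {"miner_a": 0.9, "miner_b": 0.6, "miner_c": 0.0}
shares = emission_shares(scores)
```

Under this sketch, `miner_a` would receive 60% of the round's emissions and a miner that returned nothing useful receives zero, mirroring the "earn proportional to performance" rule described above.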

Output and Users:

The practical output of SN110 is AI inference results. Customers (e.g. enterprises or researchers) send queries through Green Compute’s API (which mimics OpenAI’s API), and receive model outputs (like chat completions) generated by the underlying GPUs. Intended customers are businesses that need scalable AI inference and have an interest in sustainability; they benefit from deep biogas-floor pricing (no premium for green energy) and robust infrastructure. Green Compute is unique in Bittensor because it’s explicitly a sustainable compute market – a niche not addressed by other Bittensor subnets.
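Because the API mimics OpenAI's, a client request takes the familiar `/chat/completions` shape. The sketch below only constructs the request payload; the base URL and model name are placeholders, since the source does not document Green Compute's actual endpoint or model list.

```python
import json

# Placeholder endpoint -- the real Green Compute base URL is not
# documented in this profile.
BASE_URL = "https://api.example-greencompute.net/v1"

def build_chat_request(model, messages, max_tokens=256):
    """Construct an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "llama-3-70b",  # placeholder model name
    [{"role": "user", "content": "Summarize biogas-powered inference."}],
)
body = json.dumps(payload)
# POST `body` to f"{BASE_URL}/chat/completions" with an API key header,
# exactly as you would against OpenAI's API.
```

An existing OpenAI SDK should work unchanged by pointing its `base_url` at the Green Compute gateway, which is the main practical benefit of an OpenAI-compatible surface.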

PURPOSE

What exactly is the 'product/build'?

Current Status and Architecture:

As of mid-2026, Green Compute’s mainnet is live with real enterprise inference traffic flowing on SN110. The commercial platform is accessible now: users can rent GPUs by the minute (e.g. RTX 4090 at $0.40/GPU-hr) across verified green data centers, with real-time pricing that updates per provider. Early chain data shows this subnet accounts for a small fraction of network rewards – roughly 0.3% of daily TAO emissions (about 10 TAO/day after the 2025 halving) – reflecting its nascent scale.
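Per-minute billing at an hourly rate is just pro-rating, but it is worth making explicit. A minimal sketch, assuming the quoted $0.40/GPU-hr rate and simple linear pricing (the platform's actual billing granularity and rounding rules are not documented here):

```python
def rental_cost_usd(gpu_hourly_rate, gpus, minutes):
    """Pro-rate an hourly per-GPU rate down to per-minute billing."""
    return gpu_hourly_rate / 60.0 * gpus * minutes

# e.g. a hypothetical 4x RTX 4090 node at the quoted $0.40/GPU-hr,
# rented for 90 minutes
cost = rental_cost_usd(0.40, gpus=4, minutes=90)
```

At that rate, 90 minutes on a four-GPU node comes to $2.40, which illustrates why by-the-minute rental suits bursty inference workloads.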

Technical Architecture:

Green Compute employs standard Bittensor infrastructure. It runs on the Polkadot/Substrate-based Bittensor chain with netuid=110. In practice, each miner is a Bittensor node running specialized compute subnet software. The developers have published a compute subnet repository (commune-ai/compute-ni) that outlines the protocol: key files include compute/protocol.py for the wire format, and neurons/miner.py and neurons/validator.py defining miner and validator behavior. Miners run models and respond to validators’ requests as per this code, while validators coordinate scoring and consensus. On top of this, Green Compute provides a user-facing API gateway and rental dashboard. Customer requests likely hit a set of validator nodes which distribute the work to appropriate miners.
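The general shape of that split between compute/protocol.py (wire format) and neurons/miner.py (behavior) can be illustrated with a toy synapse. Every name and field below is hypothetical; the repository's actual classes and fields are not reproduced in this profile.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical wire format -- the real definition would live in
# compute/protocol.py of the published repository.
@dataclass
class InferenceQuery:
    prompt: str
    model: str
    max_tokens: int = 256
    response: Optional[str] = None  # filled in by the miner

def miner_forward(query: InferenceQuery) -> InferenceQuery:
    """Stand-in for a miner's forward pass (cf. neurons/miner.py):
    run the model and attach the output before returning the synapse."""
    query.response = f"[{query.model}] echo: {query.prompt}"  # placeholder inference
    return query

answered = miner_forward(InferenceQuery(prompt="hello", model="llama-3-8b"))
```

The point of the pattern is that the protocol object travels validator → miner → validator, with the miner only filling in the response fields, so validators can score the returned synapse without trusting anything else about the miner.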

Data Flows and Integrations:

Customers interact via an OpenAI-compatible endpoint, which charges either TAO/ALPHA or fiat. Internally, calls to the API trigger inference jobs on the validators, which in turn query the chosen miners’ GPUs. The compute software notes integration points with cloud GPU platforms (Runpod, AWS Lambda, etc.) for a composable infrastructure. The on-chain component handles token staking/burn, mining rewards, and possibly attestation oracles for green power. The deployment likely uses Docker/Kubernetes under the hood (common in Bittensor subnets).

WHO

Team Info

Publicly, Green Compute is represented by Josh Riddett, who launched the project. In interviews he describes himself as a UK-based GPU infrastructure entrepreneur with experience building and selling GPU infrastructure in the UK since 2017. Josh is effectively the founder/CEO of the subnet. The development appears to involve contributors clustered under the GitHub organization commune-ai. According to Green Compute’s roadmap, the team launched a testnet in Feb 2026 and mainnet in April 2026.

FUTURE

Roadmap

Green Compute has publicly announced a very tight initial roadmap. The official timeline shows a testnet going live in February 2026 and a full mainnet launch in April 2026. These milestones have been met: as of April 2026 the subnet is live on mainnet with real inference jobs being processed. In the near term, the team seems focused on growing the network and community. The long-term vision is clear from their statements: a global distributed compute network using stranded renewable energy. For example, the founder states the goal is to bring data centers to renewable sites and turn that stranded power into AI compute. This suggests the fully-realized goal is a world-wide platform where all AI workloads run carbon-neutrally on SN110. Recent updates have been mostly launch-related: on April 22, 2026 the subnet was profiled on the Inside Bittensor podcast, officially announcing Green Compute’s mission and launch.
