With the number of new subnets being added, it can be hard to keep information up to date across all of them, so the data shown here may occasionally lag slightly.

Subnet 27

Nodexo


ABOUT

What exactly does it do?

Nodexo (formerly known as “NI Compute” on Bittensor’s Subnet 27) is a decentralized GPU compute marketplace on the Bittensor network. It lets GPU owners contribute their hardware to a blockchain-based cloud and earn rewards, while clients rent on-demand compute power. Validators in the network run Proof-of-GPU benchmarks to measure each miner’s GPU performance and allocate jobs accordingly. In effect, Nodexo “decentralizes computing resources by combining siloed pools of compute on a blockchain to be validated and accessed trustlessly”, matching the project’s stated vision of providing “direct access to enterprise-grade GPU resources… but delivered through a decentralized network”. In short, Nodexo turns GPU compute into a tradable on-chain commodity, connecting supply and demand for AI workloads.

Nodexo operates as a blockchain-backed GPU rental platform. It allows anyone with idle GPUs (“miners”) to contribute their hardware to a shared pool and earn TAO rewards based on performance, while clients (“renters”) can instantly launch GPU instances through the Nodexo service. Validators in the network continually benchmark and score these GPUs (using the Proof-of-GPU mechanism) to ensure high-performance hardware is fairly rewarded. Key functions include:

Decentralized GPU Marketplace: Provides a peer-to-peer cloud where GPU providers list resources and customers rent compute, all managed on-chain for trustless transparency.

Performance-based Incentives: Uses real-time GPU benchmarking (TFLOPS tests, etc.) to verify miner hardware and dynamically set rental pricing and rewards.

Open Access: Any participant can join permissionlessly. High-end GPUs (H100/A100) earn higher scores, but even smaller rigs can join, ensuring inclusivity across hardware tiers.

Composable AI Infrastructure: By linking GPUs globally, Nodexo effectively forms a decentralized “supercomputer” that AI developers can tap into without relying on a single cloud provider.

In summary, Nodexo’s subnet provides scalable, on-demand GPU computing without centralized bottlenecks, aligning with Bittensor’s goal of tokenizing compute power.
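The performance-based scoring described above can be illustrated with a minimal, hypothetical benchmark: time a dense matrix multiplication and convert the elapsed time into a FLOPS figure. This is a CPU/NumPy stand-in for illustration only; it is not Nodexo’s actual Proof-of-GPU implementation, which runs on the miner’s GPU hardware.

```python
import time
import numpy as np

def benchmark_gflops(n: int = 512, runs: int = 3) -> float:
    """Estimate sustained GFLOPS via dense matrix multiplication.

    Hypothetical sketch of a Proof-of-GPU-style test: a real
    validator would execute this on the miner's GPU (e.g. via CUDA),
    not on the CPU with NumPy as shown here.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    flops_per_run = 2 * n ** 3  # a matmul does ~n^3 multiply-adds
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        a @ b  # the timed workload
        best = min(best, time.perf_counter() - start)
    return flops_per_run / best / 1e9

score = benchmark_gflops()
```

A higher score would translate into higher rewards and rental pricing for that miner; taking the best of several runs reduces timing noise.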

 

PURPOSE

What exactly is the 'product/build'?

Nodexo is delivered as a full-stack cloud platform combining blockchain, web services, and GPU orchestration. Its architecture follows a three-tier model:

Validator Layer: Special “validator” nodes continuously run the Proof-of-GPU engine to benchmark miner hardware and calculate performance scores. These validators post results to the Bittensor Subnet 27 blockchain, enforcing a trustless consensus on GPU quality.

Miner Layer: GPU providers run miner software on their machines. Each miner registers its GPU instance, runs containerized AI workloads for renters, and reports performance data. The miner software manages Docker containers or Kubernetes pods to host user jobs. Miners join the Bittensor P2P network (Axon/Dendrite) to communicate with validators and the allocation system.

Resource Allocation API / Platform: A RESTful API and web console (the Nodexo Cloud UI) handle client requests. Developers use the Nodexo app or CLI to browse available GPUs, submit jobs, and retrieve results. The resource allocation service matches incoming compute jobs to free GPUs based on validator scores and usage. This layer enforces high availability and health checks on allocations. In practice, Nodexo provides a web console (e.g. app.neuralinternet.ai) where users “Launch a Rental” or “Become a Provider” through an intuitive interface.
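As a rough sketch of the allocation step described above, matching incoming jobs to free GPUs by validator score, consider this hypothetical greedy matcher. The names (`Gpu`, `allocate`) are illustrative only, not Nodexo’s real API, and the real service also weighs usage, health checks, and pricing.

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    miner_id: str
    score: float        # validator-assigned performance score
    allocated: bool = False

def allocate(job_count: int, pool: list) -> list:
    """Greedy allocation: assign jobs to the highest-scoring free GPUs.

    Hypothetical sketch of the resource-allocation layer; returns the
    miner IDs chosen for the requested number of jobs.
    """
    free = sorted((g for g in pool if not g.allocated),
                  key=lambda g: g.score, reverse=True)
    chosen = free[:job_count]
    for g in chosen:
        g.allocated = True  # mark as busy so later jobs skip it
    return [g.miner_id for g in chosen]

pool = [Gpu("miner-a", 92.5), Gpu("miner-b", 78.0), Gpu("miner-c", 88.1)]
print(allocate(2, pool))  # → ['miner-a', 'miner-c']
```

Sorting by score first is what ties the allocation layer back to the validators: hardware that benchmarks better is rented out first.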

In terms of technical stack, the initial platform was built as a microservices architecture on Google Cloud Platform. The front end uses Angular/TypeScript, the backend is Node.js, and Supabase was chosen for the database. This design was selected explicitly for scalability and to allow potential open-source collaboration. All components connect to Bittensor’s blockchain interface (Subtensor) for on-chain governance of resources. In essence, Nodexo’s product is a turnkey GPU-cloud platform: it automatically benchmarks new GPUs, packages them into Docker images (with AI tools pre-installed), and handles billing and access control. By automating GPU validation and rental management, it makes leasing hardware as effortless as clicking “deploy”.

 

WHO

Team Info

Nodexo is developed by the Neural Internet team (also called the Nodexo core team), a tech-focused DAO working on Bittensor infrastructure. Key figures include:

Hansel Melo – Founder & CEO of Neural Internet. He conceived the Nodexo vision of a decentralized cloud. (As noted in an engineering case study, Hansel is “Founder of Neural Internet.”)

Donald J. Milligan (Don Milligan) – Co-Founder & CTO. Don is a seasoned cloud architect who led the platform’s development. He joined the project as a fractional CTO in late 2024 and became co-founder in April 2025. Don manages the technical roadmap, validation algorithms, and system reliability.

These two form the core leadership. In late 2024, Neural Internet also partnered with Cloud Pathfinder Services for the initial implementation of Nodexo’s MVP. (Don Milligan’s firm did the first six-month build, delivering the platform on schedule.) Other team roles (developers, devops, model specialists) are not publicly named, but the project is open source and encourages community contributions via GitHub. The team is currently organizing as a DAO under the Neural Internet banner, aiming to scale up development through token incentives and public bounties.

Hansel Melo – Co-Founder

Gunner McLeod – Co-Founder

Arthur Simonian – Co-Founder

Adrian Walker – Co-Founder

Angel Rivas – Co-founder

Matan K – Co-Founder

Felix Peterson – Design and Product Architecture

Alex Kiriakides – Product and Financial Analyst

Douglas Albert – Product Development

Ibtehaj Khan – ML Engineer

Saram Hai – ML Engineer

 

FUTURE

Roadmap

Nodexo’s rollout and future plans have been communicated through release notes and updates:

October 2024: Public MVP Launch – The Nodexo platform went live (beta) on Oct 8, 2024. Early users could already rent GPUs and stake them on Bittensor.

Late 2024: Proof-of-GPU v2 & Core Setup – The team introduced the v2 PoG benchmarking, ensuring accurate hardware scoring. Around the same time, Don Milligan was brought on as CTO (Dec 2024). Infrastructure stability became a focus (high availability, firewall rules, etc.).

2025: Reliability and Growth – Throughout 2025 the roadmap emphasizes network reliability and scale. Plans include high-availability upgrades and enterprise-grade GPU offerings. The company is in a VC funding phase and, according to internal updates, is preparing a marketing push in Q4 2025. (A corporate rebranding initiative to “Nodexo” was slated for completion in 2025.) In parallel, the team continues iterating on the miner/validator code (with frequent open-source updates on GitHub).

2026: Feature Expansion – A major upcoming feature is a model inference endpoint service. The Nodexo site explicitly advertises “Inference: Deploy your models with low-latency endpoints…” as a coming feature in Q4 2026. This indicates plans to let users not only rent raw GPUs but also host APIs or applications on the decentralized cloud. Beyond that, future versions of Proof-of-GPU and dynamic tokenomics (TAO emission models) are under development, aligning SN27’s incentives with broader Bittensor updates.

In summary, Nodexo’s near-term roadmap is to solidify a production-grade network (with robust SLAs and ease-of-use) and then layer on advanced services (like inference endpoints) to become a full-featured decentralized GPU cloud platform. All development is shared via the official docs and GitHub, so the roadmap evolves in the open as the platform matures.

 

NEWS

Announcements
