With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 72

StreetVision


ABOUT

What exactly does it do?

StreetVision is a specialized Bittensor subnet for analyzing crowdsourced street-level imagery to improve mapping and autonomous driving (Physical AI). It ingests 360° video and images from NATIX’s global camera network and outputs refined AI models and map insights. In NATIX’s words, StreetVision “combines NATIX’s curated real-world data with Bittensor’s crowdsourced intelligence” to create a “trustless, ever-evolving ecosystem” for autonomous vehicles and smart maps.

The initial focus is real-time roadwork detection (for navigation safety and map updates), but it will quickly expand to other infrastructure use cases (potholes, road signs, litter, etc.) and scenario classification (edge-case driving scenarios). As NATIX explains, StreetVision ingests live street imagery, continuously trains better models via miner competition, and feeds those models back to edge devices (smartphones/dashcams) for real-time analysis. In short, it uses decentralized AI to turn NATIX’s massive crowdsourced map data into actionable models for safer, more up-to-date maps and autonomous driving.

PURPOSE

What exactly is the 'product/build'?

StreetVision follows a decentralized data pipeline on Bittensor with distinct roles for data ingestion, filtering, validators, and miners. In practice, NATIX’s system works as follows:

  • Data Collection: Edge devices (smartphones running the Drive& app and Tesla cars with the VX360 add-on) continuously capture street-level imagery. These 360° videos are streamed or uploaded into NATIX’s cloud.
  • Pre-Processing: Advanced AI filters on NATIX’s server anonymize and prune the raw images. PII (license plates, faces) is blurred, and irrelevant content is discarded. The filter specifically targets features of interest (e.g. roadworks, traffic signs) to send only relevant frames into the subnet.
  • Validator Neurons: Validator nodes on Bittensor receive the filtered images and generate inference tasks. They mix real and synthetic image challenges to test miners, scoring their outputs and ranking model accuracy. (NATIX continually enriches validators with new datasets and generated imagery to cover diverse street scenes.)
  • Miner Neurons: Miner nodes run machine-learning models (binary classifiers and object detectors) on each image to detect features like roadwork or other infrastructure changes. For example, miners output a score (0–1) per image indicating likelihood of roadwork. Miners are rewarded in TAO tokens based on prediction accuracy. Critically, miners must continuously improve their models: they publish their models (e.g. on Hugging Face) and after 90 days the reward factor decays to zero unless the model is retrained. This dynamic scheme (“submit-and-decay”) forces an ongoing cycle of model refinement.
  • Decentralized Refinement: As miners compete, the best models emerge and are incrementally more accurate. These updated models are then deployed back to NATIX’s edge network (smartphones/dashcams) for on-device inference, enabling efficient real-time detection.
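The “submit-and-decay” incentive above can be sketched as a reward multiplier that falls toward zero as a submitted model ages. This is a minimal illustration: the 90-day window comes from the text, but the linear decay shape is an assumption, not a confirmed detail of the subnet.

```python
from datetime import date

def reward_factor(submitted: date, today: date, window_days: int = 90) -> float:
    """Reward multiplier for a miner's model submission.

    Starts at 1.0 on the submission date and decays linearly to 0.0
    over `window_days` (assumed linear; the subnet's actual decay
    curve is not specified in the source).
    """
    age = (today - submitted).days
    if age <= 0:
        return 1.0
    return max(0.0, 1.0 - age / window_days)
```

Under this sketch, a model submitted on Jan 1 earns half its reward by mid-February and nothing after 90 days unless the miner retrains and resubmits it.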

 

The subnet runs under Bittensor’s incentive model: it issues dTAO tokens each block (around 2 per block) shared by miners, validators, and NATIX as owner. Validators must stake NATIX tokens (and hold Alpha tokens) to participate; miners currently have no staking requirement to encourage wide participation. Together, this framework encourages a global community of miners and validators to build and refine the vision models while NATIX supplies the raw data.
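The per-block emission split described above can be sketched as follows. The ~2-tokens-per-block figure comes from the text; the 18%/41%/41% owner/validator/miner split is a common Bittensor subnet default and is an assumption here, not a NATIX-confirmed figure.

```python
def split_emission(per_block: float = 2.0,
                   owner_cut: float = 0.18,
                   validator_cut: float = 0.41,
                   miner_cut: float = 0.41) -> dict:
    """Split one block's alpha emission among the subnet owner,
    validators, and miners.

    The cut percentages are an assumed (commonly used) split,
    not figures published by NATIX.
    """
    assert abs(owner_cut + validator_cut + miner_cut - 1.0) < 1e-9
    return {
        "owner": per_block * owner_cut,
        "validators": per_block * validator_cut,
        "miners": per_block * miner_cut,
    }
```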

 

Products and Applications

StreetVision sits atop NATIX’s DePIN platform of crowdsourced mapping. Key NATIX offerings feed into it:

  • Drive& App: A smartphone app that rewards ordinary drivers (100K+ users) for collecting street imagery and map data. Drive& features on-device AI (via mobile NPUs) to detect map attributes (traffic signs, signals, etc.) in real time, offloading heavier analysis to the cloud.
  • VX360 Device: A plug-in 360° camera module for Tesla vehicles (built with Grab) that captures four-angle video using the car’s existing cameras. This provides high-quality, panoramic street data without installing new hardware. VX360 launched in May 2025, rapidly collecting test data (2,000+ driving hours in 10 days).
  • NATIX Network: A Solana-based blockchain network (a DePIN) that underpins data tokenomics. Contributors earn $NATIX tokens by uploading data. StreetVision itself is the decentralized AI subnet built on Bittensor, using NATIX data.

 

StreetVision Platform: The subnet provides a backend AI service for smart mobility. Its applications include:

  • Real-time Map Updates: Models from StreetVision can automatically update map features (e.g. flagging new roadwork zones) in consumer map services.
  • Autonomous Driving Support: The subnet generates realistic driving scenarios (including rare edge cases) and classification data, helping AV companies train and validate their systems against real-world conditions.
  • Infrastructure Monitoring: Beyond roadwork, StreetVision will evolve into a platform detecting potholes, signage, road debris, and other city infrastructure issues, providing city planners or logistics companies actionable alerts.
  • Model Marketplace: As NATIX notes, detection models (e.g. for roadwork) will be published and could be monetized through licensing, effectively making the subnet a marketplace for AI vision models.

 

In essence, StreetVision is not a consumer app, but an AI platform: it processes NATIX’s live data to produce intelligent insights and models that feed back into NATIX’s ecosystem of apps and devices.

 

Architecture and Technical Details

Technically, StreetVision spans from edge hardware through Bittensor nodes:

  • Infrastructure: The subnet runs on the Bittensor network (a decentralized compute protocol). NATIX’s own infrastructure is Solana-based – a blockchain that records $NATIX token flows and powers its IoT devices. In practice, images are ingested by a NATIX cloud server (the “Applications” tier) that acts as a gateway. This server samples incoming video, performs anonymization (blurring sensitive data), and load-balances tasks to Bittensor nodes. It also aggregates miner outputs into training datasets for retraining.
  • Data Sources: The system uses NATIX’s proprietary datasets of street imagery (over 170M km collected worldwide) plus real-time feeds. Devices include Android/iOS phones (Drive&), dashcams, and Tesla vehicles with VX360. These generate high-volume 360° video streams of streets.
  • AI Models: Miners run neural-network models (e.g. convolutional classifiers and object detectors) on input images. The GitHub code suggests miners use Python frameworks (likely PyTorch/TensorFlow, as is common on Bittensor). For example, miners output a [0,1] score for “roadwork present”. Models are stored externally (e.g. Hugging Face); a submitted model is valid for 90 days before requiring updates. Validators maintain a dynamic challenge set, continually adding new real images and GAN-generated samples to cover diverse conditions.
  • Hardware: At the edge, NATIX leverages commodity hardware: Drive& runs on standard smartphones with AI-accelerator chips (enabling on-device inferences for signs, lights, etc.). The VX360 uses Tesla’s built-in cameras. On the back end, validators and miners typically run on GPUs or cloud servers. Yuma (DCG) has provided AI development support, implying professional hardware and ML pipelines under the hood.
  • Privacy/Filtering: Only non-sensitive data is fed into Bittensor. The pipeline explicitly filters out license plates and faces. This ensures compliance with privacy norms as models train.
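The validator-side scoring implied above (miners return a [0,1] roadwork score per image; validators rank model accuracy) can be sketched as plain accuracy over a labeled challenge set. This is an illustrative metric only: the subnet’s actual scoring formula is not published in this document, and the 0.5 threshold is an assumption.

```python
def score_miner(predictions: list[float], labels: list[int],
                threshold: float = 0.5) -> float:
    """Score a miner's [0,1] roadwork predictions against ground-truth
    labels (1 = roadwork present, 0 = absent) as simple accuracy.

    Illustrative only: the subnet's real reward formula and decision
    threshold are assumptions, not documented specifics.
    """
    assert len(predictions) == len(labels)
    correct = sum(
        int(p >= threshold) == y for p, y in zip(predictions, labels)
    )
    return correct / len(labels)
```

A validator would run this over its mixed real/synthetic challenge set and rank miners by the resulting scores.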

 

Taken together, StreetVision is a physical-to-digital AI stack: real-world video → cloud preprocessing → Bittensor validators/miners → refined models → edge inference.

 

 


WHO

Team Info

NATIX is led by its founding team, supported by strategic partners in AI and mobility:

Alireza Ghods, Ph.D. – Co-founder & CEO of NATIX. (10+ years in IoT, mapping, autonomous driving).

Lorenz Muck – Co-founder & CPO. (VR/Computer Vision product expert).

Omid Mogharian – Co-founder & CTO. (15+ years in software and blockchain).

Dr. Ulrich Lages – Core contributor (Automotive Lead). (LiDAR pioneer with 30+ years in AI/AV).

 

Partnering organizations:

Yuma (DCG) – An AI infrastructure company incubated by Digital Currency Group. Yuma guided StreetVision’s launch, contributing AI modeling and go-to-market support. (Yuma is explicitly credited for incubating Subnet 72.)

BitMind – Open-source AI architecture (cited as inspiration for the subnet).

Grab – Southeast Asian tech “super-app”; collaborated on the VX360 hardware. Grab is also an early customer (using NATIX data for maps).

Solana Labs – Underlying blockchain (NATIX runs on Solana).

Bittensor Foundation – The core decentralized AI network. (Subnet is part of Bittensor’s ecosystem; validators are part of Bittensor community).

These affiliations and personnel are publicly verifiable from NATIX’s site and press. For instance, NATIX’s “About” page lists the founders and core contributors, and the StreetVision announcement explicitly names Yuma/DCG involvement and Grab’s role.

 


FUTURE

Roadmap

2020: NATIX Network founded (Hamburg); launched Drive& app to crowdsource mapping data.

2023–2024: 250K+ drivers joined; the network mapped 170M+ km of roads. Platform matured to include wearable/automotive devices.

Nov 2024: Launched the VX360 Tesla camera device and companion app. Secured first enterprise data client (large geospatial firm). Burned 33M+ $NATIX tokens as part of tokenomics. Achieved a “Network Laps” milestone of 100M km, rewarding users.

Q1 2025: Network grew to ~244K users and ~153M km mapped. NATIX relaunched its Network Laps V2 reward program and burned another 38M+ $NATIX. New partnerships announced (e.g. with E-Money, RepairPal).

May 2025: StreetVision Subnet (Subnet 72) launched on Bittensor. This marked the beginning of on-chain training of the roadwork-detection model. Initial tasks (roadwork classification) went live to test the system.

Current (mid-2025): StreetVision is processing NATIX’s live data feeds. Grab is already a paying customer integrating NATIX data. The team is adding validators and encouraging miners (participation is currently open with no staking requirement). Several autonomous-driving companies are in talks to use the data and models. NATIX publishes regular updates (monthly “Progress” blogs and AMAs) hinting at larger announcements ahead.

Upcoming: The subnet will broaden its scope. As NATIX and its press note, next targets include pothole detection, sign recognition, litter identification, and infrastructure monitoring. They will also enable scenario classification of driving videos for AV simulation. In parallel, NATIX plans to monetize these models and insights (via protocol revenue) to support token value. Technical enhancements like model updates on Hugging Face and further decentralization (more validators) are ongoing. Finally, NATIX is working with top AV research labs to deliver “simulation-to-reality” products in the coming months.

Milestones to watch: deployment of production-grade roadwork models, rollout of additional use-cases (potholes, etc.), and any official partnerships with autonomous vehicle or mapping firms. NATIX’s public communications (blog and social media) regularly outline these plans, which align with their vision of a continuously improving, decentralized street-vision AI network.

 


NEWS

Announcements
