Subnet 21

Any-to-Any

ABOUT

What exactly does it do?

The Any-to-Any Subnet, developed by OMEGA Labs on the Bittensor blockchain, represents a decentralized, open-source AI initiative. Their mission is to pioneer state-of-the-art multimodal any-to-any models, drawing in top AI researchers globally to leverage Bittensor’s incentivized intelligence platform for training and compute contributions. Their vision includes establishing a self-sustaining research lab where participants are rewarded for advancing AI capabilities through computing resources and research insights.

The dataset boasts over 1 million hours of footage and 30 million video clips, with expanding coverage of 50+ scenarios and 15,000+ action phrases. That scale places it among the most significant AI training datasets, rivaling even prominent collections like YouTube-8M's 8 million video IDs. Powered by high-quality data hosted on Hugging Face, the initiative marks a major advancement in decentralized AI on the Bittensor network and solidifies OMEGA Labs' position at the forefront of AI research and development.

PURPOSE

What exactly is the 'product/build'?

Subnet 21 is poised to redefine AI training with the world's largest multimodal dataset: over 30 million videos fueling advanced models, powered by $TAO. By leveraging the decentralized Bittensor network, OMEGA Labs stands at the forefront of open AGI research, providing multimodal data at an unparalleled scale and taking aim at challenges such as the ARC-AGI benchmark. Their goal is to surpass traditional LLMs and closed-source initiatives while fostering pioneering open-source innovation.

Multimodal Approach: A2A integrates all modalities (text, image, audio, video) concurrently, driven by their belief that true intelligence emerges from associative representations at the intersection of these modalities.
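
To make the early-fusion idea concrete, here is a minimal sketch (not OMEGA's actual architecture; the module names and dimensions are illustrative assumptions) in which each modality is projected into a shared token space and a single transformer attends across the concatenated sequence:

```python
import torch
import torch.nn as nn

class EarlyFusionBackbone(nn.Module):
    """Toy early-fusion model: per-modality encoders project text, image,
    audio, and video features into one shared token space, and a single
    transformer attends over the concatenated sequence."""
    def __init__(self, d_model=512,
                 dims={"text": 768, "image": 1024, "audio": 128, "video": 2048}):
        super().__init__()
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        # Learned type embeddings tell the transformer which modality each token came from.
        self.type_emb = nn.ParameterDict({m: nn.Parameter(torch.zeros(d_model)) for m in dims})
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, features):
        # features: dict mapping modality name -> (batch, seq_len, feat_dim) tensor
        tokens = [self.proj[m](x) + self.type_emb[m] for m, x in features.items()]
        fused = torch.cat(tokens, dim=1)   # one joint sequence across all modalities
        return self.encoder(fused)         # cross-modal attention happens here

model = EarlyFusionBackbone()
out = model({
    "text":  torch.randn(2, 16, 768),
    "image": torch.randn(2, 49, 1024),
    "audio": torch.randn(2, 32, 128),
    "video": torch.randn(2, 64, 2048),
})
print(out.shape)  # torch.Size([2, 161, 512])
```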

Unified Representation of Reality: The Platonic Representation Hypothesis suggests that as AI models scale, they converge toward a shared underlying representation of reality. By jointly modeling all modalities, A2A models can capture this shared structure, potentially accelerating progress toward more generalized AI.
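
One way to probe such convergence empirically is a mutual nearest-neighbour alignment score in the spirit of the Platonic Representation Hypothesis paper: embed the same paired items with two different models and measure how much their neighbourhood structure overlaps. The toy sketch below makes that idea concrete; the function name and synthetic data are illustrative assumptions:

```python
import numpy as np

def mutual_knn_alignment(emb_a, emb_b, k=10):
    """Fraction of shared k-nearest neighbours between two embedding spaces,
    computed over the same N paired items (e.g. an image and its caption).
    Higher values suggest the two models induce similar geometry."""
    def knn_indices(emb):
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = emb @ emb.T
        np.fill_diagonal(sims, -np.inf)          # exclude self-matches
        return np.argsort(-sims, axis=1)[:, :k]  # indices of the k nearest neighbours
    nn_a, nn_b = knn_indices(emb_a), knn_indices(emb_b)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlap))

# Toy usage: rows are paired samples embedded by two different models.
rng = np.random.default_rng(0)
vision = rng.normal(size=(500, 64))
text = vision @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(500, 32))
print(mutual_knn_alignment(vision, text))  # high when the two geometries agree
```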

Decentralized Data Collection: Through their SN24 data collection, they leverage a continuous flow of data that mirrors real-world demand for training and evaluation. Refreshing topics based on data gaps helps them shore up underrepresented data classes, supporting robust training via self-play among the subnet's top checkpoints.
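
A gap-driven refresh can be as simple as weighting the next collection round inversely to current topic coverage. The snippet below is purely illustrative of that idea (the topic names and counts are invented), not SN24's actual mechanism:

```python
import numpy as np

def refresh_topic_weights(topic_counts, temperature=1.0):
    """Sampling weights for the next collection round: topics with the least
    coverage get the highest probability, so under-represented classes fill in."""
    counts = np.array(list(topic_counts.values()), dtype=float)
    scarcity = 1.0 / (counts + 1.0)              # +1 avoids division by zero
    logits = np.log(scarcity) / temperature
    probs = np.exp(logits - logits.max())        # stable softmax over scarcity
    return dict(zip(topic_counts, probs / probs.sum()))

coverage = {"cooking": 120_000, "surgery": 900, "welding": 4_500}
print(refresh_topic_weights(coverage))  # 'surgery' dominates the next round
```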

Incentivized Research: With Bittensor’s model for incentivizing intelligence, world-class AI researchers and engineers can be permissionlessly compensated for their efforts and have their compute subsidized according to their productivity, which they believe fosters open-source innovation.

Subnet Orchestrator: Their Bittensor Subnet Orchestrator integrates specialized models from other subnets, serving as a high-bandwidth router. With the leading open-source multimodal model at its core, the platform enables future AI projects to bootstrap their expert models from its rich multimodal embeddings.
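
An embedding-based router of this kind can be sketched in a few lines: score each expert subnet's capability embedding against the query and dispatch to the best match. The expert names and vectors below are invented for illustration; the real Orchestrator's routing logic is not documented here:

```python
import numpy as np

def route(query_emb, experts, top_k=1):
    """Pick the expert subnet(s) whose capability embedding best matches the
    query, by cosine similarity."""
    names = list(experts)
    mat = np.stack([experts[n] for n in names])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = mat @ q
    order = np.argsort(-scores)[:top_k]
    return [(names[i], float(scores[i])) for i in order]

rng = np.random.default_rng(1)
experts = {"sn:image-gen": rng.normal(size=8), "sn:transcribe": rng.normal(size=8)}
print(route(rng.normal(size=8), experts))
```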

Public-Driven Capability Expansion: They prioritize learning capabilities based on public demand through decentralized incentives.

Beyond Transformers: They integrate cutting-edge architectures such as early-fusion transformers, diffusion transformers, liquid neural networks, and Kolmogorov-Arnold Networks (KANs) to expand their models' capabilities beyond traditional transformer frameworks.
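
As a flavor of the "beyond transformers" direction, here is a toy Kolmogorov-Arnold layer: instead of a weight matrix followed by a fixed activation, every input-output edge carries its own learnable one-dimensional function (parameterized here with a Fourier basis rather than the B-splines of the original KAN paper). This is a self-contained illustration, not OMEGA's implementation:

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """Toy Kolmogorov-Arnold layer: output_j = sum_i phi_ji(x_i), where each
    edge function phi_ji is a learnable Fourier series."""
    def __init__(self, in_dim, out_dim, n_freqs=8):
        super().__init__()
        self.freqs = torch.arange(1, n_freqs + 1).float()
        self.a = nn.Parameter(torch.randn(out_dim, in_dim, n_freqs) * 0.1)
        self.b = nn.Parameter(torch.randn(out_dim, in_dim, n_freqs) * 0.1)

    def forward(self, x):                      # x: (batch, in_dim)
        arg = x.unsqueeze(-1) * self.freqs     # (batch, in_dim, n_freqs)
        sin, cos = torch.sin(arg), torch.cos(arg)
        # Sum each edge function's Fourier terms, then sum over input edges i.
        return (torch.einsum("bik,oik->bo", sin, self.a)
                + torch.einsum("bik,oik->bo", cos, self.b))

layer = ToyKANLayer(4, 2)
print(layer(torch.randn(8, 4)).shape)  # torch.Size([8, 2])
```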

WHO

Team Info

awaiting data

FUTURE

Roadmap

Phase 1: Foundation (Remainder of Q2 2024)

  • Develop a robust validation mechanism that incentivizes deep video understanding.
  • Create the initial checkpoint demonstrating state-of-the-art (SOTA) image and video comprehension using the ImageBind + Llama-3 architecture as a proof of concept (see the sketch after this list).
  • Expand the validation mechanism to support extensive architecture exploration and new multimodal tokenization methods.
  • Recruit over 20 top AI researchers from leading labs and open-source initiatives.
  • Expand SN24 data collection to include multimodal websites like Reddit and blogs, alongside synthetic data pipelines.
  • Launch the OMEGA Focus screen recording app to gather comprehensive data on long-term human workflows, addressing the hallucination and distraction issues seen in closed-source LLMs.
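
As context for the ImageBind + Llama-3 proof of concept above: a common recipe for such checkpoints is to keep both pretrained models frozen and train only a small projection module that maps the encoder's embeddings into "soft tokens" in the LLM's input space. The sketch below assumes that pattern; the dimensions match published model sizes (ImageBind's 1024-d joint space, Llama-3-8B's 4096-d hidden size), but whether OMEGA's checkpoint is wired exactly this way is not stated here:

```python
import torch
import torch.nn as nn

IMAGEBIND_DIM = 1024   # ImageBind joint embedding size
LLAMA_HIDDEN = 4096    # Llama-3-8B hidden size

class VisionToLLMProjector(nn.Module):
    """Hypothetical projection module: maps a frozen multimodal embedding into
    a few 'soft tokens' prepended to the language model's input embeddings."""
    def __init__(self, n_tokens=4):
        super().__init__()
        self.n_tokens = n_tokens
        self.proj = nn.Sequential(
            nn.Linear(IMAGEBIND_DIM, LLAMA_HIDDEN),
            nn.GELU(),
            nn.Linear(LLAMA_HIDDEN, n_tokens * LLAMA_HIDDEN),
        )

    def forward(self, emb):                       # emb: (batch, IMAGEBIND_DIM)
        out = self.proj(emb)                      # (batch, n_tokens * LLAMA_HIDDEN)
        return out.view(-1, self.n_tokens, LLAMA_HIDDEN)

projector = VisionToLLMProjector()
video_emb = torch.randn(2, IMAGEBIND_DIM)         # stand-in for ImageBind output
soft_tokens = projector(video_emb)                # (2, 4, 4096), ready to prepend
text_embeds = torch.randn(2, 32, LLAMA_HIDDEN)    # stand-in for token embeddings
llm_input = torch.cat([soft_tokens, text_embeds], dim=1)
print(llm_input.shape)  # torch.Size([2, 36, 4096])
```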

Phase 2: Fully Multimodal (Q3 2024)

  • Develop the first any-to-any checkpoint that inherently models all modalities, surpassing other open-source models on multimodal and reasoning benchmarks.
  • Design a user-friendly interface for miners and validators to interact with top models on the subnet.
  • Onboard an additional 50 top AI researchers from prominent labs and open-source research communities.
  • Publish a research paper detailing A2A’s architecture, incentive model, and performance.
  • Release open-source multimodal embedding models based on the top A2A checkpoint’s internal embedding space for external labs to use in their models.
  • Implement a framework capable of automatically evaluating models and assets generated by other Bittensor subnets, allowing top models to engage through tool usage and native communication via projection modules.

Phase 3: Exponential Open Research Progress (Q4 2024)

  • Develop the first any-to-any OSS checkpoint surpassing all closed-source SOTA general intelligence models.
  • Forge partnerships with AI labs, universities, and industry leaders to drive adoption.
  • Expand the Bittensor model evaluation and routing framework into a comprehensive platform for assessing open-source and closed-source checkpoints and APIs.
  • Introduce task-driven learning, with OMEGA Labs regularly curating high-quality tasks for model training.
  • Begin creating an innovative “online” validation mechanism rewarding miners for developing autonomous models capable of real-world task completion.
  • Utilize the top checkpoint to enhance the multimodal intelligence features of the OMEGA Focus app.

Phase 4: Agentic Focus (Q1 2025)

  • Launch an “online” validation mechanism focused on long-term task completion by autonomous agents.
  • Achieve SOTA performance benchmarks for agent-based tasks.
  • Integrate OMEGA Focus to provide users with OMEGA digital twin companions.
  • Establish an app store for applications powered by A2A, leveraging their open-source models.
  • Expand the user base to over 10 million with the OMEGA Focus app.

OMEGA A2A aims to redefine the AI landscape by leveraging Bittensor’s incentivized intelligence model and attracting top AI researchers worldwide. Their mission focuses on:

  • Advancing fully multimodal, any-to-any models that surpass all existing open-source solutions.
  • Establishing an AI gateway framework to seamlessly integrate and evaluate models across the Bittensor ecosystem and beyond.
  • Implementing task-driven learning and agent-focused validation to develop models capable of complex real-world tasks.
  • Enhancing the OMEGA Focus app with state-of-the-art multimodal intelligence and personalized digital twin companions.

Moving forward, they plan to explore decentralized infrastructure and governance to fully democratize the AI ecosystem. Their research will investigate innovative architectures beyond transformers and attention mechanisms, pushing the boundaries of AI capabilities.

By hyper-connecting with Subnet 24, OMEGA A2A accesses the diverse, high-quality data crucial to its models' development and versatility. They plan to implement monetization strategies that sustain and expand the ecosystem for long-term viability and success.

Through the collaborative efforts of their decentralized OMEGA A2A research collective, they aim to demonstrate the vast potential of Bittensor’s incentivized intelligence model and establish leadership in the AI research community and beyond.

TOKEN

No Token

This subnet does not currently have its own token.

NEWS

Announcements

MORE INFO

Useful Links