With the number of new subnets being added, it can be hard to keep information up to date across all of them, so some data may be slightly out of date from time to time

Subnet 24

Quasar


ABOUT

What exactly does it do?

Quasar is a decentralized AI subnet designed to overcome the context length limitation in modern AI language models. In essence, Quasar enables large language models (LLMs) to handle ultra-long context inputs, on the order of millions of tokens, without losing track of details or suffering performance degradation. Traditional transformer-based AIs struggle with memory: they can only “remember” a limited window of text (often a few thousand tokens) before context is forgotten or costs explode. Quasar tackles this AI memory problem head-on by fundamentally redesigning the model architecture and training approach. It introduces a new long-context foundation model that can ingest extremely large documents, codebases, or even entire book series, and answer questions or perform reasoning without forgetting earlier parts of the input. This means Quasar-powered models can, for example, read multiple books' worth of text and then answer a question about a detail in the third book, or analyze an entire code repository to locate a specific function: tasks that break the limits of normal LLMs.

Quasar achieves this by using a novel Continuous-Time Attention Transformer architecture with a linear-complexity attention mechanism instead of the standard quadratic-complexity attention of transformers. In practical terms, doubling the context length no longer quadruples the compute cost as it would in a regular transformer: Quasar’s attention scales roughly O(N) instead of O(N²). This allows it to handle extremely long inputs efficiently. The model effectively removes or rethinks positional embeddings (the part of a transformer that normally limits sequence lengths), which eliminates the fragility that appears when context lengths exceed the training size. Thanks to this approach, Quasar models can maintain 99.9% recall over massive sequences, meaning they retain almost all information even across millions of tokens of context. They can reason consistently at any depth without the usual collapse in coherence, and do so at a fraction of the computational cost per token of other long-context strategies.
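The O(N) versus O(N²) distinction can be made concrete with a toy kernelized linear-attention sketch. Quasar's exact mechanism is not spelled out here, so this is a generic illustration of the technique class, not the subnet's implementation; the feature map `phi` is an arbitrary choice:

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: materializes an N x N score matrix -> O(N^2)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (weights / weights.sum(axis=-1, keepdims=True)) @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized attention: only d x d summaries, so cost grows O(N)."""
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                 # (d, d) summary, independent of N
    Z = Qp @ Kp.sum(axis=0)       # per-query normalizer, shape (N,)
    return (Qp @ KV) / Z[:, None]

N, d = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, N, d)) * 0.1
out = linear_attention(Q, K, V)
print(out.shape)  # (4096, 64)
```

The key point is that `linear_attention` never builds the N x N weight matrix: the `(d, d)` summary `KV` has a fixed size, so doubling N only doubles the work.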

Furthermore, Quasar is not just a single model but a decentralized network of models and evaluators on Bittensor. It provides an evaluation subnet where independent miners (node operators) run long-context models and compete to solve tasks, while validators verify their outputs against benchmarks. This creates a self-improving ecosystem: models are continually tested on real-world long-text tasks and are rewarded in $TAO (the Bittensor token) based on their accuracy and ability to handle longer contexts. By leveraging this decentralized compute and incentive structure, Quasar can iteratively improve the state-of-the-art in long-context understanding. In summary, Quasar turns “memory” into a distributed, incentivized resource, allowing AI to remember and reason with vastly more information than ever before, all on a decentralized network.

 


PURPOSE

What exactly is the 'product/build'?

Quasar’s core product is twofold: (1) the Quasar foundation model series – long-context large language models engineered to process massive sequences of text – and (2) the Quasar subnet framework on Bittensor that trains and evaluates these models in a decentralized, competitive environment. Essentially, Quasar is a new AI model architecture plus a blockchain-based network that supports it. Below we break down its technical architecture and features:

Long-Context Transformer Architecture: The Quasar models use a custom transformer variant called Continuous-Time Attention Transformer which eliminates traditional positional embeddings and employs a linear-time attention mechanism. This innovation allows Quasar models to scale their context window from tens of thousands to millions of tokens without degradation. By removing the fixed position limitations that make other models “forget” or break beyond their trained context length, Quasar can handle entire books or large datasets in a single forward pass. Performance scales linearly with context length, avoiding the quadratic slowdown of standard transformers. In practice, Quasar’s design achieved a context window surpassing 10 million tokens in testing (effectively limitless context), with near-perfect information retention across that span. This means the model can maintain coherence and recall details even with extremely long inputs, a breakthrough in AI memory capability.

Optimized Memory and Computation: The architecture yields not only length extension but also efficiency gains. Quasar’s attention mechanism is optimized such that inference over long sequences incurs only about 1/10th the computational cost compared to naive dense-attention methods. It can recall details with 99.9% accuracy across its entire context window. Moreover, Quasar models are built to run on accessible hardware – they are optimized for inference on consumer-level GPUs despite their long context capability. For example, the team has released Quasar-2M-Base, an open-source model on Hugging Face with a 2 million token context window (26B parameters), demonstrating that even relatively small models can be modified for huge context lengths. Quasar’s techniques can be applied to other architectures as well; indeed, the subnet supports multiple model types to foster diversity (e.g. a 48B parameter Kimi-Linear model and an 80B Qwen3-Next model are also compatible participants). This flexibility shows that Quasar is as much a method as a single model – a method to extend memory and context in AI.
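To see why efficiency matters at this scale, a back-of-envelope KV-cache estimate for dense attention at a 2M-token context is telling. The layer and head counts below are assumed for illustration only, not Quasar-2M-Base's published configuration:

```python
def kv_cache_gib(tokens, layers=48, kv_heads=8, head_dim=128, bytes_per_elem=2):
    """fp16 KV cache: 2 tensors (K and V) per layer, per token."""
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_elem / 2**30

print(f"{kv_cache_gib(2_000_000):.0f} GiB")  # 366 GiB, far beyond one consumer GPU
```

Any architecture that avoids carrying a full dense cache at these lengths is what makes consumer-GPU inference plausible.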

Decentralized Training via Bittensor: Quasar is built on the Bittensor network, meaning it leverages a decentralized cluster of miners (compute nodes) around the world to train and improve the models. Instead of training in a single large lab, Quasar’s model improvements are “mined” by independent participants contributing GPU power on the subnet. This dramatically lowers the cost of model development. According to SILX, by using Bittensor’s distributed compute, Quasar brought pre-training costs down by 99.5% (on the order of <$50k, versus millions of dollars normally). Each miner runs a copy or variant of the Quasar model and continuously trains or fine-tunes it on tasks provided by the network. This not only decentralizes the computation but also makes the training process adversarial and competitive: many versions of the model compete, and only the best-performing ones earn rewards.
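As a quick sanity check on the quoted figure: a 99.5% reduction from an assumed ~$10M conventional pre-training budget (the baseline is our assumption, not SILX's stated number) lands at the sub-$50k order of magnitude claimed above:

```python
baseline_usd = 10_000_000              # assumed conventional pre-training cost
reduced_usd = baseline_usd * (1 - 0.995)
print(round(reduced_usd))  # 50000
```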

Miner-Validator Framework: The Quasar subnet introduces a robust evaluation framework to continually test and incentivize model performance. Here’s how it works:

  • Miners are nodes running long-context models (Quasar or other supported models). Their job is to answer queries or perform tasks that require processing long contexts (up to 2M tokens in current benchmarks). They must optimize for both accuracy (e.g. answering questions correctly) and speed (fast inference) to be competitive.
  • Validators are special nodes that generate tasks and evaluate miners’ responses. They pull real-world benchmark tasks from a suite called LongBench, then send a context (e.g. a long document or multiple documents) plus a question or prompt to the miners. The validators then score the responses using appropriate metrics (F1, exact match, ROUGE, etc., depending on the task). Importantly, the scoring also accounts for context length – longer tasks have a multiplier that boosts the reward for handling massive inputs. This means the network incentivizes models to tackle harder, longer-context problems. Validators reach consensus on scores and then automatically reward top-performing miners with $TAO tokens based on their accuracy. This creates a self-sustaining cycle: models that genuinely excel at long-context reasoning get more mining rewards, encouraging further improvements.
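A validator's scoring step, as described above, might look like the following sketch. Token-overlap F1 is a standard LongBench-style QA metric, but the multiplier tiers and constants here are hypothetical; the real values live in the subnet's open-source code:

```python
def length_multiplier(context_tokens):
    """Longer contexts earn a reward boost (hypothetical tiers)."""
    if context_tokens >= 1_000_000:
        return 2.0
    if context_tokens >= 100_000:
        return 1.5
    return 1.0

def f1_score(prediction, reference):
    """Token-overlap F1, a common long-document QA metric."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if not common:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def score_response(prediction, reference, context_tokens):
    """Base metric scaled by how long the task's context was."""
    return f1_score(prediction, reference) * length_multiplier(context_tokens)

print(score_response("Paris", "paris", 1_200_000))  # 2.0
```

The multiplier is what tilts emissions toward miners who take on the million-token tasks rather than farming short, easy ones.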

 

Benchmarking and Evaluation: Quasar’s “product” isn’t just the model, but also a comprehensive evaluation toolkit for long-context AI. The project assembled LongBench, a collection of real-world tasks that test an AI’s ability to handle and make sense of lengthy inputs. These include:

  • Long-document Question Answering: e.g. NarrativeQA (questions on long stories), Qasper (Q&A on academic papers), TriviaQA and others.
  • Summarization: e.g. GovReport (summarizing government reports), MultiNews (multi-document news summaries), QMSum (meeting transcript summarization).
  • Classification: e.g. TREC (question classification) and other multi-field QA tasks.

In addition, Quasar devised QuasarBench, a “needle-in-a-haystack” synthetic test to check whether a model can find specific information buried in a huge text (this evaluates absolute recall across the full context window). Together, these benchmarks ensure that Quasar models aren’t just theoretically capable of long context, but prove it on diverse tasks, from reading comprehension to summarization, while maintaining factual accuracy and coherence over long inputs. The network tracks these performances openly (with integration to tools like Weights & Biases for real-time monitoring), providing transparency into how each model is doing.
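A minimal needle-in-a-haystack probe in the spirit of QuasarBench can be sketched in a few lines. The filler text, needle sentence, and pass criterion below are all made up for illustration; QuasarBench's actual generation protocol is not described in this document:

```python
import random

def make_haystack(needle, filler_words=50_000, seed=0):
    """Bury one fact at a random depth inside filler text."""
    rng = random.Random(seed)
    words = ["lorem"] * filler_words
    depth = rng.randrange(filler_words)
    words.insert(depth, needle)
    return " ".join(words), depth / filler_words

def found(model_answer, secret):
    """Pass if the model's answer surfaces the buried fact."""
    return secret.lower() in model_answer.lower()

haystack, rel_depth = make_haystack("The passcode is 7319.")
print("7319" in haystack, 0.0 <= rel_depth < 1.0)  # True True
```

Sweeping `filler_words` and the insertion depth produces the familiar recall-vs-depth grid used to visualize where a model's memory starts to fail.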

 

Open Source and Integration: SILX AI has made Quasar’s developments open-source. The subnet’s code (miners, validators, training scripts) is publicly available on GitHub, and the model weights (e.g. Quasar-2M-Base) are released on Hugging Face for anyone to download and use. This open approach means developers can already experiment with Quasar’s long-memory model locally or integrate it into their own applications. Notably, Quasar’s long-context technique can be applied to any model architecture, so the team has also provided modified versions of popular open models with extended context (for instance, a custom 48B model “Kimi-Linear” with ~1M token context is part of the subnet). The aim is to foster an ecosystem where long-memory AI becomes widely accessible, not confined to big tech labs. Quasar’s tech allows running these models without exorbitant costs – you don’t pay more just because the model remembers more, as one commentator noted. This could be transformative for applications that need AI to maintain long-term state or knowledge, from lengthy research assistants to autonomous agents that must remember history over time.

In summary, Quasar is both a breakthrough AI model (solving long-term memory in language models) and a decentralized network build that harnesses global contributors to push that model’s capabilities further. It represents a new kind of AI product: one that is continuously evolving in public, mined and verified by a community rather than trained behind closed doors. The product is not a static model but an ongoing service of “infinite memory” AI accessible on the Bittensor network.

 


 

WHO

Team Info

Quasar is developed by SILX AI (SILX Labs), the organization behind Subnet 24. SILX is a startup focused on advanced AI research and “the future of synthetic intelligence,” and it spearheaded Quasar’s creation. Key team and supporters include:

Eyad Gomaa – CEO & Co-Founder (SILX): Eyad leads SILX AI as CEO, and he is the principal researcher driving new AI architectures for long-context “synthetic intelligence.” He has been exploring novel model designs to achieve the next wave of AI capabilities. Under his guidance, Quasar’s vision of limitless context and decentralized training took shape.

Youssef Farahat – CTO & Co-Founder (SILX): Youssef is the Chief Technology Officer of SILX with a strong background in blockchain technology (4+ years experience) and decentralized systems. He co-founded SILX and plays a crucial role in integrating the Quasar models with the Bittensor network. His expertise ensures that the AI and blockchain components of Quasar work seamlessly together.

Advisors – Siam Kidd, Mark Creaser, Chris Zacharia: SILX is advised by prominent figures in the decentralized AI and crypto community. Notably, Siam Kidd and Mark Creaser are involved as advisors; they are known for their leadership of the DSV Fund (a hedge fund focused on the Bittensor ecosystem) and their advocacy for decentralized AI. Their backing has provided strategic guidance and credibility – both Siam and Mark are influential in growing Bittensor’s ecosystem (they even host a podcast on Bittensor, and their fund invested early in Quasar). Chris Zacharia is another advisor lending expertise, likely in growth or community strategy (as he has a background in tech ventures). This strong advisory board signals that Quasar has support from veterans in AI, finance, and blockchain, helping steer the project’s direction.

Backers and Launch Support: Quasar was backed by the BitStarter launchpad and DSV Fund during its inception. BitStarter is Bittensor’s native crowdfunding platform, which hosted Quasar’s token raise in late 2025. Through BitStarter, Quasar raised 400 $TAO (the target cap) from 66 contributors in just two weeks to fund the subnet’s development. This successful raise (completed in December 2025) demonstrated strong community belief in Quasar’s mission to solve AI’s memory limitations. Meanwhile, DSV (Decentralized Support Ventures) Fund provided institutional support – DSV is a regulated fund dedicated to investing in Bittensor projects. The involvement of DSV means Quasar has financial backing and mentorship from experienced Bittensor investors (in fact, DSV’s principals include the aforementioned advisors, Siam Kidd and Mark Creaser). Overall, the team behind Quasar is a blend of AI researchers, blockchain engineers, and crypto venture experts, all collaborating to make long-context AI a reality.

 


 

FUTURE

Roadmap

Quasar has a clear multi-phase roadmap outlining its development from testnet launch to a fully featured long-context AI platform. The roadmap spans from late 2025 through 2026, with each phase building new capabilities:

Phase 1: Foundation (Q4 2025) – Establish the subnet and core features. This initial phase focused on launching Quasar in a controlled environment and proving the concept. The Quasar subnet was deployed on Bittensor (testnet), and the foundational evaluation framework was implemented. Key benchmarks (the LongBench suite) were integrated, and a mock mode was introduced for local testing of miners/validators. The team also integrated monitoring tools (Weights & Biases) to track performance in real time. Essentially, by end of 2025 Quasar went live as Subnet 24 with its evaluation engine running in test mode.

Phase 2: Expansion (Q1 2026) – Broadening capabilities and knowledge sharing. In this phase, Quasar plans to add more long-context benchmarks to cover an even wider range of tasks. They will implement dynamic difficulty adjustment, meaning the subnet can adjust task difficulty or context lengths on the fly to continually challenge the models as they improve. Support will be expanded for additional model architectures, encouraging more miners to join with different long-context models. Importantly, the team intends to publish a research paper detailing the Quasar architecture, their reinforcement learning framework for long context, and the results of their approach. This paper will share Quasar’s innovations with the broader AI community. By the end of Q1 2026, Quasar is expected to move from testnet toward mainnet with a larger variety of tasks and models, and formal documentation of its methods.
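The roadmap does not specify a mechanism for dynamic difficulty adjustment, but one simple control loop (purely a sketch, with made-up thresholds and step sizes) would grow the target context length while miners beat a score target and shrink it when they fall well below:

```python
def next_context_length(current_tokens, mean_score,
                        target=0.7, step=1.25,
                        floor=32_000, ceiling=2_000_000):
    """Grow the task length while the network beats the target score."""
    if mean_score > target:
        current_tokens = int(current_tokens * step)
    elif mean_score < target - 0.2:
        current_tokens = int(current_tokens / step)
    return max(floor, min(ceiling, current_tokens))

length = 32_000
for score in [0.9, 0.85, 0.9, 0.4, 0.8]:
    length = next_context_length(length, score)
print(length)  # 62500
```

The effect is a ratchet: as the model population improves, the tasks stretch toward the 2M-token ceiling, and a dip in scores eases the pressure rather than stalling the subnet.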

Phase 3: Advanced Features (Q2 2026) – Enhancing functionality and use-cases. Once the basics are solid, Quasar will introduce more advanced features. One goal is multi-modal long-context evaluation – allowing the models to handle not just text, but possibly images or other data within the long context (for example, analyzing a lengthy document with embedded images). They also plan to build a custom benchmark submission system, so that external researchers or users can add new long-context challenges to the network for models to tackle. A real-time leaderboard and analytics dashboard will be deployed, giving the community an easy way to see which models are top-performers and how they stack up over time. Additionally, Quasar aims for integration with external AI research labs or platforms, bridging the subnet with outside efforts – this could mean partnerships where academic or corporate labs use Quasar for testing their long-context models, for instance.

Quasar has a clear multi-phase roadmap outlining its development from testnet launch to a fully featured long-context AI platform. The roadmap spans from late 2025 through 2026, with each phase building new capabilities:

Phase 1: Foundation (Q4 2025) – Establish the subnet and core features. This initial phase focused on launching Quasar in a controlled environment and proving the concept. The Quasar subnet was deployed on Bittensor's testnet, and the foundational evaluation framework was implemented. Key benchmarks (the LongBench suite) were integrated, and a mock mode was introduced for local testing of miners and validators. The team also integrated monitoring tools (Weights & Biases) to track performance in real time. By the end of 2025, Quasar was live as Subnet 24 with its evaluation engine running in test mode.
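To illustrate what a "mock mode" for local testing might involve, here is a minimal sketch in which a toy miner answers lookup questions about a document and a validator scores it. Every function name, the keyword-matching heuristic, and the scoring rule are illustrative assumptions, not Quasar's actual evaluation framework.

```python
# Hypothetical sketch of local mock-mode testing: a fake miner answers
# questions about a document, and a validator scores its recall.
# All names and logic here are illustrative assumptions.

def mock_miner(document: str, question: str) -> str:
    """Toy miner: return the document line containing the question's last keyword."""
    keyword = question.split()[-1].strip("?")
    for line in document.splitlines():
        if keyword in line:
            return line
    return ""

def validate(document: str, tasks: list[tuple[str, str]]) -> float:
    """Score the mock miner: fraction of tasks whose answer contains the target."""
    hits = sum(1 for question, target in tasks
               if target in mock_miner(document, question))
    return hits / len(tasks)
```

A setup like this would let a miner or validator operator exercise the request/score loop entirely offline before connecting to the live subnet.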

Phase 2: Expansion (Q1 2026) – Broadening capabilities and knowledge sharing. In this phase, Quasar plans to add more long-context benchmarks to cover an even wider range of tasks. They will implement dynamic difficulty adjustment, meaning the subnet can adjust task difficulty or context lengths on the fly to continually challenge the models as they improve. Support will be expanded for additional model architectures, encouraging more miners to join with different long-context models. Importantly, the team intends to publish a research paper detailing the Quasar architecture, their reinforcement learning framework for long context, and the results of their approach. This paper will share Quasar’s innovations with the broader AI community. By the end of Q1 2026, Quasar is expected to move from testnet toward mainnet with a larger variety of tasks and models, and formal documentation of its methods.
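The "dynamic difficulty adjustment" idea can be sketched as a simple feedback loop: grow the target context length when miners score well and shrink it when they struggle. The function name, thresholds, and growth factors below are all invented for illustration; the source does not specify how Quasar will implement this.

```python
# Hypothetical sketch of dynamic difficulty adjustment: the next round's
# context length is scaled up or down based on average miner score.
# Thresholds and multipliers are illustrative assumptions.

def adjust_context_length(current_tokens: int, avg_score: float,
                          min_tokens: int = 4_096,
                          max_tokens: int = 10_000_000) -> int:
    """Return the next round's context length given an avg score in [0, 1]."""
    if avg_score > 0.95:            # models are coping: make tasks harder
        proposed = current_tokens * 2
    elif avg_score < 0.70:          # models are failing: ease off
        proposed = int(current_tokens * 0.75)
    else:                           # scores in a healthy band: hold steady
        proposed = current_tokens
    return max(min_tokens, min(max_tokens, proposed))
```

Clamping to a minimum and maximum keeps the task pool from collapsing to trivial lengths or exceeding what any miner could plausibly serve.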

Phase 3: Advanced Features (Q2 2026) – Enhancing functionality and use cases. Once the basics are solid, Quasar will introduce more advanced features. One goal is multi-modal long-context evaluation – allowing models to handle not just text but potentially images or other data within the long context (for example, analyzing a lengthy document with embedded images). They also plan to build a custom benchmark submission system, so that external researchers or users can add new long-context challenges for the network's models to tackle. A real-time leaderboard and analytics dashboard will be deployed, giving the community an easy way to see which models are top performers and how they stack up over time. Additionally, Quasar aims for integration with external AI research labs and platforms, bridging the subnet with outside efforts – for instance, partnerships where academic or corporate labs use Quasar to test their long-context models.
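The planned leaderboard boils down to aggregating per-task scores per miner and ranking them. A minimal sketch of that aggregation, assuming scores are simply averaged (the actual ranking formula is not documented), could look like this:

```python
# Illustrative leaderboard aggregation: rank miners by mean benchmark score.
# Data shapes and the averaging rule are assumptions for illustration.

from statistics import mean

def build_leaderboard(results: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Map miner id -> per-task scores; return (miner, mean score) sorted best-first."""
    return sorted(
        ((miner, mean(scores)) for miner, scores in results.items()),
        key=lambda entry: entry[1],
        reverse=True,
    )
```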

Phase 4: Ecosystem Growth (Q3 2026) – Scaling up adoption and ecosystem integration. In this later phase, Quasar envisions expanding beyond the Bittensor community to general developers and even end-users. They plan to release a developer API, allowing other applications or services to programmatically access the Quasar subnet’s capabilities (e.g. an app could send a large document to Quasar and get answers). A benchmark marketplace is also on the roadmap – this would let users create and share their own evaluation tasks or datasets for Quasar, potentially earning rewards or driving specialized model improvements. Cross-subnet collaboration features are planned, meaning Quasar could interoperate with other Bittensor subnets (for example, linking a long-memory model with another subnet’s expertise, such as a coding subnet or a multimodal subnet, to solve complex tasks together). Finally, they intend to optimize for mobile and edge device support, which hints at making lightweight long-context models that can run on devices outside of data centers. By Q3 2026, Quasar aims to mature into a full-fledged ecosystem service: widely accessible “memory augmentation” for AI, with community-driven growth.
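Since the developer API is described as "send a large document, get answers back," a client could plausibly look like the sketch below. The endpoint URL, payload fields, authentication scheme, and response shape are all invented; no real Quasar API is documented yet.

```python
# Purely hypothetical client for the planned developer API: POST a long
# document plus a question and read back an answer. The endpoint, payload
# fields, and response shape are invented for illustration.

import json
from urllib import request

def build_query(document: str, question: str) -> bytes:
    """Encode a long-context query as a JSON request body."""
    return json.dumps({"document": document, "question": question}).encode("utf-8")

def ask_quasar(document: str, question: str,
               endpoint: str = "https://api.example.invalid/v1/query",
               api_key: str = "YOUR_KEY") -> str:
    """POST the query and return the model's answer (performs a network call)."""
    req = request.Request(
        endpoint,
        data=build_query(document, question),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]
```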

Each phase of the roadmap is ambitious, and if achieved, will push the boundaries of what AI can do with large-scale context. The Quasar team’s phased approach shows a path from proving the concept (in 2025) to scaling it and integrating it into the broader AI landscape by late 2026. At every stage, the focus remains on extending context and memory for AI models, making Quasar’s long-horizon reasoning capability more powerful and more available to all. The roadmap also highlights Quasar’s ethos of openness and collaboration – from publishing research and open-sourcing models to inviting community benchmarks and cross-project integrations. This aligns with Quasar’s core vision: to redefine AI’s memory limits through collective effort, ultimately ending the era of forgetful AI and ushering in an era of infinite context intelligence.
