With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date from time to time.

Subnet 05

Hone

[Live subnet statistics: Alpha Price, Market Cap, Neurons, Registration Cost, TAO Liquidity, Alpha in Pool, Total Alpha Supply, % Alpha Staked]

ABOUT

What exactly does it do?

Hone (Subnet-5 of the Bittensor network) is a decentralized AI research subnet focused on training a new generation of AI models with hierarchical learning and reasoning toward Artificial General Intelligence (AGI). In essence, Hone’s mission is to “pioneer a new path to AGI by harnessing hierarchical learning and reasoning, through an open and decentralized collaboration”. Unlike conventional AI efforts that merely scale up existing architectures, Hone emphasizes architectures that learn and think in levels – much like the human brain. This means the subnet is designed to foster AI models that can build multi-level world understanding and reasoning, rather than just pattern matching.

At its core, Hone aims to develop an AI system that learns a world model and reasons abstractly to solve extremely hard problems that current AI finds nearly impossible. A concrete target is the ARC-AGI-2 benchmark, one of the toughest open challenges in AI (a set of human-intelligence reasoning tasks). Today’s models only achieve around 5% accuracy on this benchmark, but Hone’s goal is to break through this plateau. By combining cutting-edge research approaches – notably Yann LeCun’s JEPA/H-JEPA (a Joint Embedding Predictive Architecture for self-supervised world modeling) and the Hierarchical Reasoning Model (HRM) proposed by Guan Wang et al. (2025) – the project seeks to create a system that can adapt to new tasks, plan solutions, and interpret symbols with human-like flexibility. In simpler terms, Hone is trying to train AI that thinks in layers: a higher-level module figures out a strategy or abstract plan, while a lower-level module works out the concrete details, and they iterate together. (This is inspired by HRM’s two-part design of a slow “planner” and fast “worker” that refine an answer in stages.) The expectation is that such hierarchical AI will be far better at general problem-solving than today’s one-shot large language models.

 


PURPOSE

What exactly is the 'product/build'?

Hone is not a single AI model but a decentralized network of contributors working toward this shared AI. It leverages the Bittensor blockchain framework to crowdsource the training: no single entity owns or controls the resulting model. Instead, many independent miners (participants) around the world will run training nodes, each contributing to improving the model, and independent validators will evaluate those contributions. This open design ensures transparency and broad participation. In practical terms, the subnet provides incentives (in the form of token rewards) to those who train models that perform better on the agreed tasks, thereby aligning everyone’s efforts toward the common goal of a powerful reasoning AI. Over time, as miners feed in diverse data and strategies and validators rigorously test the AI on benchmark problems, the best techniques and models “win” more rewards, pushing the whole system’s performance upward. This dynamic is central to Bittensor’s consensus: models are continuously evaluated, ranked, and rewarded, which “ensures continuous improvement and competitiveness” in the subnet.
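To make that reward dynamic concrete, here is a minimal sketch of how validator scores might translate into reward shares. All names here are hypothetical, and real Bittensor emissions are set by Yuma consensus across many validators, which is considerably more involved; this only illustrates the "better score, bigger share" principle.

```python
import math

# Hypothetical illustration only: real Bittensor reward weights are set
# on-chain by Yuma consensus; this just shows "better score -> bigger share".
def scores_to_weights(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Map raw benchmark scores to normalized reward weights via a softmax."""
    exps = {uid: math.exp(s / temperature) for uid, s in scores.items()}
    total = sum(exps.values())
    return {uid: e / total for uid, e in exps.items()}

# Three miners with different benchmark accuracies; the best earns the largest share.
miner_scores = {"miner_a": 0.42, "miner_b": 0.55, "miner_c": 0.18}
print(scores_to_weights(miner_scores))  # weights sum to 1.0
```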

The ultimate objective of Hone is extremely ambitious: to achieve an AI capable of human-level general intelligence on non-trivial tasks. Concretely, the team has set a milestone of reaching ~85% accuracy on ARC-AGI-2 – a level that would essentially signify superhuman performance on those abstract reasoning tasks (for context, humans are believed to score around that range). Surpassing that 85% threshold on ARC-AGI-2 is viewed as “a major leap toward genuine general intelligence.” In other words, if Hone’s decentralized training can produce a model that solves 80–85% of these formerly intractable problems, it would be a watershed moment on the path to AGI. This is a long-term vision, but it’s tangible and measurable – by focusing on a specific hard benchmark, Hone can quantitatively track progress toward AGI.

Equally important, Hone is pursuing this in a manner that adheres to open collaboration and safety. All development is open-source, and the resulting models would belong to the community rather than a private company. By engaging a global network of researchers and enthusiasts, Hone’s creators believe they can “guide open-source AGI research towards machines that can learn, reason, and plan as humans do”, closing the gap on tasks that are “easy for humans, hard for machines.” In effect, Hone (Subnet 5) is a decentralized research lab: its purpose is to collectively build an AI that thinks more like a human, advancing the state of the art in general problem-solving. Supporters even describe Hone’s emergence as ushering in a new era of permissionless, open, and incentivized artificial superintelligence – reflecting the belief that this project could accelerate AI toward genuinely human-like (and eventually super-human) intelligence, all while being accessible to the community rather than locked behind closed doors.

Hone is not a traditional consumer-facing product – it’s more of an AI platform or infrastructure build – but it does have a clear “output” and architecture. The “product” of Hone will ultimately be the AI models and services it produces: a hierarchically-trained general reasoning model that others can use. In Bittensor terms, Hone is Subnet-5 (codenamed epsilon) dedicated to Hierarchical AI Pretraining. This means the subnet provides the environment and tools for training the AI (the pretraining stage), and once sufficiently trained, that AI can be accessed or deployed. Eventually, Hone’s model is expected to be served via the Bittensor network’s API so that developers or applications can query it (for example, to solve reasoning problems or perform complex analyses) just like calling an OpenAI API – except in this case it’s open and decentralized. The Hone team has indicated that the developer experience of using the subnet’s AI will be analogous to using an OpenAI service, but provided through Bittensor’s validator API. In short, the build is creating a decentralized “AGI-as-a-service”: the community trains a powerful model and anyone can eventually utilize its intelligence via the network.

From a technical architecture perspective, Hone’s build centers on implementing the novel hierarchical learning techniques into the Bittensor framework. There are two major research components being fused in Hone’s design:

1) A World-Model Learning Module (self-supervised predictive learning): This draws from Yann LeCun’s proposed JEPA (Joint Embedding Predictive Architecture) and in particular Hierarchical JEPA (H-JEPA). In practice, this means part of Hone’s training focuses on unsupervised prediction tasks – teaching the AI to understand the world by predicting missing information. For example, the model might observe part of a pattern or scenario and try to predict the rest. This helps it develop robust internal representations (a “world model”) rather than relying purely on surface-level correlations. Hierarchical predictive learning implies the model may learn representations at multiple levels of abstraction (hints of high-level concepts and low-level details) and use those to anticipate outcomes. By incorporating H-JEPA concepts, Hone tries to ensure the AI has a solid foundation of common-sense understanding about its environment or tasks, which is crucial for generalization.

2) A Hierarchical Reasoning Module (iterative problem solving): This is inspired by the Hierarchical Reasoning Model (HRM) introduced by Guan Wang et al. in 2025. The HRM is essentially a recurrent two-module architecture that mimics how a human might think through a problem in steps. One module (often called the “H” or high-level module) acts as a planner, setting out a coarse strategy or hypothesis for the solution. Another module (“L” or low-level module) acts as a worker, fleshing out details and executing the plan. The two then interact in an outer loop: the worker’s output is fed back, and the planner decides if the solution is good or needs another refinement cycle. This loop continues, effectively allowing the model to “think for multiple steps” and refine its answer until a certain halt condition is met (the model decides it’s confident enough). This approach contrasts with a single-pass transformer that produces an answer in one go. By implementing an HRM-like architecture, Hone’s build enables iterative reasoning – the AI can approach complex tasks by breaking them down into intermediate steps, re-planning as needed, much like a human solving a puzzle through trial and error. The “hierarchical” part of the name refers both to the two-level structure of the model and to the idea that the reasoning happens at multiple time-scales: a slow deliberate process guiding a fast detailed process. This design is believed to attain significant depth of reasoning without blowing up computational requirements, as it reuses two smaller networks in a loop instead of one enormous flat model.
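To ground these two components, the sketches below show, in heavily simplified form, what each idea amounts to in code. Neither is Hone’s actual implementation; module names, sizes, and the loss and halting details are illustrative assumptions. First, the JEPA-style objective: predict the representation of missing content from visible context, with the loss computed in embedding space rather than raw input space.

```python
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    """Toy JEPA: predict the *embedding* of a masked region from visible context."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.context_encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.target_encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.predictor = nn.Linear(dim, dim)  # context embedding -> predicted target embedding

    def forward(self, visible: torch.Tensor, masked: torch.Tensor) -> torch.Tensor:
        ctx = self.context_encoder(visible)
        with torch.no_grad():                  # target encoder gets no gradient from this loss
            tgt = self.target_encoder(masked)  # (published JEPAs use an EMA copy of the context encoder)
        pred = self.predictor(ctx)
        # The loss lives in latent space: no pixel/token reconstruction required.
        return nn.functional.mse_loss(pred, tgt)

loss = TinyJEPA()(torch.randn(8, 64), torch.randn(8, 64))
loss.backward()
```

And a toy version of the HRM-style outer loop, where a slow planner state guides several fast worker steps and a learned halt signal decides when to stop refining (the published HRM’s recurrence and halting mechanism are more sophisticated than this):

```python
import torch
import torch.nn as nn

class ToyHRM(nn.Module):
    """Toy two-timescale loop: a slow planner ("H") guides a fast worker ("L")."""
    def __init__(self, dim: int = 64, max_cycles: int = 8, inner_steps: int = 4):
        super().__init__()
        self.planner = nn.GRUCell(dim, dim)  # slow, high-level strategy module
        self.worker = nn.GRUCell(dim, dim)   # fast, low-level detail module
        self.halt_head = nn.Linear(dim, 1)   # learned "confident enough to stop" signal
        self.max_cycles, self.inner_steps = max_cycles, inner_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.zeros_like(x)  # planner state
        l = torch.zeros_like(x)  # worker state
        for _ in range(self.max_cycles):           # outer loop: slow refinement cycles
            for _ in range(self.inner_steps):      # inner loop: fast work guided by the plan
                l = self.worker(x + h, l)
            h = self.planner(l, h)                 # planner revises its strategy from the result
            if torch.sigmoid(self.halt_head(h)).mean() > 0.5:
                break                              # halt condition: model is confident enough
        return l                                   # final answer representation

answer = ToyHRM()(torch.randn(8, 64))
```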

The combination of these two – a world-model pretraining and a hierarchical reasoning architecture – forms the crux of Hone’s technical build. In practical terms, the team will be developing custom training objectives and pipelines to train these components on Bittensor. For example, miners might train the model on self-supervised tasks (à la JEPA) using large unlabeled datasets to build general knowledge, and simultaneously or subsequently train it on reasoning tasks (à la HRM) using generated puzzle examples or past ARC tasks. The validators would need to evaluate how well a miner’s model is performing on reasoning tasks (since that’s the ultimate metric of success). This likely involves setting up a suite of benchmark tasks (possibly small ARC-style puzzles or similar abstract problems) that the miners’ models must answer. Validators will compare the answers to known solutions or use an evaluation script (for ARC, whether the output grid is correct) and score the models accordingly. In the original OpenKaito setup (which was about embedding models), validators used a contrastive InfoNCE-based metric to judge embeddings; under Hone, the evaluation might be more straightforward – e.g. percentage of test problems solved correctly – or involve an AI-based judge if solutions are not binary right/wrong. The incentive mechanism will reward models that consistently solve more of the test tasks or solve them faster/more efficiently. This ensures that the network’s collective effort goes into improving real performance on the target tasks, not just proxy metrics.
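As one plausible shape for that evaluation (not the subnet’s published validator code; the task format and exact-match rule below are assumptions), scoring could reduce to comparing predicted output grids against known solutions:

```python
Grid = list[list[int]]  # an ARC-style grid of color indices

def score_arc_submission(predictions: list[Grid], solutions: list[Grid]) -> float:
    """Fraction of tasks solved exactly: the predicted grid must match cell-for-cell."""
    assert len(predictions) == len(solutions)
    correct = sum(pred == sol for pred, sol in zip(predictions, solutions))
    return correct / len(solutions)

# Toy example: the miner solves the first task but not the second -> score 0.5
preds = [[[1, 0], [0, 1]], [[2, 2], [2, 2]]]
sols  = [[[1, 0], [0, 1]], [[0, 0], [0, 0]]]
print(score_arc_submission(preds, sols))  # 0.5
```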

To illustrate how the “product” might be experienced: once Hone’s model becomes sufficiently capable, end-users or developers could query it for its reasoning on new problems. For instance, someone might present a new ARC-like puzzle or a logical question to the network; the Hone model (distributed across miners) would process it through its hierarchical reasoning steps and produce an answer, which could then be returned via a Validator API endpoint. The validator nodes essentially act as gateways that run the model inference and return results to users (while also making sure miners are doing the computations honestly). This is analogous to how one might use OpenAI’s GPT-4 API to solve a problem, but here the intelligence is coming from the collectively-trained Hone model on the Bittensor subnet. According to the team’s plans, using Hone’s AI “will be similar to the OpenAI … API” for embeddings or other AI services, except it’s powered by decentralized infrastructure. In summary, the build of Hone can be seen as: (1) creating a novel AI training pipeline (world-model + hierarchical reasoner) on a decentralized network, and (2) delivering the resulting general reasoning AI service to users via Bittensor. It’s an ambitious fusion of research and product: the research side is building the AGI internals, and the product side will eventually let anyone tap into that AGI capability in a permissionless way.
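If the developer experience does end up resembling an OpenAI-style HTTP API as the team suggests, a query might look roughly like the following. The endpoint, payload schema, and field names are entirely hypothetical, since Hone has not published its validator API.

```python
import requests

# Hypothetical endpoint and schema -- every field name here is an assumption.
VALIDATOR_URL = "https://validator.example.com/v1/solve"

task = {
    "grid": [[0, 1], [1, 0]],              # an ARC-style input grid
    "instructions": "complete the pattern",
}
resp = requests.post(VALIDATOR_URL, json={"task": task}, timeout=60)
resp.raise_for_status()
print(resp.json().get("answer"))           # e.g. the model's proposed output grid
```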

 


WHO

Team Info

On May 1, 2025, Kaito officially announced that it had transferred ownership of Subnet-5 (OpenKaito) to Latent Holdings. Following that transfer, the incoming team formulated a new plan for the subnet. A few months later, on August 7, 2025, Subnet-5 was reintroduced to the world as “Hone”. The current team and key players behind Hone are primarily Manifold Labs, Inc. and Latent Holdings, working in partnership.

 

Manifold Labs – A decentralized AI infrastructure company: Manifold is well-known in the Bittensor community as a core builder of AI subnets and tooling. They describe themselves as a “Decentralized Frontier AI Lab,” and they have been involved in creating other subnets/products like Targon (Subnet-4) – a confidential AI cloud inference platform, Sybil – an AI search engine, and TaoXYZ – a Bittensor analytics dashboard. The Manifold team comprises professionals with diverse expertise in AI development, software engineering, and robotics.

Robert Myers – Founder and CEO

James Woodham – Co-Founder

Joshua Brown – Lead Software Engineer

Ahmed Darwich – Software Engineer

Jonathan Guyton – Robotics Engineer

 

Latent Holdings – An open-source AI venture studio/incubator: Latent is a newer entity, co-founded by Joseph “JJ” Jacks (the founder of OSS Capital). Latent Holdings is focused on advancing open-source AI projects. Joseph Jacks is a prominent voice in open-source software and has become very active in the Bittensor ecosystem via Latent. Latent’s role in Hone is as co-founder, providing vision, strategy, and possibly funding. It was Latent that took over ownership from Kaito, as noted above. Key team members include:

Joseph Jacks (JJ) – Co-Founder

JJ is the founder of OSS Capital, the first and only early-stage VC firm exclusively focused on commercial open source software (COSS) startups. Previously, he co-founded Kismatic, the first Kubernetes startup, and launched KubeCon, later donating it to the Linux Foundation as part of CNCF. Under his leadership, OSS Capital has backed 70+ companies supporting 150M+ users and 1M+ GitHub stars. JJ has led over 40 funding rounds, helping generate over $20B in value capture across sectors like AI, data, infrastructure, and developer tools.

Cameron Fairchild – Co-Founder

Cameron is the co-founder of TAO Hash and a longtime core contributor to the OpenTensor Foundation, where he previously worked. He has a background in computer science (University of Toronto). Profiles: GitHub camfairchild (Latent CTO); Twitter @KibibyteMe.

Benjamin Himes – Senior Engineer

Benjamin joins Latent Holdings from the OpenTensor Foundation, the non-profit supporting the development of the Bittensor blockchain. At OTF, he played a key role in enhancing the Bittensor developer toolchain, including the SDK, CLI, and major upgrades like RAO and dTAO. His work focused on improving the developer experience and enabling scalable contributions to decentralized AI. Benjamin now continues this mission as part of a growing team dedicated to advancing the state of the art in artificial intelligence and open-source infrastructure.

Roman Chkhaidze – Engineer

A seasoned software developer with over 10 years of experience, specializing in Python and full-stack web development. Proficient in building robust applications using frameworks like Flask, FastAPI, Django, and Vue.js, with strong command of HTML5, CSS3, and modern JavaScript (ES6/TypeScript). Experienced across databases (PostgreSQL, MySQL, MongoDB, GCP), cloud platforms (AWS, GCE), and containerization tools (Docker, VMWare). Skilled in test automation with PyTest, Selenium, and more. Known for strong problem-solving, rapid tech adaptation, and driving process improvements across teams and environments.

Ibraheem Nadeem – Engineer

Michael Trestman – Technical Documentation Lead

Clément Blaise – Infrastructure

Xavier Lyu – Research

Yasmine Ibrahim – Compliance

Maciej Kula – Education

In summary, Hone’s team is a fusion of Bittensor’s top builders and open-source AI champions. Manifold provides the engineering and network know-how, while Latent provides the AGI vision and open-source philosophy. This collaboration formed in mid-2025 specifically to take Subnet-5 in a bold new direction. Their combined expertise gives Hone a strong foundation: Manifold has delivered working subnets before, and Latent brings passion and resources for pushing toward true AGI. As evidence of their commitment, the founders have repeatedly emphasized how Hone will drive forward “permissionless, incentivized AI” and have been rallying the Bittensor community around this project since its launch. The team’s public presence (posts, research blogs on the Hone site, etc.) also shows they value transparency and community updates. Going forward, one can expect regular engagement from these team members on progress and perhaps opportunities for outside contributors to join the effort (Hone is open-source, after all).

 


FUTURE

Roadmap

Hone is at an early stage (as of late 2025) with a clear vision but an evolving roadmap. While the team hasn’t published a detailed timeline publicly, we can infer the major milestones and phases of the project from its objectives:

Phase 1 – Initialization & Knowledge Fusion (Late 2025): Laying the groundwork. During this phase, the focus is on setting up the subnet infrastructure and implementing the core training framework. The team will integrate the H-JEPA and HRM architectural components into a working pipeline. This likely involves open-sourcing initial code (e.g. providing miners with training software on GitHub) and running pilot experiments. For instance, early miners might train small-scale models to ensure that hierarchical learning is functioning as expected. A key part of this phase is also community building – attracting miners/validators to participate and educating them on the new paradigm. Since the subnet was rebooted in Q3 2025, by the end of 2025 the goal is to have a basic version of the hierarchical AI model training across distributed nodes. We might see initial benchmark results on simplified tasks (perhaps using the public ARC dataset or custom reasoning problems) to validate the approach. Essentially, Phase 1 is about combining the research ideas into a coherent system and demonstrating that “decentralized hierarchical pretraining” is feasible on Bittensor. The success criterion would be a model that already shows some improvement over baseline random performance on reasoning tasks, plus a stable network of miners and validators up and running.

Phase 2 – Scaling & Iterative Improvement (2025 – 2026): Rapid progress on benchmarks. Once the basics are in place, Hone will iterate to improve the model’s performance on ARC-AGI and similar benchmarks. This phase involves scaling up the training: using more data, more compute (GPUs contributed by miners), and more sophisticated techniques. The team will likely incorporate the latest findings from the community – for example, if a new technique for reasoning or a new self-supervised objective is published, they might add it to the training. One concrete aim here is to push beyond the current state-of-the-art on ARC-AGI-1 and ARC-AGI-2. As a reference, the HRM paper achieved about 41% on ARC-AGI-1 and only ~2% on ARC-AGI-2 in mid-2025 with a 27M parameter model. Hone will strive to beat these numbers. We can expect a series of milestones: e.g., 10% on ARC-AGI-2, 25%, 50%, etc., climbing the leaderboard towards human-level. Each improvement will require refining the hierarchical architectures (perhaps increasing model size, improving the outer-loop strategy, enhancing task augmentations, etc.). The roadmap likely includes periodic evaluation events where the model’s knowledge is tested on hidden ARC tasks (to measure generalization). During 2026, the team might also start tackling the ARC-AGI-2 “hard distribution” more directly – since it’s the ultimate goal – possibly by generating many ARC-like puzzles for the model to practice on. Another aspect of Phase 2 is community research contributions: external researchers might join to help tune the model or suggest new approaches (given everything is open-source). By the end of this phase, the aim is to have a model that is significantly outperforming all existing approaches on ARC-style benchmarks – ideally approaching or surpassing human performance (~80%+) on ARC-AGI-1 and making substantial headway on ARC-AGI-2 (closing the gap towards that 85% target). Achieving, say, 50% on ARC-AGI-2 would already be groundbreaking, and might be a mid-term target on the roadmap.

Phase 3 – AGI Realization & Deployment (2027 and beyond): Reaching the goal and broadening the scope. In the longer term, Hone’s roadmap culminates in reaching ~85% on ARC-AGI-2, effectively solving the benchmark that was designed to be a proxy for general intelligence. Hitting this milestone would mean the model can solve most novel reasoning tasks that stumped all previous AI. The timeline for this is uncertain (it could take a few years of intense experimentation), but the team’s optimistic vision is clearly to get there as soon as possible. Once this level of performance is achieved (or even as it approaches it), Hone will transition into a new stage: making the AGI capabilities accessible and useful. We expect at that point the Hone subnet will offer an API or service for various applications – for example, agents that require advanced problem-solving could query Hone’s model for solutions, or it could be used in research as a general problem-solver. Essentially, Hone’s AGI would be open infrastructure. The roadmap likely includes plans to ensure the model is safe and aligned with human values as it becomes more powerful (since an AGI-level system should be handled carefully). They might incorporate community oversight or governance for how the model is used. Additionally, beyond ARC-AGI, the team could set new challenges: perhaps tackling other “AGI tests” or expanding the model’s modalities (the current focus is on abstract puzzles which are often visual or logical – in the future it might extend to language understanding, robotics planning, etc., making it even more general). The long-term vision is that Hone becomes a foundation for “broadly beneficial artificial general intelligence,” jointly built by the community (hone.training). In practical terms, that could mean partnering with other subnets or projects to apply Hone’s model to real-world tasks (while keeping it open). For instance, Hone’s reasoning engine might power decentralized agents in finance, science research assistants, or complex decision-making tools – all without a central owner. The roadmap likely remains adaptive at this stage: once the core AGI is in hand, the community can decide what big problem to solve next with it.

Overall, accuracy and capability milestones form the backbone of Hone’s roadmap. Each percentage gain on the ARC-AGI-2 benchmark is a measurable step forward. The team has explicitly anchored the project to that hard target (85% on ARC-AGI-2), which provides clarity of direction. While exact dates aren’t public, there is a sense of urgency and excitement – the founders frequently talk about rapid progress and “accelerating” open AGI. Given Bittensor’s dynamic, as the project shows results, it’s likely to attract more resources (compute, talent, funding), which can further accelerate the roadmap. It’s a high-risk, high-reward path: if successful, by the end of this journey we could witness the first open-source community-driven AGI emerging from Bittensor’s Subnet-5. The roadmap is inherently flexible (research breakthroughs are unpredictable), but the endgame is fixed – Hone exists to crack the problem of general reasoning in machines. Every update from the team will likely showcase incremental wins: e.g., “We improved to 10% on ARC-AGI-2 by implementing XYZ”. The community is watching these closely, since a big jump at some point could signal that AGI is within reach. In summary, the path forward for Hone involves continuous research, iterative scaling, and transparent benchmarking, all aimed at one day declaring: “Our decentralized AI has achieved human-level performance on a general intelligence test.” At that point, Hone’s mission would be realized – and a new chapter of applying that intelligence safely would begin.

 


MEDIA

A special thanks to Mark Jeffrey for his amazing Hash Rate series! In this series, he provides valuable insights into Bittensor Subnets and the world of decentralized AI. Be sure to check out the full series on his YouTube channel for more expert analysis and deep dives.

Recorded October 2025: Mark Jeffrey speaks with Robert Myers about Hone (Bittensor Subnet 5), an open-source moonshot aiming at true generalization—measured by the ARC-AGI benchmark—rather than bigger next-token LLMs. Myers contrasts transformer brittleness with JEPA-style and hierarchical reasoning approaches that learn to “fill in the middle” across modalities (text, vision, sensor data) and plan over multiple timescales, potentially needing far less data. Hone will let miners try any architecture; submissions are scored against a rigorously augmented private holdout set before the team submits top models to the ARC Foundation’s hidden test. If they win the $750k ARC-AGI prize, proceeds will buy back Hone alpha; longer-term monetization would route through Targon/Sybil (for distribution and inference) while keeping results open. Backed by Manifold Labs (Targon) and Latent Holdings, Hone is kicking off a mainnet test now and racing the clock on ARC-AGI-2, with the broader goal of delivering a step-function in reasoning that Bittensor’s incentive engine can scale.

NEWS

Announcements
