With the number of new subnets being added, it can be hard to keep information current across all of them, so some data may be slightly out of date.
Core Problem and Mission
Ditto is designed to solve the problem of fragmented, short-lived context in AI conversations. Most AI assistants operate statelessly, meaning they forget previous user interactions. Ditto’s mission is to give these agents a persistent ‘second brain’ by maintaining a shared memory. As explained on the official site, Ditto acts as a “shared place to remember, retrieve, and act” for AI agents, effectively turning ongoing conversations and connected tools into durable context. This means Ditto aims to ensure that past user preferences, tasks, and conversation history are retained and organized, rather than lost between chat sessions.
Bittensor Subnet Operation
On Bittensor, each subnet defines how its miners produce output and how validators score it. Ditto operates under this model as a Bittensor subnet (netuid 118). Miners run inference or compute tasks and publish their results, while validators compare and score those results. In Ditto’s case, registered nodes (miners and validators) would be dedicated to the memory task defined by Ditto’s incentive rules. In general, validators send queries to miners’ Axon endpoints, evaluate the responses, and allocate TAO rewards based on those scores. The Bittensor documentation notes that “validators send requests to [miners’] Axon and evaluate responses. This drives the incentive mechanism, awarding emissions to the miner.” While Ditto’s exact query format isn’t publicly detailed, it would follow this pattern: miners host Ditto’s memory database or model and answer memory-related queries, and validators judge the relevance or accuracy of those answers against Ditto’s criteria. A miner that retrieves the right contextual knowledge or makes correct contextual inferences earns higher scores and thus more TAO rewards.
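This query-and-score loop can be sketched in miniature. The following is a pure-Python simulation only: the real subnet uses the bittensor SDK over the network, and Ditto’s synapse and scoring functions are not public, so `miner_respond` and `validator_score` below are invented stand-ins.

```python
# Toy simulation of the Bittensor validator/miner loop (not real subnet code).

def miner_respond(query: str, miner_id: int) -> str:
    """Stand-in for a miner's model or memory-DB lookup."""
    return f"miner-{miner_id} recall for: {query}"

def validator_score(query: str, response: str) -> float:
    """Stand-in scoring function, rating each response in [0, 1].
    A real validator would judge relevance/accuracy per the subnet's criteria;
    here we only check that the response addresses the query at all."""
    return 1.0 if query in response else 0.0

miners = [1, 2, 3]
query = "what did the user discuss last week?"
scores = {m: validator_score(query, miner_respond(query, m)) for m in miners}
print(scores)  # → {1: 1.0, 2: 1.0, 3: 1.0}
```

In the real protocol these scores are submitted on-chain as weights, and the chain converts them into emissions.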
Miner and Validator Roles
In practice, a Ditto miner’s contribution would involve producing the memory-based outputs that the subnet values. For example, a miner might take an input query (such as a question about past conversation topics) and use Ditto’s semantic memory graph to generate an answer or recall relevant information. Validators, in turn, would score those outputs, perhaps comparing a miner’s answer against expected context or checking the coherence of retrieved memories. Validators’ scores are set as on-chain weights, which feed the consensus that determines validator trust (VTrust) and how block emissions are split among miners. Although technical details (e.g. model types or training datasets) are not public, this is the standard miner/validator loop in a Bittensor subnet: miners produce AI-generated content according to the subnet’s purpose, and validators vote on quality to reward the best contributions.
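As a simplified picture of how scores translate into an emission split, normalizing raw scores into shares gives the basic shape. This is a sketch only: real Bittensor emissions also weigh stake and consensus, which this omits.

```python
# Simplified score-to-emission-share normalization (omits stake/consensus).

def emission_weights(scores: dict[int, float]) -> dict[int, float]:
    """Normalize raw validator scores into emission shares that sum to 1."""
    total = sum(scores.values())
    if total == 0:
        # No useful responses: split evenly (one possible convention).
        return {uid: 1 / len(scores) for uid in scores}
    return {uid: s / total for uid, s in scores.items()}

print(emission_weights({7: 1.0, 12: 0.5, 31: 0.5}))
# → {7: 0.5, 12: 0.25, 31: 0.25}
```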
Final Output and User Experience
The end product of Ditto is its collective memory that agents can query. In user-facing terms, Ditto operates as a kind of decentralized knowledge graph. Users or developers interacting with Ditto (for instance through an API or integrated app) would experience an AI assistant that “remembers” prior details. For example, Ditto automatically clusters conversations into “Subjects” and visualizes how ideas connect over time. It thus provides semantic recall: one could ask about anything mentioned months ago, and Ditto retrieves the context. In effect, the subnet’s output is the ongoing alignment of its nodes (Bittensor “neurons”) around a common memory state. If similar memory-DB projects are any guide, one might think of Ditto as continuously indexing embeddings of past dialogues and serving them on demand. A related project, Engram (on Bittensor testnet), describes itself as a “decentralized vector database” for AI, storing embeddings and serving queries. Ditto’s mainnet output would play a similar role, but optimized around personal conversational context and real-world tasks as defined by Omni Aura’s team.
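If Ditto does index embeddings of past dialogue as speculated here, the core recall step would be nearest-neighbor search over stored vectors. The sketch below is generic, not Ditto’s code; the hashed bag-of-words `embed` is a toy stand-in for a real embedding model.

```python
import math
import zlib
from collections import Counter

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy embedding: hash each word into a fixed-size bag-of-words vector.
    A real system would use a learned embedding model instead."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % dim] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Index past dialogue snippets, then recall the closest one for a new query.
memory = ["we planned the lisbon trip in march", "the api key rotates monthly"]
index = [(snippet, embed(snippet)) for snippet in memory]
query_vec = embed("when does the api key rotate")
best = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best[0])  # → the api key rotates monthly
```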
Comparison to Other Subnets
Ditto’s focus on AI memory makes it quite distinct among Bittensor’s subnets. Many other subnets target different AI problems. For example, the “Data Universe” subnet (SN13) is built to collect and share large training datasets across sources. Others like vision or roleplay subnets train or serve generative models. By contrast, Ditto’s niche is context and memory for an individual’s agents. One parallel is Project Nobi’s “Nori” agent, which also bills itself as a personal AI companion that remembers the user’s life, though Nori runs on a Bittensor testnet. In general, Ditto appears to be among the first mainnet subnets explicitly dedicated to personal AI memory. Its decentralized memory graph could complement subnets focused on data or content (like Engram or Data Universe) by providing the “personal knowledge” layer. This sets Ditto apart as a specialized substrate enabling context persistence, rather than general compute or data indexing.
Current Status
As of now, Ditto’s core product is live in a limited sense. The Ditto memory assistant (hosted at heyditto.ai) is available to users – the site reports “760+ people” and “37,500+ conversations remembered”, indicating an active user base. On the blockchain side, Ditto has minted its own subnet token (the SN118 “alpha” token), which is tradeable on TAO-based markets. For instance, CoinGecko shows the SN118 token trading around $2.10, with a market capitalization of roughly $3.9 million. This implies Ditto has at least reached the stage where its token is circulating and miners can stake and mine. A Bittensor block explorer shows SN118 with all 256 UIDs registered but only 14 validators and 1 active miner, suggesting the subnet is early-stage with room to onboard more participants. In summary, the initial layers – the user-facing memory interface and the on-chain subnet infrastructure – are up, while future development (scaling the subnet, attracting more miners, etc.) appears to be ongoing.
Technical Architecture
Ditto’s underlying architecture centers on AI language models combined with smart memory storage. According to the Ditto blog, the system uses a multi-layer memory approach (short-term, long-term episodic, and a semantic knowledge graph). This likely means Ditto maintains a database of text embeddings or graph links to previous conversations. The GitHub repositories hint at the tech stack: for example, the ‘nlp_server’ repo (part of the ditto-assistant organization) is described as hosting intent and NER models and “an LLM agent with long term memory vector store” for Ditto clients. Similarly, a ‘vision_server’ repository holds models for image captioning and Q&A, used by an image-retrieval LLM agent. These suggest Ditto’s platform uses neural networks (likely in Python/PyTorch) with custom memory management. The mention of MCP (Model Context Protocol) integration on their site confirms there is a component – possibly an API server – to serve memory queries to other tools. Overlaying all this is a web-based UI (the Ditto chat interface and subject dashboard) that ties together user inputs and memory outputs. Although the exact model sizes and database engines are not public, the presence of vector memory stores indicates embedding-based search. The architecture appears to be modular: separate services for NLP, vision, and memory management, likely orchestrated through containers (the ‘ditto-stack’ repo suggests Docker-based local deployment).
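The three memory layers described by the blog could be modeled roughly as follows. This is an illustrative data model only, not Ditto’s actual schema; the class and method names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Illustrative three-layer memory (short-term / episodic / semantic graph)."""
    short_term: list[str] = field(default_factory=list)            # current session turns
    episodic: list[tuple[str, str]] = field(default_factory=list)  # (timestamp, event)
    semantic: dict[str, set[str]] = field(default_factory=dict)    # subject -> linked subjects

    def remember_turn(self, text: str) -> None:
        self.short_term.append(text)

    def consolidate(self, timestamp: str) -> None:
        """Move the current session into long-term episodic memory."""
        for turn in self.short_term:
            self.episodic.append((timestamp, turn))
        self.short_term.clear()

    def link(self, a: str, b: str) -> None:
        """Add an undirected edge to the semantic knowledge graph."""
        self.semantic.setdefault(a, set()).add(b)
        self.semantic.setdefault(b, set()).add(a)

store = MemoryStore()
store.remember_turn("user prefers morning meetings")
store.consolidate("2025-01-15")
store.link("meetings", "scheduling")
print(len(store.episodic), sorted(store.semantic))  # → 1 ['meetings', 'scheduling']
```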
Repository and Code Activity
The official code for Ditto’s Bittensor subnet is not publicly visible, but the developer’s GitHub organization (ditto-assistant) offers insight into related tooling. The org contains multiple projects: an NLP server, a vision server, a mobile app, and other utilities. Activity is modest: key repos like ‘nlp_server’ and ‘vision_server’ show their most recent updates around June 2024. The ditto-assistant org has a few dozen followers and stars across its repos, suggesting a small core team. There is no separate public repo labeled for SN118, implying the Bittensor-specific code may be private or embedded within these projects. In terms of network metrics, TaoPulse indicates SN118’s daily emission share was around 1.13% of the total leaderboard. The alpha token supply, inferred from the $3.9M cap and $2.10 price, would be roughly 1.9 million units, making it relatively scarce. No detailed on-chain mining stats (such as total staked TAO) are published yet. Overall, the technical build shows typical use of deep learning and vector-database approaches, and the development activity level is consistent with a small startup team iterating on the core memory technology.
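The supply figure here is simple division over the cited (and fluctuating) market numbers:

```python
market_cap = 3_900_000  # ≈ $3.9M market capitalization, as cited from CoinGecko
price = 2.10            # ≈ $2.10 per SN118 alpha token

# circulating supply ≈ market cap / price
circulating = market_cap / price
print(f"{circulating:,.0f}")  # → 1,857,143
```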
Integrations and APIs
Ditto is built to integrate with other AI tools. The website advertises MCP (Model Context Protocol) support with server and client components, indicating an API that any AI agent can use to access Ditto’s memory. It also lists integrations such as Google Workspace – e.g. allowing Ditto to read email, calendar, and documents as context during chat sessions. In practice, a developer could use Ditto’s MCP interface to connect Ditto’s memory graph to a custom chatbot or LLM: the agent would send user context to Ditto’s server and receive relevant memory data in return. On the developer side, the ‘ditto-stack’ repository shows the team provides a way to run the full Ditto system locally, which could be used for testing or custom integrations. However, no publicly documented SDK exists beyond the GitHub code and the mention of MCP; a detailed developer guide is not available. In summary, Ditto is designed to plug into the broader AI ecosystem via web APIs and integrations (like MCP and Google services), but external developers must rely on limited documentation and code samples from the team.
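A client talking to such a memory API would presumably package user context into a structured request. The payload below is entirely hypothetical: the method name, field names, and shape are assumptions for illustration, since no Ditto API schema is published.

```python
import json

def build_memory_request(user_id: str, context: str, top_k: int = 5) -> str:
    """Build a JSON memory-lookup request.
    Hypothetical shape: "memory.retrieve" and all field names are invented,
    not part of any documented Ditto or MCP schema."""
    payload = {
        "method": "memory.retrieve",
        "params": {"user": user_id, "context": context, "top_k": top_k},
    }
    return json.dumps(payload)

req = build_memory_request("u-123", "what were last week's action items?")
print(req)
```

An agent would send something like this to the memory server and merge the returned snippets into its prompt context.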
Development Team
Publicly available information identifies the Ditto project with an AI startup called “Omni Aura,” as shown on the official site and the GitHub profile. Omni Aura appears to be the creator and is framed as “built by the Omni Aura team.” Specific team members are not disclosed by name. The GitHub page does list a contact email “[email protected]”, implying at least one developer (Omar) is involved, but details beyond that are scarce. Omni Aura’s LinkedIn mentions that they “started building Hey Ditto before ChatGPT launched,” emphasizing the concept of persistent AI memory, but it does not list team bios. There is no indication of outside funding or formal partnerships in public sources. For community outreach, Ditto has a minimal social presence: for example, the @heydittoai Twitter account had only a few dozen followers as of late 2024. No other major backers or collaborations (e.g. with universities or companies) are mentioned. In the broader Bittensor ecosystem, other personal-AI-memory projects (like Project Nobi’s Nori) exist, but these are separate efforts. To summarize, Ditto’s team is primarily the unnamed Omni Aura developers (one of whom is likely Omar based on the GitHub), focusing internally on the product. The only public-facing indicators are the site copy and a few social posts; no additional team or advisor info has been published.
Future Plans and Vision
As of now, there is no formally published roadmap or milestone list for Ditto. The team has not announced specific phases or timelines on public channels. (For comparison, unrelated subnets have laid out multi-phase plans in their documentation, but no analogous information has emerged for Ditto.) Omni Aura’s communications emphasize the importance of AI memory – their LinkedIn post states they wanted persistent context “from day one” – but provide no schedule. The only hint of progress is a recent social media announcement: the team tweeted that “Live mode is finally here!”, suggesting the core service has entered general availability. Beyond this, anything in 2026 is speculative. A fully-realized vision of Ditto would presumably be a seamlessly integrated memory system for AI agents: an always-on knowledge graph and natural-language interface across all tools and workflows. However, no concrete targets (such as new language support, model upgrades, or on-chain integrations) are described in available sources. The team’s focus appears to remain on refining Ditto’s memory-assistant functionality. In summary, without any public roadmap announcements, we can only infer Ditto’s direction from its stated mission: keep building out the “second brain” concept for AI. Future development is likely to enhance the memory capabilities and integrations, but exact plans have not been shared.
3000 TAO staked.
The pace accelerates. Another 1000 TAO in less than a month.
The market doubts. The believers build.
Merry Christmas to the conviction holders.
We build the infrastructure for the believers.
HODL exchange launching very soon. 🚀
2000 TAO staked.
Less than one month from launch.
Doubled.
The conviction is real. The momentum is undeniable.
Subnet 118 is building the infrastructure for long-term dTAO stability.
HODL exchange coming soon. 🚀
After years of weathering volatility and change, this network has blossomed into a thriving intelligence economy—stronger than ever, bringing together incredible people to solve hard problems.
The future looks bright for Bittensor. Happy Halving.
New: Subnet 118 Mining Dashboard
Huge thanks to community member @taoleeh for building this visual dashboard that lets miners and holders track:
→ Active miners in real-time
→ Total staked TAO across @TrustedStake indices
→ Index distribution by miner count
→ Top miners
By popular demand: Subnet 118 now has a CLI interface.
Mine and interact with incentivized @TrustedStake indices directly from the command line.
No app. No extension wallet. No proxy setup through our front end.
For the miners who live in the terminal. 🖥️
1000 TAO staked.
In 4 days.
1000 TAO invested with conviction.
Injected into only Bittensor’s finest subnets.
Rewarding conviction.
Countering the war that is the trenches.
Bittensor forged the perfect incentive flywheel.
Protect its best by incentivizing conviction.
@Subnet118 [combo service with @TrustedStake] is a credible experiment in building the alpha token capital stack (indexes → exchange → derivatives).
The mechanism today is deliberately simple—pay people to hold—and that aligns incentives but leaves the token unanchored until…