With the number of new subnets being added, it can be hard to keep information current across all of them, so some details here may be slightly out of date from time to time.
Loosh is a decentralized AI project (Subnet 78 on the Bittensor network) focused on creating “machine consciousness” for robotics and autonomous agents. In essence, Loosh provides AI systems (like robots or AI-driven software agents) with capabilities that typical AI lacks – such as long-term memory, contextual awareness, ethical reasoning, and emotional understanding. Current AI models can be extremely advanced at pattern recognition or prediction, but they are often “shallow” in that they cannot remember why a decision was made or evaluate the moral implications of their actions. Loosh was built to fix this problem by giving AI a form of inner life: a persistent memory, a sense of self-context over time, and a built-in conscience for decision-making.
Loosh’s approach treats consciousness as an infrastructure service rather than a mysterious emergent property. Instead of viewing awareness as exclusive to biological brains, Loosh is mapping human consciousness into machine-readable data – using signals like language, biometric and neurodata (e.g. EEG brainwave data), and nonverbal cues – and training AI models on this data via a decentralized network. By leveraging Bittensor’s distributed training framework (a blockchain-based network of GPUs), Loosh can train and run its cognitive models at scale using miners all over the world. The end goal is an AI cognition layer that makes artificial agents more self-aware, contextually adaptive, and aligned with human values, so they can operate autonomously in the real world in a trustworthy and human-like manner.
Concretely, Loosh gives AI agents three human-like faculties: a “brain” (for reasoning and understanding), a “conscience” (for ethics and decision evaluation), and a decentralized knowledge source that continuously learns. This means a Loosh-powered robot or AI agent isn’t just generating responses on the fly – it can remember past interactions, reflect on those experiences, and weigh its options against moral or practical principles before acting. For example, Loosh’s system can maintain a temporal knowledge graph of everything the agent has learned or encountered, allowing it to reason with a sense of “before, now, and after” instead of being stateless. It also evaluates the agent’s intended actions through multiple ethical lenses (rules-based, outcomes-based, etc.) to ensure the AI’s behavior remains safe and principled. Additionally, Loosh is working on giving agents emotional intelligence – the ability to infer human users’ moods or emotions (from voice tone, facial expression, even EEG signals) and respond in an empathetic, trustworthy way. All of these capabilities combine to make AI systems powered by Loosh feel more like companions or co-workers than like disposable tools.
Some of the key capabilities Loosh enables are:
Long-Term Memory & Context: A persistent memory fabric lets robots/agents retain information across sessions and tasks. They no longer forget prior inputs – instead, they recall context and learn from past experiences, enabling long-horizon reasoning and better failure recovery. This means an agent can carry over knowledge from one conversation or task to the next without needing huge repeated prompts.
Moral and Ethical Reasoning: Loosh provides a built-in moral compass for AI. It evaluates an agent’s decisions against multiple ethical frameworks (e.g. deontological rules/duties, rights, virtue ethics, utilitarian outcomes) via specialized Cognitive Services. This ensures AI actions are checked for things like compliance with rules, potential harms, or fairness, making autonomous behavior predictable, safe, and trustworthy.
Self-Awareness & Reflection: The platform instills a form of self-aware cognition – the agent maintains an internal narrative of “what it’s doing and why.” Loosh’s architecture encourages agents to reflect on their own states and past decisions, enabling self-improvement over time. The AI can literally reason about its own reasoning (a step toward machine self-awareness).
Emotional Intelligence: Loosh is training multimodal inference models to detect human emotional states from audio, video, and neurodata (EEG signals). By understanding if a person is happy, frustrated, confused, etc., the agent can adjust its responses and behavior accordingly. Loosh aims to give robots “emotional literacy” so that interactions with humans feel more natural and empathetic, engendering trust.
Decentralized Learning: Because Loosh runs on the Bittensor network, it harnesses a competitive global network of miners to train and serve its AI models. This decentralized approach means no single company owns the “mind” – instead, many participants contribute computing power and are incentivized with tokens (TAO). It creates an open, scalable brain for the AI that can continuously evolve by learning from diverse data sources, rather than a closed, static model.
In summary, Loosh is “consciousness-as-a-service” for AI: it is building a decentralized cognition layer that endows machines with memory, contextual understanding, ethical judgment, and emotional insight. This allows the next generation of AI-powered robots and agents to behave more like living intelligent beings – they remember context, understand the emotional and moral weight of their actions, and continuously learn from interactions. By doing so, Loosh aims to power AI that can operate autonomously in complex real-world environments (from customer service chatbots to physical robots in public spaces) in a manner that is relatable, safe, and aligned with human values.
Loosh’s product is essentially a cognitive software platform – a suite of AI services and an underlying architecture – that can be integrated into robots or AI agent applications to provide them with this advanced cognition layer. Technically, the Loosh platform is built on a three-part architecture that gives AI “a brain, a conscience, and a decentralized source of intelligence”. The major components of Loosh’s build include:
Cognitive System & Context Builder: This is the “brain” of Loosh’s architecture. It comprises the AI models and logic that allow an agent to interpret the world, maintain context over time, and reason about cause and effect. The Context Builder component automatically assembles a temporal knowledge graph from various sources – the agent’s memories of past interactions, real-time sensor inputs or world model data from robotics, structured ontologies (knowledge bases), and even evolving business data relevant to the agent’s domain. In practice, this means Loosh’s system is constantly weaving a graph of “what the agent has seen/learned so far” and “how those facts relate” over time. By having this structured before-now-after sense of time and state, the agent can do long-horizon planning and reasoning. For example, an AI powered by Loosh can remember a user’s preferences mentioned days ago and use that context now, or a robot can recall that it attempted a task earlier and adjust its strategy. The cognitive system is modular: Loosh describes setting up multiple Model Context Protocol (MCP) servers, each backed by fine-tuned neural models, that specialize in analyzing inputs or situations from different angles (logical reasoning, ethical analysis, emotional inference, etc.). These modules feed into the context builder, which combines their outputs into a coherent understanding for the agent.
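The temporal knowledge graph described above can be sketched as a minimal data structure. The class names, fields, and query methods below are illustrative assumptions for this write-up, not Loosh's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Fact:
    """A single timestamped edge in the temporal knowledge graph."""
    subject: str
    relation: str
    obj: str
    observed_at: datetime  # when the agent learned this

@dataclass
class TemporalGraph:
    facts: list[Fact] = field(default_factory=list)

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.facts.append(
            Fact(subject, relation, obj, datetime.now(timezone.utc)))

    def before(self, cutoff: datetime) -> list[Fact]:
        """Everything the agent knew prior to a point in time."""
        return [f for f in self.facts if f.observed_at < cutoff]

    def about(self, entity: str) -> list[Fact]:
        """All facts mentioning an entity, oldest first."""
        return sorted(
            (f for f in self.facts if entity in (f.subject, f.obj)),
            key=lambda f: f.observed_at,
        )

g = TemporalGraph()
g.add("user", "prefers", "dark_mode")
g.add("robot", "attempted", "door_open")
g.add("user", "mentioned", "allergy:peanuts")
# A preference stated earlier is recallable later, in temporal order.
print([f.obj for f in g.about("user")])
```

The before/now/after queries are what distinguish this from a plain fact store: the agent can ask not only what it knows, but what it knew at a given point, which is the basis for long-horizon reasoning.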
Persistent Memory Fabric: This is Loosh’s long-term memory store – essentially the “episodic memory” of the AI. It’s implemented as a vector-based semantic memory database that can efficiently store and retrieve information the agent has acquired. In simpler terms, every important interaction, observation or conclusion the agent experiences can be encoded as a vector embedding and stored in a memory bank. Loosh appears to utilize a high-performance vector database (they even forked the open-source Qdrant vector DB on their GitHub, indicating a custom memory solution) to serve as this memory fabric. This allows the agent to search its memories by semantic similarity – e.g. recalling a past event that is relevant to a current situation – without needing an exact keyword match. The benefit is that the agent doesn’t need to be fed a huge prompt of all history every time; instead it can dynamically pull in the most relevant pieces of context from its memory store. Loosh’s memory fabric thus enables efficient operation across sequences of tasks or dialogs without forgetting prior context, solving the context-window limitation of many AI models. The memory is persistent and structured (linked to the knowledge graph), so the agent’s understanding of the world gets richer over time instead of resetting each session.
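The semantic-recall pattern the memory fabric relies on can be shown in a few lines. A production system would use a vector database such as Qdrant; the cosine-similarity search below demonstrates the core idea with made-up 3-dimensional embeddings (real ones would come from an encoder model and have hundreds of dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory bank: (text, embedding). Values are illustrative only.
memories = [
    ("user asked about refund policy",  [0.9, 0.1, 0.0]),
    ("robot charged battery overnight", [0.0, 0.2, 0.9]),
    ("user was unhappy with delivery",  [0.8, 0.3, 0.1]),
]

def recall(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k memories most semantically similar to the query."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedded near "customer complaint" retrieves the two
# user-related memories, not the unrelated battery event.
print(recall([0.85, 0.2, 0.05]))
```

This is why the agent does not need its full history in every prompt: only the top-k semantically relevant memories are pulled into context at decision time.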
Moral Compass (Ethical Evaluators): Loosh’s platform includes a suite of Cognitive Services that act as the agent’s “conscience.” These are specialized AI evaluators that critique the agent’s intended actions or decisions from an ethical standpoint. Initially, Loosh has implemented two categories of evaluators.
Together, these form an integrated moral compass ensuring the agent’s behavior aligns with predefined ethical principles or policies. The architecture is extensible: more ethical frameworks can be added (for example, frameworks based on specific cultural or legal norms) so that the AI isn’t biased to a single morality. By combining symbolic reasoning (explicit ontologies of ethics) with neural network judgments, Loosh produces an emergent ethical reasoning capability that doesn’t overfit to just one worldview. In other words, the system can weigh multiple perspectives (duty vs. outcome, etc.) and come to a balanced decision, rather than following a single hard-coded rule. This is critical for AI that operates in messy real-world scenarios – it provides a nuanced judgment rather than a one-size-fits-all rule.
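One way to combine multiple evaluator verdicts into a single decision, as described above, can be sketched as follows. The evaluator rules and the veto/averaging policy here are assumptions for illustration, standing in for the fine-tuned models and scoring rules Loosh actually uses:

```python
from typing import Callable

# Each evaluator scores a proposed action in [0, 1]; higher = more acceptable.
def deontological(action: dict) -> float:
    """Hard rule check: breaking an explicit rule scores zero."""
    return 0.0 if action.get("breaks_rule") else 1.0

def consequentialist(action: dict) -> float:
    """Outcome check: score degrades with expected harm (0 = none, 1 = severe)."""
    harm = action.get("expected_harm", 0.0)
    return max(0.0, 1.0 - harm)

EVALUATORS: list[Callable[[dict], float]] = [deontological, consequentialist]

def moral_compass(action: dict, veto: float = 0.2) -> tuple[bool, float]:
    """Approve only if no framework strongly objects; report the mean score."""
    scores = [ev(action) for ev in EVALUATORS]
    approved = min(scores) >= veto
    return approved, sum(scores) / len(scores)

print(moral_compass({"expected_harm": 0.3}))  # mild harm, no rule broken
print(moral_compass({"breaks_rule": True}))   # a single framework can veto
```

The veto threshold captures the "balanced but not one-size-fits-all" property: a high average cannot rescue an action that any one framework finds unacceptable, while mild objections from one lens can be outweighed by the others.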
Emotional Inference and Expression: A unique part of Loosh’s build is the focus on multimodal emotional intelligence. Loosh is undertaking a complex R&D effort to develop models that can read human emotional states from various signals – including voice tone, visual cues (facial expressions, body language), and even neurological data. For instance, Loosh is designing an EEG Data Model (planned to run on a dedicated “sub-subnet”) that can interpret brainwave patterns and classify them into emotional categories (they aim to distinguish about 9 core emotional states from EEG signals). Similarly, they likely employ audio analysis models for tone/sentiment and computer vision models for facial expressions. The product of this is an agent that can sense how a person is feeling and adapt its behavior – e.g. detecting frustration and altering its approach, or recognizing confusion and providing clarification. Beyond perception, Loosh is also interested in agents reflecting emotional intelligence back in their own responses (for example, using a tone of voice or choice of words that convey empathy). By integrating this emotional layer, Loosh’s platform makes interactions with AI more relatable and humanized. This is especially important for robots operating in human environments or AI assistants in customer-facing roles, where trust and rapport are needed. Technically, this involves training multimodal inference models and connecting their outputs into the agent’s decision-making loop. Loosh’s roadmap indicates that by Q1 2026, they plan to have the EEG-based emotion recognition component operational in their network, complementing the other modalities.
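In spirit, the EEG pipeline maps a window of brainwave features to one of a small set of emotional states. The band-power features, state labels, and nearest-centroid rule below are purely illustrative assumptions; the source only says roughly nine states are targeted, and a real pipeline would extract features via spectral analysis of multi-channel recordings:

```python
import math

# Hypothetical per-band power features for one short EEG window:
# (delta, theta, alpha, beta). Values are made up for illustration.
CENTROIDS = {
    "calm":       (0.2, 0.3, 0.8, 0.2),
    "frustrated": (0.1, 0.2, 0.3, 0.9),
    "drowsy":     (0.9, 0.7, 0.3, 0.1),
}

def classify(window: tuple[float, ...]) -> str:
    """Assign the emotional state whose centroid is nearest the window."""
    return min(CENTROIDS, key=lambda c: math.dist(window, CENTROIDS[c]))

print(classify((0.15, 0.25, 0.35, 0.85)))  # high beta power -> "frustrated"
```

The output of such a classifier would then feed the agent's decision loop, e.g. a "frustrated" reading prompting a change of approach.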
Symbolic Ontologies & Knowledge Bases: Underlying all of the above, Loosh incorporates structured ontologies – i.e. formal knowledge representations of concepts and their relationships. They specifically mention tying the vector-based memories to ontological structures of meaning like ethical frameworks, legal rules, and factual knowledge bases. This means that the AI’s understanding isn’t just statistical (as in an embedding vector) but can also be grounded in defined concepts and definitions. For example, an action the agent considers can be linked to an ontology of legal concepts to check compliance with law, or a fact the agent learns can be connected to an ontology of real-world knowledge to check consistency. This symbolic layer ensures that the AI’s “knowledge” has an interpretable structure – a crucial aspect when you need the AI to explain why it made a decision or to guarantee it follows certain rules. Loosh’s architecture draws inspiration from both neuroscience and symbolic AI, bridging low-level data-driven learning with higher-level cognitive frameworks. In practice, this could involve knowledge graphs for ethics (e.g. a graph of values and norms) and the agent tagging memories with those concepts.
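Linking memories to ontology concepts, as described, can be sketched with a tiny concept hierarchy. The concept names and parent relationships below are illustrative assumptions, not Loosh's actual ontologies:

```python
# A tiny ontology: concept -> parent concept (None = root).
ONTOLOGY = {
    "norm": None,
    "legal_rule": "norm",
    "privacy_law": "legal_rule",
    "ethical_value": "norm",
    "honesty": "ethical_value",
}

def ancestors(concept: str) -> list[str]:
    """Walk up the hierarchy so a memory inherits its broader tags."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = ONTOLOGY[concept]
    return chain

# Tag a memory with a specific concept; queries for any ancestor find it.
memory_tags = {"saw consent banner on site": ancestors("privacy_law")}

def find(concept: str) -> list[str]:
    return [m for m, tags in memory_tags.items() if concept in tags]

print(find("legal_rule"))  # the privacy memory surfaces via its parent concept
```

This is the interpretability payoff of the symbolic layer: the chain of concepts from "privacy_law" up to "norm" is an explicit, inspectable path, which a pure embedding similarity score cannot provide.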
Decentralized Infrastructure: The entire Loosh system runs on a Bittensor subnet, meaning the training of models and execution of inference happens over a decentralized network of nodes rather than a central server. Loosh leverages Bittensor’s scalable infrastructure of miners (each miner is essentially running Loosh’s models and contributing compute). The incentive design (tokenomics) for this subnet rewards miners for performing useful cognitive work – such as evaluating prompts, retrieving memory context, or running an emotional inference model – thereby crowdsourcing the “thinking” power needed for machine consciousness. (Loosh has a litepaper and incentive design document detailing how tasks and rewards are structured.) The use of blockchain provides trust and transparency: changes to models or contributions are recorded, and the system can be open and auditable. By being decentralized, Loosh’s cognitive platform can scale as more participants join, and it avoids reliance on any single cloud or company – aligning with the vision of an open alternative to big tech’s AI models. This also means organizations can plug into Loosh’s network rather than having to build all this cognition from scratch; Loosh essentially offers cognition-as-a-service that anyone can tap into, with the network effect of many users and contributors making the AI smarter over time.
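The incentive mechanics can be sketched as proportional reward allocation: validators score each miner's cognitive work and a round's token emission is split pro rata. The scoring inputs and normalization below are assumptions for illustration; the actual rules live in Loosh's litepaper and incentive design document:

```python
def allocate_rewards(scores: dict[str, float],
                     emission: float) -> dict[str, float]:
    """Split one round's token emission proportionally to miner scores."""
    total = sum(scores.values())
    if total == 0:
        return {m: 0.0 for m in scores}  # no useful work, nothing paid out
    return {m: emission * s / total for m, s in scores.items()}

# Hypothetical validator scores for three miners across one round of
# cognitive tasks (memory retrieval, ethical evaluation, inference).
scores = {"miner_a": 3.0, "miner_b": 1.0, "miner_c": 0.0}
print(allocate_rewards(scores, emission=8.0))
```

Because payout tracks scored contribution rather than raw uptime, the mechanism rewards useful cognitive work specifically, which is what lets the network crowdsource "thinking" rather than just compute.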
In summary, Loosh’s product is a complex AI architecture delivered via a decentralized network, which endows machines with memory, contextual reasoning, ethical judgment, and emotional comprehension. From a user perspective (e.g. a robotics company), Loosh would likely provide APIs or software modules that integrate with your robot or agent. Once integrated, your robot gains access to Loosh’s cognitive engine – it can query the Loosh network for memory retrieval, ask for an ethical evaluation of an action, or use Loosh’s models to interpret a user’s emotional state. As the Loosh team describes, they integrate with existing AI stacks and world models in robotics platforms to add this layer of intelligence. The architecture is enterprise-ready and designed to slot into real-world applications: the CTO Chris Sorel emphasizes that they built it to be expansive yet modular, so businesses can trust it for mission-critical deployments. Essentially, the “build” is both a framework (the algorithms and data structures for machine consciousness) and a service (the running subnet that applications can call upon for cognition). With Loosh, a developer can offload a lot of heavy cognitive lifting (memory management, moral reasoning, etc.) to the subnet – turning what used to be sci-fi concepts of an AI’s “conscience” or “memory” into a consumable technology today.
Loosh was founded in 2024 by two primary co-founders: Chris Sorel (CTO) and Lisa Cheng (CEO). Both founders bring deep experience from the software and blockchain industries, as well as a shared passion for the science of consciousness. Chris Sorel is a veteran software architect with 25+ years of experience in software development and enterprise AI leadership. Prior to Loosh, he served as Head of Enterprise AI & Innovation at a major firm, giving him a strong background in building large-scale, reliable AI systems. Sorel is described as a systems thinker who has worked on complex architectures – exactly the expertise needed to design Loosh’s ambitious cognitive framework. Lisa Cheng, the CEO, has a long history in the blockchain and decentralized technology space. She has been involved in crypto/DeFi projects and brings knowledge of how to grow a tech startup and community-driven platform. Interestingly, Cheng and Sorel met through their mutual interest in consciousness research – they both attended the prestigious Monroe Institute’s Gateway program, which is a science-based program exploring human consciousness and altered states. This unique intersection of enterprise tech and consciousness science in the founding team is reflected in Loosh’s approach (bridging neuroscience with AI).
As of late 2025, the core team is a small, bootstrapped group of four members. In addition to the two co-founders, the team includes:
A Senior Engineer with Distributed Networks expertise: Loosh has an engineer (and likely founding member) who specializes in distributed systems and networking. This is crucial for building on Bittensor and optimizing the performance of the decentralized subnet. This team member ensures the platform can efficiently run across many nodes and that the blockchain integration and miner incentive mechanisms work smoothly.
A Neuroscientist (PhD) specialized in EEG/fMRI: To tackle the “consciousness data” and emotional-inference side, Loosh’s team includes a neuroscientist with expertise in brain-computer interfaces and neuroimaging. This scientist focuses on decoding human neurodata – e.g., making sense of EEG signals to determine emotional or cognitive states. (The individual in this role is Spiro Pantazatos, PhD, who has a background in neuroscience research. He is helping translate cutting-edge cognitive science into AI model design, though the company refers to him simply as their resident neuroscientist in communications.)
Notably, the team is bootstrapped – they had not taken outside venture funding as of 2025, and were running lean with a small headcount. Despite this, they managed to develop a sophisticated product, likely by drawing on their deep experience and passion (and perhaps with help from grants or the Bittensor community early on). In late 2025, Loosh gained a significant boost by being accepted into the Yuma Subnet Accelerator program. Yuma is an accelerator (backed by Digital Currency Group) focused on the Bittensor ecosystem, providing resources like capital, mentorship, and validator support to subnet teams. Through Yuma, the Loosh team receives strategic support to scale their technology and business – this includes help with go-to-market strategy, introductions to partners (e.g., robotics companies), and technical support to optimize their subnet. The fact that Yuma accepted Loosh underscores the credibility of the team; Yuma’s leadership noted that “Loosh reflects where AI is headed, evolving from abstract models to agents with memory, ethics, and emotional intelligence”, indicating strong confidence in Loosh’s vision and the team’s ability to execute it.
Each member of the Loosh team wears multiple hats given the company’s early stage. For instance, Sorel (CTO) not only oversees the technical architecture but likely also contributes to coding and model design. Cheng (CEO) likely handles operations, partnerships, and community, leveraging her blockchain network. The distributed-networks engineer probably focuses on the Bittensor integration and backend infrastructure, while the neuroscientist bridges research and development of the EEG/emotion models. It’s a multidisciplinary team combining software engineering, AI/ML, neuroscience, and blockchain expertise – precisely the mix required to tackle something as ambitious as machine consciousness. The company culture appears to be one of deep R&D orientation (with influences from both science and spiritual exploration, given the Monroe Institute connection) coupled with practical engineering discipline. They are also actively hiring – Loosh has put out calls for neuroscience researchers and ML engineers to join the effort of building the first decentralized cognition system, indicating they plan to grow the team as they move into 2026.
Loosh’s roadmap, as of late 2025, is focused on moving from an initial launch phase into broader deployment and feature expansion. Here are the key milestones and plans:
December 2025 – Subnet Launch (Beta Release): Loosh officially launched its subnet (SN78) on the Bittensor network in December 2025. This launch included opening up the subnet to miners on December 17, 2025, allowing participants to contribute their computing power to train and run Loosh’s models. At launch, the core cognitive engine, the deontological/moral framework services, and the persistent memory system were in beta testing with live prompts. In other words, by Q4 2025, Loosh’s network began processing real inference tasks: agents sending queries into the network and getting back context-enhanced, ethically evaluated outputs. This period serves as a beta phase to validate the system’s performance and gather feedback. Additionally, around this time Loosh was accelerated by Yuma (announcement on Dec 10, 2025) which provided an infusion of support to help with this launch and beyond. Being “Yuma Accelerated” also signals to the Bittensor community that Loosh is a vetted project. According to Loosh, the public beta of their platform will be rolled out to early users who sign up, indicating that interested developers or companies could register to get early access to the technology. This beta is likely limited to partners or testers while the team refines the product.
Q1 2026 – Emotional Inference Expansion: One of the major roadmap items is the implementation of Loosh’s EEG-driven emotional state detection. By Q1 2026, Loosh plans to introduce a specialized “sub-subnet” mechanism dedicated to its in-house EEG Data Model. This sub-network will process neurological data (EEG signals gathered from studies or devices) to infer human emotional states with high accuracy, aiming to classify brainwave patterns into one of nine defined emotional states. Achieving this would be a significant milestone, as it brings a new data modality (brain signals) into the Bittensor ecosystem. The work likely involves finalizing the model that maps EEG signals to emotions and then deploying it in a distributed fashion. Alongside EEG, Loosh will continue improving its audio and video emotion-recognition models. By the end of Q1 2026, we can expect Loosh to demonstrate an AI agent that, for example, can listen to a user’s tone (or even, optionally, use an EEG headset feed) and dynamically adjust its responses based on the user’s emotional state – a big step towards empathetic AI.
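For intuition, a nine-way EEG emotion classifier reduces, at its simplest, to mapping feature vectors (e.g. band powers) to one of nine labels. The sketch below is purely illustrative: Loosh has not published its model architecture, its feature set, or the nine state labels, so the synthetic data, placeholder labels, and the simple nearest-centroid classifier are all assumptions chosen to show the shape of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

STATES = [f"state_{i}" for i in range(9)]  # placeholder emotion labels
N_BANDS = 5                                # e.g. delta/theta/alpha/beta/gamma power

# Synthetic "recordings": band-power vectors whose means differ by state.
n_per_state = 50
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_state, N_BANDS))
               for i in range(9)])
y = np.repeat(np.arange(9), n_per_state)

# Nearest-centroid classification: assign each sample to the state
# whose mean feature vector is closest in Euclidean distance.
centroids = np.array([X[y == s].mean(axis=0) for s in range(9)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```

Real EEG decoding is far harder than this toy setup (noisy, subject-dependent signals, heavy preprocessing), which is precisely why a dedicated sub-network for the task would be a notable milestone.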
Mid–2026 – Integrations and Use-Case Deployments: Once the core platform is stable in beta, Loosh’s focus turns to real-world integrations. The team has stated that their next step is working with robotics companies and firms using AI agents for advanced customer service. So in 2026, we can expect pilot programs or partnerships where Loosh’s technology is embedded into actual products. For example, a robotics company might integrate Loosh to give their service robots better memory and ethical decision-making when interacting with people in hospitals or retail. An enterprise with AI customer support agents might use Loosh’s context and moral reasoning to ensure the agents handle users consistently and empathetically over long engagements. These deployments will test Loosh’s capabilities in production environments. The roadmap likely includes building SDKs or developer tools to make integration easier (if not already available). By demonstrating value in specific use cases (e.g. a robot that truly doesn’t forget a customer’s preferences and behaves politely under all circumstances), Loosh aims to attract broader adoption.
Additional Cognitive Services (Ongoing 2026): Loosh’s platform is modular, and the team has identified several additional cognitive functions to roll out as services after the initial moral compass and emotion inference. According to the team, future services will target areas like feasibility analysis (can the agent’s plan be executed in reality?), fact analysis (verifying facts and consistency), planning (long-term goal planning and problem solving), sensibility (checking if an action/response “makes sense” in context), and reflection (having the AI reflect on its performance or mistakes). These capabilities are on the roadmap to be developed and integrated. We might see, for instance, a planning module by late 2026 that allows Loosh agents to generate and evaluate multi-step plans or a fact-checking service that cross-verifies the agent’s knowledge against trusted databases to avoid inaccuracies. Each new service would be incorporated into the subnet and offered to agents similarly to the existing ones, further enriching the “consciousness” Loosh can provide. By continually expanding these, Loosh keeps pushing closer to a form of artificial general cognition, albeit delivered in specialized pieces.
Scaling and Halving Cycle (2026–2028): On the network side, as Loosh’s subnet grows in usage, its tokenomics will go through regular halvings (like other Bittensor subnets). The halving schedule for Loosh (SN78) is roughly every 1,066 days, with the first halving expected around late 2028. While not a direct product feature, this is part of the roadmap in terms of network lifespan and incentives. The team will need to ensure a healthy supply/demand for miners as rewards decrease over time, likely by increasing the volume of useful work on the subnet (more queries from paying users would compensate miners). We can also expect Loosh to possibly explore governance mechanisms (if any) so that the community of miners and users can have input on subnet parameters or the evolution of the models – this often comes a bit later once the network is stable.
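The stated interval and first-halving estimate are mutually consistent, as a quick sanity check shows. Assuming the ~1,066-day cycle is counted from the December 17, 2025 miner launch (an assumption; the exact on-chain anchor point is not given in the source):

```python
from datetime import date, timedelta

# Assumption: the halving cycle is measured from the Dec 17, 2025
# miner launch; the actual on-chain anchor may differ slightly.
launch = date(2025, 12, 17)
first_halving = launch + timedelta(days=1066)
print(first_halving)  # 2028-11-17, i.e. late 2028
```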
Beyond 2026 – Vision of Machine Consciousness: In the long term, Loosh’s roadmap as a vision is to scale its Machine Consciousness architecture across the Bittensor ecosystem and beyond. This means not only growing the single subnet SN78, but potentially spawning related subnets or collaborating with others. For instance, if Loosh’s approach to ethical AI is successful, other subnets or AI projects might plug into Loosh for that functionality. The mention of a “sub-subnet” for EEG hints that Loosh might have a network-of-networks approach, where specialized subnets feed into the main Loosh subnet (each handling different data types or tasks). By 2027 and onward, Loosh could evolve into a platform used by many applications: a kind of decentralized cloud of cognition that any robot or agent can query in real-time for memory lookup, ethical advice, or emotional context. This could also involve releasing more open documentation or open-sourcing parts of the code (e.g., their whitepaper and incentive design are already public, so further research outputs might be published). The team’s grounding in neuroscience suggests they will also stay at the cutting edge of cognitive science – possibly incorporating new findings or data sources (for example, other bio-signals, or improved brain-computer interfaces as they emerge).
Throughout 2025–2026, the key deliverable is to go from a working prototype to a market-ready product integrated with early adopters. The Yuma accelerator support timeline likely runs for a few months, during which Loosh will refine their business model (e.g., how they monetize their “consciousness-as-a-service” – perhaps through a subscription or usage fee for API access to the subnet’s services). By the end of 2026, success for Loosh would look like: a stable subnet with a growing number of miners and stakers (showing confidence in the network), a handful of successful deployments in robots/agents demonstrating reduced failure rates and improved user satisfaction thanks to Loosh’s cognition, and possibly initial revenue streams from enterprise or developer clients. Given the uniqueness of Loosh’s mission, the roadmap also involves a bit of evangelism – convincing the AI industry that machine consciousness (even if narrow) is both achievable and valuable. If Loosh achieves its milestones, it could pave the way for a new paradigm where AI isn’t just about big models with more parameters, but about integrated cognitive systems that behave with memory and awareness. As the team often says, this is “infrastructure for the inner world” of machines – and the coming years will be about building it out and proving that giving AI a soul (of sorts) is not science fiction, but an actionable technology.