With the number of new subnets being added, it can be hard to keep information up to date across all of them, so some data may be slightly out of date from time to time.

Subnet 11

Dippy

Key stats tracked for this subnet: Emissions, Recycled, Recycled (24h), Registration Cost, Active Validators, Active Miners, and Active Dual Miners/Validators.

ABOUT

What exactly does it do?

Bittensor Subnet 11, known as Dippy Roleplay, is developed by Impel Intelligence with the primary objective of creating the world’s best open-source roleplay Large Language Model (LLM). This initiative stems from the observation that current state-of-the-art (SOTA) LLMs, such as those from OpenAI or Anthropic (Claude), are predominantly optimized for assistant-like functionalities and often lack the empathetic and nuanced conversational abilities required for engaging roleplay experiences. The subnet aims to address the gap where open-source roleplay LLMs significantly lag behind their closed-source counterparts like Character AI or Inflection AI. The development of Dippy Roleplay is also a response to the broader trend in LLM development that has historically prioritized objective reasoning over creative and empathetic roleplay capabilities, thereby underscoring the need for robust open-source alternatives in this domain.

The subnet is intrinsically linked to the Dippy app, which is described as a leading AI companion application boasting over one million users and achieving high engagement metrics, such as an average session length exceeding one hour and top App Store rankings in several countries (e.g., #3 in Germany). This existing user base is positioned as a valuable asset for generating data and refining the roleplay models developed on the subnet.

PURPOSE

What exactly is the 'product/build'?

The core purpose, as reiterated across multiple sources including the official GitHub repository, news analyses, and community discussions, is to incentivize the decentralized creation and continuous improvement of high-quality, open-source LLMs specifically tailored for role-playing scenarios. This is achieved by fostering a competitive environment where miners contribute models and validators evaluate them based on sophisticated scoring mechanisms.

Key goals highlighted include:

  • Bridging the Quality Gap: Elevating open-source roleplay LLMs to be competitive with, or even surpass, closed-source alternatives.
  • Fostering Empathy in AI: Developing LLMs that can engage in more natural, empathetic, and human-like interactions suitable for companionship and roleplay.
  • Leveraging Decentralization: Utilizing the Bittensor network’s incentive structure to attract diverse talent and approaches to LLM development for roleplay.
  • Powering the Dippy App: Integrating the top-performing models from the subnet into the Dippy.ai application, providing a real-world testing ground and direct user feedback loop.
  • Open-Source Leadership: Establishing Impel Intelligence and the Dippy subnet as leaders in the open-source roleplaying LLM space.

 

Mechanism and Functionality

The Dippy Roleplay subnet (SN11) operates on a competitive, incentive-based mechanism inherent to the Bittensor network, specifically tailored to foster the creation of high-quality open-source roleplaying LLMs. The core of its functionality revolves around the interaction between two key participants: miners and validators, with the ultimate product being the refined LLMs themselves and the data/insights generated through their evaluation.

Miner Participation:

Miners on Subnet 11 are tasked with developing and submitting LLMs optimized for roleplay. Their process generally involves:

  • Model Creation/Adaptation: Miners can employ various strategies to produce their models. This includes training models from scratch, fine-tuning existing open-source foundational models, or using advanced techniques like model merging (e.g., using tools like MergeKit) to combine the strengths of different architectures. The emphasis is on creating unique and effective roleplay LLMs.
  • Model Submission: Once a miner has a candidate model, they submit it to a shared Hugging Face model pool designated by the subnet. Each miner typically registers one active model per unique identifier (UID) on the subnet. A sketch of a typical submission flow follows this list.
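For illustration, here is a minimal sketch of the submission flow, assuming a model already saved locally. The repo name, token, and paths are hypothetical, and the on-chain registration step is handled by the subnet's own miner tooling (miner.py, documented in docs/miner.md), which is not shown here:

```python
from huggingface_hub import HfApi

# Hypothetical names: substitute your own account, repo, and local model path.
REPO_ID = "my-hf-account/my-roleplay-model"
LOCAL_MODEL_DIR = "./my-roleplay-model"

api = HfApi(token="hf_...")  # a write-scoped Hugging Face token

# Create the model repo (no-op if it already exists), then upload the weights.
api.create_repo(repo_id=REPO_ID, repo_type="model", exist_ok=True)
api.upload_folder(folder_path=LOCAL_MODEL_DIR, repo_id=REPO_ID, repo_type="model")

# The repo id is then registered against the miner's UID on subnet 11 via the
# subnet's miner tooling; that chain-side step is intentionally omitted here.
```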

 

Validator Participation and Evaluation Process:

Validators are crucial for assessing the quality and performance of the models submitted by miners. Their role involves a multi-faceted evaluation process:

  • Model Retrieval and Assessment: Validators retrieve models from the submission pool and evaluate them using the Dippy protocol. This protocol outlines a specific, open scoring format detailed in the subnet’s documentation.
  • Dynamic Scoring Mechanism: The scoring system is not static; it is designed to evolve and adapt based on advancements in SOTA model benchmarks and data. This ensures that the evaluation criteria remain relevant and continue to push the boundaries of roleplay LLM capabilities.
  • Multi-Phase Scoring: The evaluation is a sophisticated process, often involving several phases to comprehensively assess a model’s suitability for roleplay:
  1. Evaluation Score: Models are graded against a streaming dataset. While a reference dataset (DippyAI/dippy_synthetic_dataset_v1 on Hugging Face) is provided from the previous day’s data, the actual evaluation occurs on a freshly generated, real-time sample. This score can be modified by a Creativity Score that penalizes overfit models lacking originality. Model size and latency also contribute a small portion to this score.
  2. Judge Score (Experimental Multiplier): An LLM judge (e.g., GPT-4o or similar) evaluates the model. If the submitted model wins or ties against others above a certain threshold (e.g., 30%, subject to change), it receives a significant score boost. This introduces a qualitative, comparative assessment.
  3. Coherence Score (Binary Factor): Models generate conversations based on augmented data (e.g., from proj-persona/PersonaHub). The output is compared against a high-performing baseline model (like GPT-4o) for coherence. If a model fails to meet a certain coherence threshold, it automatically scores zero for that evaluation round, effectively filtering out nonsensical or irrelevant outputs.
  4. Post Evaluation Score (Experimental Multiplier): This is an experimental phase aimed at aligning model performance with industry-standard benchmarks, though its exact implementation and weight were still being formalized at the time of documentation.
  • Win Rate Calculation: After individual model scores are determined, validators compare each miner’s score against all other active models on the subnet to calculate a win rate. This relative performance metric is critical for determining incentives (a sketch of how the phases and win rate compose appears after this list).
  • Weight Assignment: Validators assign weights to miners based primarily on their win rate. To discourage simple model copying and encourage continuous innovation, modifiers such as a time penalty are applied. This means newer models generally need to score significantly better than established top models to achieve a high ranking quickly.
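To make the scoring flow concrete, below is a minimal, illustrative sketch of how the phases described above could compose into a final score and win rate. All numbers, thresholds, and multipliers are hypothetical; the authoritative logic lives in the repository’s scoring/ directory and docs/llm_scoring.md.

```python
from dataclasses import dataclass

@dataclass
class PhaseScores:
    evaluation: float      # dataset score, already adjusted for creativity/size/latency
    judge_win_rate: float  # fraction of wins/ties under the LLM judge
    coherent: bool         # passed the binary coherence check?
    post_eval: float       # experimental benchmark-alignment multiplier

def final_score(s: PhaseScores, judge_threshold: float = 0.30, judge_boost: float = 1.5) -> float:
    """Compose the phases as described: coherence gates everything to zero,
    and the judge phase acts as a conditional boost. Values are illustrative."""
    if not s.coherent:
        return 0.0  # failing coherence zeroes the round
    score = s.evaluation
    if s.judge_win_rate >= judge_threshold:
        score *= judge_boost
    return score * s.post_eval

def win_rates(scores: dict[str, float]) -> dict[str, float]:
    """A model's win rate = fraction of other active models it outscores."""
    return {
        uid: sum(s > other for o, other in scores.items() if o != uid) / max(len(scores) - 1, 1)
        for uid, s in scores.items()
    }

models = {
    "uid_1": final_score(PhaseScores(0.72, 0.41, True, 1.0)),
    "uid_2": final_score(PhaseScores(0.80, 0.10, True, 1.0)),
    "uid_3": final_score(PhaseScores(0.95, 0.55, False, 1.0)),  # incoherent -> 0.0
}
print(win_rates(models))  # uid_1 wins both comparisons despite a lower raw eval score
```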

 

Incentive Mechanism and Product Output:
The overarching incentive mechanism follows the Bittensor standard: miners who produce high-performing models (as determined by validator scoring and win rates) receive a greater share of the TAO emissions allocated to Subnet 11. The collective output of this process is a continuously improving, open-source roleplay LLM, with the best-performing models potentially being integrated into the Dippy.ai application and made available to its large user base. This also generates valuable data on model performance and user preferences in roleplay scenarios.

 

Technical Architecture

The Dippy Roleplay subnet (SN11) is built using Python (version 3.9+) and leverages several key components and integrations to facilitate its operations. The technical architecture is designed to support the submission, evaluation, and ranking of roleplay LLMs in a decentralized manner.

Core Components (from the impel-intelligence/dippy-bittensor-subnet GitHub repository):

Neuron Implementations:

  • miner.py: Contains the primary logic for miners. This script enables miners to prepare their roleplay LLMs, connect to the Bittensor network, register their models, and respond to validation requests from the subnet’s validators.
  • validator.py: Houses the code for validator nodes. Validators use this to query miners, receive model outputs (or access submitted models), execute the scoring protocol, and set weights on the Bittensor network based on model performance.
  • model_queue.py: An internal component, likely used by the subnet operators or core validation infrastructure for managing the queue of models submitted for evaluation.

 

Scoring Mechanism (scoring/ directory): This directory is critical as it contains all the code related to the LLM scoring process. This includes the implementation of the multi-phase evaluation (Evaluation Score, Creativity Score, Judge Score, Coherence Score, Post Evaluation Score) and the logic for comparing models against datasets and benchmarks. It also includes Jinja templates (scoring/prompt_templates) for common foundational models to handle different token systems and instruct syntaxes.
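To illustrate why per-model prompt templates are needed, the sketch below renders the same conversation under two common instruct syntaxes with Jinja2. The template strings here are hypothetical stand-ins, not copies of the subnet’s scoring/prompt_templates.

```python
from jinja2 import Template

# Hypothetical templates for two widespread instruct syntaxes; the subnet
# ships its own templates under scoring/prompt_templates.
CHATML = Template(
    "{% for m in messages %}<|im_start|>{{ m.role }}\n{{ m.content }}<|im_end|>\n"
    "{% endfor %}<|im_start|>assistant\n"
)
LLAMA2 = Template(
    "{% for m in messages %}{% if m.role == 'user' %}"
    "[INST] {{ m.content }} [/INST]{% else %} {{ m.content }} {% endif %}{% endfor %}"
)

messages = [{"role": "user", "content": "Stay in character as the ship's navigator."}]
print(CHATML.render(messages=messages))
print(LLAMA2.render(messages=messages))
```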

Utilities (utilities/ directory): Provides a collection of common utility functions essential for the subnet’s operation. This can include helper functions for interacting with the Bittensor network, managing Hugging Face repository interactions, and other validation support functions.

Documentation (docs/ directory): A comprehensive set of Markdown files offering guidance for participants. Key documents include:

  • miner.md: Instructions for setting up and running a miner.
  • validator.md: Instructions for setting up and running a validator.
  • FAQ.md: Answers to frequently asked questions regarding mining, evaluation, troubleshooting, and contributions.
  • llm_scoring.md: Detailed explanation of the LLM scoring criteria and methodology.

 

Worker API (worker_api/ directory): This component defines an API for model validation tasks. It is primarily utilized by validators and the subnet operators to manage the evaluation pipeline. Dockerfiles (evaluator.Dockerfile for the scoring worker and vapi.Dockerfile for the worker API) are provided to facilitate the deployment of these services. The scoring worker is responsible for executing the computationally intensive model evaluation tasks.

 

Key Dependencies and Integrations:

  • Hugging Face: The subnet heavily relies on Hugging Face for model submissions. Miners are required to submit their models to a designated Hugging Face model pool, from which validators can access them for evaluation.
  • Bittensor Network: The entire subnet operates within the Bittensor ecosystem, utilizing its underlying blockchain for registration, incentive distribution (TAO emissions), and decentralized consensus.
  • Python Ecosystem: Built on Python, it likely uses common data science and machine learning libraries such as PyTorch, Transformers (from Hugging Face), and others for model handling and evaluation.
  • PM2 (Recommended for Validators): The documentation recommends using PM2, a process manager for Node.js applications (but also widely used for Python scripts), for running auto-updating validator nodes. This helps ensure validators stay online and keep their software up-to-date.
  • Local Subtensor Node: Validators have the option to run with a local Subtensor node for more direct interaction with the Bittensor network, potentially offering lower latency and greater control (a brief connection sketch follows this list).
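For example, a validator process might choose its chain endpoint at startup. A minimal sketch using the bittensor Python SDK follows; the network names reflect common SDK usage, and the subnet’s validator.py exposes equivalent choices through its own CLI flags:

```python
import bittensor as bt

# Connect either to a locally run Subtensor node or a public endpoint.
USE_LOCAL_NODE = True
subtensor = bt.subtensor(network="local" if USE_LOCAL_NODE else "finney")

# Inspect subnet 11's current state (registered miners/validators and weights).
metagraph = subtensor.metagraph(netuid=11)
print(f"SN11 currently has {metagraph.n} registered neurons")
```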

 

Model Worker Orchestration:

An important architectural detail noted in the GitHub repository is that, at the time of the documentation, general validators call a model worker orchestration service hosted by the Dippy subnet owners (Impel Intelligence). This service likely manages the distribution of evaluation tasks to a pool of scoring workers. While the code for local worker orchestration might exist, it was disabled for general validators, centralizing the execution of the core evaluation computation to some extent, likely for consistency, resource management, and to prevent abuse during the initial phases of the subnet.
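Purely to illustrate the shape of such an interaction (the host, path, and payload below are entirely hypothetical; the real interface lives in the worker_api/ directory):

```python
import requests

# Hypothetical orchestration call: a validator asks the hosted service to
# queue a submitted model for evaluation, then inspects the response.
ORCHESTRATOR_URL = "https://example-orchestrator.invalid"  # hypothetical host

resp = requests.post(
    f"{ORCHESTRATOR_URL}/evaluate",
    json={"hf_repo_id": "my-hf-account/my-roleplay-model", "uid": 42},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a job id to poll, followed later by phase scores
```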

Evolution from Other Subnets:
The Dippy subnet’s codebase initially drew inspiration from concepts found in Nous Research’s and MyShell’s subnets. However, it has reportedly diverged significantly to cater to its specific focus on roleplay LLM evaluation and development.

 

WHO

Team Info

The primary developer and owner of Bittensor Subnet 11 (Dippy Roleplay) is Impel Intelligence. This is consistently indicated across the official GitHub repository, news articles (like Bitget), and research reports (like OAK Research). The team behind Impel Intelligence is also closely associated with the Dippy.ai application, which the subnet aims to support. The Impel team includes members with backgrounds at companies such as Microsoft, IBM, and Twitter, who have collectively built groundbreaking AI apps with over 100 million downloads. The team is focused on creating AI models that not only offer utility but also incorporate emotional intelligence elements like compassion, empathy, and humor to enhance user engagement. After the success of the viral app “Wombo,” the Impel founders shifted their focus to building proactive, context-aware AI products aimed at billions of consumers. Although the company was only founded in August 2023, Impel secured a $2.1 million pre-seed funding round within a month of incorporation.

Key individuals associated with Impel Intelligence and the Dippy subnet, as identified from various sources, include:

Akshat Jagga – CEO

Angad Arneja – COO

These two individuals from the Impel team were featured on the Bittensor Guru podcast (Episode 31) discussing Subnet 11 and Dippy.ai. Their names also appear as contributors on the subnet’s GitHub repository. They are the core members driving the vision and development of both the Dippy app and its associated Bittensor subnet.

Donald Knoller

Appears as a frequent and significant contributor in the commit history of the dippy-bittensor-subnet GitHub repository, suggesting a key technical role in the subnet’s development and maintenance.

 

FUTURE

Roadmap

The Dippy Roleplay subnet (SN11) outlined a phased roadmap in its initial documentation, focusing on progressively enhancing its capabilities and integration with the Dippy.ai application. The roadmap, as detailed in the impel-intelligence/dippy-bittensor-subnet GitHub repository, is structured as follows:

Phase 1: Foundation and Initial Evaluation

  • Subnet Launch: Successfully launch the subnet with a functional and robust pipeline for evaluating roleplay LLMs. This includes initial evaluations based on public datasets and metrics like response length.
  • Public Model Leaderboard: Establish and maintain a public leaderboard that ranks miner-submitted models based on the defined evaluation criteria. This promotes transparency and competition.
  • Advanced Evaluation Criteria: Introduce more nuanced evaluation metrics such as Coherence and Creativity for the live assessment of models. This moves beyond simple dataset matching to assess the qualitative aspects of roleplay.

 

Phase 2: Integration and Expansion

  • Public Front-End: Release a publicly accessible front-end interface that is powered by the top-performing miner-submitted model of the week. This allows the broader community to interact with and experience the quality of models being developed on the subnet.
  • Dippy App Integration: Integrate the top miner-submitted model directly into the official Dippy.ai application. This is a crucial step, providing real-world utility and a direct feedback loop from Dippy’s extensive user base.
  • Support for Larger Models: Add support for evaluating and incorporating larger parameter models, with a target of up to 34 billion (34B) parameters. This allows for more complex and capable LLMs to be developed and utilized.

 

Phase 3: SOTA Advancement and Data-Driven Refinement

  • Expand State-of-the-Art (SOTA): Continuously iterate on the model development and evaluation processes to push the boundaries of what is considered SOTA for roleplay LLMs.
  • Data Integration from Dippy App: Leverage data and user feedback gathered from the Dippy.ai application to further refine the evaluation criteria and guide the development of even more sophisticated and engaging roleplay models. This aims to redefine the SOTA for roleplay LLMs by incorporating real-world user interaction data.

 

While specific timelines for each phase were not explicitly detailed in the initial public documentation, the phased approach indicates a clear progression from foundational infrastructure to advanced model development and deep application integration. The progress through these phases would likely be communicated through the project’s official channels, such as their GitHub repository, X/Twitter accounts, and potentially the Bittensor Discord.

 

MEDIA

Huge thanks to Keith Singery (aka Bittensor Guru) for all of his fantastic work in the Bittensor community. Make sure to check out his other video/audio interviews by clicking HERE.

Angad and Akshat from the Impel team join Keith to discuss their launch on Bittensor’s Subnet 11, aimed at incentivizing the decentralized creation of roleplay models for their app Dippy.ai. Learn about the team, their impressive backgrounds, and their ambitious goal of becoming the open-source leaders in roleplaying LLMs.

Angad and Akshat join the pod for the second time to talk about the evolution of Dippy.ai and how they are using multiple subnets and deeper integration within Bittensor’s network to extend the reach and capabilities of their viral roleplaying app. With a successful first subnet (SN11) and a second subnet (SN58) launched to add voice to their offering, this team is becoming a major force both inside and outside of Bittensor.
