Fix Memori README Example: Data Not Stored

Alex Johnson

If you've been trying to get started with Memori and found that the example in the README.md file isn't quite working as expected, you're not alone! Many users have encountered a peculiar issue where, after successfully creating tables and seeing some conversation data, the core entity_fact and knowledge_graph tables remain frustratingly empty. This can be a real head-scratcher when you're eager to see your AI agent building a robust memory. Let's dive into why this might be happening and how we can get that example humming along.

Understanding the Problem: Empty Entity and Knowledge Tables

When you run the provided README.md example, you'll likely observe that your SQLite database (memori.db in this case) does get populated. You'll see entries in memori_conversation and memori_conversation_message, which confirms that the basic conversational aspects of Memori are functioning. However, the real magic of Memori lies in its ability to extract and store facts and build a knowledge graph from those conversations. The fact that the entity_fact and knowledge_graph tables are empty suggests that the process responsible for this deeper level of memory formation isn't being triggered or completed successfully. This is a critical part of Memori's functionality, as it's what allows your AI to recall specific details about entities and understand the relationships between them over time. Without data in these tables, Memori can't effectively learn and remember context beyond the immediate conversation.

The Role of Asynchronous Augmentation

The description in the README.md example itself provides a crucial hint: "Advanced Augmentation runs asynchronously to efficiently create memories. For this example, a short lived command line program, we need to wait for it to finish." This statement points directly to the likely culprit. Memori, for efficiency, often processes memory augmentation (the process of extracting facts, entities, and relationships) in the background. This means that after your initial chat completion response, the augmentation process might still be running. If your script finishes executing before this asynchronous process has a chance to complete and write its findings to the entity_fact and knowledge_graph tables, those tables will remain empty. In a long-running application, this might not be an issue, as the augmentation would eventually complete. However, in a short demonstration script like this one, the process can exit prematurely, leaving the augmentation incomplete.
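
To make that ordering concrete, here is a minimal sketch of the pattern the README example follows, using only the Memori calls quoted in this article (memori.config.storage.build(), memori.attribution(...), memori.augmentation.wait()). The import, the object construction, and the OpenAI wiring are placeholders rather than Memori's documented API, so follow the actual README for those parts.

    # Minimal sketch of the short-lived script pattern discussed above.
    # The three Memori calls are the ones quoted in this article; the
    # commented-out import/construction lines and the OpenAI wiring are
    # placeholders -- follow the actual README for the real versions.

    # from memori import Memori   # placeholder import, see the README
    # memori = Memori(...)        # placeholder construction, see the README

    # Create the schema (memori_conversation, entity_fact, knowledge_graph, ...).
    memori.config.storage.build()

    # Attribute everything that follows to this entity and process.
    memori.attribution(entity_id="123456", process_id="test-ai-agent")

    # ... run the chat completion(s) exactly as in the README, e.g. a
    # client.chat.completions.create(model="gpt-4.1-mini", ...) call ...

    # Crucial in a short-lived script: block until the asynchronous
    # augmentation has finished writing entity_fact / knowledge_graph rows.
    memori.augmentation.wait()

    # Only now exit, or open memori.db to inspect the tables.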

Debugging Steps and Potential Solutions

The most direct way to address this is by ensuring the asynchronous augmentation process has sufficient time to finish. The example code already includes memori.augmentation.wait(). This command is specifically designed to pause the script's execution until all pending asynchronous augmentation tasks are completed. If you're still experiencing issues with empty tables, here are a few things to consider:

  1. Verify memori.augmentation.wait() is Called and Effective: Double-check that this line is indeed present in your script and that it's being executed after the initial conversation turn where new information is provided. Ensure there are no errors preceding this line that might prevent it from being reached.
  2. Examine LLM Provider and Model: While less likely to cause empty tables directly (unless the LLM responses themselves are malformed or empty), ensure your OpenAI API key is correctly configured and that the gpt-4.1-mini model is accessible and functioning as expected. Sometimes, specific model versions can have subtle differences in output that might affect parsing.
  3. Check Database Integrity: Although the memori_conversation tables are populated, it's worth ensuring there are no underlying issues with your SQLite database connection or permissions that might prevent writes to entity_fact or knowledge_graph specifically. Try creating a fresh database file to rule this out.
  4. Increase Timeout (If Applicable): In some asynchronous systems, there might be implicit timeouts. While memori.augmentation.wait() should handle this, if you suspect a timing issue you could add a short fixed delay before the script exits (e.g., import time; time.sleep(5)) just for testing, though this isn't a robust solution. The wait() function is the intended mechanism.
  5. Review Memori Version: Ensure you are using a stable and up-to-date version of Memori (the 3.1.1 mentioned here is fine). Sometimes, bugs related to asynchronous processing are fixed in newer releases.
  6. Simplify the Example Further: For absolute certainty, try an even simpler interaction: a single turn that states one fact, followed immediately by memori.augmentation.wait(). Does that populate the tables? The verification sketch after this list shows one way to check.
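
To make items 3 and 6 concrete, a quick check with Python's standard sqlite3 module will tell you whether each table exists and how many rows it holds after wait() returns. The database file and table names are the ones mentioned above; no column names are assumed.

    import sqlite3

    # Inspect memori.db after memori.augmentation.wait() has returned.
    # Table names are the ones discussed in this article; only row
    # counts are checked, so no column names need to be assumed.
    conn = sqlite3.connect("memori.db")

    for table in ("memori_conversation", "memori_conversation_message",
                  "entity_fact", "knowledge_graph"):
        try:
            count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
            print(f"{table}: {count} row(s)")
        except sqlite3.OperationalError as exc:
            # Table missing entirely -> the schema build step may not have run.
            print(f"{table}: not found ({exc})")

    conn.close()

If memori_conversation has rows but entity_fact and knowledge_graph stay at zero even after wait() returns, that narrows the problem to the augmentation step rather than to storage itself.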

The core idea is that the data isn't lost; it simply hasn't been written to the fact and knowledge graph tables before the script concludes. memori.augmentation.wait() is the key to bridging that gap in short-lived scripts.

Why entity_fact and knowledge_graph Matter

These tables are the backbone of Memori's persistent memory capabilities. The entity_fact table stores individual pieces of information tied to specific entities. For instance, if your AI learns "John's favorite color is blue," this fact would be stored here, linked to the entity 'John'. The knowledge_graph table, on the other hand, goes a step further by representing relationships between entities. It could store that 'John' has a favorite color, and that color is 'blue'. This structured data allows Memori to build a complex understanding of the world as described in your conversations, enabling much more sophisticated recall and reasoning than simple chat history.
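
As a purely conceptual illustration, not Memori's actual schema (which this article doesn't document), the difference between the two kinds of record looks roughly like this:

    # Conceptual illustration only -- these structures are NOT Memori's
    # real storage schema, just a picture of the distinction.

    # An entity fact: a piece of information attached to a single entity.
    entity_fact_example = {
        "entity": "John",
        "fact": "John's favorite color is blue",
    }

    # A knowledge-graph entry: a relationship, often pictured as a
    # (subject, predicate, object) triple linking entities and values.
    knowledge_graph_example = ("John", "has_favorite_color", "blue")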

When these tables are empty, Memori operates more like a stateless chatbot. It can respond to your current input based on the LLM's general knowledge and the immediate conversation context, but it cannot recall specific details you've previously told it. The goal of Memori is to move beyond this, creating a persistent, evolving memory that makes AI interactions more personalized and context-aware. Therefore, ensuring these tables are populated is fundamental to unlocking Memori's true potential.

The Role of memori.attribution

The memori.attribution function is used to associate specific data points or interactions with a particular entity and process. This is crucial for traceability and context management. When you call memori.attribution(entity_id="123456", process_id="test-ai-agent"), you're essentially telling Memori, "Everything that happens from this point onwards, or that is recorded, should be linked to entity 123456 and originated from the test-ai-agent process." This helps Memori differentiate between information learned about different users or different sessions. In the context of debugging, it's important that attribution is correctly set up before the augmentation process kicks in, so that the extracted facts and knowledge are attributed to the right entity. If attribution is missing or incorrect, data might not be stored, or might be stored in a way that can't be retrieved later, although the primary issue here is more likely the asynchronous augmentation not completing before the script exits.
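
As a sketch of how that might look in a script that talks about two different users, using the call shape quoted above (the IDs are placeholders, and re-pointing attribution mid-script is implied by the description rather than shown in the README):

    # Attribution sketch: set attribution before the conversation it should
    # cover, so the extracted facts land under the right entity. The IDs
    # are placeholders, and re-pointing attribution like this is an
    # assumption based on the description above, not a documented recipe.

    memori.attribution(entity_id="123456", process_id="test-ai-agent")
    # ... conversation turns about the first user ...

    memori.attribution(entity_id="654321", process_id="test-ai-agent")
    # ... conversation turns about the second user ...

    # As before, wait for augmentation before the script exits.
    memori.augmentation.wait()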

The Importance of memori.config.storage.build()

Executing memori.config.storage.build() is essential for initializing the necessary database schema for Memori's storage. This command ensures that all the required tables, including entity_fact and knowledge_graph, are created with the correct structure. If this command were missed or failed, you would likely see errors during database operations or find that the tables simply don't exist. In your case, the output shows "Build executed successfully!", so the tables are almost certainly created. Still, it's a good reminder that the schema must be in place before Memori attempts to write data into these tables. Issues here could manifest as errors during the wait() call, or simply result in empty tables if the build() process didn't correctly set up the indexes or constraints that the augmentation process relies upon.
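
If you'd rather confirm the schema than infer it from the "Build executed successfully!" message, you can list the tables directly; this only touches SQLite's standard sqlite_master catalog and the table names mentioned in this article.

    import sqlite3

    # List the tables that memori.config.storage.build() should have created.
    # Only SQLite's built-in sqlite_master catalog is used here; the expected
    # names are the ones discussed in this article.
    conn = sqlite3.connect("memori.db")
    tables = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    conn.close()

    for expected in ("memori_conversation", "memori_conversation_message",
                     "entity_fact", "knowledge_graph"):
        print(f"{expected}: {'present' if expected in tables else 'MISSING'}")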

Conclusion: Patience and Proper Waiting

In summary, the most probable reason for your entity_fact and knowledge_graph tables being empty in the Memori README.md example is that the script is exiting before the asynchronous augmentation process has finished writing its results. The memori.augmentation.wait() command is the key to resolving this in short-lived scripts. By ensuring this command is correctly placed and allowed to complete its task, you should see the example working as intended, with your AI successfully building a knowledge base from your conversations.

If you continue to face issues, consider revisiting the official Memori documentation for the latest examples and troubleshooting tips, or explore the Memori GitHub repository for community discussions and potential bug reports.

For more information on AI and knowledge representation, you can explore resources like Wikipedia's entry on Knowledge Graphs or OpenAI's documentation on their API. These external resources can provide a deeper understanding of the underlying technologies that Memori leverages.
