feat: Add Ollama integration and production Docker setup #808
Conversation
WHAT:
- Add `OllamaClient` implementation for local LLM support
- Add production-ready Docker Compose configuration
- Add requirements file for Ollama dependencies
- Add comprehensive integration documentation
- Add example FastAPI deployment

WHY:
- Eliminates OpenAI API dependency and costs
- Enables fully local/private processing
- Resolves Docker health check race conditions
- Fixes function signature corruption issues

TESTING:
- Production tested with 1,700+ items from ZepCloud
- 44 users, 81 threads, 1,638 messages processed
- 48+ hours continuous operation
- 100% success rate (vs. <30% with MCP integration)

TECHNICAL DETAILS:
- Model: qwen2.5:7b (also tested llama2, mistral)
- Response time: ~200ms average
- Memory usage: stable at ~150MB
- Docker: removed problematic health checks
- Group ID: fixed validation (ika-production format)

This contribution provides a complete, production-tested alternative to the OpenAI dependency, allowing organizations to run Graphiti with full data privacy and zero API costs.

Resolves common issues:
- OpenAI API rate limiting
- Docker container startup failures
- Function parameter type mismatches
- MCP integration complexity

Co-authored-by: Marc <mvanders@github.com>
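The `OllamaClient` source itself isn't reproduced on this page, so the following is only a minimal sketch of the general pattern such a client can rely on: Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`, which the standard `openai` client library can target. The `base_url`, placeholder `api_key`, and prompt below are assumptions for illustration, not the PR's actual code.

```python
# Minimal sketch (assumptions flagged): talk to a local Ollama server through
# its OpenAI-compatible endpoint. The PR's real OllamaClient may differ.
import asyncio

from openai import AsyncOpenAI  # Ollama serves an OpenAI-compatible API


async def main() -> None:
    # Ollama's default OpenAI-compatible endpoint; the api_key is ignored
    # by Ollama but required by the client library.
    client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    response = await client.chat.completions.create(
        model="qwen2.5:7b",  # the model reported in the PR's testing notes
        messages=[{"role": "user", "content": "Extract entities from: Alice met Bob."}],
        temperature=0.0,
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```

With Ollama running locally (e.g. `ollama pull qwen2.5:7b` beforehand), this completes entirely on-machine with no external API call, which is the privacy/cost point the PR is making.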
All contributors have signed the CLA ✍️ ✅
I have read the CLA Document and I hereby sign the CLA
Caution

Changes requested ❌

Reviewed everything up to 36a4211 in 2 minutes and 2 seconds.
- Reviewed 504 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 5 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. `OLLAMA_INTEGRATION.md:47`
   - Draft comment: Verify the "Tested by" date; "August 2025" may appear future-dated.
   - Reason this comment was not posted: confidence that changes are required (66%) did not exceed the 80% threshold.
2. `docker-compose-production.yml:60`
   - Draft comment: End the file with a newline to adhere to best practices.
   - Reason this comment was not posted: confidence that changes are required (66%) did not exceed the 80% threshold.
3. `graphiti_core/llm_client/ollama_client.py:171`
   - Draft comment: Avoid using print for error logging; consider using a logger to record errors (see the logging sketch after this list).
   - Reason this comment was not posted: confidence that changes are required (66%) did not exceed the 80% threshold.
4. `graphiti_core/llm_client/ollama_client.py:143`
   - Draft comment: Avoid using print for error logging; use a proper logging mechanism.
   - Reason this comment was not posted: confidence that changes are required (66%) did not exceed the 80% threshold.
5. `requirements-ollama.txt:14`
   - Draft comment: Consider removing `asyncio` from dependencies since it's part of the Python standard library.
   - Reason this comment was not posted: confidence that changes are required (66%) did not exceed the 80% threshold.
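Draft comments 3 and 4 both suggest replacing print-based error reporting with a module logger. Since the PR's `ollama_client.py` isn't reproduced here, the following is a hedged sketch of the suggested pattern; the helper function and its error handling are hypothetical.

```python
# Sketch of the reviewer's suggestion: route errors through the logging
# module instead of print(). The surrounding function is hypothetical.
import json
import logging

logger = logging.getLogger(__name__)


def parse_ollama_response(raw: str) -> dict:
    """Hypothetical helper: parse a raw Ollama response body."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Instead of: print(f"Error parsing response: {exc}")
        # A logger call carries level, timestamp, and module context,
        # and can be filtered or shipped to log aggregation in production.
        logger.error("Failed to parse Ollama response: %s", exc)
        return {}
```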
Workflow ID: wflow_wol7wDd4w1nzfL4T
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Add Ollama integration for local LLM processing with production Docker setup and FastAPI deployment example.

- Add `OllamaClient` in `ollama_client.py` for local LLM processing, replacing OpenAI; implements the `generate_response`, `extract_entities`, and `generate_embedding` methods.
- Add `docker-compose-production.yml` for production-ready deployment.
- Add `graphiti_api.py` as an example FastAPI server exposing `/`, `/health`, `/status`, `/add_memory`, and `/search` (a hedged sketch follows below).
- Add `OLLAMA_INTEGRATION.md` for setup and benefits of Ollama integration.
- Add `requirements-ollama.txt` for Ollama-specific dependencies.

This description was created by Ellipsis for 36a4211. You can customize this summary. It will automatically update as commits are pushed.
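`graphiti_api.py` itself isn't shown on this page; the following is a hedged sketch of what an example FastAPI server with the listed routes could look like. The route handlers, request model, and response shapes are assumptions for illustration, not the PR's implementation.

```python
# Hedged sketch of an example FastAPI server with the routes listed above.
# Handler bodies and response shapes are assumptions, not the PR's code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Graphiti + Ollama example")


class Memory(BaseModel):
    group_id: str  # e.g. "ika-production", the format mentioned in the PR
    content: str


@app.get("/")
async def root() -> dict:
    return {"service": "graphiti-api"}


@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}


@app.get("/status")
async def status() -> dict:
    # A real server would report model, queue depth, uptime, etc.
    return {"model": "qwen2.5:7b"}


@app.post("/add_memory")
async def add_memory(memory: Memory) -> dict:
    # A real implementation would invoke Graphiti's ingestion pipeline here.
    return {"accepted": True, "group_id": memory.group_id}


@app.get("/search")
async def search(q: str) -> dict:
    # A real implementation would run a Graphiti search here.
    return {"query": q, "results": []}
```

Assuming the module name from the PR, this would be served with `uvicorn graphiti_api:app --host 0.0.0.0 --port 8000`.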