Conversation

@gsw945 commented Sep 9, 2025

Summary

  • Add llm_client OllamaClient
  • Add embedder OllamaEmbedder
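For context, here is a minimal sketch of the settings the new clients are wired up with. The values mirror the debug script posted later in this thread; the model names, port, and embedding dimension are example choices for a local Ollama install, not library defaults.

```python
# Example Ollama settings for the new OllamaClient / OllamaEmbedder.
# All values are illustrative, taken from the debug script below.
llm_settings = {
    "api_key": "ollama",  # Ollama ignores the key, but one must be set
    "model": "qwen3:4b",
    "small_model": "qwen3:4b",
    "base_url": "http://127.0.0.1:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "max_tokens": 8192,
}

embedder_settings = {
    "api_key": "ollama",
    "embedding_model": "bge-m3:567m",
    "embedding_dim": 1024,  # must match the chosen embedding model
    "base_url": "http://127.0.0.1:11434/v1",
}
```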

Type of Change

  • Bug fix
  • New feature
  • Performance improvement
  • Documentation/Tests

Objective

For new features and performance improvements: Clearly describe the objective and rationale for this change.

Testing

  • Unit tests added/updated
  • Integration tests added/updated
  • All existing tests pass

Breaking Changes

  • This PR contains breaking changes

If this is a breaking change, describe:

  • What functionality is affected
  • Migration path for existing users

Checklist

  • Code follows project style guidelines (make lint passes)
  • Self-review completed
  • Documentation updated where necessary
  • No secrets or sensitive information committed

Related Issues

Closes #868

@danielchalef (Member) commented Sep 9, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@gsw945 (Author) commented Sep 9, 2025

I have read the CLA Document and I hereby sign the CLA.

danielchalef added a commit that referenced this pull request Sep 9, 2025
@gsw945 (Author) commented Sep 9, 2025

make lint:
[screenshot: make lint output]

python -m pytest -xvs tests/llm_client/test_ollama_client.py:
[screenshot: pytest output]

@gsw945 (Author) commented Sep 9, 2025

Here is a small example, adapted from part of the Build a ShoeBot Sales Agent using LangGraph and Graphiti tutorial.
The relevant data has been anonymised.

graphiti-agent-debug.py:

import asyncio
import logging
import os
import sys
from datetime import datetime, timezone
from dotenv import load_dotenv
from graphiti_core.llm_client.config import LLMConfig
# from graphiti_core.llm_client.openai_client import OpenAIClient
# from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient
from graphiti_core.llm_client.ollama_client import OllamaClient
# from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.embedder.ollama import OllamaEmbedder, OllamaEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType
from graphiti_core.utils.maintenance.graph_data_operations import clear_data
from graphiti_core.search.search_config_recipes import NODE_HYBRID_SEARCH_EPISODE_MENTIONS


def setup_logging():
    logger = logging.getLogger()
    logger.setLevel(logging.ERROR)
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)
    return logger


async def main():
    load_dotenv()
    logger = setup_logging()

    # Configure Ollama LLM client
    llm_config = LLMConfig(
        api_key="ollama",  # Ollama doesn't require a real API key
        model="qwen3:4b",
        small_model="qwen3:4b",
        base_url="http://127.0.0.1:11434/v1",  # Ollama provides this port
        max_tokens=8192,
    )
    llm_client = OllamaClient(config=llm_config)
    embedder = OllamaEmbedder(
        config=OllamaEmbedderConfig(
            api_key="ollama",
            embedding_model="bge-m3:567m",
            embedding_dim=1024,
            base_url="http://127.0.0.1:11434/v1",
        )
    )
    cross_encoder = OpenAIRerankerClient(client=llm_client, config=llm_config)

    neo4j_uri = os.environ.get('NEO4J_URI', 'bolt://10.98.8.113:7687')
    neo4j_user = os.environ.get('NEO4J_USER', 'neo4j')
    neo4j_password = os.environ.get('NEO4J_PASSWORD', 'xxxxxx')

    client = Graphiti(
        neo4j_uri,
        neo4j_user,
        neo4j_password,
        llm_client=llm_client,
        embedder=embedder,
        cross_encoder=cross_encoder
    )

    # Note: This will clear the database
    await clear_data(client.driver)
    await client.build_indices_and_constraints()

    user_name = 'jess'
    await client.add_episode(
        name='User Creation',
        episode_body=f'{user_name} is interested in buying a pair of shoes',
        source=EpisodeType.text,
        reference_time=datetime.now(timezone.utc),
        source_description='SalesBot',
    )

    # Look up Jess's node UUID
    nl = await client._search(user_name, NODE_HYBRID_SEARCH_EPISODE_MENTIONS)
    print(nl)

    # ...and the ManyBirds node UUID
    nl = await client._search('ManyBirds', NODE_HYBRID_SEARCH_EPISODE_MENTIONS)
    print(nl)


if __name__ == "__main__":
    asyncio.run(main())
[screenshot: script output]

@gsw945 (Author) commented Sep 13, 2025

recheck

@ing-norante commented
Can't wait to see this merged!

Successfully merging this pull request may close these issues:

[BUG] Cannot get minimal example to work with Ollama