TL;DR
This prototype shows how to connect a local LLM (Llama 3 or DeepSeek) directly to a Neo4j knowledge graph so you can ask complex, multi-step questions in plain English: no Cypher or SQL required, no internet connection, no token fees. A full RAG pipeline with graph embeddings and semantic search grounds every answer in the graph's stored facts rather than the model's guesses, keeping hallucinations out of everything from nested calculations to deep family-tree traversals.
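For readers who want the shape of that loop in code, here is a minimal sketch assuming the official `neo4j` Python driver and the `ollama` client; the connection details, schema hint, and prompt wording are illustrative placeholders, not taken from the prototype itself.

```python
# Minimal sketch of the natural-language -> Cypher -> graph loop, assuming
# the official `neo4j` driver and the `ollama` Python client. The URI,
# credentials, schema hint, and prompt wording are illustrative placeholders.
import ollama
from neo4j import GraphDatabase

NEO4J_URI = "bolt://localhost:7687"   # assumed local Neo4j instance
NEO4J_AUTH = ("neo4j", "password")    # placeholder credentials

# A schema description keeps the model from inventing labels or relationships.
SCHEMA_HINT = "Nodes: (:Person {name}); relationships: (:Person)-[:PARENT_OF]->(:Person)"

def ask(question: str) -> list[dict]:
    # 1. Have the local model translate the question into Cypher,
    #    grounded in the schema hint to curb invented identifiers.
    prompt = (
        f"Graph schema:\n{SCHEMA_HINT}\n\n"
        f"Write one Cypher query that answers: {question}\n"
        "Return only the Cypher statement, no explanation."
    )
    reply = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    cypher = reply["message"]["content"].strip().strip("`")  # real code needs sturdier extraction

    # 2. Run the generated query against the live graph; the answer comes
    #    from stored facts, not from the model's weights.
    with GraphDatabase.driver(NEO4J_URI, auth=NEO4J_AUTH) as driver:
        records, _, _ = driver.execute_query(cypher)
    return [record.data() for record in records]

print(ask("Who are the great-grandparents of Alice?"))
```

The full pipeline described above goes further: graph embeddings and semantic search retrieve the relevant subgraph first, so the model is prompted with context that matches the question rather than a static schema hint.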
Want more? The full transcript’s on InfoQ, and we’re curious—what’s the trickiest traversal or aggregation you’d love to test with this setup?
Watch on YouTube