Enhancing Knowledge Discovery: Implementing Retrieval Augmented Generation with Ontotext Technologies

RAG is an approach for enhancing an existing large language model (LLM) with external information supplied as part of the input prompt, also called the grounding context. Most frequently, it relies on a vector database indexed with proprietary content that a retrieval component can query. Let’s see how we can build our own RAG pipeline using Ontotext’s RDF database GraphDB.
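The retrieve-then-prompt flow described above can be sketched in a few lines of Python. This is a toy illustration, not GraphDB’s implementation: the word-overlap “embedding” and Jaccard similarity below stand in for a real embedding model and vector search.

```python
import re

def embed(text: str) -> set[str]:
    # Toy "embedding": the set of lowercased words. A real system would
    # return a dense vector from an embedding model instead.
    return set(re.findall(r"\w+", text.lower()))

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap as a stand-in for cosine similarity over vectors.
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank the indexed documents against the question and keep the top k.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: similarity(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # Ground the LLM by prepending the retrieved passages to the question.
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

docs = [
    "GraphDB is an RDF database by Ontotext.",
    "Paris is the capital of France.",
    "SPARQL is a query language for RDF data.",
]
prompt = build_prompt("What is GraphDB?", retrieve("What is GraphDB?", docs))
```

The resulting `prompt` is what gets sent to the LLM, so the answer is grounded in the retrieved content rather than in the model’s training data alone.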

GraphDB integrates with the open-source ChatGPT Retrieval Plugin via the ChatGPT Retrieval Plugin Connector, which transforms RDF data into embeddings and stores them in a vector database of your choice. This makes it easy to feed information from both textual documents and structured RDF entities into an LLM-driven application.
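Once the connector keeps the vector database in sync, an application can retrieve grounding context through the plugin’s `/query` endpoint. The Python sketch below only constructs such a request; the URL and bearer token are placeholders for your own deployment, and the actual HTTP call is left as a comment.

```python
import json

PLUGIN_URL = "http://localhost:8000/query"  # placeholder: your plugin deployment
BEARER_TOKEN = "<your-plugin-token>"        # placeholder: your plugin's token

def build_query_request(question: str, top_k: int = 3) -> tuple[dict, dict]:
    """Build headers and JSON body for the ChatGPT Retrieval Plugin /query endpoint."""
    headers = {
        "Authorization": f"Bearer {BEARER_TOKEN}",
        "Content-Type": "application/json",
    }
    body = {"queries": [{"query": question, "top_k": top_k}]}
    return headers, body

headers, body = build_query_request("Which graph databases does Ontotext develop?")
# To send the request, e.g. with the requests library:
#   requests.post(PLUGIN_URL, headers=headers, json=body)
print(json.dumps(body, indent=2))
```

The response contains the best-matching chunks of indexed content, which can then be passed to the LLM as grounding context.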

GraphDB 10.4 introduced a new Lab functionality called “Talk to Your Graph”. Embedded directly in the GraphDB Workbench, it enables users to ask questions about their own RDF data in natural language. It is a suitable interface for people without a strong technical understanding of SPARQL who still need to explore and understand data encoded in RDF.

Talk to Your Graph is a RAG approach to natural language querying (NLQ) that works out of the box once a ChatGPT Retrieval Plugin Connector has been set up. Unlike the standard RAG approach, which first selects the relevant content and then queries ChatGPT, Talk to Your Graph instructs ChatGPT to generate queries against the vector database, collecting the information needed to answer the user’s question.
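This query-generation loop can be sketched schematically. The code below is an assumption about the general pattern, not Ontotext’s actual implementation: on each round the LLM either asks for another vector-database search or commits to an answer, and retrieved evidence is fed back into the next prompt.

```python
from typing import Callable

def talk_to_your_graph(question: str,
                       llm: Callable[[str], str],
                       search: Callable[[str], list[str]],
                       max_rounds: int = 3) -> str:
    """Let the LLM drive retrieval: it proposes searches until it can answer."""
    evidence: list[str] = []
    for _ in range(max_rounds):
        prompt = (
            f"Question: {question}\n"
            f"Evidence so far: {evidence}\n"
            "Reply 'SEARCH: <query>' to fetch more data, or 'ANSWER: <answer>'."
        )
        reply = llm(prompt)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        # Run the LLM-generated query against the vector database and loop.
        evidence.extend(search(reply.removeprefix("SEARCH:").strip()))
    return "Could not answer within the round limit."

# Demo with canned replies standing in for ChatGPT and the vector database:
_replies = iter(["SEARCH: capital of Bulgaria", "ANSWER: Sofia"])
answer = talk_to_your_graph(
    "What is the capital of Bulgaria?",
    llm=lambda prompt: next(_replies),
    search=lambda q: ["Sofia is the capital of Bulgaria."],
)
```

The key difference from plain RAG is that retrieval is iterative and model-directed, so the LLM can refine its searches based on what it has already seen.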

Read more about how to Talk to Your Graph.
