# Retrieval-Augmented Generation (RAG) Tutorial
One of the most common use cases for LlamaIndex is Retrieval-Augmented Generation, or RAG, in which your data is indexed and selectively retrieved to be given to an LLM as source material for responding to a query. You can learn more about the concepts behind RAG.
## Set up the project
In a new folder, run:
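A minimal setup, assuming you are using npm (the exact commands may differ if you prefer pnpm or yarn):

```shell
# Initialize a new npm project, accepting the defaults
npm init -y

# Add TypeScript tooling for compiling and running the example
npm install --save-dev typescript @types/node
```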
Then, check out the installation steps to install LlamaIndex.TS and prepare an OpenAI key.
You can use other LLMs via their APIs; if you would prefer to use local models, check out our local LLM example.
## Run queries
Create the file `example.ts`. This code will:
- load an example file
- convert it into a Document object
- index it (which creates embeddings using OpenAI)
- create a query engine to answer questions about the data
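The steps above can be sketched as follows. This is based on the `Document` and `VectorStoreIndex` API of the `llamaindex` package; exact signatures may vary between versions, and the file path below is a placeholder for whatever text file you want to load:

```typescript
import fs from "node:fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Load an example file (any plain-text file will do)
  const path = "example-data/essay.txt";
  const essay = await fs.readFile(path, "utf-8");

  // Convert it into a Document object
  const document = new Document({ text: essay, id_: path });

  // Index it: this splits the text and creates embeddings
  // using OpenAI, then stores them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Create a query engine and ask a question about the data
  const queryEngine = index.asQueryEngine();
  const { response } = await queryEngine.query({
    query: "What did the author do in college?",
  });

  // Output the LLM's answer
  console.log(response);
}

main().catch(console.error);
```

Running this requires an `OPENAI_API_KEY` in your environment, since both the embedding and query steps call OpenAI.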
Create a `tsconfig.json` file in the same folder:
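A minimal configuration along these lines should work; adjust the `target` and `module` settings to match your environment if needed:

```json
{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "moduleResolution": "node"
  }
}
```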
Now you can run the code with:
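One convenient way, assuming you use `tsx` as the TypeScript runner (you could equally compile with `tsc` and run the output with `node`):

```shell
npx tsx example.ts
```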
You should see output something like:
Once you've mastered basic RAG, you may want to consider chatting with your data.