Vercel
Integrate LlamaIndex with Vercel's AI SDK
LlamaIndex provides integration with Vercel's AI SDK, allowing you to create powerful search and retrieval applications. You can:
- Use any of Vercel AI's model providers as LLMs in LlamaIndex
- Use indexes (e.g. VectorStoreIndex, LlamaCloudIndex) from LlamaIndexTS in your Vercel AI applications
Setup
First, install the integration package and the Vercel AI SDK:
npm i @llamaindex/vercel ai
The examples below also import from @ai-sdk/openai, llamaindex, and @llamaindex/cloud, so install those packages as needed.
Using Vercel AI's Model Providers
Using the VercelLLM adapter, you can use any of Vercel AI's model providers as an LLM in LlamaIndex. Here's an example using OpenAI's GPT-4o model:
import { openai } from "@ai-sdk/openai";
import { VercelLLM } from "@llamaindex/vercel";

// Wrap a Vercel AI SDK model provider as a LlamaIndex LLM
const llm = new VercelLLM({ model: openai("gpt-4o") });

const result = await llm.complete({
  prompt: "What is the capital of France?",
  stream: false, // Set to true if you want streaming responses
});
console.log(result.text);
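When stream is set to true, complete returns an async iterable of response chunks instead of a single result. A minimal sketch of consuming it (assuming, as in LlamaIndexTS's streaming responses, that each chunk's text field carries the incremental output):

const stream = await llm.complete({
  prompt: "What is the capital of France?",
  stream: true, // request a streaming response
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text); // print each text delta as it arrives
}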
Use Indexes
Using VectorStoreIndex
Here's how to create a simple vector store index and query it using Vercel's AI SDK:
import { openai } from "@ai-sdk/openai";
import { llamaindex } from "@llamaindex/vercel";
import { streamText } from "ai";
import { Document, VectorStoreIndex } from "llamaindex";

// Create an index from your documents
const document = new Document({ text: yourText, id_: "unique-id" });
const index = await VectorStoreIndex.fromDocuments([document]);

// Create a query tool
const queryTool = llamaindex({
  model: openai("gpt-4"),
  index,
  description: "Search through the documents", // optional
});

// Use the tool with Vercel's AI SDK
streamText({
  model: openai("gpt-4"),
  prompt: "Your question here",
  tools: { queryTool },
  onFinish({ response }) {
    console.log("Response:", response.messages); // log the response
  },
}).toDataStream();
Note: the Vercel AI model passed to the llamaindex function is used by the response synthesizer to generate a response for the tool call.
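To expose this from an HTTP endpoint, you would typically return the stream as a response. A sketch assuming a Next.js App Router route handler and a recent version of the ai package (the route shape and request body here are assumptions, not part of the integration):

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
// assumes `queryTool` was created as shown above

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const result = streamText({
    model: openai("gpt-4"),
    prompt,
    tools: { queryTool },
  });
  // stream the model output (and tool results) back to the client
  return result.toDataStreamResponse();
}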
Using LlamaCloud
For production deployments, you can use LlamaCloud to store and manage your documents:
import { LlamaCloudIndex } from "@llamaindex/cloud";

// Create a LlamaCloud index from your documents
const index = await LlamaCloudIndex.fromDocuments({
  documents: [document],
  name: "your-index-name",
  projectName: "your-project",
  apiKey: process.env.LLAMA_CLOUD_API_KEY,
});
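If the index already exists in LlamaCloud, a sketch of connecting to it by name rather than re-uploading documents (assuming the named index and project were created previously):

// Connect to an existing LlamaCloud index by name
const index = new LlamaCloudIndex({
  name: "your-index-name",
  projectName: "your-project",
  apiKey: process.env.LLAMA_CLOUD_API_KEY,
});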
// Use it the same way as VectorStoreIndex
const queryTool = llamaindex({
  model: openai("gpt-4"),
  index,
  description: "Search through the documents",
  options: { fields: ["sourceNodes", "messages"] }, // optional: extra fields to include in the response data
});

// Use the tool with Vercel's AI SDK
streamText({
  model: openai("gpt-4"),
  prompt: "Your question here",
  tools: { queryTool },
}).toDataStream();
Next Steps
- Explore LlamaCloud for managed document storage and retrieval
- Join our Discord community for support and discussions