# Managed Index

Managed index using LlamaCloud.
LlamaCloud is a new generation of managed parsing, ingestion, and retrieval services, designed to bring production-grade context-augmentation to your LLM and RAG applications.
LlamaCloud supports:
- Managed Ingestion API, handling parsing and document management
- Managed Retrieval API, configuring optimal retrieval for your RAG system
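In LlamaIndex.TS, both services are exposed through the `LlamaCloudIndex` class. The snippet below is a minimal sketch of how the two APIs map onto it; the index and project names are placeholders, and full runnable examples follow later on this page:

```typescript
import { Document, LlamaCloudIndex } from "llamaindex";

// Managed Ingestion API: parse and index documents into a LlamaCloud pipeline
// (the pipeline and project names below are placeholders).
const index = await LlamaCloudIndex.fromDocuments({
  documents: [new Document({ text: "Hello, LlamaCloud!" })],
  name: "my-pipeline",
  projectName: "Default",
  apiKey: process.env.LLAMA_CLOUD_API_KEY,
});

// Managed Retrieval API: fetch the most relevant nodes for a query.
const retriever = index.asRetriever({ similarityTopK: 3 });
```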
## Access
Visit [LlamaCloud](https://cloud.llamaindex.ai/) to sign in and get an API key.
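The examples on this page read the API key (and, optionally, a custom base URL) from environment variables. A minimal sketch of a guard you might add before creating the index; the variable names match the snippets below:

```typescript
// Minimal sketch: the snippets on this page read LlamaCloud credentials from
// the environment, so set LLAMA_CLOUD_API_KEY (and optionally
// LLAMA_CLOUD_BASE_URL) before running them.
if (!process.env.LLAMA_CLOUD_API_KEY) {
  throw new Error(
    "LLAMA_CLOUD_API_KEY is not set - get a key from LlamaCloud and export it first",
  );
}
```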
## Use a Managed Index

Here's an example of how to use an existing managed index together with a chat engine:
```typescript
import { stdin as input, stdout as output } from "node:process";
import readline from "node:readline/promises";

import { ContextChatEngine, LlamaCloudIndex } from "llamaindex";

async function main() {
  // Connect to an existing LlamaCloud index by name and project
  const index = new LlamaCloudIndex({
    name: "test",
    projectName: "Default",
    baseUrl: process.env.LLAMA_CLOUD_BASE_URL,
    apiKey: process.env.LLAMA_CLOUD_API_KEY,
  });

  // Use the managed retrieval API to fetch the 5 most relevant nodes
  const retriever = index.asRetriever({
    similarityTopK: 5,
  });

  const chatEngine = new ContextChatEngine({ retriever });
  const rl = readline.createInterface({ input, output });

  // Simple REPL: stream the chat engine's answer for each user message
  while (true) {
    const query = await rl.question("User: ");
    const stream = await chatEngine.chat({ message: query, stream: true });
    for await (const chunk of stream) {
      process.stdout.write(chunk.response);
    }
    process.stdout.write("\n");
  }
}

main().catch(console.error);
```
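If you don't need an interactive chat loop, the same managed index can also answer one-shot questions through a query engine (the same `asQueryEngine` API used in the next example). A minimal sketch, assuming the `index` from the snippet above and an illustrative question:

```typescript
// Minimal sketch: one-shot retrieval-augmented query against the same
// LlamaCloud index, instead of the interactive chat loop above.
// Assumes `index` is the LlamaCloudIndex created in the previous example.
const queryEngine = index.asQueryEngine({ similarityTopK: 5 });
const response = await queryEngine.query({
  query: "Summarize the indexed documents in two sentences.", // illustrative question
});
console.log(response.toString());
```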
## Create a Managed Index

Here's an example of how to create a managed index by ingesting a document and querying it:
```typescript
import fs from "node:fs/promises";
import { stdin as input, stdout as output } from "node:process";
import readline from "node:readline/promises";

import { Document, LlamaCloudIndex } from "llamaindex";

async function main() {
  // Load the example essay from disk
  const path = "node_modules/llamaindex/examples/abramov.txt";
  const essay = await fs.readFile(path, "utf-8");

  // Create a Document object with the essay
  const document = new Document({ text: essay, id_: path });

  // Ingest the document into a new managed index (pipeline) on LlamaCloud
  const index = await LlamaCloudIndex.fromDocuments({
    documents: [document],
    name: "test-pipeline",
    projectName: "Default",
    apiKey: process.env.LLAMA_CLOUD_API_KEY,
    baseUrl: process.env.LLAMA_CLOUD_BASE_URL,
  });

  // Query the managed index, using the 5 most similar nodes as context
  const queryEngine = index.asQueryEngine({
    similarityTopK: 5,
  });

  const rl = readline.createInterface({ input, output });

  while (true) {
    const query = await rl.question("Query: ");
    const response = await queryEngine.query({
      query,
    });
    console.log(response.toString());
  }
}

main().catch(console.error);
```
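The response object returned by the query engine also carries the retrieved source nodes, which is handy for checking what the managed retrieval actually returned. A minimal, hedged sketch, assuming the `queryEngine` from the example above (the exact shape of `sourceNodes` can vary between `llamaindex` versions):

```typescript
// Minimal sketch: inspect which nodes the managed retrieval returned for a
// query. Assumes `queryEngine` from the example above; field names may vary
// slightly between llamaindex versions.
const response = await queryEngine.query({ query: "What did the author do?" });
for (const { node, score } of response.sourceNodes ?? []) {
  console.log(score, node.id_);
}
```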
## API Reference