
Welcome to LlamaIndex.TS

LlamaIndex.TS is the leading framework for using context engineering to build LLM applications in JavaScript and TypeScript.

From rapid-prototyping RAG chatbots to deploying multi-agent workflows in production, LlamaIndex gives you everything you need to build generative AI applications with large language models, all in idiomatic TypeScript.

Built for modern JavaScript runtimes like Node.js, Deno, Bun, Cloudflare Workers, and more.

Introduction

What are agents?

Agents are LLM-powered assistants that can reason, use external tools, and take actions to accomplish tasks such as research, data extraction, and automation. LlamaIndex.TS provides foundational building blocks for creating and orchestrating these agents.
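The core loop can be illustrated with a minimal, self-contained sketch in plain TypeScript. This is not the LlamaIndex.TS API: the `Tool`, `pickTool`, and `runAgent` names are invented for this example, and the "reasoner" is a stub where a real agent would call an LLM.

```typescript
// Illustrative agent loop: a reasoner picks a tool from a registry
// (or decides none is needed), executes it, and uses the observation
// to produce an answer. Real agents delegate pickTool to an LLM.
type Tool = {
  name: string;
  description: string;
  execute: (input: string) => string;
};

const tools: Tool[] = [
  {
    name: "add",
    description: "Adds two comma-separated numbers",
    execute: (input) => {
      const [a, b] = input.split(",").map(Number);
      return String(a + b);
    },
  },
  {
    name: "upper",
    description: "Uppercases the input",
    execute: (input) => input.toUpperCase(),
  },
];

// Stub reasoner: in a real agent this decision comes from an LLM that
// reads the task, the tool descriptions, and prior observations.
function pickTool(task: string): { tool: Tool; input: string } | null {
  const match = task.match(/(\d+)\s*\+\s*(\d+)/);
  if (match) {
    return { tool: tools[0], input: `${match[1]},${match[2]}` };
  }
  return null; // no tool needed: answer directly
}

function runAgent(task: string): string {
  const choice = pickTool(task);
  if (choice === null) return task; // trivial fallback
  const observation = choice.tool.execute(choice.input);
  return `The answer is ${observation}`;
}

console.log(runAgent("What is 2 + 3?")); // → "The answer is 5"
```

The framework's value is in replacing the stub with real LLM-driven reasoning, memory, and error handling, while keeping the same tool-registry shape.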

What are workflows?

Workflows are multi-step, event-driven processes that combine agents, data connectors, and other tools to solve complex problems. With LlamaIndex.TS you can chain together retrieval, generation, and tool-calling steps and then deploy the entire pipeline as a microservice.
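The event-driven shape can be sketched in a few lines of plain TypeScript. The names here (`WorkflowEvent`, `Step`, `runWorkflow`) are invented for the sketch, not the LlamaIndex.TS workflow API; the point is that each step consumes one event type and emits the next, so retrieval and generation stay decoupled.

```typescript
// Illustrative event-driven workflow: events flow through steps until
// a terminal "done" event is produced.
type WorkflowEvent =
  | { type: "start"; query: string }
  | { type: "retrieved"; query: string; context: string }
  | { type: "done"; answer: string };

type Step = (ev: WorkflowEvent) => WorkflowEvent | null;

// Retrieval step: handles "start", emits "retrieved" (canned context here).
const retrieve: Step = (ev) =>
  ev.type === "start"
    ? { type: "retrieved", query: ev.query, context: "LlamaIndex.TS targets Node.js, Deno, and Bun." }
    : null;

// Generation step: handles "retrieved", emits "done" (an LLM call in practice).
const generate: Step = (ev) =>
  ev.type === "retrieved"
    ? { type: "done", answer: `Q: ${ev.query} | Context: ${ev.context}` }
    : null;

function runWorkflow(steps: Step[], start: WorkflowEvent): WorkflowEvent {
  let current = start;
  while (current.type !== "done") {
    const next = steps.map((s) => s(current)).find((e) => e !== null);
    if (!next) throw new Error(`No step handles event: ${current.type}`);
    current = next;
  }
  return current;
}

const result = runWorkflow([retrieve, generate], { type: "start", query: "Which runtimes?" });
```

Because steps only communicate through events, you can add, swap, or parallelize steps (for example, inserting a tool-calling step between retrieval and generation) without rewiring the rest of the pipeline.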

What is context engineering?

LLMs come pre-trained on vast public corpora, but not on your private or domain-specific data. Context engineering bridges that gap by injecting the right pieces of your data into the LLM prompt at the right time. The most popular example is Retrieval-Augmented Generation (RAG), but the same idea powers agent memory, evaluation, extraction, summarisation, and more.
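The RAG half of this idea fits in a short self-contained sketch: retrieve the most relevant private document for a query, then inject it into the prompt. For simplicity this uses keyword overlap as the relevance score; production systems use vector embeddings, but the pipeline shape (retrieve, then augment the prompt) is the same. The documents and function names are made up for the example.

```typescript
// Illustrative RAG: score private documents by word overlap with the
// query, then build an augmented prompt from the best match.
const documents = [
  "Our refund policy allows returns within 30 days of purchase.",
  "Support is available Monday through Friday, 9am to 5pm.",
];

function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function retrieveBest(query: string, docs: string[]): string {
  const queryTokens = tokenize(query);
  let best = docs[0];
  let bestScore = -1;
  for (const doc of docs) {
    const score = [...tokenize(doc)].filter((t) => queryTokens.has(t)).length;
    if (score > bestScore) {
      bestScore = score;
      best = doc;
    }
  }
  return best;
}

function buildPrompt(query: string): string {
  const context = retrieveBest(query, documents);
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("What is the refund policy?"));
```

The LLM never needs to have been trained on the refund policy: the right context is selected and supplied at query time, which is exactly the gap context engineering fills.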

LlamaIndex.TS gives you:

  • Data connectors to ingest from APIs, files, SQL, and dozens more sources.
  • Indexes & retrievers to store and retrieve your data for LLM consumption.
  • Agents and engines that provide query, chat, and reasoning interfaces over your data.
  • Workflows for fine-grained orchestration of your data and LLM-powered agents.
  • Observability integrations so you can iterate with confidence.

You can learn more about these concepts in our concepts guide.
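To make the division of labor among these building blocks concrete, here is a toy sketch of how a connector, an index, and a retriever compose. The interface and class names are invented for illustration, not the LlamaIndex.TS types, and the scoring is deliberately naive.

```typescript
// Illustrative composition: a connector loads raw records, an index
// stores them, and a retriever pulls the top-k matches back out.
interface Document { id: string; text: string }

interface Connector { load(): Document[] }

// Toy connector over an in-memory array; real connectors wrap APIs,
// files, or SQL sources.
class ArrayConnector implements Connector {
  constructor(private texts: string[]) {}
  load(): Document[] {
    return this.texts.map((text, i) => ({ id: String(i), text }));
  }
}

class Index {
  private docs: Document[] = [];
  insert(docs: Document[]): void {
    this.docs.push(...docs);
  }
  // Retriever: rank documents by how many query words they contain.
  retrieve(query: string, topK: number): Document[] {
    const words = query.toLowerCase().split(/\s+/);
    return [...this.docs]
      .map((d) => ({
        d,
        score: words.filter((w) => d.text.toLowerCase().includes(w)).length,
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK)
      .map((x) => x.d);
  }
}

const index = new Index();
index.insert(
  new ArrayConnector(["Bun is fast", "Deno is secure", "Node.js is everywhere"]).load()
);
const hits = index.retrieve("is deno secure", 1);
```

Each piece is swappable behind its interface: a different connector, a vector-backed index, or a smarter retriever slots in without changing the rest of the pipeline.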

Use cases

Popular scenarios include:

  • Question answering over your documents (RAG)
  • Chatbots grounded in your own data
  • Structured data extraction
  • Document summarisation
  • Autonomous agents that research and take actions

Getting started

The fastest way to get started is in StackBlitz below — no local setup required:

Want to learn more? We have several tutorials to get you started:


LlamaCloud

Need an end-to-end managed pipeline? Check out LlamaCloud: best-in-class document parsing (LlamaParse), extraction (LlamaExtract), and indexing services with generous free tiers.


Community

We 💜 contributors! View our contributing guide to get started.
