Welcome to LlamaIndex.TS
LlamaIndex.TS is the leading framework for using context engineering to build LLM applications in JavaScript and TypeScript.
From rapidly prototyping RAG chatbots to deploying multi-agent workflows in production, it gives you everything you need to build generative AI applications with large language models, all in idiomatic TypeScript.
Built for modern JavaScript runtimes like Node.js, Deno, Bun, Cloudflare Workers, and more.
Introduction
Context engineering, agents & workflows — what do they mean?
Use cases
See what you can build with LlamaIndex.TS.
Getting started
Your first app in 5 lines of code.
LlamaCloud
Managed parsing, extraction & retrieval pipelines.
Community
Join thousands of builders on Discord, Twitter, and more.
Related projects
Connectors, demos & starter kits.
Introduction
What are agents?
Agents are LLM-powered assistants that can reason, use external tools, and take actions to accomplish tasks such as research, data extraction, and automation. LlamaIndex.TS provides foundational building blocks for creating and orchestrating these agents.
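For example, a tool-calling agent takes only a few lines. The sketch below assumes the `agent` and `tool` helpers from the `llamaindex` package and the `@llamaindex/openai` provider package with an `OPENAI_API_KEY` in the environment; exact import paths and model names vary by version:

```ts
import { agent, tool } from "llamaindex";
import { openai } from "@llamaindex/openai";
import { z } from "zod";

// An illustrative tool the agent can decide to call.
const sumNumbers = tool({
  name: "sumNumbers",
  description: "Add two numbers together",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }) => `${a + b}`,
});

async function main() {
  // The agent reasons about the question and calls the tool when needed.
  const mathAgent = agent({
    llm: openai({ model: "gpt-4o-mini" }),
    tools: [sumNumbers],
  });

  const response = await mathAgent.run("What is 1234 + 4321?");
  console.log(response.data);
}

main().catch(console.error);
```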
What are workflows?
Workflows are multi-step, event-driven processes that combine agents, data connectors, and other tools to solve complex problems. With LlamaIndex.TS you can chain together retrieval, generation, and tool-calling steps and then deploy the entire pipeline as a microservice.
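In code, a workflow is a set of handlers wired together by typed events. The sketch below assumes the event-driven API from the `@llamaindex/workflow` package (`createWorkflow`, `workflowEvent`, and the event `.with()` / `.include()` helpers); these names and import paths are assumptions and may differ across versions:

```ts
import { createWorkflow, workflowEvent } from "@llamaindex/workflow";

// Typed events that flow between steps.
const startEvent = workflowEvent<string>();  // carries the user question
const answerEvent = workflowEvent<string>(); // carries the final answer

const flow = createWorkflow();

// A single step: turn the incoming question into an answer event.
// In a real workflow this handler could call retrievers, LLMs, or other agents.
flow.handle([startEvent], (event) => {
  return answerEvent.with(`You asked: ${event.data}`);
});

async function main() {
  const { stream, sendEvent } = flow.createContext();
  sendEvent(startEvent.with("What is context engineering?"));

  // Drain the event stream until the answer event arrives.
  for await (const event of stream) {
    if (answerEvent.include(event)) {
      console.log(event.data);
      break;
    }
  }
}

main().catch(console.error);
```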
What is context engineering?
LLMs come pre-trained on vast public corpora, but not on your private or domain-specific data. Context engineering bridges that gap by injecting the right pieces of your data into the LLM prompt at the right time. The most popular example is Retrieval-Augmented Generation (RAG), but the same idea powers agent memory, evaluation, extraction, summarization, and more; a minimal RAG loop is sketched below.
LlamaIndex.TS gives you:
- Data connectors to ingest from APIs, files, SQL, and dozens more sources.
- Indexes & retrievers to store and retrieve your data for LLM consumption.
- Agents and engines that provide query, chat, and reasoning interfaces over your data.
- Workflows for fine-grained orchestration of your data and LLM-powered agents.
- Observability integrations so you can iterate with confidence.
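Put together, the simplest context-engineering loop (RAG) is only a few lines. The sketch below assumes the `Document`, `Settings`, and `VectorStoreIndex` APIs from the `llamaindex` package plus the `@llamaindex/openai` provider, with an `OPENAI_API_KEY` in the environment; in a real app you would ingest data through a connector instead of an inline string:

```ts
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { OpenAI, OpenAIEmbedding } from "@llamaindex/openai";

// Pick an LLM and embedding model (depending on your version,
// suitable defaults may already be configured).
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });
Settings.embedModel = new OpenAIEmbedding();

async function main() {
  // 1. Ingest: wrap your private data in Documents.
  const documents = [
    new Document({ text: "Alice maintains the billing service." }),
  ];

  // 2. Index: embed and store the documents for retrieval.
  const index = await VectorStoreIndex.fromDocuments(documents);

  // 3. Query: retrieve relevant context and let the LLM answer with it.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "Who maintains the billing service?",
  });
  console.log(response.toString());
}

main().catch(console.error);
```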
You can learn more about these concepts in our concepts guide.
Use cases
Popular scenarios include:
- LLM-Powered Agents
- Indexing and Retrieval
- Extracting Structured Data
- Custom Orchestration with Workflows
Getting started
The fastest way to get started is in StackBlitz below — no local setup required:
Want to learn more? We have several tutorials to get you started:
- Installation + Runtime Guide
- Create your first agent
- Learn how to index data and chat with it
- Learn how to write your own workflows and agents
LlamaCloud
Need an end-to-end managed pipeline? Check out LlamaCloud: best-in-class document parsing (LlamaParse), extraction (LlamaExtract), and indexing services with generous free tiers.
Community
We 💜 contributors! View our contributing guide to get started.
Related projects
- Python framework GitHub
- Python docs
- create-llama — scaffold a new project in seconds!
- UI Components — build chat applications with our Next.js components.