Integrating with LlamaIndex
Build AI applications by combining Workflows with other LlamaIndex features
This guide shows how to combine the workflow engine with LlamaIndex's retrieval and reasoning capabilities to build sophisticated AI applications.
Basic RAG Workflow
Let's build a simple Retrieval-Augmented Generation (RAG) workflow:
:::note
This example requires installing the `openai` provider (`npm i @llamaindex/openai`) and setting `OPENAI_API_KEY` in your environment variables.
:::
Building a Tool Calling Agent
Workflows can orchestrate calls to LlamaIndex agents, which can use tools (such as functions or other query engines). This example adapts the agent from `examples/11_rag.ts`.
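The original `examples/11_rag.ts` source is not reproduced here, but the general shape of a tool-calling agent looks roughly like the sketch below. It assumes the `agent` factory from `@llamaindex/workflow`, the `tool` helper from `llamaindex`, the `openai` provider from `@llamaindex/openai`, and `zod` for parameter schemas; `sumNumbers` is a hypothetical stand-in for your real tools or query engines.

```typescript
import { agent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";
import { tool } from "llamaindex";
import { z } from "zod";

// A hypothetical function tool the agent may decide to call. In a RAG
// setup you would expose a query engine here instead of arithmetic.
const sumTool = tool({
  name: "sumNumbers",
  description: "Add two numbers together",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }) => `${a + b}`,
});

async function main() {
  // The agent is itself a small workflow: it loops between the LLM
  // and tool calls until the model produces a final answer.
  const mathAgent = agent({
    llm: openai({ model: "gpt-4o-mini" }),
    tools: [sumTool],
  });

  const response = await mathAgent.run("What is 1234 + 4321?");
  console.log(response.data.result);
}

main().catch(console.error);
```

From a larger workflow, a step handler can await the agent's `run` call and wrap its result in an event, so the agent behaves like any other step in the pipeline.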
Conclusion
By combining the lightweight, event-driven workflow engine with LlamaIndex's powerful document indexing and querying capabilities, you can build sophisticated AI applications with clean, maintainable code.
The event-driven architecture allows you to:
- Break complex processes into manageable steps
- Create reusable components for common AI workflows
- Easily debug and monitor each phase of execution
- Scale your applications by isolating resource-intensive steps
- Build more resilient systems with better error handling
As you build your own applications, consider how the patterns shown here can be adapted to your specific use cases.