# Embedding
The embedding model in LlamaIndex is responsible for creating numerical (vector) representations of text. By default, LlamaIndex uses the text-embedding-ada-002 model from OpenAI.

This can be changed explicitly through the `Settings` object.
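For example, a different OpenAI embedding model can be set globally. This is a sketch assuming the `@llamaindex/openai` package; the model name shown is illustrative, not a recommendation:

```typescript
import { Settings } from "llamaindex";
import { OpenAIEmbedding } from "@llamaindex/openai";

// Override the default embedding model globally via Settings.
// "text-embedding-3-small" is just an example model name.
Settings.embedModel = new OpenAIEmbedding({
  model: "text-embedding-3-small",
});
```

All indexes and query engines created afterwards will pick up this embedding model from `Settings`.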
## Installation
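A minimal install might look like the following; the scoped package name assumes the current LlamaIndexTS package layout:

```shell
# core package plus the OpenAI embedding integration
npm install llamaindex @llamaindex/openai
```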
## Local Embedding
For local embeddings, you can use the HuggingFace embedding model.
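A configuration sketch, assuming the `@llamaindex/huggingface` package and its `modelType` option; the model name is an example, not the only choice:

```typescript
import { Settings } from "llamaindex";
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";

// Runs fully locally; the model weights are downloaded on first use.
// "BAAI/bge-small-en-v1.5" is an example model identifier.
Settings.embedModel = new HuggingFaceEmbedding({
  modelType: "BAAI/bge-small-en-v1.5",
});
```

No API key is required, since embeddings are computed in-process.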
## Local Ollama Embeddings With Remote Host
Ollama provides a way to run embedding models locally or connect to a remote Ollama instance. This is particularly useful when you need to:
- Run embeddings without relying on external API services
- Use custom embedding models
- Connect to a shared Ollama instance in your network
Note that the environment-variable configuration method you may find elsewhere sometimes does not work with the `OllamaEmbedding` class. Also note that you'll need to set the host in the Ollama server to `0.0.0.0` to allow connections from other machines.
To use Ollama embeddings with a remote host, you need to specify the host URL in the configuration like this:
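A sketch of this configuration, assuming the `@llamaindex/ollama` package and its `config.host` option; the model name and host URL are placeholders to substitute with your own:

```typescript
import { Settings } from "llamaindex";
import { OllamaEmbedding } from "@llamaindex/ollama";

// Both the model name and the host URL below are placeholders.
Settings.embedModel = new OllamaEmbedding({
  model: "nomic-embed-text",
  config: {
    host: "http://your-ollama-host:11434", // remote Ollama instance
  },
});
```

Omit the `config.host` entry to connect to a local Ollama instance on the default port.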
## Available Embeddings
Most available embeddings are listed in the sidebar on the left. Additionally, the following integrations exist without separate documentation:
- `ClipEmbedding` using `@xenova/transformers`
- `FireworksEmbedding`, see fireworks.ai
Check the LlamaIndexTS GitHub repository for the most up-to-date overview of integrations.
## API Reference