
Embedding

The embedding model in LlamaIndex is responsible for creating numerical representations of text. By default, LlamaIndex uses OpenAI's text-embedding-ada-002 model.

The model can be set explicitly through Settings:

Installation

npm install llamaindex @llamaindex/openai
import { OpenAIEmbedding } from "@llamaindex/openai";
import { Settings } from "llamaindex";
 
Settings.embedModel = new OpenAIEmbedding({
  model: "text-embedding-ada-002",
});

Local Embedding

For local embeddings, you can use the HuggingFace embedding model.

Local Ollama Embeddings With Remote Host

Ollama provides a way to run embedding models locally or connect to a remote Ollama instance. This is particularly useful when you need to:

  • Run embeddings without relying on external API services
  • Use custom embedding models
  • Connect to a shared Ollama instance in your network

The environment-variable configuration method described elsewhere may not work with the OllamaEmbedding class. Also note that you'll need to set the Ollama server's host to 0.0.0.0 to allow connections from other machines.

To use Ollama embeddings with a remote host, you need to specify the host URL in the configuration like this:

import { OllamaEmbedding } from "@llamaindex/ollama";
import { Settings } from "llamaindex";
 
// Configure Ollama with a remote host
Settings.embedModel = new OllamaEmbedding({
  model: "nomic-embed-text",
  config: {
    host: "http://your-ollama-host:11434",
  }
});

Available Embeddings

Most available embeddings are listed in the sidebar on the left; a few additional integrations exist without separate documentation. Check the LlamaIndexTS GitHub repository for the most up-to-date overview of integrations.
