Path parameters
- task_type (string, Required): The type of inference task that the model will perform. Values are `text_embedding` or `rerank`.
- voyageai_inference_id (string, Required): The unique identifier of the inference endpoint.
Body
- chunking_settings (object): Chunking configuration object.
- service (string, Required): Value is `voyageai`.
- service_settings (object, Required): Settings specific to the `voyageai` service.
- task_settings (object): Settings specific to the task type. An illustrative request that combines the optional `chunking_settings` and `task_settings` objects appears after the request examples below.
PUT /_inference/{task_type}/{voyageai_inference_id}
Console
PUT _inference/text_embedding/voyageai-embeddings
{
  "service": "voyageai",
  "service_settings": {
    "model_id": "voyage-3-large",
    "dimensions": 512
  }
}
Python
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="voyageai-embeddings",
    inference_config={
        "service": "voyageai",
        "service_settings": {
            "model_id": "voyage-3-large",
            "dimensions": 512
        }
    },
)
JavaScript
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "voyageai-embeddings",
  inference_config: {
    service: "voyageai",
    service_settings: {
      model_id: "voyage-3-large",
      dimensions: 512,
    },
  },
});
Ruby
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "voyageai-embeddings",
  body: {
    "service": "voyageai",
    "service_settings": {
      "model_id": "voyage-3-large",
      "dimensions": 512
    }
  }
)
PHP
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "voyageai-embeddings",
    "body" => [
        "service" => "voyageai",
        "service_settings" => [
            "model_id" => "voyage-3-large",
            "dimensions" => 512,
        ],
    ],
]);
curl
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"voyageai","service_settings":{"model_id":"voyage-3-large","dimensions":512}}' "$ELASTICSEARCH_URL/_inference/text_embedding/voyageai-embeddings"
Request examples
A text embedding task
Run `PUT _inference/text_embedding/voyageai-embeddings` to create an inference endpoint that performs a `text_embedding` task. The embeddings created by requests to this endpoint will have 512 dimensions.
{
  "service": "voyageai",
  "service_settings": {
    "model_id": "voyage-3-large",
    "dimensions": 512
  }
}
A rerank task
Run `PUT _inference/rerank/voyageai-rerank` to create an inference endpoint that performs a `rerank` task.
{
  "service": "voyageai",
  "service_settings": {
    "model_id": "rerank-2"
  }
}
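An illustrative request with optional settings
The examples above set only the required `service` and `service_settings` fields. The optional `chunking_settings` and `task_settings` objects from the body parameters can be included in the same request. The Python sketch below is illustrative only: the `strategy`, `max_chunk_size`, and `overlap` keys under `chunking_settings`, and the `input_type` key under `task_settings`, are assumptions drawn from other Elastic inference integrations rather than values documented on this page.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="your-api-key")

# Illustrative sketch: the chunking_settings and task_settings keys below are
# assumptions; check the service documentation for the supported fields.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="voyageai-embeddings",
    inference_config={
        "service": "voyageai",
        "service_settings": {
            "model_id": "voyage-3-large",
            "dimensions": 512
        },
        # Assumed word-based chunking: chunks of up to 250 words, 100 words of overlap
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 250,
            "overlap": 100
        },
        # Assumed task setting marking inputs as documents to be ingested
        "task_settings": {
            "input_type": "ingest"
        }
    },
)
print(resp)
If the request succeeds, text sent to this inference ID is chunked according to these settings before embeddings are generated.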