Path parameters
- task_type
  string Required The type of the inference task that the model will perform. NOTE: The `chat_completion` task type only supports streaming and only through the _stream API. Values are `completion` or `text_embedding`.
- azureopenai_inference_id
  string Required The unique identifier of the inference endpoint.
Body
- chunking_settings
  object Chunking configuration object
- service
  string Required Value is `azureopenai`.
- service_settings
  object Required
- task_settings
  object (see the sketch after this list for how the optional objects fit into a request)
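The optional `chunking_settings` and `task_settings` objects go in the same request body as `service` and `service_settings`. The sketch below mirrors the Python client example further down; the fields shown inside `chunking_settings` (`strategy`, `max_chunk_size`, `sentence_overlap`) and `task_settings` (`user`) are illustrative assumptions, so check the chunking settings and Azure OpenAI task settings references for the exact options.
Python
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        },
        # Optional chunking configuration; the field names here are assumptions.
        "chunking_settings": {
            "strategy": "sentence",
            "max_chunk_size": 250,
            "sentence_overlap": 1
        },
        # Optional per-task settings; "user" is an assumed example value
        # forwarded to Azure OpenAI.
        "task_settings": {
            "user": "example-user"
        }
    },
)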
PUT /_inference/{task_type}/{azureopenai_inference_id}
Console
PUT _inference/text_embedding/azure_openai_embeddings
{
"service": "azureopenai",
"service_settings": {
"api_key": "Api-Key",
"resource_name": "Resource-name",
"deployment_id": "Deployment-id",
"api_version": "2024-02-01"
}
}
Python
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        }
    },
)
JavaScript
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  inference_config: {
    service: "azureopenai",
    service_settings: {
      api_key: "Api-Key",
      resource_name: "Resource-name",
      deployment_id: "Deployment-id",
      api_version: "2024-02-01",
    },
  },
});
Ruby
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  body: {
    "service": "azureopenai",
    "service_settings": {
      "api_key": "Api-Key",
      "resource_name": "Resource-name",
      "deployment_id": "Deployment-id",
      "api_version": "2024-02-01"
    }
  }
)
PHP
$resp = $client->inference()->put([
  "task_type" => "text_embedding",
  "inference_id" => "azure_openai_embeddings",
  "body" => [
    "service" => "azureopenai",
    "service_settings" => [
      "api_key" => "Api-Key",
      "resource_name" => "Resource-name",
      "deployment_id" => "Deployment-id",
      "api_version" => "2024-02-01",
    ],
  ],
]);
curl
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureopenai","service_settings":{"api_key":"Api-Key","resource_name":"Resource-name","deployment_id":"Deployment-id","api_version":"2024-02-01"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_openai_embeddings"
Request examples
A text embedding task
Run `PUT _inference/text_embedding/azure_openai_embeddings` to create an inference endpoint that performs a `text_embedding` task. You do not specify a model, as it is defined already in the Azure OpenAI deployment.
{
"service": "azureopenai",
"service_settings": {
"api_key": "Api-Key",
"resource_name": "Resource-name",
"deployment_id": "Deployment-id",
"api_version": "2024-02-01"
}
}
A completion task
Run `PUT _inference/completion/azure_openai_completion` to create an inference endpoint that performs a `completion` task.
{
"service": "azureopenai",
"service_settings": {
"api_key": "Api-Key",
"resource_name": "Resource-name",
"deployment_id": "Deployment-id",
"api_version": "2024-02-01"
}
}
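After the endpoint is created, you can send it an inference request to verify the configuration. The following is a minimal sketch using the Python client; it assumes the client exposes `POST _inference/{task_type}/{inference_id}` as `client.inference.inference` and accepts a plain `input` string, and the prompt text is only illustrative.
Python
# Verify the completion endpoint created above by running a test request.
# The method name and prompt below are assumptions for illustration.
resp = client.inference.inference(
    task_type="completion",
    inference_id="azure_openai_completion",
    input="Write a one-sentence summary of Elasticsearch.",
)
print(resp)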