Query parameters
Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.
Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false).
Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be retrieved in a format that can then be added to another cluster.
Skips the specified number of models.
A comma delimited string of optional fields to include in the response body.
Values are definition, feature_importance_baseline, hyperparameters, total_feature_importance, or definition_status.
Specifies the maximum number of models to obtain.
A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.
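As an illustration of these query parameters, here is a minimal sketch using the Python client shown in the examples at the end of this page (assuming an elasticsearch-py 8.x client; the model ID pattern and tags are hypothetical, and the from parameter is exposed as from_ in Python):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical local cluster

resp = client.ml.get_trained_models(
    model_id="my-model-*",      # hypothetical wildcard expression
    allow_no_match=True,        # empty array instead of an error when nothing matches
    exclude_generated=True,     # strip generated fields so the config can be re-imported
    include="definition_status",
    from_=0,
    size=10,
    tags="prod,nlp",            # hypothetical tags; all must be present on a model
)
print(resp["count"], "matching models")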
Responses
An array of trained model resources, which are sorted by the model_id value in ascending order.
trained_model_configs attributes
Values are tree_ensemble, lang_ident, or pytorch.
A comma delimited string of tags. A trained model can have many tags, or none.
Information on the creator of the trained model.
Any field map described in the inference configuration takes precedence.
The free-text description of the trained model.
The estimated heap usage in bytes to keep the trained model in memory.
The estimated number of operations to use the trained model.
True if the full model definition is present.
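Continuing the Python sketch from the query parameters section, the per-model attributes described above can be read from each entry in trained_model_configs (field names follow this page; .get() is used because some fields are optional or version-dependent):

for cfg in resp["trained_model_configs"]:
    print(cfg["model_id"], cfg.get("model_type"))
    print("  tags:", cfg.get("tags", []))
    print("  created by:", cfg.get("created_by"))
    print("  heap bytes:", cfg.get("estimated_heap_memory_usage_bytes"))
    print("  fully defined:", cfg.get("fully_defined"))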
Inference configuration provided when storing the model config
inference_config attributes
classification attributes
Specifies the number of top class predictions to return. Defaults to 0.
Specifies the maximum number of feature importance values per document.
Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Specifies the field to which the top classes are written. Defaults to top_classes.
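For orientation, an illustrative (not exhaustive) shape of a classification inference_config, written here as a Python dict using the attributes described above:

classification_config = {
    "classification": {
        "num_top_classes": 2,
        "num_top_feature_importance_values": 0,
        "prediction_field_type": "string",   # string, number, or boolean
        "results_field": "predicted_value",
        "top_classes_results_field": "top_classes",
    }
}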
Text classification configuration options
text_classification attributes
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.
Zero shot classification configuration options
zero_shot_classification attributes
Tokenization options stored in inference configuration
Hypothesis template used when tokenizing labels for prediction
The zero shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Indicates if more than one true label exists.
The labels to predict.
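A hedged example of a zero_shot_classification block, with the required entailment/neutral/contradiction labels and a hypothetical set of runtime labels to predict:

zero_shot_config = {
    "zero_shot_classification": {
        "classification_labels": ["entailment", "neutral", "contradiction"],
        "hypothesis_template": "This example is {}.",
        "labels": ["billing", "shipping", "refunds"],   # hypothetical labels to predict
        "multi_label": False,
        "results_field": "predicted_value",
    }
}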
Fill mask inference options
fill_mask attributes
The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
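An illustrative fill_mask block; the mask_token shown is the common BERT-style token, is predefined by the model/tokenizer, and is normally omitted from requests:

fill_mask_config = {
    "fill_mask": {
        "mask_token": "[MASK]",     # predefined by the model/tokenizer
        "num_top_classes": 3,
        "results_field": "predicted_value",
    }
}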
Named entity recognition options
ner attributes
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
The token classification labels. Must be IOB formatted tags.
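A sketch of a ner block with IOB formatted classification labels; the label set below is an assumption based on a common person/organization/location/misc scheme:

ner_config = {
    "ner": {
        "classification_labels": [
            "O",
            "B-PER", "I-PER",
            "B-ORG", "I-ORG",
            "B-LOC", "I-LOC",
            "B-MISC", "I-MISC",
        ],
        "results_field": "predicted_value",
    }
}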
Pass through configuration options
pass_through attributes
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Text embedding inference options
text_embedding attributes
The number of dimensions in the embedding output
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
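An illustrative text_embedding block; the embedding_size is model-dependent, so the value below is only an assumption:

text_embedding_config = {
    "text_embedding": {
        "embedding_size": 384,      # assumed; depends on the underlying model
        "results_field": "predicted_value",
    }
}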
Text expansion inference options
text_expansion attributes
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Question answering inference options
question_answering attributes
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
The maximum answer length to consider
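A sketch of a question_answering block using hedged values for the fields described above:

question_answering_config = {
    "question_answering": {
        "max_answer_length": 15,    # assumed value
        "num_top_classes": 0,
        "results_field": "predicted_value",
    }
}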
The license level of the trained model.
metadata attributes
model_package attributes
Console:
GET _ml/trained_models/

Python:
resp = client.ml.get_trained_models()

JavaScript:
const response = await client.ml.getTrainedModels();

Ruby:
response = client.ml.get_trained_models

PHP:
$resp = $client->ml()->getTrainedModels();

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/"