API Request
API Environment
Currently, only the mlflow environment is supported.
MLflow
The mlflow inference server provides the following URLs:
/invocations
: The inference path. Send input data via a POST request and get the inference result in return.

/ping
: Used for health checks.

/health
: Same as /ping.

/version
: Returns the MLflow version.
For more information, please refer to the following page:
https://mlflow.org/docs/latest/deployment/deploy-model-locally.html#inference-server-specification
Making API Requests
Every request must include an API Key; requests without one are rejected as unauthorized and will not be processed.
You can find the API URL in the API information. Append the desired path to this URL to call each endpoint.
Making MLflow Requests
You can make requests by appending a path to the API URL you obtained earlier. Here are some examples.

If you want to send a /ping request, append /ping to the API URL:

https://api-cloud-function.elice.io/2ff51a26-9c2d-414c-86dc-56ae903291a5/ping

If you want to send an inference request, append /invocations to the API URL:

https://api-cloud-function.elice.io/2ff51a26-9c2d-414c-86dc-56ae903291a5/invocations
You can also make these requests with the curl command.
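A minimal sketch of both requests with curl, using the example API URL from this page. The `Authorization: Bearer` header format and the column names `x1`/`x2` are assumptions; check your API information for the exact header your deployment expects and use your model's actual input schema.

```shell
# API URL from the API information panel (the deployment ID below is the
# example used on this page).
API_URL="https://api-cloud-function.elice.io/2ff51a26-9c2d-414c-86dc-56ae903291a5"

# Example inference payload in MLflow's dataframe_split format; the
# columns "x1" and "x2" are placeholders for your model's input schema.
PAYLOAD='{"dataframe_split": {"columns": ["x1", "x2"], "data": [[1.0, 2.0]]}}'

# Health check (GET /ping), then inference (POST /invocations).
# Both are skipped here unless an API Key is set in $API_KEY.
if [ -n "$API_KEY" ]; then
  curl "$API_URL/ping" -H "Authorization: Bearer $API_KEY"

  curl -X POST "$API_URL/invocations" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $API_KEY" \
    -d "$PAYLOAD"
fi
```

The inference response body is the model's prediction serialized as JSON.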
To learn more about the accepted data formats in MLflow, please refer to the documentation: https://mlflow.org/docs/latest/deployment/deploy-model-locally.html#accepted-input-formats
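As a quick orientation before reading that page: MLflow's scoring server accepts pandas dataframes encoded as either `dataframe_split` or `dataframe_records`. A sketch of the same single row in both formats, assuming a hypothetical two-feature model with columns `x1` and `x2`:

```
{"dataframe_split": {"columns": ["x1", "x2"], "data": [[1.0, 2.0]]}}

{"dataframe_records": [{"x1": 1.0, "x2": 2.0}]}
```

The `dataframe_split` form lists column names once and is more compact for many rows; `dataframe_records` repeats the keys per row but is easier to assemble by hand.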