Generate, manipulate, and query information about images using the Arli AI Image Generation API.
The Arli AI Image Generation API provides endpoints for creating images from text, modifying existing images, upscaling images, and retrieving information about available models, samplers, and configuration options.
Arli AI Image Generation is powered by SDNext (https://github.com/vladmandic/sdnext), the most advanced and feature-complete open-source image generation inference engine and interface. As such, most of our available features are similar to those available in SDNext.
All Image Generation API endpoints require authentication using a Bearer token or Basic Authentication via the Authorization header. Replace {ARLIAI_API_KEY} in the examples with your actual API key. For Basic Auth, use your API key as the password, Base64 encoded (e.g., Authorization: Basic Base64Encode(ARLIAI_API_KEY)).
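As an illustration, here is a minimal sketch of building either Authorization header in Python; the Basic form follows the Base64-of-the-key scheme described above:

import base64

ARLIAI_API_KEY = "your-api-key-here"  # replace with your actual API key

# Bearer token authentication
bearer_headers = {"Authorization": f"Bearer {ARLIAI_API_KEY}"}

# Basic Authentication: the API key, Base64 encoded, as described above
encoded_key = base64.b64encode(ARLIAI_API_KEY.encode("utf-8")).decode("utf-8")
basic_headers = {"Authorization": f"Basic {encoded_key}"}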
API requests are routed to appropriate backend servers based on model requirements and availability. Ensure your account has access granted to the specific Image Generation models or Upscaler models you intend to use via the relevant endpoints (`/sd-models`, `/upscalers`).
Image generation and upscaling requests are subject to rate limits and concurrency limits based on your account plan. Exceeding limits may result in temporary account restrictions. Check your account dashboard for details on your limits.
API key parameter overrides (set in your account settings) are merged with the parameters sent in the request body and, for compatible parameters, take precedence over them.
Successful POST requests (txt2img, img2img, extra-single-image) typically return a JSON object containing a list of base64 encoded image(s) in the `images` field (or a single image in the `image` field for upscaling) and an `info` field with the generation parameters. Successful GET requests return JSON data as described in their respective sections.
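For example, a minimal sketch of a helper that decodes and saves the first returned image from a txt2img/img2img response (it assumes the `images` field described above and that the decoded bytes form a valid image file; the output filename is illustrative):

import base64

def save_first_image(response_json, path="output.png"):
    # Decode the first base64 image from a txt2img/img2img response and write it to disk
    images = response_json.get("images", [])
    if not images:
        raise ValueError("Response contained no images")
    with open(path, "wb") as f:
        f.write(base64.b64decode(images[0]))
    return path

# Usage (after a successful POST): save_first_image(response.json())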
POST /sdapi/v1/txt2img or /v1/txt2img
Generate images from text prompts using a specified Image Generation model checkpoint. Requires the `prompt` and `sd_model_checkpoint` parameters.
import requests
import json

# Note: You can also use the alternate endpoint: https://api.arliai.com/v1/txt2img
url = "https://api.arliai.com/sdapi/v1/txt2img"

# Example payload - adjust parameters as needed
payload = json.dumps({
    "prompt": "A stunning photograph of a majestic eagle soaring over mountains",
    "negative_prompt": "cartoon, drawing, illustration, sketch, low quality, blurry",
    "sd_model_checkpoint": "image_generation_model",  # REQUIRED: Specify the model checkpoint
    "steps": 30,
    "sampler_name": "DPM++ 2M Karras",
    "width": 1024,
    "height": 1024,
    "cfg_scale": 7,
    "seed": -1,
    # "detailer_enabled": True,  # Optional parameters
    # "detailer_strength": 0.5,
    # "hr_sampler_name": "DPM++ 2S a Karras"
})

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.post(url, headers=headers, data=payload)

if response.status_code == 200:
    # Process the response, which contains base64 encoded images and info
    print("Image generated successfully!")
    # print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.text)
POST /sdapi/v1/img2img or /v1/img2img
Generate images based on an initial image and a text prompt. Requires at least one base64 encoded image in the `init_images` array, along with the `prompt` and `sd_model_checkpoint` parameters.
import requests
import json
import base64

# Note: You can also use the alternate endpoint: https://api.arliai.com/v1/img2img
url = "https://api.arliai.com/sdapi/v1/img2img"

# Example: Load an initial image and encode it in base64
# Replace 'path/to/your/input_image.png' with the actual image path
init_image_base64 = "PLACEHOLDER_BASE64_ENCODED_IMAGE_STRING"  # Replace with actual base64 data
# try:
#     with open('path/to/your/input_image.png', 'rb') as img_file:
#         init_image_base64 = base64.b64encode(img_file.read()).decode('utf-8')
# except FileNotFoundError:
#     print("Error: Input image file not found.")
#     exit()

# Example payload - adjust parameters as needed
payload = json.dumps({
    "prompt": "A fantasy castle in the style of Van Gogh",
    "negative_prompt": "photorealistic, blurry, low quality",
    "sd_model_checkpoint": "image_generation_model",  # REQUIRED
    "init_images": [init_image_base64],  # REQUIRED: Array of base64 strings
    "steps": 35,
    "sampler_name": "Euler a",
    "width": 768,
    "height": 768,
    "cfg_scale": 8,
    "seed": -1,
    # "denoising_strength": 0.75  # Often used in img2img
})

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.post(url, headers=headers, data=payload)

if response.status_code == 200:
    print("Image generated successfully!")
    # print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.text)
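The commented-out block above sketches the file loading; as a standalone helper (hypothetical name), encoding a local file into the base64 string expected by `init_images` could look like:

import base64

def encode_image_to_base64(path):
    # Read a local image file and return its contents as a base64 string
    with open(path, "rb") as img_file:
        return base64.b64encode(img_file.read()).decode("utf-8")

# Usage: init_image_base64 = encode_image_to_base64("path/to/your/input_image.png")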
POST /sdapi/v1/extra-single-image or /v1/upscale-img
Upscale a single image using specified upscaler models and optionally apply face correction. Requires a base64 encoded image in the `image` parameter and an `upscaler_1` choice. Access to at least one *image generation model* (not just an upscaler model) on your account is required to use this endpoint.
import requests
import json
import base64

# Note: You can also use the alternate endpoint name: https://api.arliai.com/v1/upscale-img (if supported)
url = "https://api.arliai.com/sdapi/v1/extra-single-image"

# Example: Load an image to upscale and encode it in base64
# Replace 'path/to/your/image_to_upscale.png' with the actual image path
image_base64 = "PLACEHOLDER_BASE64_ENCODED_IMAGE_TO_UPSCALE"  # Replace with actual base64 data
# try:
#     with open('path/to/your/image_to_upscale.png', 'rb') as img_file:
#         image_base64 = base64.b64encode(img_file.read()).decode('utf-8')
# except FileNotFoundError:
#     print("Error: Input image file not found.")
#     exit()

# Example payload - adjust parameters as needed
payload = json.dumps({
    "image": image_base64,  # REQUIRED: Base64 encoded image string
    "upscaler_1": "R-ESRGAN 4x+",  # Example upscaler model name (check the /upscalers endpoint for available names)
    "upscaling_resize": 2  # Example upscale factor (e.g., 2x)
    # "resize_mode": 0,  # Optional: 0 for scaling by factor, 1 for exact dimensions (using upscaling_resize_w/h)
    # "upscaler_2": "None",  # Optional second upscaler
    # "extras_upscaler_2_visibility": 0,  # Visibility for second upscaler
})

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.post(url, headers=headers, data=payload)

if response.status_code == 200:
    print("Image upscaled successfully!")
    # Response contains the upscaled image as base64 in the 'image' field
    # print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.text)
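Continuing from the example above, a minimal sketch of saving the result, assuming the upscaled image comes back base64 encoded in the `image` field and decodes to a standard image file (the output filename is illustrative):

import base64

if response.status_code == 200:
    upscaled_b64 = response.json().get("image")
    if upscaled_b64:
        with open("upscaled.png", "wb") as f:
            f.write(base64.b64decode(upscaled_b64))
        print("Saved upscaled.png")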
GET /sdapi/v1/sd-models or /v1/img-models
Retrieves a list of image generation models (checkpoints) that your authenticated account has access to. Use the `model_name` from the response in the `sd_model_checkpoint` parameter of txt2img/img2img requests.
import requests

url = "https://api.arliai.com/sdapi/v1/sd-models"

headers = {
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    models = response.json()
    print("Available Image Models:")
    print(models)
    # Example response: [{"title": "model1_name", "model_name": "model1_name"}, ...]
else:
    print(f"Error: {response.status_code}")
    print(response.text)
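Continuing from the example above, a short sketch of pulling out the `model_name` values for use as `sd_model_checkpoint` (assuming the list-of-objects response shape shown in the comment):

model_names = [m.get("model_name") for m in models]
print(model_names)
# Pass one of these names as "sd_model_checkpoint" in txt2img/img2img payloads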
GET /sdapi/v1/upscalers or /v1/upscalers
Retrieves a list of upscaler models available to your authenticated account. Use the `name` from the response in the `upscaler_1` or `upscaler_2` parameters of the Image Upscaling endpoint.
import requests

url = "https://api.arliai.com/sdapi/v1/upscalers"

headers = {
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    upscalers = response.json()
    print("Available Upscalers:")
    print(upscalers)
    # Example response: [{"name": "R-ESRGAN 4x+", "scale": 4.0}, {"name": "Nearest", "scale": 1.0}, ...]
else:
    print(f"Error: {response.status_code}")
    print(response.text)
GET /sdapi/v1/samplers or /v1/img-samplers
Retrieves a list of available diffusion sampling methods supported by the backend server. Use the `name` from the response in the `sampler_name` parameter of txt2img/img2img requests. Requires access to at least one image generation model. The request is forwarded to an available backend server.
import requests

url = "https://api.arliai.com/sdapi/v1/samplers"

headers = {
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    samplers = response.json()
    print("Available Samplers:")
    print(samplers)
    # Example response format depends on the underlying SDNext server,
    # e.g., [{"name": "Euler a", "aliases": ["k_euler_a"], "options": {}}, ...]
else:
    print(f"Error: {response.status_code}")
    print(response.text)
GET /sdapi/v1/options or /v1/img-options
Retrieves the current configuration options of one of the backend SDNext servers. This can be useful for understanding default settings or available tunable parameters. Requires access to at least one image generation model. The request is forwarded to an available backend server.
import requests

url = "https://api.arliai.com/sdapi/v1/options"

headers = {
    'Authorization': 'Bearer {ARLIAI_API_KEY}'  # Replace with your API key
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    options = response.json()
    print("Current Server Options:")
    print(options)
    # Example response format depends on the underlying SDNext server,
    # e.g., {"sd_model_checkpoint": "currently_loaded_model", "CLIP_stop_at_last_layers": 2, ...}
else:
    print(f"Error: {response.status_code}")
    print(response.text)
The following table details parameters for the Image Generation API endpoints. The 'Applies to' column indicates which endpoint(s) typically use each parameter. GET endpoints do not accept body parameters but still require the Authorization header.
Parameter | Description & Usage | Applies to |
---|---|---|
prompt | Text prompt describing the desired image. | Text-to-Image, Image-to-Image |
negative_prompt | Text prompt specifying elements to avoid in the image. | Text-to-Image, Image-to-Image |
sd_model_checkpoint | Name of the Image Model file (e.g., "model"). Use `/sd-models` to list available models. Required for txt2img/img2img. | Text-to-Image, Image-to-Image |
steps | Number of diffusion steps. Higher values take longer but can improve detail (e.g., 20-40). | Text-to-Image, Image-to-Image |
sampler_name | Sampling method to use (e.g., "DPM++ 2M Karras", "Euler a"). Use `/samplers` to list available samplers. | Text-to-Image, Image-to-Image |
width | Width of the generated image in pixels. | Text-to-Image, Image-to-Image |
height | Height of the generated image in pixels. | Text-to-Image, Image-to-Image |
cfg_scale | Classifier-Free Guidance scale. Controls prompt influence (e.g., 5-10). | Text-to-Image, Image-to-Image |
seed | Seed for randomization. -1 means random. Use a specific integer for reproducible results. | Text-to-Image, Image-to-Image |
init_images | Array containing one or more initial images encoded as base64 strings. Required for img2img. | Image-to-Image |
denoising_strength | Controls how much of the original image content is preserved (0.0 to 1.0). Lower values keep more original content. | Image-to-Image |
image | Single image encoded as a base64 string to be processed. Required for extra-single-image. | Upscaling |
resize_mode | Upscaling resize mode. 0: scale by factor (upscaling_resize). 1: resize to specific dimensions (upscaling_resize_w, upscaling_resize_h). | Upscaling |
upscaling_resize | Factor by which to upscale the image (e.g., 2 for 2x, 4 for 4x) when resize_mode is 0. | Upscaling |
upscaler_1 | Name of the primary upscaler model. Use `/upscalers` to list. Default: "None". | Upscaling |
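For example, here is a hedged sketch of an upscaling payload that resizes to exact dimensions rather than by a factor; the `upscaling_resize_w`/`upscaling_resize_h` names come from the resize_mode row above, and the dimensions and upscaler name are illustrative:

import json

payload = json.dumps({
    "image": "BASE64_ENCODED_IMAGE_STRING",  # replace with actual base64 data
    "upscaler_1": "R-ESRGAN 4x+",            # check /upscalers for available names
    "resize_mode": 1,                        # 1 = resize to exact dimensions
    "upscaling_resize_w": 2048,              # target width in pixels
    "upscaling_resize_h": 2048               # target height in pixels
})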