All errors follow a consistent format:

```json
{
  "error": "error_code",
  "message": "Human-readable description",
  "details": {}
}
```
## HTTP Status Codes

| Code | Error | Description |
|------|-------|-------------|
| 400 | `bad_request` | Invalid parameters or missing required fields |
| 401 | `not_authenticated` | Missing or invalid Authorization header |
| 401 | `token_expired` | Access token has expired — refresh it |
| 402 | `insufficient_tokens` | Token balance is too low to cover the request |
| 403 | `insufficient_plan` | Feature requires a higher subscription plan |
| 404 | `not_found` | Generation ID or model not found |
| 422 | `validation_error` | Request body fails schema validation |
| 429 | `rate_limit_exceeded` | Too many requests — see rate limits |
| 500 | `provider_error` | Upstream AI provider returned an error |
| 503 | `model_unavailable` | Model is temporarily down or overloaded |
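The status codes and the envelope above can be folded into a single exception type. A minimal sketch — the `ElumentaAPIError` class and `error_from_body` helper are illustrative names, not part of any official SDK:

```python
# Status codes from the table that are transient and worth retrying
RETRIABLE_STATUSES = {429, 500, 503}


class ElumentaAPIError(Exception):
    """Illustrative exception carrying the standard error envelope."""

    def __init__(self, status: int, error: str, message: str, details: dict):
        super().__init__(f"{status} {error}: {message}")
        self.status = status
        self.error = error
        self.message = message
        self.details = details
        self.retriable = status in RETRIABLE_STATUSES


def error_from_body(status: int, body: dict) -> ElumentaAPIError:
    """Build the exception from a parsed error response body."""
    return ElumentaAPIError(
        status,
        body.get("error", "unknown_error"),
        body.get("message", ""),
        body.get("details") or {},
    )
```

The `retriable` flag distinguishes transient failures (429/500/503) from client errors that retrying will never fix.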
## Generation Status Responses

Generation responses include a `status` field that can be:

- `completed` — Generation successful, result available
- `pending` — Generation in progress (for async operations)
- `failed` — Generation failed due to content policy, provider error, or other issues

For `failed` status, an `error` field provides the failure reason:

```json
{
  "id": "gen_abc123",
  "status": "failed",
  "error": "Content policy violation: prohibited content detected"
}
```
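When a response comes back `pending`, poll until the status settles. A sketch under one assumption: `fetch_status(generation_id)` is a caller-supplied callable that returns the JSON shape shown above (the concrete status endpoint varies by client):

```python
import time
from typing import Callable


def wait_for_generation(
    generation_id: str,
    fetch_status: Callable[[str], dict],
    interval: float = 2.0,
    timeout: float = 120.0,
) -> dict:
    """Poll until the generation's status is 'completed' or 'failed'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = fetch_status(generation_id)
        if response.get("status") in ("completed", "failed"):
            return response
        time.sleep(interval)
    raise TimeoutError(f"Generation {generation_id} still pending after {timeout}s")
```

Raising on timeout (rather than returning the last `pending` body) keeps callers from mistaking a stalled generation for a finished one.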
## Common Generation Errors

Generation failures can occur for several reasons:

### Model Overloaded (503)

When a model is experiencing high demand:

```json
{
  "status": "failed",
  "error": "Service unavailable: model is overloaded"
}
```

### Generation Timeout

For requests that take too long to process:

```json
{
  "status": "failed",
  "error": "Generation timeout: request deadline exceeded"
}
```

### Content Policy Violations

When content is blocked by safety filters:

```json
{
  "status": "failed",
  "error": "Content blocked by safety filter"
}
```
## Common Errors

### 401 — Token expired

Access tokens expire after 15 minutes, so refresh them when you see `token_expired`. If you authenticate with a long-lived API key instead, a 401 means the key itself is invalid:

```python
import requests

class ElumentaClient:
    def __init__(self, api_key: str):
        self.api_key = api_key

    def request(self, method: str, path: str, **kwargs):
        headers = {"Authorization": f"Bearer {self.api_key}"}
        response = requests.request(
            method,
            f"https://elumenta.ru/api/v2{path}",
            headers=headers,
            **kwargs
        )
        if response.status_code == 401:
            # For API key auth, 401 means invalid key
            raise ValueError("Invalid API key")
        response.raise_for_status()
        return response.json()
```
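If you authenticate with short-lived access tokens rather than an API key, the natural extension is to refresh once on a 401 and replay the request. A sketch under that assumption — the refresh call is left as a caller-supplied function, and the `send` parameter is injectable only so the retry path can be exercised without the network:

```python
import requests
from typing import Callable


class TokenClient:
    """Sketch: replay a request once after refreshing an expired access token.

    `refresh` is any callable returning a fresh token; wire it to your own
    token-refresh flow.
    """

    def __init__(self, access_token: str, refresh: Callable[[], str], send=requests.request):
        self.access_token = access_token
        self.refresh = refresh
        self.send = send

    def request(self, method: str, path: str, **kwargs):
        for attempt in range(2):
            headers = {"Authorization": f"Bearer {self.access_token}"}
            response = self.send(
                method, f"https://elumenta.ru/api/v2{path}", headers=headers, **kwargs
            )
            if response.status_code == 401 and attempt == 0:
                # token_expired: refresh once, then replay the request
                self.access_token = self.refresh()
                continue
            response.raise_for_status()
            return response.json()
```

Capping the loop at one refresh prevents an infinite retry cycle when the refresh itself produces an unusable token.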
### 402 — Insufficient tokens

```json
{
  "error": "insufficient_tokens",
  "message": "Insufficient token balance. Required: 8, Available: 3",
  "details": {
    "required": 8,
    "available": 3
  }
}
```

Handle it gracefully:

```python
try:
    result = client.generate_image(...)
except requests.HTTPError as e:
    if e.response.status_code == 402:
        print("Out of tokens! Top up at elumenta.ru/web/billing")
```
### 429 — Rate limit exceeded

```json
{
  "error": "rate_limit_exceeded",
  "message": "Hourly limit reached: 100 requests/hour",
  "retry_after": 847,
  "limit_hour": 100,
  "limit_day": 1000
}
```
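The `retry_after` field gives the wait in seconds. A small defensive reader for it (the helper name is illustrative):

```python
def seconds_until_retry(error_body: dict, default: int = 60) -> int:
    """Read the wait time from a 429 error body, falling back to a default."""
    retry_after = error_body.get("retry_after", default)
    # Guard against missing, non-numeric, or negative values
    try:
        return max(0, int(retry_after))
    except (TypeError, ValueError):
        return default
```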
### 500 — Provider error

Upstream AI providers occasionally fail. Tokens are never charged on provider errors.

```json
{
  "error": "provider_error",
  "message": "Provider returned an error. No tokens were charged.",
  "provider": "openai"
}
```
## Retry Logic

```python
import requests
import time
from typing import Optional

def make_request(
    url: str,
    headers: dict,
    payload: dict,
    max_retries: int = 3,
    retry_on: tuple = (429, 500, 503)
) -> Optional[dict]:
    for attempt in range(max_retries):
        try:
            res = requests.post(url, headers=headers, json=payload, timeout=60)
            if res.status_code in retry_on:
                if res.status_code == 429:
                    # Respect the Retry-After header
                    wait = int(res.headers.get("Retry-After", 60))
                else:
                    # Exponential backoff for 5xx errors
                    wait = 2 ** attempt
                if attempt < max_retries - 1:
                    print(f"Attempt {attempt+1} failed ({res.status_code}), waiting {wait}s...")
                    time.sleep(wait)
                    continue
            res.raise_for_status()
            return res.json()
        except requests.Timeout:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise
    return None
```
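One refinement worth considering: when many clients back off in lockstep, they all retry at the same instant. "Full jitter" spreads retries out by randomizing the wait; a sketch of that wait computation as a drop-in replacement for the plain `2 ** attempt` backoff:

```python
import random


def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """'Full jitter': a random wait in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

The `cap` keeps late attempts from waiting unreasonably long.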
## Handling Generation Failures

Always check the `status` field in generation responses:

```python
def generate_with_error_handling(prompt: str):
    response = client.generate_image(prompt=prompt)

    if response.get("status") == "failed":
        error_msg = response.get("error", "Generation failed")
        # Handle specific error types
        if "overloaded" in error_msg.lower() or "503" in error_msg:
            print("Model is overloaded, try again later or use another model")
        elif "timeout" in error_msg.lower():
            print("Generation timed out, please retry")
        elif "policy" in error_msg.lower() or "safety" in error_msg.lower():
            print("Content blocked by moderation, please modify your prompt")
        else:
            print(f"Generation failed: {error_msg}")
        return None
    elif response.get("status") == "completed":
        return response.get("result_url")
    else:
        # Handle pending status for async operations
        return poll_generation(response.get("id"))
```
## Validation Errors

For 422 errors, the response includes field-level details:

```json
{
  "error": "validation_error",
  "message": "Request validation failed",
  "details": {
    "fields": [
      {
        "field": "size",
        "message": "Value '512x512' is not valid for model dall-e-3. Valid sizes: 1024x1024, 1024x1792, 1792x1024"
      }
    ]
  }
}
```
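A small helper can flatten `details.fields` into user-facing messages, e.g. for surfacing in a form UI (the function name is illustrative):

```python
def format_validation_errors(error_body: dict) -> list:
    """Flatten the `details.fields` list of a 422 body into readable strings."""
    fields = error_body.get("details", {}).get("fields", [])
    return [f"{item.get('field', '?')}: {item.get('message', '')}" for item in fields]
```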