🚀 Getting Started

Overview

The Radiance AI API is a decentralized batch inference engine that processes AI workloads across distributed compute nodes. The API consists of two main interfaces: a public Client API for submitting jobs and retrieving results, and an authenticated Agent API used by compute nodes to claim and process those jobs.

Quick Start (Client)

# 1. List available models
curl https://api.radiance.cloud/airunner/models

# 2. Submit a job
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=image-classification" \
  -F "file=@image.jpg" \
  -F 'payload={"top_k": 5}'

# Response: {"success": true, "result": {"id": "abc123...", "status": "queued"}}

# 3. Check status
curl https://api.radiance.cloud/airunner/status/abc123...

# 4. Get result when completed
curl https://api.radiance.cloud/airunner/result/json/abc123...

Quick Start (Agent)

# Agent authentication required via Bearer token
export AGENT_TOKEN="your-agent-bearer-token"

# 1. Claim a job
curl -X POST https://api.radiance.cloud/airunner/agent/claim \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-agent-id: my-compute-node-01"

# 2. Download input file (if job has one)
curl https://api.radiance.cloud/airunner/agent/get/abc123 \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789..." \
  -o input.jpg

# 3. Submit result
curl -X POST https://api.radiance.cloud/airunner/agent/return/abc123 \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789..." \
  -F 'data={"predictions": [{"label": "cat", "confidence": 0.95}]}' \
  -F "file=@result.json"
💡 Tip: All timestamps in the API are in ISO 8601 format (UTC). Job IDs are UUIDs generated by the system.

👤 Client API

The Client API is publicly accessible and requires no authentication. Use it to submit inference jobs, check their status, and retrieve results.

GET /airunner/models

Description: List all available AI models with their configurations

Request

No parameters required.

Response

{
  "success": true,
  "result": {
    "models": [
      {
        "id": "image-classification",
        "name": "Image Classification",
        "description": "Classify images into predefined categories",
        "requiresFile": true,
        "allowedFileTypes": ["image/jpeg", "image/png", "image/webp", "image/gif"],
        "payloadSchema": {
          "top_k": {
            "type": "number",
            "required": false,
            "description": "Return top K predictions (default: 5)"
          },
          "threshold": {
            "type": "number",
            "required": false,
            "description": "Confidence threshold (0-1)"
          }
        }
      },
      {
        "id": "text-generation",
        "name": "Text Generation",
        "description": "Generate text from prompt",
        "requiresFile": false,
        "payloadSchema": {
          "prompt": {
            "type": "string",
            "required": true,
            "description": "Input prompt"
          },
          "max_tokens": {
            "type": "number",
            "required": false,
            "description": "Maximum tokens to generate"
          }
        }
      }
    ]
  }
}

Example

curl https://api.radiance.cloud/airunner/models
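
The models listing is also useful for client-side validation before submitting. A minimal Python sketch (using the requests library; the helper name get_model_config is illustrative, not part of the API):

import requests

BASE_URL = "https://api.radiance.cloud"

def get_model_config(model_id):
    """Fetch the models list and return the entry for one model, or None."""
    resp = requests.get(f"{BASE_URL}/airunner/models")
    resp.raise_for_status()
    models = resp.json()["result"]["models"]
    return next((m for m in models if m["id"] == model_id), None)

config = get_model_config("image-classification")
if config is None:
    raise SystemExit("Unknown model; check /airunner/models")
if config.get("requiresFile"):
    print("Allowed file types:", ", ".join(config.get("allowedFileTypes", [])))
required = [name for name, spec in config.get("payloadSchema", {}).items() if spec.get("required")]
print("Required payload fields:", required or "none")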

POST /airunner/submit

Description: Submit a single inference job

Request Headers

Header | Value | Required
Content-Type | multipart/form-data | Required

Request Body (Form Data)

Field | Type | Required | Description
model | string | Required | Model ID (see GET /airunner/models)
file | file | Conditional | Input file; required when the model has requiresFile: true
payload | string (JSON) | Optional | Model-specific parameters as a JSON string
priority | string or number | Optional | urgent, high, normal (default), or low; the numeric values 3, 2, 1, 0 are also accepted
webhook_url | string (URL) | Optional | URL to notify when the job completes or fails

Success Response (201)

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "model": "image-classification",
    "priority": 1,
    "status": "queued",
    "created_at": "2025-10-27T10:15:30.123Z",
    "upload_key": "uploads/550e8400.../image.jpg",
    "upload_mime": "image/jpeg",
    "webhook_url": "https://myapp.com/webhook"
  }
}

Error Responses

400 Bad Request - Missing model:
{
  "success": false,
  "error": {
    "code": "MISSING_MODEL",
    "message": "Model ID is required"
  }
}
400 Bad Request - Invalid model:
{
  "success": false,
  "error": {
    "code": "INVALID_MODEL",
    "message": "Unknown model: wrong-model-id. Use /airunner/models to see available models"
  }
}
400 Bad Request - File required:
{
  "success": false,
  "error": {
    "code": "FILE_REQUIRED",
    "message": "Model 'image-classification' requires a file"
  }
}
400 Bad Request - Invalid payload:
{
  "success": false,
  "error": {
    "code": "INVALID_PAYLOAD",
    "message": "Payload validation failed",
    "details": [
      "Missing required field: prompt",
      "Field 'temperature' must be a number"
    ]
  }
}
413 Payload Too Large:
{
  "success": false,
  "error": {
    "code": "FILE_TOO_LARGE",
    "message": "Max 52428800 bytes"
  }
}
415 Unsupported Media Type:
{
  "success": false,
  "error": {
    "code": "UNSUPPORTED_MEDIA_TYPE",
    "message": "Model 'image-classification' does not support type: application/pdf",
    "allowedTypes": ["image/jpeg", "image/png", "image/webp", "image/gif"]
  }
}

Examples

# Image classification with file
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=image-classification" \
  -F "file=@cat.jpg" \
  -F 'payload={"top_k": 3, "threshold": 0.5}'

# Text generation without file
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=text-generation" \
  -F 'payload={"prompt": "Once upon a time", "max_tokens": 100}'

# High priority job with webhook
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=speech-to-text" \
  -F "file=@audio.mp3" \
  -F "priority=high" \
  -F "webhook_url=https://myapp.com/webhook"

POST /airunner/submit/batch

Description: Submit multiple jobs in a single request (up to 100 jobs)

Request Headers

Header | Value | Required
Content-Type | application/json | Required

Request Body

{
  "jobs": [
    {
      "model": "image-classification",
      "file_base64": "data:image/jpeg;base64,/9j/4AAQSkZJRg...",
      "filename": "image1.jpg",
      "payload": {"top_k": 5},
      "priority": "normal",
      "webhook_url": "https://myapp.com/webhook"
    },
    {
      "model": "text-generation",
      "payload": {"prompt": "Hello world", "max_tokens": 50},
      "priority": "high"
    }
  ]
}

Success Response (200)

{
  "success": true,
  "result": {
    "submitted": 2,
    "failed": 0,
    "jobs": [
      {
        "id": "550e8400-e29b-41d4-a716-446655440000",
        "model": "image-classification",
        "priority": 1,
        "status": "queued",
        "created_at": "2025-10-27T10:15:30.123Z",
        "upload_key": "uploads/550e8400.../image1.jpg",
        "upload_mime": "image/jpeg"
      },
      {
        "id": "660f9511-f30c-52e5-b827-557766551111",
        "model": "text-generation",
        "priority": 2,
        "status": "queued",
        "created_at": "2025-10-27T10:15:30.456Z",
        "upload_key": null,
        "upload_mime": null
      }
    ]
  }
}

Partial Success Response

{
  "success": true,
  "result": {
    "submitted": 1,
    "failed": 1,
    "jobs": [
      {
        "id": "550e8400-e29b-41d4-a716-446655440000",
        "model": "image-classification",
        "priority": 1,
        "status": "queued",
        "created_at": "2025-10-27T10:15:30.123Z"
      }
    ],
    "errors": [
      {
        "index": 1,
        "error": "Model 'invalid-model' is invalid"
      }
    ]
  }
}
📝 Note: For batch submissions, files must be provided as base64-encoded data URLs. The format is: data:<mime-type>;base64,<base64-data>

Example

curl -X POST https://api.radiance.cloud/airunner/submit/batch \
  -H "Content-Type: application/json" \
  -d '{
    "jobs": [
      {
        "model": "image-classification",
        "file_base64": "data:image/jpeg;base64,/9j/4AAQ...",
        "filename": "cat.jpg",
        "payload": {"top_k": 3}
      },
      {
        "model": "image-classification",
        "file_base64": "data:image/png;base64,iVBORw0KG...",
        "filename": "dog.png",
        "payload": {"top_k": 3}
      }
    ]
  }'
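
When batch requests are built from a script rather than typed by hand, the data URLs can be assembled from local files. A sketch in Python (requests assumed; to_data_url is an illustrative helper, and unknown MIME types fall back to application/octet-stream here):

import base64
import json
import mimetypes
import requests

def to_data_url(path):
    """Encode a local file as a data:<mime>;base64,<data> URL for batch submission."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

jobs = [
    {"model": "image-classification", "file_base64": to_data_url("cat.jpg"),
     "filename": "cat.jpg", "payload": {"top_k": 3}},
    {"model": "image-classification", "file_base64": to_data_url("dog.png"),
     "filename": "dog.png", "payload": {"top_k": 3}},
]

resp = requests.post("https://api.radiance.cloud/airunner/submit/batch", json={"jobs": jobs})
print(json.dumps(resp.json(), indent=2))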

GET /airunner/status/:id

Description: Check the status and timing metrics of a job

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID returned from submit

Success Response (200)

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "model": "image-classification",
    "priority": 1,
    "status": "completed",
    "created_at": "2025-10-27T10:15:30.123Z",
    "started_at": "2025-10-27T10:15:45.678Z",
    "completed_at": "2025-10-27T10:16:12.345Z",
    "timing": {
      "queue_time_ms": 15555,
      "running_time_ms": 26667,
      "complete_time_ms": 42222,
      "latency_ms": 0
    },
    "failure_reason": null,
    "result_key": "results/550e8400.../result.json",
    "result_file_key": "results/550e8400.../output.jpg",
    "webhook_url": "https://myapp.com/webhook"
  }
}

Status Values

Status | Description
queued | Job is waiting to be claimed by an agent
running | Job is currently being processed by an agent
completed | Job finished successfully, results available
failed | Job failed during processing (see failure_reason)
cancelled | Job was cancelled by the client

Timing Metrics Explained

  • queue_time_ms: Time from job creation to when an agent started processing it
  • running_time_ms: Time the agent spent actively processing the job
  • complete_time_ms: Total time from creation to completion
  • latency_ms: System overhead (complete_time - queue_time - running_time)

Error Response (404)

{
  "success": false,
  "error": {
    "code": "NOT_FOUND",
    "message": "Job not found (yet)"
  }
}

Example

curl https://api.radiance.cloud/airunner/status/550e8400-e29b-41d4-a716-446655440000

DELETE /airunner/cancel/:id

Description: Cancel a queued or running job

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID to cancel

Success Response (200)

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "status": "cancelled",
    "cancelled_at": "2025-10-27T10:16:30.123Z"
  }
}

Error Responses

404 Not Found:
{
  "success": false,
  "error": {
    "code": "NOT_FOUND",
    "message": "Job not found"
  }
}
409 Conflict - Already finished:
{
  "success": false,
  "error": {
    "code": "ALREADY_FINISHED",
    "message": "Cannot cancel completed or failed job"
  }
}
📝 Note: Cancelling a running job will mark it as cancelled, but the agent may still complete processing. If a webhook is configured, a cancellation notification will be sent.

Example

curl -X DELETE https://api.radiance.cloud/airunner/cancel/550e8400-e29b-41d4-a716-446655440000
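
In client code it is worth handling the 404 and 409 responses explicitly, since a job may finish before the cancel request is processed. A hedged Python sketch (cancel_job is an illustrative helper name):

import requests

def cancel_job(job_id):
    """Try to cancel a job; True if cancelled, False if it had already finished."""
    resp = requests.delete(f"https://api.radiance.cloud/airunner/cancel/{job_id}")
    if resp.status_code == 200:
        return True
    code = resp.json().get("error", {}).get("code")
    if code == "ALREADY_FINISHED":
        # The job completed or failed before the cancel was processed
        return False
    if code == "NOT_FOUND":
        raise ValueError(f"Unknown job: {job_id}")
    resp.raise_for_status()
    return False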

GET /airunner/result/json/:id

Description: Get the result metadata and data as JSON

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID

Success Response (200)

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "finished_at": "2025-10-27T10:16:12.345Z",
  "agent": "compute-node-01",
  "model": "image-classification",
  "payload": {
    "top_k": 5,
    "threshold": 0.5
  },
  "upload_key": "uploads/550e8400.../cat.jpg",
  "result": {
    "predictions": [
      {"label": "cat", "confidence": 0.95},
      {"label": "kitten", "confidence": 0.87},
      {"label": "tabby", "confidence": 0.76}
    ]
  },
  "output": {
    "type": "file",
    "file_key": "results/550e8400.../output.jpg",
    "filename": "output.jpg",
    "content_type": "image/jpeg"
  },
  "meta": {
    "processing_time_ms": 2450,
    "model_version": "v2.1"
  }
}

Error Responses

404 Not Found:
{
  "success": false,
  "error": {
    "code": "NOT_FOUND",
    "message": "Job not found"
  }
}
409 Conflict - Not ready:
{
  "success": false,
  "error": {
    "code": "NOT_READY",
    "message": "Result not available yet"
  }
}

Example

curl https://api.radiance.cloud/airunner/result/json/550e8400-e29b-41d4-a716-446655440000

GET /airunner/result/file/:id

Description: Download the result file (if the job produced one)

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID

Success Response (200)

Returns the file with appropriate Content-Type and Content-Disposition headers.

Error Responses

404 Not Found - No file:
{
  "success": false,
  "error": {
    "code": "NO_FILE",
    "message": "This result has no file"
  }
}

Example

# Download result file
curl https://api.radiance.cloud/airunner/result/file/550e8400-e29b-41d4-a716-446655440000 \
  -o result.jpg

GET /airunner/jobs/debug

Description: List recent jobs with timing information (last 100 jobs)

Success Response (200)

{
  "success": true,
  "result": {
    "jobs": [
      {
        "id": "550e8400-e29b-41d4-a716-446655440000",
        "model": "image-classification",
        "priority": 2,
        "status": "completed",
        "created_at": "2025-10-27T10:15:30.123Z",
        "started_at": "2025-10-27T10:15:45.678Z",
        "completed_at": "2025-10-27T10:16:12.345Z",
        "timing": {
          "queue_time_ms": 15555,
          "running_time_ms": 26667,
          "complete_time_ms": 42222,
          "latency_ms": 0
        }
      }
    ]
  }
}

Example

curl https://api.radiance.cloud/airunner/jobs/debug

🤖 Agent API

The Agent API is used by compute nodes to claim jobs, download input files, and submit results. All Agent API endpoints require authentication.

⚠️ Authentication Required: All agent endpoints require a Bearer token in the Authorization header. Contact the system administrator to obtain your agent token.

Agent Authentication

# Set your agent token
export AGENT_TOKEN="your-secret-agent-token"

# All agent requests must include:
Authorization: Bearer $AGENT_TOKEN

POST /airunner/agent/claim

Description: Claim the next available job from the queue. Jobs are claimed based on priority (urgent > high > normal > low) and then by creation time (oldest first).

Request Headers

Header | Value | Required
Authorization | Bearer <token> | Required
x-agent-id | Agent identifier | Optional

Success Response (200) - Job claimed

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "model": "image-classification",
    "payload": {
      "top_k": 5,
      "threshold": 0.5
    },
    "priority": 1,
    "upload_key": "uploads/550e8400.../cat.jpg",
    "upload_mime": "image/jpeg",
    "claim_token": "xyz789-claim-token-abc123"
  }
}

No Content Response (204) - No jobs available

When there are no jobs in the queue, the endpoint returns 204 No Content with an empty body.

Error Response (401)

{
  "success": false,
  "error": {
    "code": "UNAUTHORIZED",
    "message": "Agent auth required"
  }
}
💡 Important: The claim_token in the response is required for all subsequent operations on this job. Store it securely and include it in the x-claim-token header for get, return, and failed endpoints.
📝 Automatic Reclaiming: Jobs stuck in "running" status for more than 30 minutes are automatically reclaimed and returned to the queue. This prevents jobs from being lost if an agent crashes.

Example

curl -X POST https://api.radiance.cloud/airunner/agent/claim \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-agent-id: my-compute-node-01"

GET /airunner/agent/get/:id

Description: Download the input file for a claimed job

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID

Request Headers

Header | Value | Required
Authorization | Bearer <token> | Required
x-claim-token | Token from claim response | Required

Success Response (200)

Returns the file with appropriate Content-Type and Content-Disposition headers.

Error Responses

401 Unauthorized:
{
  "success": false,
  "error": {
    "code": "UNAUTHORIZED",
    "message": "Agent auth required"
  }
}
403 Forbidden - Invalid claim token:
{
  "success": false,
  "error": {
    "code": "FORBIDDEN",
    "message": "Invalid claim token"
  }
}
404 Not Found - No file:
{
  "success": false,
  "error": {
    "code": "NO_UPLOAD",
    "message": "Job has no uploaded file"
  }
}
409 Conflict - Invalid state:
{
  "success": false,
  "error": {
    "code": "INVALID_STATE",
    "message": "Job not running"
  }
}

Example

curl https://api.radiance.cloud/airunner/agent/get/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789-claim-token-abc123" \
  -o input.jpg

POST /airunner/agent/return/:id

Description: Submit the result for a completed job. Supports both JSON data and file uploads.

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID

Request Headers

Header | Value | Required
Authorization | Bearer <token> | Required
x-claim-token | Token from claim response | Required
Content-Type | multipart/form-data or application/json | Required

Request Body - Option 1: Multipart Form Data

Field | Type | Required | Description
data | string (JSON) | Optional | Result data as JSON string
file | file | Optional | Result file (image, audio, etc.)
meta | string (JSON) | Optional | Additional metadata (processing time, model version, etc.)

Request Body - Option 2: JSON

{
  "data": {
    "predictions": [
      {"label": "cat", "confidence": 0.95}
    ]
  },
  "meta": {
    "processing_time_ms": 2450,
    "model_version": "v2.1"
  }
}

Success Response (200)

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "status": "completed",
    "result_key": "results/550e8400.../result.json",
    "result_file_key": "results/550e8400.../output.jpg"
  }
}
📝 Result Structure: The system automatically stores your result as a JSON file at result_key. This file includes the job ID, timestamp, agent ID, model, payload, your data, optional file reference, and optional metadata.

Error Responses

401 Unauthorized:
{
  "success": false,
  "error": {
    "code": "UNAUTHORIZED",
    "message": "Agent auth required"
  }
}
403 Forbidden:
{
  "success": false,
  "error": {
    "code": "FORBIDDEN",
    "message": "Invalid claim token"
  }
}
409 Conflict:
{
  "success": false,
  "error": {
    "code": "INVALID_STATE",
    "message": "Job not running"
  }
}
415 Unsupported Media Type:
{
  "success": false,
  "error": {
    "code": "UNSUPPORTED_MEDIA_TYPE",
    "message": "Use JSON or multipart"
  }
}

Examples

# Return JSON result only
curl -X POST https://api.radiance.cloud/airunner/agent/return/550e8400-... \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789..." \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "predictions": [
        {"label": "cat", "confidence": 0.95},
        {"label": "kitten", "confidence": 0.87}
      ]
    },
    "meta": {
      "processing_time_ms": 2450
    }
  }'

# Return with file and data
curl -X POST https://api.radiance.cloud/airunner/agent/return/550e8400-... \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789..." \
  -F 'data={"transcription": "Hello world"}' \
  -F "file=@output.txt" \
  -F 'meta={"model_version": "whisper-v3"}'

POST /airunner/agent/failed/:id

Description: Report that a job has failed during processing

URL Parameters

Parameter | Type | Description
id | string (UUID) | Job ID

Request Headers

Header | Value | Required
Authorization | Bearer <token> | Required
x-claim-token | Token from claim response | Required
Content-Type | application/json | Required

Request Body

{
  "reason": "Model inference failed: CUDA out of memory"
}

Success Response (200)

{
  "success": true,
  "result": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "status": "failed",
    "reason": "Model inference failed: CUDA out of memory"
  }
}
📝 Webhook Notification: If the job has a webhook configured, a failure notification will be sent automatically.

Error Responses

401 Unauthorized:
{
  "success": false,
  "error": {
    "code": "UNAUTHORIZED",
    "message": "Agent auth required"
  }
}
403 Forbidden:
{
  "success": false,
  "error": {
    "code": "FORBIDDEN",
    "message": "Invalid claim token"
  }
}

Example

curl -X POST https://api.radiance.cloud/airunner/agent/failed/550e8400-... \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "x-claim-token: xyz789..." \
  -H "Content-Type: application/json" \
  -d '{"reason": "Invalid input file format"}'

📋 Available Models

The following AI models are currently available in the system:

Image Classification

Property | Value
Model ID | image-classification
Requires File | ✅ Yes
File Types | image/jpeg, image/png, image/webp, image/gif

Payload Parameters: top_k (number, optional, default 5) and threshold (number, optional, 0-1), as listed in the GET /airunner/models response.

Object Detection

Property | Value
Model ID | object-detection
Requires File | ✅ Yes
File Types | image/jpeg, image/png, image/webp

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Speech to Text

Property | Value
Model ID | speech-to-text
Requires File | ✅ Yes
File Types | audio/mpeg, audio/wav, audio/ogg, audio/flac, audio/aac, audio/m4a

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Text to Speech

Property | Value
Model ID | text-to-speech
Requires File | ❌ No

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Text Generation

Property | Value
Model ID | text-generation
Requires File | ❌ No

Payload Parameters: prompt (string, required) and max_tokens (number, optional), as listed in the GET /airunner/models response.

Image Generation

Property | Value
Model ID | image-generation
Requires File | ❌ No

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Video Analysis

Property | Value
Model ID | video-analysis
Requires File | ✅ Yes
File Types | video/mp4, video/avi, video/mov, video/webm, video/mkv

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Document Analysis

Property | Value
Model ID | document-analysis
Requires File | ✅ Yes
File Types | application/pdf, application/msword, .docx, .xlsx, text/plain, text/csv

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

Data Processing

Property | Value
Model ID | data-processing
Requires File | ✅ Yes
File Types | text/csv, application/json, application/xml, application/x-parquet

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

3D Model Analysis

Property | Value
Model ID | 3d-model-analysis
Requires File | ✅ Yes
File Types | model/obj, model/stl, model/gltf+json, model/gltf-binary

Payload Parameters: see the payloadSchema for this model in the GET /airunner/models response.

📊 Response Formats

Standard Success Response

{
  "success": true,
  "result": {
    // Endpoint-specific data
  }
}

Standard Error Response

{
  "success": false,
  "error": {
    "code": "ERROR_CODE",
    "message": "Human-readable error description",
    "details": [] // Optional additional information
  }
}
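
Because most JSON endpoints wrap their bodies in this envelope (the result/json endpoint is the notable exception), clients can unwrap it in one place. A small Python sketch (the ApiError class is illustrative, not part of the API):

class ApiError(Exception):
    def __init__(self, code, message, details=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.details = details or []

def unwrap(body):
    """Return the 'result' payload from a standard envelope, or raise ApiError."""
    if body.get("success"):
        return body.get("result")
    err = body.get("error", {})
    raise ApiError(err.get("code", "UNKNOWN"), err.get("message", ""), err.get("details"))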

Common Error Codes

Code | HTTP Status | Description
BAD_REQUEST | 400 | Malformed request or wrong content type
UNAUTHORIZED | 401 | Missing or invalid authentication (Agent API only)
FORBIDDEN | 403 | Invalid claim token
NOT_FOUND | 404 | Job or resource not found
CONFLICT | 409 | Invalid state transition (e.g., cancelling completed job)
FILE_TOO_LARGE | 413 | File exceeds 50MB limit
UNSUPPORTED_MEDIA_TYPE | 415 | File type not allowed for this model
MISSING_MODEL | 400 | Model ID not provided
INVALID_MODEL | 400 | Unknown model ID
FILE_REQUIRED | 400 | Model requires a file but none provided
INVALID_PAYLOAD | 400 | Payload validation failed
INVALID_STATE | 409 | Job not in correct state for operation
NOT_READY | 409 | Result not yet available

💡 Complete Use Cases

Use Case 1: Image Classification Pipeline (Client)

#!/bin/bash
# Complete workflow: Submit → Poll → Get Result

# Step 1: Submit job
RESPONSE=$(curl -s -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=image-classification" \
  -F "file=@my_image.jpg" \
  -F 'payload={"top_k": 5, "threshold": 0.3}' \
  -F "priority=high" \
  -F "webhook_url=https://myapp.com/webhook")

echo "Submit Response: $RESPONSE"

# Extract job ID
JOB_ID=$(echo $RESPONSE | jq -r '.result.id')
echo "Job ID: $JOB_ID"

# Step 2: Poll until completed
while true; do
  STATUS=$(curl -s https://api.radiance.cloud/airunner/status/$JOB_ID)
  STATE=$(echo $STATUS | jq -r '.result.status')
  echo "Current status: $STATE"
  
  if [ "$STATE" = "completed" ]; then
    echo "Job completed!"
    break
  elif [ "$STATE" = "failed" ]; then
    REASON=$(echo $STATUS | jq -r '.result.failure_reason')
    echo "Job failed: $REASON"
    exit 1
  fi
  
  sleep 2
done

# Step 3: Get result JSON
curl -s https://api.radiance.cloud/airunner/result/json/$JOB_ID | jq '.'

# Step 4: Download result file (if exists)
curl -s https://api.radiance.cloud/airunner/result/file/$JOB_ID -o result.jpg

Use Case 2: Batch Image Processing (Client)

#!/bin/bash
# Process multiple images in one batch request

# Convert images to base64
IMAGE1_B64=$(base64 -w 0 cat1.jpg)
IMAGE2_B64=$(base64 -w 0 cat2.jpg)
IMAGE3_B64=$(base64 -w 0 dog1.jpg)

# Submit batch
curl -X POST https://api.radiance.cloud/airunner/submit/batch \
  -H "Content-Type: application/json" \
  -d "{
    \"jobs\": [
      {
        \"model\": \"image-classification\",
        \"file_base64\": \"data:image/jpeg;base64,$IMAGE1_B64\",
        \"filename\": \"cat1.jpg\",
        \"payload\": {\"top_k\": 3},
        \"priority\": \"normal\"
      },
      {
        \"model\": \"image-classification\",
        \"file_base64\": \"data:image/jpeg;base64,$IMAGE2_B64\",
        \"filename\": \"cat2.jpg\",
        \"payload\": {\"top_k\": 3}
      },
      {
        \"model\": \"image-classification\",
        \"file_base64\": \"data:image/jpeg;base64,$IMAGE3_B64\",
        \"filename\": \"dog1.jpg\",
        \"payload\": {\"top_k\": 3}
      }
    ]
  }" | jq '.'

Use Case 3: Python Client Example

import requests
import time
import json

BASE_URL = "https://api.radiance.cloud"

def submit_job(model, file_path=None, payload=None, priority="normal", webhook_url=None):
    """Submit a job to the API"""
    url = f"{BASE_URL}/airunner/submit"
    
    files = {}
    data = {"model": model}
    
    if file_path:
        files["file"] = open(file_path, "rb")
    
    if payload:
        data["payload"] = json.dumps(payload)
    
    data["priority"] = priority
    
    if webhook_url:
        data["webhook_url"] = webhook_url
    
    response = requests.post(url, files=files, data=data)
    
    if files:
        files["file"].close()
    
    return response.json()

def get_status(job_id):
    """Get job status"""
    url = f"{BASE_URL}/airunner/status/{job_id}"
    response = requests.get(url)
    return response.json()

def get_result(job_id):
    """Get result JSON"""
    url = f"{BASE_URL}/airunner/result/json/{job_id}"
    response = requests.get(url)
    return response.json()

def download_result_file(job_id, output_path):
    """Download result file"""
    url = f"{BASE_URL}/airunner/result/file/{job_id}"
    response = requests.get(url)
    with open(output_path, "wb") as f:
        f.write(response.content)

# Example: Image classification
result = submit_job(
    model="image-classification",
    file_path="my_image.jpg",
    payload={"top_k": 5, "threshold": 0.3},
    priority="high"
)

job_id = result["result"]["id"]
print(f"Job submitted: {job_id}")

# Poll until completed
while True:
    status = get_status(job_id)
    state = status["result"]["status"]
    print(f"Status: {state}")
    
    if state == "completed":
        # Get result
        result_data = get_result(job_id)
        print("Result:", json.dumps(result_data, indent=2))
        
        # Download file if exists
        if result_data.get("output"):
            download_result_file(job_id, "result.jpg")
            print("Result file downloaded")
        break
    elif state == "failed":
        print(f"Job failed: {status['result']['failure_reason']}")
        break
    
    time.sleep(2)

Use Case 4: Agent Worker Loop (Python)

import requests
import time
import json
import os

BASE_URL = "https://api.radiance.cloud"
AGENT_TOKEN = os.environ.get("AGENT_TOKEN")
AGENT_ID = "compute-node-01"

headers = {
    "Authorization": f"Bearer {AGENT_TOKEN}"
}

def claim_job():
    """Claim next available job"""
    url = f"{BASE_URL}/airunner/agent/claim"
    headers_with_agent = headers.copy()
    headers_with_agent["x-agent-id"] = AGENT_ID
    
    response = requests.post(url, headers=headers_with_agent)
    
    if response.status_code == 204:
        return None  # No jobs available
    
    return response.json()

def download_input(job_id, claim_token, output_path):
    """Download input file"""
    url = f"{BASE_URL}/airunner/agent/get/{job_id}"
    headers_with_token = headers.copy()
    headers_with_token["x-claim-token"] = claim_token
    
    response = requests.get(url, headers=headers_with_token)
    
    if response.status_code == 404:
        return False  # No file
    
    with open(output_path, "wb") as f:
        f.write(response.content)
    return True

def submit_result(job_id, claim_token, data=None, file_path=None, meta=None):
    """Submit result"""
    url = f"{BASE_URL}/airunner/agent/return/{job_id}"
    headers_with_token = headers.copy()
    headers_with_token["x-claim-token"] = claim_token
    
    files = {}
    form_data = {}
    
    if data:
        form_data["data"] = json.dumps(data)
    
    if file_path:
        files["file"] = open(file_path, "rb")
    
    if meta:
        form_data["meta"] = json.dumps(meta)
    
    response = requests.post(url, headers=headers_with_token, files=files, data=form_data)
    
    if files:
        files["file"].close()
    
    return response.json()

def report_failure(job_id, claim_token, reason):
    """Report job failure"""
    url = f"{BASE_URL}/airunner/agent/failed/{job_id}"
    headers_with_token = headers.copy()
    headers_with_token["x-claim-token"] = claim_token
    headers_with_token["Content-Type"] = "application/json"
    
    response = requests.post(
        url,
        headers=headers_with_token,
        json={"reason": reason}
    )
    
    return response.json()

def process_image_classification(input_path, payload):
    """Simulate image classification processing"""
    # Your actual model inference code here
    # This is a placeholder
    time.sleep(2)  # Simulate processing
    
    return {
        "predictions": [
            {"label": "cat", "confidence": 0.95},
            {"label": "kitten", "confidence": 0.87},
            {"label": "tabby", "confidence": 0.76}
        ]
    }

# Main worker loop
print(f"Agent {AGENT_ID} starting...")

while True:
    try:
        # Claim a job
        job_response = claim_job()
        
        if not job_response:
            print("No jobs available, waiting...")
            time.sleep(5)
            continue
        
        job = job_response["result"]
        job_id = job["id"]
        claim_token = job["claim_token"]
        model = job["model"]
        payload = job["payload"]
        
        print(f"Claimed job {job_id} (model: {model})")
        
        # Download input file if exists
        input_file = None
        if job.get("upload_key"):
            input_file = f"input_{job_id}"
            has_file = download_input(job_id, claim_token, input_file)
            if has_file:
                print(f"Downloaded input file: {input_file}")
        
        # Process based on model
        try:
            if model == "image-classification":
                result_data = process_image_classification(input_file, payload)
            # Add more model handlers here
            else:
                raise Exception(f"Unsupported model: {model}")
            
            # Submit result
            submit_result(
                job_id,
                claim_token,
                data=result_data,
                meta={"agent": AGENT_ID, "processing_time_ms": 2000}
            )
            
            print(f"Completed job {job_id}")
            
        except Exception as e:
            # Report failure
            report_failure(job_id, claim_token, str(e))
            print(f"Failed job {job_id}: {e}")
        
        finally:
            # Cleanup
            if input_file and os.path.exists(input_file):
                os.remove(input_file)
    
    except Exception as e:
        print(f"Worker error: {e}")
        time.sleep(10)  # Wait before retrying

Use Case 5: Text Generation Without Files

#!/bin/bash
# Submit text generation job (no file required)

RESPONSE=$(curl -s -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=text-generation" \
  -F 'payload={
    "prompt": "Write a short story about a robot learning to paint",
    "max_tokens": 500,
    "temperature": 0.7
  }')

JOB_ID=$(echo $RESPONSE | jq -r '.result.id')

# Wait for completion
while true; do
  STATUS=$(curl -s https://api.radiance.cloud/airunner/status/$JOB_ID)
  STATE=$(echo $STATUS | jq -r '.result.status')
  
  if [ "$STATE" = "completed" ]; then
    # Get generated text
    curl -s https://api.radiance.cloud/airunner/result/json/$JOB_ID \
      | jq -r '.result.generated_text'
    break
  fi
  
  sleep 2
done

🔔 Webhooks

Webhooks allow you to receive real-time notifications when your jobs complete or fail, instead of polling the status endpoint.

Setting Up Webhooks

Include the webhook_url parameter when submitting a job:

curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=image-classification" \
  -F "file=@image.jpg" \
  -F "webhook_url=https://myapp.com/webhook"

Webhook Payload - Job Completed

{
  "event": "job.completed",
  "job": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "model": "image-classification",
    "priority": 1,
    "status": "completed",
    "created_at": "2025-10-27T10:15:30.123Z",
    "started_at": "2025-10-27T10:15:45.678Z",
    "completed_at": "2025-10-27T10:16:12.345Z",
    "timing": {
      "queue_time_ms": 15555,
      "running_time_ms": 26667,
      "complete_time_ms": 42222,
      "latency_ms": 0
    },
    "result_key": "results/550e8400.../result.json",
    "result_file_key": "results/550e8400.../output.jpg",
    "webhook_url": "https://myapp.com/webhook"
  },
  "timestamp": "2025-10-27T10:16:12.500Z"
}

Webhook Payload - Job Failed

{
  "event": "job.failed",
  "job": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "model": "image-classification",
    "priority": 1,
    "status": "failed",
    "created_at": "2025-10-27T10:15:30.123Z",
    "started_at": "2025-10-27T10:15:45.678Z",
    "completed_at": "2025-10-27T10:16:12.345Z",
    "failure_reason": "CUDA out of memory",
    "webhook_url": "https://myapp.com/webhook"
  },
  "timestamp": "2025-10-27T10:16:12.500Z"
}

Webhook Headers

Header | Value
Content-Type | application/json
User-Agent | AI-Runner-Webhook/1.0

Example Webhook Handler (Node.js/Express)

const express = require('express');
const app = express();

app.use(express.json());

app.post('/webhook', async (req, res) => {
  const { event, job, timestamp } = req.body;
  
  console.log(`Received webhook: ${event}`);
  console.log(`Job ID: ${job.id}`);
  console.log(`Status: ${job.status}`);
  
  if (event === 'job.completed') {
    // Job completed successfully
    console.log(`Result key: ${job.result_key}`);
    
    // Fetch result
    const response = await fetch(
      `https://api.radiance.cloud/airunner/result/json/${job.id}`
    );
    const result = await response.json();
    console.log('Result:', result);
    
    // Process result...
  } else if (event === 'job.failed') {
    // Job failed
    console.error(`Job failed: ${job.failure_reason}`);
    // Handle failure...
  }
  
  // Respond with 200 to acknowledge receipt
  res.status(200).send('OK');
});

app.listen(3000, () => {
  console.log('Webhook server listening on port 3000');
});
💡 Best Practices:
  • Respond with HTTP 200 quickly to acknowledge receipt
  • Process webhook data asynchronously if it takes time
  • Implement retry logic on your end (webhooks are sent once)
  • Verify the webhook came from the API by checking job status (see the sketch after this list)
  • Use HTTPS for your webhook URLs
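
The verification tip can be as simple as re-fetching the job before acting on a webhook. A minimal Python sketch (framework-agnostic; verify_webhook is an illustrative name):

import requests

def verify_webhook(payload):
    """Cross-check a webhook payload against the status endpoint before trusting it."""
    job_id = payload["job"]["id"]
    resp = requests.get(f"https://api.radiance.cloud/airunner/status/{job_id}")
    resp.raise_for_status()
    actual_status = resp.json()["result"]["status"]
    # Only act on the event if the API reports the same status the webhook claims
    return actual_status == payload["job"]["status"]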

⚡ Priority System

Jobs can be assigned different priority levels to control processing order. Higher priority jobs are claimed by agents before lower priority jobs.

Priority Levels

Priority | Value | Use Case
urgent | 3 | Critical, time-sensitive tasks that need immediate processing
high | 2 | Important tasks that should be processed soon
normal | 1 (default) | Standard processing priority
low | 0 | Background tasks, batch jobs, non-urgent processing

How Priority Works

Agents always claim the highest-priority queued job first; within the same priority level, jobs are claimed oldest first (by created_at). Priority can be passed either as a string (urgent, high, normal, low) or as the equivalent numeric value (3, 2, 1, 0). The ordering is illustrated below.
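
The effect of this ordering can be sketched with a plain sort; this mirrors the documented behaviour and is not the server's actual implementation:

# Highest priority first, then oldest first within the same priority level
queued = [
    {"id": "a", "priority": 1, "created_at": "2025-10-27T10:00:00Z"},
    {"id": "b", "priority": 3, "created_at": "2025-10-27T10:05:00Z"},
    {"id": "c", "priority": 1, "created_at": "2025-10-27T09:55:00Z"},
]
claim_order = sorted(queued, key=lambda j: (-j["priority"], j["created_at"]))
print([j["id"] for j in claim_order])  # ['b', 'c', 'a']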

Examples

# Submit urgent job (string format)
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=text-generation" \
  -F 'payload={"prompt": "Emergency alert text"}' \
  -F "priority=urgent"

# Submit low priority batch job (numeric format)
curl -X POST https://api.radiance.cloud/airunner/submit \
  -F "model=image-classification" \
  -F "file=@image.jpg" \
  -F "priority=0"

# Batch submission with mixed priorities
curl -X POST https://api.radiance.cloud/airunner/submit/batch \
  -H "Content-Type: application/json" \
  -d '{
    "jobs": [
      {
        "model": "text-generation",
        "payload": {"prompt": "Critical task"},
        "priority": "urgent"
      },
      {
        "model": "image-classification",
        "file_base64": "data:image/jpeg;base64,...",
        "priority": "low"
      }
    ]
  }'
📝 Note: Priority affects processing order but not the result quality. All jobs are processed with the same quality regardless of priority level.

⏱️ Timing Metrics

The API provides detailed timing metrics for every job to help you understand performance and identify bottlenecks.

Timing Fields

Field | Type | Description
created_at | ISO timestamp | When the job was submitted
started_at | ISO timestamp or null | When an agent started processing (null if still queued)
completed_at | ISO timestamp or null | When the job finished (null if still in progress)
queue_time_ms | number | Time waiting in queue (ms)
running_time_ms | number | Time spent processing (ms)
complete_time_ms | number or null | Total time from submission to completion (ms)
latency_ms | number or null | System overhead time (ms)

Timing Calculation

queue_time_ms = started_at - created_at
running_time_ms = completed_at - started_at
complete_time_ms = completed_at - created_at
latency_ms = complete_time_ms - (queue_time_ms + running_time_ms)
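
The same arithmetic in Python, applied to the ISO 8601 timestamps returned by the status endpoint (a sketch using only the standard library):

from datetime import datetime

def parse_ts(ts):
    # Timestamps are ISO 8601 with a trailing 'Z' (UTC)
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

created = parse_ts("2025-10-27T10:15:30.000Z")
started = parse_ts("2025-10-27T10:15:45.000Z")
completed = parse_ts("2025-10-27T10:16:12.000Z")

queue_time_ms = (started - created).total_seconds() * 1000
running_time_ms = (completed - started).total_seconds() * 1000
complete_time_ms = (completed - created).total_seconds() * 1000
latency_ms = complete_time_ms - (queue_time_ms + running_time_ms)
print(queue_time_ms, running_time_ms, complete_time_ms, latency_ms)  # 15000.0 27000.0 42000.0 0.0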

Example Timing Data

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "completed",
  "created_at": "2025-10-27T10:15:30.000Z",
  "started_at": "2025-10-27T10:15:45.000Z",
  "completed_at": "2025-10-27T10:16:12.000Z",
  "timing": {
    "queue_time_ms": 15000,    // Waited 15 seconds in queue
    "running_time_ms": 27000,  // Processed for 27 seconds
    "complete_time_ms": 42000, // Total 42 seconds
    "latency_ms": 0            // No system overhead
  }
}

Interpreting Metrics

A long queue_time_ms usually indicates that agents are at capacity for the current load, while a long running_time_ms reflects the processing cost of the job itself. latency_ms captures any remaining system overhead.
💡 Performance Tips:
  • Monitor queue_time_ms to detect agent capacity issues
  • Use the /airunner/jobs/debug endpoint to analyze timing across multiple jobs (see the sketch after this list)
  • High priority jobs typically have lower queue times
  • Agents report their own processing metadata in the result meta field
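
As a sketch of the jobs/debug tip above (Python with requests; the aggregation is illustrative):

import requests

# Pull the most recent jobs and summarise how long they waited in the queue
resp = requests.get("https://api.radiance.cloud/airunner/jobs/debug")
resp.raise_for_status()
jobs = resp.json()["result"]["jobs"]

queue_times = [j["timing"]["queue_time_ms"] for j in jobs if j["timing"].get("queue_time_ms") is not None]
if queue_times:
    avg = sum(queue_times) / len(queue_times)
    print(f"{len(queue_times)} jobs, avg queue time {avg:.0f} ms, max {max(queue_times)} ms")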