
Enqueuing Jobs

Note

These endpoints are available in both application/json and application/msgpack formats.

Jobs are pushed to the queue by your application so that workers can process them asynchronously. By default, jobs are ready for processing immediately; to schedule a job for a future date, specify a ready_at timestamp in the future.

There are two endpoints for enqueueing jobs: single enqueue and bulk enqueue. Both accept job inputs in exactly the same shape. The server responds with the job(s) and their generated IDs.

Common Job Parameters

Both endpoints accept and return the same structure, except that the bulk enqueue endpoint wraps the jobs in an array: {"jobs": [...]}.

Field Description
queue required
string
Arbitrary queue name to which the job is assigned. Must be valid UTF-8 and must not contain any of the following reserved characters: ,, *, ?, [, ], {, }, \.
type required
string
Job type known to your application. Must be valid UTF-8 and must not contain any of the following reserved characters: ,, *, ?, [, ], {, }, \.
ready_at
int64
If the client wishes to schedule this job for a future time, this field is set to the timestamp at which the job is ready for processing.
payload required
object
Any JSON-serializable type to be processed by your application
unique_key
string
Optional unique key for this job, which is used to protect against duplicate job enqueues. This is paired with the optional unique_while field which defines the scope within which the job is considered unique. Uniqueness is status-bound, not time-bound. There is no arbitrary expiry. Conflicting enqueues do not produce errors, but instead behave idempotently. A success response is returned with details of the existing matching job, and its duplicate field set to true. This key is intentionally global across all queues and job types. Clients should prefix it as necessary. Requires a pro license.
unique_while
string
When the job has a unique key, specifies the scope within which that job is considered unique. One of:
queued
Other jobs with the same unique_key will not be enqueued while this job is in the scheduled or ready statuses.
active
Other jobs with the same unique_key will not be enqueued while this job is in the scheduled, ready or in_flight statuses.
exists
Other jobs with the same unique_key will not be enqueued for as long as this job exists (i.e. until this job is reaped, according to the retention policy).
The default scope is queued.
backoff
object
Optional backoff policy which overrides the server's default policy. All fields are required. Zizq computes the backoff delay as base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts). The jitter_ms term mitigates retry flooding when failures arrive in clusters.
backoff.base_ms
int32
The minimum delay in milliseconds between job retries.
backoff.exponent
float
A multiplier applied to the number of attempts on each retry, used as pow(attempts, exponent) to produce an increasing delay in milliseconds.
backoff.jitter_ms
int32
A random delay in milliseconds added on each attempt, multiplied by the total number of attempts: attempts * rand(0..jitter_ms). Prevents retries from clustering together.
retry_limit
int32
Overrides the server's default retry limit for this job. Once this limit is reached, the server marks the job dead.
retention
object
Optional retention policy for dead and completed jobs which overrides the server's default policy. All fields are optional.
retention.dead_ms
int64
The number of milliseconds for which to retain dead jobs after all retries have been exhausted. When not set, the server's default value (7 days) applies. When set to zero, jobs are purged as soon as all retries have been exhausted.
retention.completed_ms
int64
The number of milliseconds for which to retain completed jobs after successful processing. When not set, the server's default value (zero) applies. When set to zero, jobs are purged immediately upon completion.
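The documented backoff formula can be sketched in a few lines. This is an illustrative reimplementation, not the server's code; the function name is hypothetical and the default values are taken from the backoff example later on this page.

```python
import random

def backoff_delay_ms(attempts, base_ms=1000, exponent=1.5, jitter_ms=10000):
    """Sketch of the documented formula:
    base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts)."""
    return base_ms + attempts ** exponent + random.uniform(0.0, jitter_ms) * attempts

# With jitter_ms set to zero the delay grows deterministically:
# backoff_delay_ms(4, base_ms=1000, exponent=1.5, jitter_ms=0) -> 1008.0
```

With a non-zero jitter_ms, each retry adds up to jitter_ms * attempts milliseconds of random spread, which is what breaks up the retry stampede after a clustered failure.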

Common Job Response

Both endpoints accept and return the same structure, except that the bulk enqueue endpoint wraps the jobs in an array: {"jobs": [...]}.

Field Description
id required
string
Unique time-sequenced job ID assigned by the server.
queue required
string
Arbitrary queue name to which the job is assigned
type required
string
Job type known to your application
status required
string
The job status on the server. One of:
  • scheduled
  • ready
  • in_flight
  • completed
  • dead
Which statuses appear depends on context; for example, a freshly enqueued job is either scheduled or ready.
unique_key
string
Optional unique key for this job, which is used to protect against duplicate job enqueues. This is paired with the optional unique_while field which defines the scope within which the job is considered unique.
unique_while
string
When the job has a unique key, specifies the scope within which that job is considered unique. One of:
queued
Conflicting jobs will not be enqueued while this job is in the scheduled or ready statuses.
active
Conflicting jobs will not be enqueued while this job is in the scheduled, ready or in_flight statuses.
exists
Conflicting jobs will not be enqueued while this job exists in any status (i.e. until the job is reaped, according to the retention policy).
The default scope is queued.
duplicate required
boolean
Only returned on enqueue responses. Set to true if this job was a duplicate enqueue of an existing job according to its unique_key and unique_while scope.
ready_at required
int64
The timestamp at which this job is ready to be dequeued by workers.
attempts required
int32
The number of times this job has been previously attempted (starts at zero).
backoff
object
Optional backoff policy which overrides the server's default policy. All fields are required. Zizq computes the backoff delay as base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts). The jitter_ms term mitigates retry flooding when failures arrive in clusters.
backoff.base_ms
int32
The minimum delay in milliseconds between job retries.
backoff.exponent
float
A multiplier applied to the number of attempts on each retry, used as pow(attempts, exponent) to produce an increasing delay in milliseconds.
backoff.jitter_ms
int32
A random delay in milliseconds added on each attempt, multiplied by the total number of attempts: attempts * rand(0..jitter_ms). Prevents retries from clustering together.
retry_limit
int32
Overrides the server's default retry limit for this job. Once this limit is reached, the server marks the job dead.
retention
object
Optional retention policy for dead and completed jobs which overrides the server's default policy. All fields are optional.
retention.dead_ms
int64
The number of milliseconds for which to retain dead jobs after all retries have been exhausted. When not set, the server's default value (7 days) applies. When set to zero, jobs are purged as soon as all retries have been exhausted.
retention.completed_ms
int64
The number of milliseconds for which to retain completed jobs after successful processing. When not set, the server's default value (zero) applies. When set to zero, jobs are purged immediately upon completion.
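The retention fields above are plain millisecond counts. A small conversion helper (illustrative; the helper name is not part of the API) avoids magic numbers when building a request; the values here match the retention example later on this page.

```python
def ms(days=0, hours=0):
    """Convert a duration to milliseconds for the retention fields."""
    return (days * 24 + hours) * 3600 * 1000

retention = {
    "completed_ms": ms(days=1),  # keep completed jobs for 1 day
    "dead_ms": ms(days=7),       # keep dead jobs for 7 days (the server default)
}
# retention == {"completed_ms": 86400000, "dead_ms": 604800000}
```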

POST /jobs

Enqueues a single job.

Request Body

See Common Job Parameters.

Responses

200 OK

The request was processed but the specified job was a duplicate of an existing job according to its unique_key and unique_while scope. The returned data is that of the existing job, and the duplicate flag is set to true.

See Common Job Response.

201 Created

The request was processed and a new job has been enqueued.

See Common Job Response.
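A client can distinguish a fresh enqueue from an idempotent duplicate either by the 201-vs-200 status code or by the duplicate field in the response body. A minimal sketch of that check (the function name is hypothetical; the field names come from the Common Job Response above):

```python
def was_duplicate(status_code, body):
    """True when an enqueue response describes an existing job rather than
    a newly created one. Prefers the documented `duplicate` field and falls
    back to the 200-vs-201 status code distinction."""
    if "duplicate" in body:
        return body["duplicate"]
    return status_code == 200

# 201 with {"duplicate": false, ...}: fresh enqueue.
# 200 with {"duplicate": true, ...}: the existing matching job was returned.
```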

400 Bad Request

Returned when given invalid inputs.

Field Description
error required
string
A description of the error.

403 Forbidden

Returned when the client attempts to use pro features but the server is not configured with a pro license.

Field Description
error required
string
A description of the error.

POST /jobs/bulk

Enqueues multiple jobs atomically.

Request Body

Field Description
jobs required
array
Array of jobs in the same shape as for a single enqueue request.

Responses

200 OK

The request was processed but all the specified jobs were duplicates of existing jobs according to their unique_key and unique_while scopes. The returned data is that of the existing jobs, and their duplicate flags are set to true.

See Common Job Response.

201 Created

The request was processed and new jobs have been enqueued. Where unique_key values were present, any duplicates are identified by their duplicate flags.

Field Description
jobs required
array
Array of jobs in the same shape as for a single enqueue response, and in the same order as the input request.

400 Bad Request

Returned when given invalid inputs.

Field Description
error required
string
A description of the error.

403 Forbidden

Returned when the client attempts to use pro features but the server is not configured with a pro license.

Field Description
error required
string
A description of the error.

Examples

Enqueue a single job

http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "payload": {"greet": "World"}
}
JSON
HTTP/1.1 201 Created
content-length: 143
content-type: application/json
date: Fri, 13 Mar 2026 08:53:47 GMT

{
    "attempts": 0,
    "id": "03fr1jkpcsipbsckqj0y6pgr7",
    "priority": 500,
    "queue": "example",
    "ready_at": 1773392027425,
    "status": "ready",
    "type": "hello_world"
}

Enqueue a scheduled job

Jobs are explicitly scheduled by providing a ready_at timestamp with a future-dated value.
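The ready_at values in the examples on this page are epoch timestamps with millisecond precision. Assuming that interpretation, a future timestamp can be computed like this (the helper name is illustrative):

```python
import time

def ready_at_in(seconds):
    """Epoch-millisecond timestamp `seconds` from now, for the ready_at field."""
    return int(time.time() * 1000) + seconds * 1000

job = {
    "queue": "example",
    "type": "hello_world",
    "payload": {"greet": "Later"},
    "ready_at": ready_at_in(3600),  # ready one hour from now
}
```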

http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "payload": {"greet": "Later"},
    "ready_at": 1773396035647
}
JSON
HTTP/1.1 201 Created
content-length: 147
content-type: application/json
date: Fri, 13 Mar 2026 09:01:08 GMT

{
    "attempts": 0,
    "id": "03fr1l0cl1quc0sfe6y2711op",
    "priority": 500,
    "queue": "example",
    "ready_at": 1773396035647,
    "status": "scheduled",
    "type": "hello_world"
}

Enqueue jobs with unique keys

Unique jobs require a pro license.

http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "unique_key": "hello_world:world",
    "payload": {"greet": "World"}
}
JSON
HTTP/1.1 201 Created
content-length: 218
content-type: application/json
date: Mon, 23 Mar 2026 11:19:58 GMT

{
    "attempts": 0,
    "duplicate": false,
    "id": "03ft8h3ubrx53abhw1fxbora3",
    "priority": 500,
    "queue": "example",
    "ready_at": 1774264798519,
    "status": "ready",
    "type": "hello_world",
    "unique_key": "hello_world:world",
    "unique_while": "queued"
}
http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "unique_key": "hello_world:world",
    "payload": {"greet": "World"}
}
JSON
HTTP/1.1 200 OK
content-length: 217
content-type: application/json
date: Mon, 23 Mar 2026 11:20:26 GMT

{
    "attempts": 0,
    "duplicate": true,
    "id": "03ft8h3ubrx53abhw1fxbora3",
    "priority": 500,
    "queue": "example",
    "ready_at": 1774264798519,
    "status": "ready",
    "type": "hello_world",
    "unique_key": "hello_world:world",
    "unique_while": "queued"
}

Bulk enqueue multiple jobs

An array of jobs is passed in the request, and the server responds with an array containing the same number of jobs, in the same order as the input request. This operation is atomic. If any jobs are invalid or fail to be enqueued, no jobs are enqueued and an error response is returned.
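Building the bulk request body is just a matter of wrapping individual job objects in the {"jobs": [...]} envelope described above. A minimal sketch (the helper name is illustrative):

```python
import json

def bulk_body(jobs):
    """Serialize job objects into the {"jobs": [...]} envelope for POST /jobs/bulk."""
    return json.dumps({"jobs": list(jobs)})

body = bulk_body([
    {"queue": "example", "type": "hello_world", "payload": {"greet": "World"}},
    {"queue": "example", "type": "hello_world", "payload": {"greet": "Later"},
     "ready_at": 1773396035647},
])
```

Because the operation is atomic, the whole array either succeeds together (one response job per input job, in order) or fails together with an error response.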

http POST http://127.0.0.1:7890/jobs/bulk <<'JSON'
{
    "jobs": [
        {
            "queue": "example",
            "priority": 500,
            "type": "hello_world",
            "payload": {"greet": "World"}
        },
        {
            "queue": "example",
            "priority": 500,
            "type": "hello_world",
            "payload": {"greet": "Later"},
            "ready_at": 1773396035647
        }
    ]
}
JSON
HTTP/1.1 201 Created
content-length: 302
content-type: application/json
date: Fri, 13 Mar 2026 09:07:17 GMT

{
    "jobs": [
        {
            "attempts": 0,
            "id": "03fr1m7p1mwctku2fptz1x5p4",
            "priority": 500,
            "queue": "example",
            "ready_at": 1773392837882,
            "status": "ready",
            "type": "hello_world"
        },
        {
            "attempts": 0,
            "id": "03fr1m7p1mwctku2fpx425jzr",
            "priority": 500,
            "queue": "example",
            "ready_at": 1773396035647,
            "status": "scheduled",
            "type": "hello_world"
        }
    ]
}

Enqueue a job with explicit backoff policy

http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "payload": {"greet": "World"},
    "backoff": {
        "base_ms": 1000,
        "exponent": 1.5,
        "jitter_ms": 10000
    }
}
JSON
HTTP/1.1 201 Created
content-length: 203
content-type: application/json
date: Sat, 14 Mar 2026 03:24:16 GMT

{
    "attempts": 0,
    "backoff": {
        "base_ms": 1000,
        "exponent": 1.5,
        "jitter_ms": 10000
    },
    "id": "03fr7ki3x5kqf1epbydrfebkz",
    "priority": 500,
    "queue": "example",
    "ready_at": 1773458656424,
    "status": "ready",
    "type": "hello_world"
}

Enqueue a job with explicit retention policy

http POST http://127.0.0.1:7890/jobs <<'JSON'
{
    "queue": "example",
    "priority": 500,
    "type": "hello_world",
    "payload": {"greet": "World"},
    "retention": {
        "completed_ms": 86400000,
        "dead_ms": 604800000
    }
}
JSON
HTTP/1.1 201 Created
content-length: 201
content-type: application/json
date: Sat, 14 Mar 2026 03:26:01 GMT

{
    "attempts": 0,
    "id": "03fr7kudjeradun2wk1v3tn7b",
    "priority": 500,
    "queue": "example",
    "ready_at": 1773458761086,
    "retention": {
        "completed_ms": 86400000,
        "dead_ms": 604800000
    },
    "status": "ready",
    "type": "hello_world"
}