
Taking & Processing Jobs

Workers process jobs by taking them from the queue, doing work specific to your application, and then notifying the server of success (ack) or failure (nack).

Workers stream jobs from specific queues, or from all queues. This endpoint never closes its connection so the worker can continue taking jobs indefinitely. By default workers receive at most one job at a time and will not receive any more jobs until the server receives a success or failure for that job. Workers can specify a prefetch limit to request more than one job at a time, e.g. to increase throughput, or because they dispatch to multiple threads to process more than one job concurrently.
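
A worker that sets a prefetch above 1 typically dispatches jobs to a pool so several can be processed concurrently. A minimal sketch of that dispatch pattern (the stream and acknowledgement transport are stubbed out; `jobs`, `handle` and `ack` are illustrative names, with `ack` standing in for a POST to the success endpoint):

```python
from concurrent.futures import ThreadPoolExecutor

def run_worker(jobs, handle, ack, workers=3):
    """Dispatch jobs from a stream to a thread pool, acking each on success.

    `jobs` yields job dicts (e.g. parsed from GET /jobs/take with
    prefetch=workers); `handle` does the application-specific work;
    `ack` reports success for one job ID.
    """
    def process(job):
        # On success, ack. A real worker would instead report a
        # failure (nack) when handle() raises.
        handle(job)
        ack(job["id"])

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for job in jobs:
            pool.submit(process, job)
```

Matching `workers` to the prefetch value keeps every thread busy without taking more jobs than the worker can process at once.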

When jobs fail, Zizq applies a backoff policy, as configured on the server and/or on the specific job. After the configured retry_limit is reached, jobs move to the dead set and are optionally retained for a period of time based on the configured retention policy.

When jobs complete successfully, they may be retained for a period of time based on the configured retention policy.

GET /jobs/take

Note

This is a streaming endpoint and is available in both application/x-ndjson and application/vnd.zizq.msgpack-stream formats.

Opens a persistent streaming connection that receives jobs for the specified queues, or all queues, in real time as new jobs become ready for processing.

Note

Blank heartbeat messages are sent over the stream at periodic intervals so that the server can detect disconnects early. The client should consume but ignore these messages.
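
A stream consumer therefore needs to tolerate blank heartbeat lines. A small sketch of the line-filtering logic (how the lines are obtained is up to your HTTP client; a streaming GET on /jobs/take yielding decoded lines is assumed):

```python
import json

def iter_jobs(lines):
    """Yield parsed jobs from an NDJSON stream, skipping blank heartbeats."""
    for line in lines:
        if not line.strip():
            continue  # heartbeat: consume but ignore
        yield json.loads(line)

# With a real client this would wrap the streaming response body, e.g.
# requests.get(url, stream=True).iter_lines(decode_unicode=True).
```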

Take jobs from one or more queues. This is a streaming endpoint — the connection stays open and jobs are delivered as newline-delimited JSON as they become available.

Caution

Any in_flight jobs are automatically returned to the queue (the ready status) whenever the connection is closed. This is correct and robust behaviour: other workers can receive those in-flight jobs after an interruption. However, if the disconnection is part of a coordinated worker shutdown, make sure to acknowledge or fail any in-flight jobs before closing the stream.
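
During a coordinated shutdown, a worker can drain its in-flight jobs before closing the stream. A sketch of that bookkeeping (`ack` is whatever function reports success for one job ID; names are illustrative):

```python
def drain_in_flight(in_flight_ids, ack):
    """Acknowledge every in-flight job before closing the stream.

    `in_flight_ids` holds the IDs of jobs taken but not yet acked;
    `ack` reports success for one job ID (a failing job would be
    nacked instead). Returns the IDs that were acknowledged.
    """
    acked = []
    for job_id in list(in_flight_ids):
        ack(job_id)
        acked.append(job_id)
    return acked
```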

Parameters

Field Description
queue query
string
Optional comma-separated list of queue names from which to take jobs. Defaults to all queues.
prefetch query
int32
Optional number of jobs to prefetch at once without reporting success or failure. The default is 1. Workers that process multiple jobs concurrently should increase this accordingly.

Responses

200 OK

Streaming list of jobs. See the Content Type section of the introduction for details on the streaming content types. Each job has the following structure.

Note

If the connection closes prematurely, any jobs that were in-flight for this worker are automatically returned to the ready status so other workers can take the job.

Field Description
id required
string
Unique time-sequenced job ID assigned by the server.
queue required
string
Arbitrary queue name to which the job is assigned.
type required
string
Job type known to your application.
status required
string
The job status on the server. One of:
  • scheduled
  • ready
  • in_flight
  • completed
  • dead
Actual statuses shown will be context-dependent.
unique_key
string
Optional unique key for this job, which is used to protect against duplicate job enqueues. This is paired with the optional unique_while field which defines the scope within which the job is considered unique.
unique_while
string
When the job has a unique key, specifies the scope within which that job is considered unique. One of:
queued
Conflicting jobs will not be enqueued while this job is in the scheduled or ready statuses.
active
Conflicting jobs will not be enqueued while this job is in the scheduled, ready or in_flight statuses.
exists
Conflicting jobs will not be enqueued while this job exists in any status (i.e. until the job is reaped, according to the retention policy).
The default scope is queued.
payload required
object
Any JSON-serializable type to be processed by your application.
ready_at required
int64
The timestamp at which this job is ready to be dequeued by workers.
attempts required
int32
The number of times this job has been previously attempted (starts at zero).
backoff
object
Optional backoff policy which overrides the server's default policy. All fields are required. Zizq computes the backoff delay as base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts). The jitter_ms mitigates retry flooding when failures occur clustered together.
backoff.base_ms
int32
The minimum delay in milliseconds between job retries.
backoff.exponent
float
A multiplier applied to the number of attempts on each retry, used as pow(attempts, exponent) to produce an increasing delay in milliseconds.
backoff.jitter_ms
int32
A random delay added onto each attempt, multiplied by the total number of attempts, i.e. attempts * rand(0.0..jitter_ms). Prevents retries clustering together.
retry_limit
int32
Overrides the server's default retry limit for this job. Once this limit is reached, the server marks the job dead.
retention
object
Optional retention policy for dead and completed jobs which overrides the server's default policy. All fields are optional.
retention.dead_ms
int64
The number of milliseconds for which to retain dead jobs after all retries have been exhausted. When not set, the server's default value (7 days) applies. When set to zero, jobs are purged as soon as all retries have been exhausted.
retention.completed_ms
int64
The number of milliseconds for which to retain completed jobs after successful processing. When not set, the server's default value (zero) applies. When set to zero, jobs are purged immediately upon completion.
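
The backoff fields above combine into a retry delay per the documented formula, base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts). A sketch for illustration (variable names mirror the fields; this is not the server's implementation):

```python
import random

def backoff_delay_ms(attempts, base_ms, exponent, jitter_ms, rng=random.random):
    """Compute a retry delay in milliseconds:
    base_ms + attempts**exponent + rand(0.0..jitter_ms) * attempts
    """
    return base_ms + attempts ** exponent + rng() * jitter_ms * attempts

# With jitter_ms=0 the delay grows polynomially with the attempt count;
# the jitter term scales with attempts, spreading clustered retries apart.
```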

POST /jobs/{id}/success

Note

This endpoint is available in both application/json and application/msgpack formats.

Notify the backend that an in_flight job has completed successfully (ack).

Tip

If your client supports HTTP/2, you should use multiplexing to send multiple acknowledgements over the same connection without waiting synchronously for each acknowledgement's response.

If your client only supports HTTP/1.1, you should use a keep-alive connection so all acknowledgements share the same connection.

Parameters

Field Description
id path
string
The ID of the job that is currently in_flight and has completed successfully.

Responses

204 No Content

Acknowledgement successfully received.

404 Not Found

The job does not exist or is no longer in-flight.

Note

Clients can generally ignore this error, as there is no action to be taken as a result.

Field Description
error required
string
A description of the error.

POST /jobs/success

Note

This endpoint is available in both application/json and application/msgpack formats.

Notify the backend that multiple in_flight jobs have completed successfully (bulk ack).

Tip

If your client supports HTTP/2, you should use multiplexing to send multiple batches of acknowledgements over the same connection without waiting synchronously for each bulk acknowledgement's response.

If your client only supports HTTP/1.1, you should use a keep-alive connection so all acknowledgements share the same connection.

Request Body

Field Description
ids required
array
The IDs of the jobs that are currently in_flight and have completed successfully.

Responses

204 No Content

Acknowledgement successfully received.

422 Unprocessable Entity

Returned when the operation was partially (or completely) unsuccessful due to the presence of job IDs that do not exist or are no longer in-flight. Only the invalid IDs are not processed; all other IDs are acknowledged successfully.

Note

Clients can generally ignore this error, as there is no action to be taken as a result.

Field Description
not_found required
array
Array of input job IDs that were not valid in-flight IDs.
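
A client that needs to distinguish accepted from rejected IDs can do so with the not_found array: every sent ID not listed there was acknowledged successfully. A sketch of that bookkeeping (what to do with the rejected IDs is up to your application):

```python
def split_bulk_ack(sent_ids, not_found):
    """Partition a bulk-ack request into acknowledged and rejected IDs.

    Per the 422 semantics, every ID in `sent_ids` that is absent from
    `not_found` was acknowledged successfully.
    """
    rejected = set(not_found)
    acked = [job_id for job_id in sent_ids if job_id not in rejected]
    return acked, sorted(rejected)
```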

POST /jobs/{id}/failure

Note

This endpoint is available in both application/json and application/msgpack formats.

Notify the backend that an in_flight job has failed (nack). Zizq may retry this job according to the backoff policy.

Parameters

Field Description
id path
string
The ID of the job that is currently in_flight and has failed.

Request Body

Field Description
message required
string
Error message to be recorded as the reason for this failure.
error_type
string
Optional error type (e.g. exception class) to be recorded with the error.
backtrace
string
Optional full backtrace to be recorded with the error.
retry_at
int64
Optional timestamp specifying that this job should be retried at the specified time. This overrides any configured backoff policy.
kill
boolean
When set to true, overrides the backoff policy and marks the job as dead immediately. Setting this field to false does not prevent the server from marking this job dead if it has exceeded its retry limit.
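
The request body can be assembled by sending only the optional fields you actually set, since unset fields fall back to the server's behaviour. A small sketch (field names match the table above; the helper itself is illustrative):

```python
def failure_body(message, error_type=None, backtrace=None, retry_at=None, kill=None):
    """Build the body for POST /jobs/{id}/failure, omitting unset fields."""
    body = {"message": message}
    optional = {"error_type": error_type, "backtrace": backtrace,
                "retry_at": retry_at, "kill": kill}
    body.update({k: v for k, v in optional.items() if v is not None})
    return body
```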

Responses

200 OK

Error details successfully received.

Field Description
id required
string
Unique time-sequenced job ID assigned by the server.
queue required
string
Arbitrary queue name to which the job is assigned.
type required
string
Job type known to your application.
status required
string
The job status on the server. One of:
  • scheduled
  • ready
  • in_flight
  • completed
  • dead
Actual statuses shown will be context-dependent.
unique_key
string
Optional unique key for this job, which is used to protect against duplicate job enqueues. This is paired with the optional unique_while field which defines the scope within which the job is considered unique.
unique_while
string
When the job has a unique key, specifies the scope within which that job is considered unique. One of:
queued
Conflicting jobs will not be enqueued while this job is in the scheduled or ready statuses.
active
Conflicting jobs will not be enqueued while this job is in the scheduled, ready or in_flight statuses.
exists
Conflicting jobs will not be enqueued while this job exists in any status (i.e. until the job is reaped, according to the retention policy).
The default scope is queued.
duplicate required
boolean
Only returned on enqueue responses. Set to true if this job was a duplicate enqueue of an existing job according to its unique_key and unique_while scope.
ready_at required
int64
The timestamp at which this job is ready to be dequeued by workers.
attempts required
int32
The number of times this job has been previously attempted (starts at zero).
backoff
object
Optional backoff policy which overrides the server's default policy. All fields are required. Zizq computes the backoff delay as base_ms + (attempts^exponent) + (rand(0.0..jitter_ms) * attempts). The jitter_ms mitigates retry flooding when failures occur clustered together.
backoff.base_ms
int32
The minimum delay in milliseconds between job retries.
backoff.exponent
float
A multiplier applied to the number of attempts on each retry, used as pow(attempts, exponent) to produce an increasing delay in milliseconds.
backoff.jitter_ms
int32
A random delay added onto each attempt, multiplied by the total number of attempts, i.e. attempts * rand(0.0..jitter_ms). Prevents retries clustering together.
retry_limit
int32
Overrides the server's default retry limit for this job. Once this limit is reached, the server marks the job dead.
retention
object
Optional retention policy for dead and completed jobs which overrides the server's default policy. All fields are optional.
retention.dead_ms
int64
The number of milliseconds for which to retain dead jobs after all retries have been exhausted. When not set, the server's default value (7 days) applies. When set to zero, jobs are purged as soon as all retries have been exhausted.
retention.completed_ms
int64
The number of milliseconds for which to retain completed jobs after successful processing. When not set, the server's default value (zero) applies. When set to zero, jobs are purged immediately upon completion.

404 Not Found

The job does not exist or is no longer in-flight.

Note

Clients can generally ignore this error, as there is no action to be taken as a result.

Field Description
error required
string
A description of the error.

Examples

Streaming jobs from all queues

http --stream GET http://127.0.0.1:7890/jobs/take
HTTP/1.1 200 OK
content-type: application/x-ndjson
date: Sat, 14 Mar 2026 04:59:48 GMT
transfer-encoding: chunked

{
    "attempts": 0,
    "dequeued_at": 1773464388204,
    "id": "03fr82s077azjmurys29qjch4",
    "payload": {
        "greet": "World On Queue #1"
    },
    "priority": 200,
    "queue": "example_1",
    "ready_at": 1773464269527,
    "status": "in_flight",
    "type": "hello_world"
}

Streaming jobs from specified queues

http --stream GET "http://127.0.0.1:7890/jobs/take?queue=example_5,example_7"
HTTP/1.1 200 OK
content-type: application/x-ndjson
date: Sat, 14 Mar 2026 05:01:44 GMT
transfer-encoding: chunked

{
    "attempts": 0,
    "dequeued_at": 1773464504362,
    "id": "03fr82s4q2jktvg5p1acpeqfe",
    "payload": {
        "greet": "World On Queue #5"
    },
    "priority": 200,
    "queue": "example_5",
    "ready_at": 1773464270599,
    "status": "in_flight",
    "type": "hello_world"
}

Streaming jobs with prefetching

By default prefetch=1, so only one job is received before each acknowledgement. Specifying a higher prefetch value allows the worker to take more jobs at once.

http --stream GET "http://127.0.0.1:7890/jobs/take?prefetch=3"
HTTP/1.1 200 OK
content-type: application/x-ndjson
date: Sat, 14 Mar 2026 05:04:41 GMT
transfer-encoding: chunked

{
    "attempts": 0,
    "dequeued_at": 1773464681299,
    "id": "03fr82s077azjmurys29qjch4",
    "payload": {
        "greet": "World On Queue #1"
    },
    "priority": 200,
    "queue": "example_1",
    "ready_at": 1773464269527,
    "status": "in_flight",
    "type": "hello_world"
}

{
    "attempts": 0,
    "dequeued_at": 1773464681299,
    "id": "03fr82s1c7s2arw4bxboy1ibe",
    "payload": {
        "greet": "World On Queue #2"
    },
    "priority": 200,
    "queue": "example_2",
    "ready_at": 1773464269797,
    "status": "in_flight",
    "type": "hello_world"
}

{
    "attempts": 0,
    "dequeued_at": 1773464681299,
    "id": "03fr82s2honqa7zobvzmwql9u",
    "payload": {
        "greet": "World On Queue #3"
    },
    "priority": 200,
    "queue": "example_3",
    "ready_at": 1773464270070,
    "status": "in_flight",
    "type": "hello_world"
}

Reporting job success (ack)

http POST http://127.0.0.1:7890/jobs/03fr82s1c7s2arw4bxboy1ibe/success
HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:07:08 GMT

Reporting bulk job success (bulk ack)

http POST http://127.0.0.1:7890/jobs/success <<'JSON'
{
    "ids": [
        "03fr82s1c7s2arw4bxboy1ibe",
        "03fr82s2honqa7zobvzmwql9u",
        "03fr82s3lxshnj4znseuvlaub"
    ]
}
JSON
HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:10:09 GMT

Reporting job failure (nack)

http POST http://127.0.0.1:7890/jobs/03fr82s4q2jktvg5p1acpeqfe/failure <<'JSON'
{
    "message": "Something went wrong",
    "error_type": "RuntimeError"
}
JSON
HTTP/1.1 200 OK
content-length: 203
content-type: application/json
date: Sat, 14 Mar 2026 05:11:53 GMT

{
    "attempts": 1,
    "dequeued_at": 1773465062881,
    "failed_at": 1773465113506,
    "id": "03fr82s4q2jktvg5p1acpeqfe",
    "priority": 200,
    "queue": "example_5",
    "ready_at": 1773465156928,
    "status": "scheduled",
    "type": "hello_world"
}

Acknowledging jobs while streaming

Your application would usually do this in code rather than on the command line. This example uses HTTPie to pipe the NDJSON stream through jq, which extracts the id of each job, and then through xargs, which sends an acknowledgement for each job so that the stream delivers the next one. Because the server never closes the stream, this pipeline never exits.

$ http --stream GET http://127.0.0.1:7890/jobs/take \
  | jq -r --unbuffered .id \
  | xargs -I {id} http POST http://127.0.0.1:7890/jobs/{id}/success

HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:44 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:44 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:45 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:45 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:45 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:45 GMT



HTTP/1.1 204 No Content
date: Sat, 14 Mar 2026 05:53:46 GMT