
Queue

Asynchronous job processing for webhooks and background tasks.

The queue system handles asynchronous processing of webhooks and background jobs. When an event triggers a webhook, the event is enqueued as a job and processed asynchronously by worker tasks.

How It Works

  1. An event occurs (channel occupied, member joined, client event, etc.).
  2. The webhook integration creates a job with the event payload.
  3. The job is pushed to the configured queue backend.
  4. Worker tasks pull jobs from the queue and dispatch webhook HTTP requests.

This decouples event handling from webhook delivery, preventing slow or failing webhook endpoints from blocking the WebSocket server.
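The four steps above can be sketched in a few lines (Python used purely as illustration; names like make_job and dispatch_webhook are hypothetical stand-ins, not Sockudo's actual API):

```python
import json
from collections import deque

queue = deque()  # stand-in for the configured queue backend

def make_job(event: str, channel: str) -> str:
    # Step 2: the webhook integration wraps the event in a job payload
    return json.dumps({"event": event, "channel": channel})

def dispatch_webhook(job_json: str) -> dict:
    # Step 4: a real worker would POST this payload to the webhook endpoint;
    # here we just decode it to show the shape
    return json.loads(job_json)

# Steps 1 and 3: an event occurs and is enqueued immediately (non-blocking)
queue.append(make_job("channel_occupied", "presence-chat"))

# Step 4: a worker task pulls the job and dispatches it asynchronously
job = dispatch_webhook(queue.popleft())
print(job["event"])  # channel_occupied
```

Because the enqueue is a cheap local (or Redis/SQS) write, a slow webhook endpoint only delays the worker, never the WebSocket connection handling.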

Drivers

| Driver | Best For | Persistence | External Dependency |
|---|---|---|---|
| memory | Development, testing | None (lost on restart) | None |
| redis | Production, single Redis | Yes | Redis 6+ |
| redis-cluster | Production with Redis Cluster | Yes | Redis Cluster 6+ |
| sqs | AWS-native, serverless | Yes | AWS SQS |
| none | Disable webhooks entirely | N/A | None |

Set the driver via environment variable or JSON config:

QUEUE_DRIVER=redis

{ "queue": { "driver": "redis" } }

Memory Queue

In-memory queue with no external dependencies. Jobs are lost on restart.

{ "queue": { "driver": "memory" } }
  • Capacity: 100,000 jobs max (FIFO eviction when full).
  • Processing: Background task polls every 500ms, spawns async tasks per job.
  • Use case: Development and testing only.

The memory queue is not suitable for production. Jobs are lost on server restart and cannot be shared across multiple Sockudo nodes.
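The capacity behavior can be modeled with a bounded deque (a sketch of the described semantics, not Sockudo's internal code; the 100,000-job limit comes from the bullet above):

```python
from collections import deque

MAX_JOBS = 100_000

# A deque with maxlen drops items from the opposite end on overflow,
# matching the documented FIFO eviction: the oldest job is lost when full.
memory_queue = deque(maxlen=MAX_JOBS)

for i in range(MAX_JOBS + 5):
    memory_queue.append(f"job-{i}")

print(len(memory_queue))  # 100000
print(memory_queue[0])    # job-5  (jobs 0-4 were silently evicted)
```

Silent eviction under load is another reason this driver is for development only: bursts beyond capacity drop webhooks without any error.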

Redis Queue

Uses Redis lists with BLPOP for efficient blocking queue consumption.

{
  "queue": {
    "driver": "redis",
    "redis": {
      "concurrency": 5,
      "prefix": "sockudo_queue:"
    }
  }
}

| Setting | Env Var | Default | Description |
|---|---|---|---|
| concurrency | QUEUE_REDIS_CONCURRENCY | 5 | Number of concurrent worker tasks |
| prefix | QUEUE_REDIS_PREFIX | sockudo_queue: | Redis key prefix |
| url_override | — | null | Override Redis URL (otherwise uses database.redis) |
| cluster_mode | — | false | Use cluster-aware connections |

How Workers Process Jobs

Each worker runs an infinite loop:

  1. BLPOP on the queue key with a short timeout.
  2. Deserialize the job payload (JSON).
  3. Dispatch the webhook HTTP request.
  4. Move to the next job.

Workers run concurrently; with concurrency: 5, up to 5 jobs are processed simultaneously.
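The worker loop can be sketched with Python's queue.Queue standing in for the Redis list (queue.get(timeout=...) plays the role of BLPOP; the dispatch step is a placeholder, where real workers issue HTTP requests):

```python
import json
import queue
import threading

redis_list = queue.Queue()  # stand-in for the Redis list
delivered = []

def worker():
    while True:
        try:
            raw = redis_list.get(timeout=0.1)  # 1. blocking pop with a short timeout
        except queue.Empty:
            break                              # real workers keep looping forever
        job = json.loads(raw)                  # 2. deserialize the JSON payload
        delivered.append(job["event"])         # 3. dispatch (placeholder)
        # 4. loop continues to the next job

# concurrency: 5 -> five worker tasks pulling from the same list
for i in range(20):
    redis_list.put(json.dumps({"event": f"evt-{i}"}))
workers = [threading.Thread(target=worker) for _ in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(delivered))  # 20
```

Note that with multiple workers on one list, jobs are load-balanced, not broadcast: each job is consumed by exactly one worker.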

Redis Key Format

{prefix}queue:{queue_name}

For example: sockudo_queue:queue:webhooks
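Deriving the key is a simple concatenation (a sketch; the "webhooks" queue name follows the example above):

```python
def queue_key(prefix: str, queue_name: str) -> str:
    # {prefix}queue:{queue_name}
    return f"{prefix}queue:{queue_name}"

print(queue_key("sockudo_queue:", "webhooks"))  # sockudo_queue:queue:webhooks
```

This is useful when inspecting a live queue, e.g. checking its depth with LLEN on that key.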

Redis Cluster Queue

Same processing model as the Redis queue, but uses Redis Cluster connections.

{
  "queue": {
    "driver": "redis-cluster",
    "redis_cluster": {
      "concurrency": 5,
      "prefix": "sockudo_queue:",
      "nodes": ["redis://10.0.0.1:7000", "redis://10.0.0.2:7001"],
      "request_timeout_ms": 5000
    }
  }
}

| Setting | Env Var | Default | Description |
|---|---|---|---|
| concurrency | REDIS_CLUSTER_QUEUE_CONCURRENCY | 5 | Concurrent workers |
| prefix | REDIS_CLUSTER_QUEUE_PREFIX | sockudo_queue: | Key prefix |
| nodes | REDIS_CLUSTER_NODES | ["redis://127.0.0.1:6379"] | Cluster seed nodes |
| request_timeout_ms | — | 5000 | Request timeout in ms |

SQS Queue

AWS SQS-based queue with long polling. Supports both standard and FIFO queues.

{
  "queue": {
    "driver": "sqs",
    "sqs": {
      "region": "us-east-1",
      "concurrency": 5,
      "fifo": false
    }
  }
}

Configuration

| Setting | Env Var | Default | Description |
|---|---|---|---|
| region | QUEUE_SQS_REGION | us-east-1 | AWS region |
| queue_url_prefix | — | null | Custom queue URL prefix |
| visibility_timeout | QUEUE_SQS_VISIBILITY_TIMEOUT | 30 | Message visibility timeout (seconds) |
| endpoint_url | QUEUE_SQS_ENDPOINT_URL | null | Custom endpoint (for LocalStack) |
| max_messages | QUEUE_SQS_MAX_MESSAGES | 10 | Messages per poll |
| wait_time_seconds | QUEUE_SQS_WAIT_TIME_SECONDS | 5 | Long-polling wait time (seconds) |
| concurrency | QUEUE_SQS_CONCURRENCY | 5 | Concurrent workers |
| fifo | QUEUE_SQS_FIFO | false | Use a FIFO queue |
| message_group_id | — | default | Message group ID for FIFO queues |

FIFO Queues

When fifo: true, Sockudo appends .fifo to the queue name and uses message_group_id for ordering. FIFO queues guarantee exactly-once processing and strict ordering within a message group.
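The naming rule and the extra send parameter can be sketched as follows (hypothetical helpers and queue name, not Sockudo's code; SQS itself requires FIFO queue names to end in .fifo and a MessageGroupId on every send):

```python
def sqs_queue_name(base: str, fifo: bool) -> str:
    # fifo: true -> Sockudo appends .fifo to the queue name
    return f"{base}.fifo" if fifo else base

def send_params(body: str, fifo: bool, message_group_id: str = "default") -> dict:
    # Sketch of the extra parameter FIFO sends require
    params = {"MessageBody": body}
    if fifo:
        params["MessageGroupId"] = message_group_id  # ordering scope
    return params

print(sqs_queue_name("sockudo-webhooks", True))  # sockudo-webhooks.fifo
```

With a single message_group_id (the default), all webhook jobs are strictly ordered but also serialized; distinct group IDs would allow parallel consumption at the cost of cross-group ordering.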

Local Development

Use LocalStack for local SQS testing:

QUEUE_SQS_ENDPOINT_URL=http://localhost:4566
QUEUE_SQS_REGION=us-east-1

Auto-Creation

Sockudo automatically creates SQS queues if they don't exist. Malformed messages are deleted automatically to prevent poison pill scenarios.

Job Structure

Each queued job contains:

| Field | Description |
|---|---|
| app_key | The app key for webhook signing |
| app_id | The app ID |
| app_secret | The app secret for the HMAC signature |
| payload.time_ms | Event timestamp in milliseconds |
| payload.events | Array of Pusher-compatible event objects |
| original_signature | Deduplication key |
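Since each job carries app_key and app_secret, the delivery step can sign the payload in the standard Pusher-compatible way: an HMAC-SHA256 of the JSON body keyed by the app secret. This is a sketch of that scheme with example credentials; consult the webhook documentation for the exact headers Sockudo sends:

```python
import hashlib
import hmac
import json

def sign_webhook(app_key: str, app_secret: str, payload: dict) -> dict:
    body = json.dumps(payload)
    # HMAC-SHA256 of the exact request body, keyed by the app secret
    signature = hmac.new(
        app_secret.encode(), body.encode(), hashlib.sha256
    ).hexdigest()
    # Pusher-compatible receivers verify these headers against the body
    return {"X-Pusher-Key": app_key, "X-Pusher-Signature": signature}

headers = sign_webhook(
    "demo-key", "demo-secret",
    {"time_ms": 1700000000000,
     "events": [{"name": "channel_occupied", "channel": "my-channel"}]},
)
print(len(headers["X-Pusher-Signature"]))  # 64 (hex-encoded SHA-256)
```

The receiver must recompute the HMAC over the raw body it received; re-serializing the JSON can change key order or whitespace and invalidate the signature.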

Choosing a Driver

| Scenario | Recommended Driver |
|---|---|
| Development / testing | memory |
| Single-node production | redis |
| Multi-node production | redis or redis-cluster |
| AWS-native deployment | sqs |
| Webhooks not needed | none |

If you're already using Redis for the adapter, use redis for the queue as well; no additional infrastructure is needed.