Pricing
Pay for what you ingest
Pay only for the uncompressed ingress bytes delivered to your S3 bucket.
Includes 10 GiB/month with the Starter plan.
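For a rough sense of the math, here is a sketch that estimates a monthly bill from GiB ingested, using the rates in the comparison table below. The function is illustrative only, not part of any EdgeMQ SDK.

```python
# Rough monthly cost estimate from the published rates:
# Starter: free, 10 GiB/month included, $0.10/GiB after.
# Pro: $99/month base plus $0.06/GiB (no included ingest).
def estimate_monthly_cost(gib_ingested: float, plan: str = "starter") -> float:
    if plan == "starter":
        overage = max(gib_ingested - 10, 0)   # first 10 GiB/month is included
        return overage * 0.10
    if plan == "pro":
        return 99 + gib_ingested * 0.06
    raise ValueError("Enterprise pricing is custom; contact sales")

print(estimate_monthly_cost(25))           # Starter: 15 GiB over included -> $1.50
print(estimate_monthly_cost(500, "pro"))   # Pro: $99 + $30 usage -> $129.00
```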
Detailed comparison
| Feature | Starter | Pro | Enterprise |
|---|---|---|---|
| Base price | Free | $99/mo | Custom |
| Usage pricing | $0.10/GiB | $0.06/GiB | Volume discounts |
| Included ingest | 10 GiB/mo included | — | — |
| Edge regions | 1 | Up to 3 | Unlimited |
| Autoscaled CUs per edge region | 1 CU (max) | Up to 3 CUs | Custom |
| Sustained throughput | ~0.7–1.0 MiB/s | ~3–4.5 MiB/s (per region) | Custom SLO |
| Max payload size | 1 MiB | 10 MiB | Custom |
| Scale-to-zero | On (1–3s wake) | Off | Off |
| Burst bucket | 100–150 MiB | 2× reserved (≤5 min) | 3× reserved |
| Reserved throughput | — | Add-on (per edge region) | |
| Dedicated outbound IP | — | Add-on | |
| Webhooks & alerts | — | ✓ | ✓ |
| Usage exports | — | ✓ | ✓ |
| SLA | Best-effort | 99.9% | 99.99% |
| Support | Best effort | Business hours | 24×7 + TAM |
| Compliance (SOC2, DPA) | — | — | ✓ |
| Private networking | — | — | ✓ |
| Cross‑region HA | — | — | ✓ |
| BYO KMS | — | — | ✓ |
What's a Compute Unit (CU)?
A Compute Unit (CU) is EdgeMQ's atomic scaling unit. Each CU handles HTTP ingestion, writes to a durable Write-Ahead Log (WAL) on local NVMe storage, then ships sealed segments to your S3 bucket.
CU‑S (Starter)
- CPU: shared‑cpu‑1x
- Memory: 512 MB
- WAL Storage: 10 GB NVMe
- Processes: API + S3 shipper
- Max payload: 1 MiB
CU‑P (Pro default)
- CPU: shared‑cpu‑1x
- Memory: 1 GB
- WAL Storage: 20 GB NVMe
- Processes: API + S3 shipper
- Max payload: 10 MiB
Scale horizontally: Pro accounts autoscale to run multiple CUs per edge region (up to 3) for higher aggregate throughput. Traffic is automatically load balanced across CUs, and each operates independently with its own WAL and S3 shipper.
How EdgeMQ works
POST /ingest
Send NDJSON, RecordIO, or JSON payloads via HTTP. No brokers, no shards—just HTTP.
Durable WAL
Data appends to a local Write-Ahead Log with CRC checksums. Sub-10ms p95 latency to durable storage.
S3 Commit
Sealed segments are compressed (zstd) and shipped to your S3 bucket with commit markers.
HTTP Request → WAL Append (<10ms p95) → Seal Segment → zstd Compress → S3 Upload → Commit Marker ✓
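Here is a minimal sketch of the ingest call in Python. Only the `POST /ingest` path and the NDJSON body format come from the pipeline above; the hostname, content type, and auth header are placeholders for your own account details.

```python
import json
import requests  # any HTTP client works; EdgeMQ only needs a plain POST

# Two events encoded as NDJSON: one JSON object per line.
events = [
    {"event": "page_view", "path": "/pricing", "ts": "2024-05-01T12:00:00Z"},
    {"event": "signup", "plan": "starter", "ts": "2024-05-01T12:00:05Z"},
]
body = "\n".join(json.dumps(e) for e in events).encode()

resp = requests.post(
    "https://your-edge.example.com/ingest",        # placeholder hostname
    data=body,
    headers={
        "Content-Type": "application/x-ndjson",    # placeholder; use the type your account expects
        "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder auth scheme
    },
)
resp.raise_for_status()
```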
Frequently asked questions
What happens after the included 10 GiB on Starter?
We notify you at 80% and 100% of your included amount. Usage continues seamlessly at $0.10/GiB beyond 10 GiB.
What is “scale-to-zero” on Starter?
Starter accounts automatically suspend their compute after ~5 minutes of inactivity to minimize infrastructure costs.
Wake time: 1–3 seconds on the first request after idle. Subsequent requests are fast.
Data gap risk: Requests sent during the wake period may time out or be lost. For always-on availability, upgrade to Pro (scale-to-zero is disabled by default).
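If you stay on Starter, a small client-side retry can ride out the wake window. This is our sketch, not an official client:

```python
import time
import requests

def post_with_wake_retry(url: str, body: bytes, attempts: int = 3) -> requests.Response:
    """Retry the first request after idle so a cold start doesn't drop data."""
    for i in range(attempts):
        try:
            resp = requests.post(url, data=body, timeout=5)
            if resp.status_code < 500:
                return resp
        except requests.exceptions.RequestException:
            pass                # instance may still be waking up
        time.sleep(1 + i)       # 1s, 2s, 3s covers the 1-3s wake window
    raise RuntimeError("ingest endpoint did not respond after wake retries")
```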
Why is the Starter payload limit 1 MiB instead of 10 MiB?
The 1 MiB limit on Starter prevents memory exhaustion and ensures low latency for shared-CPU instances. This limit is appropriate for most event streams, analytics, and IoT use cases.
If you need larger payloads (batch uploads, large analytics events), upgrade to Pro for the full 10 MiB limit or Enterprise for custom sizes up to 50 MiB.
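If you would rather stay on Starter, one workaround is to split large NDJSON batches client-side so each request stays under the limit. A sketch: the 1 MiB constant mirrors the table above, and the helper name is ours.

```python
import json

MAX_PAYLOAD = 1 * 1024 * 1024  # Starter max payload: 1 MiB (uncompressed)

def chunk_ndjson(records, max_bytes=MAX_PAYLOAD):
    """Yield NDJSON byte payloads, each below the plan's payload limit."""
    chunk, size = [], 0
    for rec in records:
        line = (json.dumps(rec) + "\n").encode()
        if size + len(line) > max_bytes and chunk:
            yield b"".join(chunk)
            chunk, size = [], 0
        chunk.append(line)
        size += len(line)
    if chunk:
        yield b"".join(chunk)
```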
What are “reserved throughput” and “burst allowance”?
Reserved throughput is the guaranteed MiB/s you can sustain per edge region with consistent low latency (p95 ≤10ms). It is purchased as an add-on for Pro accounts.
Burst allowance is a token bucket that lets you temporarily exceed your reserved rate. Pro accounts get 2× reserved capacity for ≤5 minutes; Enterprise gets 3× by negotiation.
Above your burst allowance, requests receive a 503 response with a Retry-After header to signal backpressure.
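Clients should treat the 503 as a signal to slow down rather than hammer the endpoint. A minimal sketch of honoring Retry-After (the helper and placeholder URL are ours):

```python
import time
import requests

def post_with_backpressure(url: str, body: bytes, max_retries: int = 5) -> requests.Response:
    """Retry on 503, waiting as long as the service asks via Retry-After."""
    resp = requests.post(url, data=body)
    for _ in range(max_retries):
        if resp.status_code != 503:
            break
        # Honor the server's hint; fall back to one second if the header is missing.
        time.sleep(float(resp.headers.get("Retry-After", "1")))
        resp = requests.post(url, data=body)
    return resp
```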
Is my data isolated from other tenants?
Yes. Every EdgeMQ account runs on its own dedicated compute instance with:
- Private NVMe-backed WAL volume (no shared disk)
- Dedicated public IPv4 and IPv6 addresses
- Isolated process boundaries
Starter uses shared CPU scheduling (time-sliced with other tenants) but maintains full data isolation. Pro+ can optionally use dedicated CPUs for guaranteed performance.
How is metering calculated?
We meter uncompressed ingress bytes using the HTTP Content-Length header.
This means you pay for the raw data you send, not the compressed size stored in S3. EdgeMQ automatically compresses segments with zstd before shipping to reduce your S3 storage costs.
Example: Send 100 MiB of NDJSON → Charged for 100 MiB → Stored in S3 as ~30 MiB (with typical compression).
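Put another way, the metered figure is simply the byte length of the request body you send. A quick way to check what a batch will be billed at before posting it:

```python
import json

records = [{"sensor": i, "reading": 21.5 + i} for i in range(1000)]
payload = "\n".join(json.dumps(r) for r in records).encode()

# HTTP clients set Content-Length to len(payload); that is the metered figure.
print(f"metered bytes: {len(payload)}")
print(f"metered GiB:   {len(payload) / 2**30:.6f}")
```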
Can I use EdgeMQ with multiple cloud providers?
Yes. EdgeMQ is cloud-agnostic. You can configure it to ship to:
- AWS S3 (any region)
- Google Cloud Storage
- Azure Blob Storage
- S3-compatible services (MinIO, Backblaze B2, etc.)
Simply provide your bucket credentials and region. EdgeMQ handles the rest.
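Before handing over credentials, you may want to confirm the target bucket is reachable with them. Here is a sketch using boto3 against an S3-compatible endpoint; the endpoint URL, keys, and bucket name are placeholders, and the check itself is our suggestion, not an EdgeMQ requirement.

```python
import boto3  # assumes the boto3 package is installed

# Works for AWS S3 and S3-compatible stores (MinIO, Backblaze B2, ...):
# point endpoint_url at the provider's S3 API, or omit it for AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # placeholder endpoint
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_SECRET",
)
s3.put_object(Bucket="your-edgemq-archive", Key="connectivity-check", Body=b"ok")
print("bucket reachable; safe to configure EdgeMQ with these credentials")
```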
Ready to get started?
Starter is free and includes 10 GiB/month. After that, pay $0.10/GiB. Upgrade anytime as your needs grow.