HTTP → S3 ingest
Edge HTTP → S3 ingest for your lakehouse, warehouses, and models
EdgeMQ is a managed HTTP → S3 edge ingest layer that takes events from services, devices, and partners on the public internet and lands them durably in your S3 bucket, ready for tools like Snowflake, Databricks, ClickHouse, DuckDB, and your feature pipelines.
Start free — 10 GiB/month included on the Starter plan.
Stop babysitting brittle data feeds.
Start assuming S3 is always fresh.
Starter is free up to 10 GiB/month; beyond that, you pay per GiB ingested. See pricing
Your tools read from S3. EdgeMQ just keeps your bucket fed.
No connectors or plugins required.
See S3 usage patterns →

Getting data into the lake
As an ML engineer, MLOps engineer, or AI platform owner, you're held back by one thing over and over:
Data doesn't show up in S3 reliably.
Instead, you deal with:
- Training pipelines that depend on homegrown data collectors that break quietly.
- Constant questions like: "Is this dataset actually up to date?" "Did we drop any events during that incident?"
- Painful back-and-forth with product / data engineering teams just to get a new event stream wired up.
You want to focus on models, features, and evaluation—not HTTP retries and S3 multipart uploads.
EdgeMQ: a managed ingest layer for ML and lakehouse stacks
EdgeMQ is a managed ingest layer for modern data and ML stacks.
- Apps/devices send NDJSON to `/ingest` over HTTPS
- EdgeMQ writes to a WAL, segments, compresses, and ships to S3
- Commit markers tell your jobs which segments are safe to read
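A consumer-side sketch of the commit-marker convention. The key layout below (`*.ndjson.gz` segments paired with `*.commit` markers) is an assumption for illustration, not EdgeMQ's documented naming; the point is that jobs only read segments whose marker exists.

```python
def committed_segments(keys):
    """Given object keys listed under an ingest prefix, return only
    the segment files whose commit marker has been written.

    Assumed (hypothetical) layout:
      <prefix>/segment-0001.ndjson.gz   -- data segment
      <prefix>/segment-0001.commit      -- marker: segment is safe to read
    """
    markers = {k[: -len(".commit")] for k in keys if k.endswith(".commit")}
    return sorted(
        k for k in keys
        if k.endswith(".ndjson.gz")
        and k[: -len(".ndjson.gz")] in markers
    )

keys = [
    "events/segment-0001.ndjson.gz",
    "events/segment-0001.commit",
    "events/segment-0002.ndjson.gz",  # still being shipped: no marker yet
]
print(committed_segments(keys))  # ['events/segment-0001.ndjson.gz']
```

In practice the key list would come from listing your S3 prefix; a half-shipped segment simply never shows up in the result, so jobs never read partial data.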
ML-friendly ingest in one call
Your upstream teams can send training and feature data with a simple call:
```shell
curl -X POST "https://<region>.edge.mq/ingest" \
  -H "Authorization: Bearer $EDGEMQ_TOKEN" \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @events.ndjson
```

EdgeMQ guarantees:
- WAL ensures events hit disk before acknowledging.
- 503 + Retry-After backpressure prevents silent drops during overload.
- S3 + commit markers tell your jobs which segments are safe to read.
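On the producer side, honoring the 503 + Retry-After signal is what turns overload into a delay instead of a gap. A minimal sketch, with the HTTPS transport stubbed out as a callable so the backoff logic is visible (the retry cap and fallback wait are assumptions, not EdgeMQ defaults):

```python
import time

def send_with_backpressure(post, batch, max_attempts=5):
    """POST an NDJSON batch, honoring 503 + Retry-After backpressure
    instead of dropping events.

    `post` is any callable returning (status_code, headers); in real use
    it would wrap an HTTPS POST to the /ingest endpoint.
    """
    for _ in range(max_attempts):
        status, headers = post(batch)
        if status == 200:
            return True
        if status == 503:
            # Server is shedding load: wait as instructed, then retry.
            time.sleep(float(headers.get("Retry-After", 1)))
            continue
        raise RuntimeError(f"unexpected status {status}")
    return False  # caller decides: buffer locally, alert, etc.

# Fake transport: overloaded twice, then accepts the batch.
responses = iter([(503, {"Retry-After": "0"}), (503, {"Retry-After": "0"}), (200, {})])
ok = send_with_backpressure(lambda batch: next(responses), b'{"event": "click"}\n')
print(ok)  # True
```

The key property: a producer either gets an acknowledgment (event is on the WAL) or an explicit failure it can act on; nothing is silently lost in between.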
You don't build or own any of this ingest plumbing. You just depend on it.
Want to see your data in S3 in under 10 minutes?
Built for ML workflows
EdgeMQ powers the data foundation for training, evaluation, features, and integrations.
Training & evaluation datasets
Keep training and eval datasets current without rebuilding pipelines. EdgeMQ streams new events into S3; your jobs load from EdgeMQ-managed prefixes.
Feature pipelines & replay
Rebuild feature tables from historical data when you change logic, without complex pipelines. EdgeMQ streams events into S3; transform segments into features as needed. The same raw data feeds both offline training and online feature stores.
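A replay sketch under simple assumptions: segments are NDJSON strings already fetched from S3, and the "feature" is an events-per-user count standing in for your real transform. Re-running the same function over the same segments is the backfill.

```python
import json
from collections import Counter

def replay_features(segments):
    """Rebuild a feature table (here: events per user) from raw NDJSON
    segments, oldest first. Swap the Counter for your real feature
    logic; replaying the same committed segments after a logic change
    is how you backfill without a separate pipeline.
    """
    counts = Counter()
    for segment in segments:          # in real use: committed S3 objects
        for line in segment.splitlines():
            event = json.loads(line)
            counts[event["user_id"]] += 1
    return dict(counts)

segments = [
    '{"user_id": "a"}\n{"user_id": "b"}',
    '{"user_id": "a"}',
]
print(replay_features(segments))  # {'a': 2, 'b': 1}
```

Because the raw segments in S3 are immutable, offline training and online feature stores can both be derived from them and stay consistent by construction.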
Tool integrations
Query with Snowflake, Databricks, ClickHouse, DuckDB, and Postgres. EdgeMQ doesn't ask you to switch engines—it just keeps them fed with fresh data from S3.
You don't need to own ingest infrastructure
Most ML teams don't want to:
- Run Kafka or Kinesis just for ingest.
- Run and debug critical HTTP collectors, S3 uploads, and edge-case retries.
- Explain to security why there are random access keys in source trees.
EdgeMQ takes this off your plate:
Managed edge infrastructure
Per-tenant microVMs, WAL on NVMe, S3 shippers, and health checks are operated for you.
Predictable overload behavior
If things get hot, producers see 503 + Retry-After. You don't get silent gaps in datasets.
Security that fits your platform
S3 writes via short-lived IAM roles and scoped prefixes; data teams and platform teams can govern it using the tools they already know.
You get a dependable data hose; platform/infrastructure stays in control; ML teams move faster.
Collaborate cleanly with data and platform teams
EdgeMQ is a shared primitive your teams can rally around: the common lakehouse ingest layer that data engineers, ML teams, and platform engineers all depend on, with S3 as the shared source of truth. Each team sees it slightly differently:
Platform / infra
- Set up S3 buckets, prefixes, and IAM roles.
- Provision EdgeMQ endpoints as a "paved road" for ingest.
Data engineers
- Define schemas, prefixes, and downstream load jobs.
- Use EdgeMQ as the standard way data enters the lake.
ML teams
- Consume from the same S3 lake for training, evaluation, and features.
- Ask for "one more prefix + schema" instead of "a new ingest system."
Everyone aligns on a single, well-understood lakehouse ingest layer.
Make S3 the live heart of your ML platform
- Treat S3 as your live raw layer for ML data.
- Build training, evaluation, and feature datasets from one source of truth.
- Add new signals by pointing them at the same HTTP → S3 endpoint.
Related pages
- For Data Engineers — how your S3 Bronze layer is built and maintained on top of EdgeMQ.
- For Platform / Infra — how EdgeMQ is operated as a standardized ingest primitive.
Ready to feed your models with live data instead of brittle pipelines?
Stop babysitting brittle data feeds. Start assuming S3 is always fresh.