HTTP → S3 ingest

Edge HTTP → S3 ingest for your lakehouse, warehouses, and models

EdgeMQ ingests events over HTTPS and lands them in your S3 as compressed segments, raw Parquet, or schema-aware Parquet views, ready for Snowflake, Databricks, Postgres, DuckDB, and your feature pipelines.

Start free: 10 GiB/month included on the Starter plan. No credit card required.

How it works

EdgeMQ: a managed edge-to-S3 ingest layer for ML and lakehouse stacks

EdgeMQ is a managed edge ingest layer for modern data and ML stacks.

  1. Apps/devices send NDJSON to /ingest over HTTPS
  2. EdgeMQ seals incoming events into segments and lands them in your S3 as compressed segments and/or Parquet (raw or views), depending on your endpoint configuration
  3. Commit markers list the artifacts for each segment and tell your jobs what's fully written and safe to read
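
To make step 3 concrete, here is a minimal reader sketch in Python using boto3. The bucket name, the _commits/ prefix, and the marker JSON shape are illustrative assumptions, not EdgeMQ's documented format:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-lake"               # hypothetical bucket
PREFIX = "edgemq/_commits/"      # hypothetical commit-marker prefix

def committed_artifacts():
    """Yield artifact keys that commit markers declare fully written."""
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX)
    for page in pages:
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            marker = json.loads(body)
            # Assumed marker shape: {"segment": "...", "artifacts": ["key1", ...]}
            yield from marker.get("artifacts", [])

for key in committed_artifacts():
    print(key)   # listed by a commit marker, so safe to read
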
  • p95 ingest latency (request → WAL): ≤ 10 ms
  • WAL → S3 commit (median): < 1 min
  • Availability: ≥ 99.95% per region

Starter is free up to 10 GiB/month, then pay per GiB ingested. See pricing

Query with Snowflake, Databricks, ClickHouse, DuckDB, and Postgres.

Your tools read Parquet (raw or views) or compressed segments from S3. EdgeMQ just keeps those datasets fresh.

No connectors or plugins required.
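
As an example, a minimal DuckDB session reads EdgeMQ's Parquet output straight from S3. The bucket and prefix are placeholders, and credentials are resolved through DuckDB's aws extension credential chain:

import duckdb

con = duckdb.connect()
con.sql("INSTALL httpfs;")   # enables s3:// reads
con.sql("LOAD httpfs;")
con.sql("CREATE SECRET (TYPE s3, PROVIDER credential_chain);")

# Placeholder prefix: point this at the EdgeMQ-managed path in your bucket.
con.sql("""
    SELECT count(*) AS events
    FROM read_parquet('s3://my-lake/edgemq/raw/date=2025-01-*/*.parquet')
""").show()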

See S3 usage patterns →
The ML bottleneck

Getting data into the lake

As an ML engineer, MLOps engineer, or AI platform owner, you're held back by one thing over and over:

Data doesn't show up in S3 reliably.

Instead, you deal with:

  • Training pipelines that depend on homegrown data collectors that break quietly.
  • Constant questions like: "Is this dataset actually up to date?" "Did we drop any events during that incident?"
  • Painful back-and-forth with product and data engineering teams just to get a new event stream wired up.

You want to focus on models, features, and evaluation, not HTTP retries and S3 multipart uploads.

Simple integration

ML-friendly ingest in one call

Your upstream teams can send training and feature data with a simple call:

curl -X POST "https://<region>.edge.mq/ingest" \
    -H "Authorization: Bearer $EDGEMQ_TOKEN" \
    -H "Content-Type: application/x-ndjson" \
    --data-binary @events.ndjson

Choose your S3 artifacts

Segments

Compressed WAL segments plus commit markers. Ideal for long-term retention, raw replays, and custom pipelines that expand back to NDJSON when needed.

Parquet (raw)

Partitioned by date. Designed for direct reads by lakehouse and warehouse engines from S3, without changing how producers send NDJSON.

No upfront schema: the full-fidelity payload is preserved in a payload column, and partitions plus metadata columns let you filter efficiently by tenant and time (see the query sketch below the artifact list).

Parquet (views)

Schema-aware, typed Parquet generated from view definitions. Great for feature tables and analytics without a separate “expand + parse” job.
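
To make the raw-Parquet shape concrete, here is a hedged DuckDB sketch that pulls typed fields out of the payload column at query time. The bucket, prefix, and JSON field names are assumptions for illustration, not a documented schema:

import duckdb

con = duckdb.connect()
con.sql("INSTALL httpfs;")
con.sql("LOAD httpfs;")
con.sql("CREATE SECRET (TYPE s3, PROVIDER credential_chain);")  # AWS creds via the aws extension

# 'payload' holds the original NDJSON record as text in this sketch;
# json_extract_string parses individual fields out at query time.
con.sql("""
    SELECT
        json_extract_string(payload, '$.user_id') AS user_id,
        json_extract_string(payload, '$.event')   AS event
    FROM read_parquet('s3://my-lake/edgemq/raw/date=2025-01-15/*.parquet')
    LIMIT 10
""").show()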

EdgeMQ guarantees:

  • A write-ahead log (WAL) ensures events hit disk before they're acknowledged.
  • 503 + Retry-After backpressure prevents silent drops during overload (see the producer sketch after this list).
  • S3 + commit markers tell your jobs which segments are safe to read.
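
On the producer side, honoring that contract is a small loop. A minimal sketch against the /ingest endpoint shown earlier; the retry count and backoff fallback are illustrative defaults:

import os
import time
import requests

ENDPOINT = "https://<region>.edge.mq/ingest"   # same endpoint as the curl example
TOKEN = os.environ["EDGEMQ_TOKEN"]

def send_ndjson(batch: bytes, max_attempts: int = 5) -> None:
    """POST an NDJSON batch, honoring 503 + Retry-After backpressure."""
    for attempt in range(max_attempts):
        resp = requests.post(
            ENDPOINT,
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Content-Type": "application/x-ndjson",
            },
            data=batch,
        )
        if resp.status_code != 503:
            resp.raise_for_status()   # surface auth/validation errors
            return                    # 2xx: events are on the WAL
        # Overloaded: wait as long as the server asks, then retry.
        time.sleep(float(resp.headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("ingest still backpressured after retries")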

You don't build or own any of this ingest plumbing. You just depend on it.

Want to see your data in S3 in under 10 minutes?

Use cases

Built for ML workflows

EdgeMQ powers the data foundation for training, evaluation, features, and integrations.

Training & evaluation datasets

Keep training and eval datasets current without rebuilding pipelines. EdgeMQ streams new events into S3; your jobs load from EdgeMQ-managed prefixes.
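
For example, a training job can read straight from a managed prefix with PyArrow. The path and hive-style date partitioning below are assumptions about layout, not a documented contract:

import pyarrow as pa
import pyarrow.dataset as ds

# Hypothetical EdgeMQ-managed prefix; date= partitions read as plain strings.
part = ds.partitioning(pa.schema([("date", pa.string())]), flavor="hive")
events = ds.dataset(
    "s3://my-lake/edgemq/views/clicks/",
    format="parquet",
    partitioning=part,
)

# Partition pruning: only January 2025 files are opened.
table = events.to_table(
    filter=(ds.field("date") >= "2025-01-01") & (ds.field("date") <= "2025-01-31")
)
print(table.num_rows)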

Feature pipelines & replay

Rebuild feature tables from historical data when you change logic, without complex pipelines. EdgeMQ streams events into S3; transform segments into features as needed. The same raw data feeds both offline training and online feature stores.

Tool integrations

Query Parquet output with Snowflake, Databricks, ClickHouse, DuckDB, and Postgres (or expand segments when you need raw replay). EdgeMQ doesn't ask you to switch engines; it just keeps your S3 tables fed.

Built for teams that live on S3

You don't need to own ingest infrastructure

Most ML teams don't want to:

  • Run Kafka or Kinesis just for ingest.
  • Run and debug critical HTTP collectors, S3 uploads, and edge-case retries.
  • Explain to security why there are random access keys in source trees.

EdgeMQ takes this off your plate:

Managed edge infrastructure

Per-tenant microVMs, WAL on NVMe, S3 shippers, and health checks are operated for you.

Predictable overload behavior

If things get hot, producers see 503 + Retry-After. You don't get silent gaps in datasets.

Security that fits your platform

S3 writes use short-lived IAM credentials scoped to specific prefixes; data and platform teams govern access with the tools they already know.
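
As a sketch of what a scoped prefix can look like from the platform side, with hypothetical role, policy, and bucket names (your actual role setup may differ), the writer role is only allowed to put objects under one prefix:

import json
import boto3

iam = boto3.client("iam")

# Hypothetical policy: allow writes only under the EdgeMQ-managed prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:AbortMultipartUpload"],
        "Resource": "arn:aws:s3:::my-lake/edgemq/*",
    }],
}

iam.put_role_policy(
    RoleName="edgemq-writer",             # hypothetical role the ingest layer assumes
    PolicyName="edgemq-scoped-writes",
    PolicyDocument=json.dumps(policy),
)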

You get a dependable data hose; platform/infrastructure stays in control; ML teams move faster.

Make S3 the live heart of your ML platform

  • Treat S3 as your live raw layer for ML data.
  • Build training, evaluation, and feature datasets from one source of truth.
  • Add new signals by pointing them at the same HTTP → S3 endpoint.

Ready to feed your models with live data instead of brittle pipelines?

Stop babysitting fragile data feeds. Start assuming S3 is always fresh.