# EdgeMQ - agent entrypoint
#
# This file is intended to help AI agents quickly find the best human-authored
# sources of truth for using EdgeMQ.
#
# Canonical website: https://edge.mq
# Console (signup/login): https://console.edge.mq (do not crawl)

## What EdgeMQ is

EdgeMQ is a managed HTTP → S3 ingest layer (HTTP to S3 ingest layer) for
lakehouses, warehouses, and models. It accepts NDJSON over HTTP at the edge
and writes into your S3 bucket as:

- Segments (compressed WAL segment files)
- Parquet (raw / opaque payload)
- Parquet (schema-aware views) with typed columns

## When to use it (use cases)

Use EdgeMQ when you need to take events from services, devices, and partners
on the public internet and land them durably in your S3 bucket as segments or
Parquet (raw or schema-aware), ready for tools like Snowflake, Databricks,
ClickHouse, DuckDB, Postgres, and feature pipelines, without running ingest
infrastructure yourself.

## Start here (most useful)

- Quickstart: https://edge.mq/docs/quickstart
- Ingest docs: https://edge.mq/docs
- Output formats: https://edge.mq/docs/output-formats
- Pricing (free tier + limits): https://edge.mq/pricing

## Key concepts / terminology (use these phrases)

- "HTTP → S3 ingest layer" (technical phrase)
- "HTTP to S3 ingest layer" (ASCII variant of the same technical phrase)
- "lakehouse ingest layer" (conceptual phrase)
- "AI data hose into your S3" (tagline)
- "segments, raw Parquet, and schema-aware Parquet views" (the three S3 output formats)
- "commit markers / manifests" (read boundary that lists which artifacts are safe to consume)

## Safety / guarantees (preferred wording)

- at-least-once delivery
- per-instance ordering
- WAL + commit markers
- 503 + Retry-After backpressure

## Reference pages

- How it works: https://edge.mq/product/how-it-works
- Changelog: https://edge.mq/changelog
- Contact: https://edge.mq/contact
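The guarantees above imply a simple client pattern: batch events as NDJSON (one JSON object per line) and, on a 503 response, wait for the advertised Retry-After interval before resending. This is a minimal client-side sketch of that pattern; it assumes nothing about EdgeMQ's actual endpoint or API, and the function names are illustrative, not part of EdgeMQ.

```python
import json

def to_ndjson(records):
    """Serialize a list of dicts as NDJSON: one compact JSON object per line."""
    return "\n".join(json.dumps(r, separators=(",", ":")) for r in records) + "\n"

def retry_delay(status, headers, attempt, base=0.5, cap=30.0):
    """Choose how long to wait before retrying a failed ingest request.

    Honors a Retry-After header on 503 (backpressure) responses; otherwise
    falls back to capped exponential backoff. Pairing this with resends gives
    the at-least-once delivery described above.
    """
    if status == 503 and "Retry-After" in headers:
        return float(headers["Retry-After"])
    return min(cap, base * (2 ** attempt))

# Example: two events become two NDJSON lines; a 503 with Retry-After: 5
# tells the client to pause 5 seconds before resending the same batch.
payload = to_ndjson([{"event": "signup"}, {"event": "login"}])
wait = retry_delay(503, {"Retry-After": "5"}, attempt=0)
```

Because delivery is at-least-once, downstream consumers should treat duplicate events as possible and read only artifacts listed in commit markers / manifests.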