The problem is not Datadog. The problem is what you’re sending to it.

Most engineering teams ingest everything because filtering feels risky. What if you drop something important? So logs flow in unfiltered, labels pile up, and Datadog ingestion costs climb quarter over quarter. The data is there. Your team just won’t use most of it.

That volume problem compounds fast. As your observability stack grows, so does the number of sources, services, and platforms generating telemetry. Without a central point of control, every new integration adds more noise and more cost. Apica Flow gives you that central point. One place to set rules, one place to manage what flows where, one place to keep the whole stack from getting away from you. That kind of control gets harder to build later. Teams that establish it early spend less time firefighting their own data and more time using it.

Apica Flow sits between your log sources and your observability platform, letting you decide what actually gets ingested before the cost is incurred. For Datadog users specifically, that distinction matters. Datadog pricing is tied directly to ingested data volume. Every label you don’t need, every cart service log you don’t read, every Kafka event you’ve never once opened in a live investigation is costing you money.

Filtering after ingestion is too late. Apica Flow filters before.

Apica was named a Visionary in the Gartner® Magic Quadrant™ for Observability Platforms, 2025 — recognition that reflects both the breadth of the platform and where the market is heading.


What a Pipeline Actually Does

The concept is simple. A telemetry pipeline intercepts your log data in motion, applies rules to it, and then forwards only what you want to your downstream platform. Apica Flow handles that process with a visual pipeline builder that does not require custom code or deep infrastructure expertise.

You create a pipeline, give it a name, and then attach rules. The most direct rule for cost reduction is the filter rule. It lets you drop specific labels from individual log entries or exclude entire log classes by service name, severity, message type, or any field using a regular expression. If your cart service logs or Kafka events are not part of your active alerting or investigation workflow, you can exclude them entirely before they reach Datadog.
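To make the filter rule concrete, here is a minimal Python sketch of the logic a rule like this applies to each log entry. The rule shapes, field names, and label names are illustrative assumptions, not Apica Flow's actual configuration format.

```python
import re

# Labels to strip from every entry (hypothetical names).
DROP_LABELS = {"debug_trace_id", "pod_template_hash"}

# Exclusion rules: by service name, by severity, or by regex on any field.
EXCLUDE_RULES = [
    lambda e: e.get("service") == "cartservice",                     # whole log class
    lambda e: e.get("severity") == "DEBUG",                          # by severity
    lambda e: re.search(r"kafka\.consumer", e.get("message", "")),   # by regex
]

def apply_filter(entry):
    """Return the trimmed entry, or None if it should be dropped entirely."""
    if any(rule(entry) for rule in EXCLUDE_RULES):
        return None
    return {k: v for k, v in entry.items() if k not in DROP_LABELS}

log = {"service": "checkout", "severity": "INFO",
       "message": "order placed", "debug_trace_id": "abc123"}
print(apply_filter(log))  # entry kept, debug_trace_id removed
```

Entries matching an exclusion rule never reach the forwarder at all; surviving entries go downstream with the dropped labels already removed, which is exactly where the cost reduction comes from.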

The pipeline preview feature gives you a real-time view of how your rules affect log output before anything goes live. Fields marked for removal appear crossed out, so you can confirm the effect of your configuration before applying it to production data. You see the output before you commit.

Once the rules are set and the pipeline is active, you attach a forwarder. Apica Flow supports 200+ pre-built integrations, and for Datadog, setup takes a few minutes. Navigate to the integrations tab, add a new forwarder, paste your Datadog API key, and map the forwarder to your active pipeline. Data starts flowing immediately.
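For context on what that forwarder does under the hood, the sketch below shows a hand-rolled equivalent: shaping filtered entries into Datadog's v2 log intake format and POSTing them with an API key. This is not Apica Flow's implementation, just an illustration of the mechanism; the `ddsource` value and field mapping are assumptions.

```python
import json
import urllib.request

DD_INTAKE = "https://http-intake.logs.datadoghq.com/api/v2/logs"

def build_payload(entries):
    """Shape filtered entries into Datadog's v2 log intake format."""
    return [
        {
            "ddsource": "pipeline",                 # illustrative source tag
            "service": e.get("service", "unknown"),
            "message": e.get("message", ""),
        }
        for e in entries
    ]

def forward(entries, api_key):
    """POST a batch of entries to Datadog; returns the HTTP response."""
    req = urllib.request.Request(
        DD_INTAKE,
        data=json.dumps(build_payload(entries)).encode(),
        headers={"DD-API-KEY": api_key, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # Datadog answers 202 Accepted on success

payload = build_payload([{"service": "checkout", "message": "order placed"}])
print(payload)
```

Because only post-filter entries ever reach `forward`, Datadog bills only for what survives the pipeline.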

What the Numbers Actually Look Like

The difference shows up fast. In testing, applying a filter pipeline to OpenTelemetry demo application logs and routing the results to Datadog produced a visible and immediate drop in ingestion volume. The Datadog log ingestion dashboard, viewed over a 24-hour window, showed a clear inflection point the moment pipeline rules were applied. Ingestion volume dropped significantly, not gradually.

That’s what upstream filtering produces at small scale. Apply the same logic to an environment generating hundreds of thousands of logs per day and the savings multiply accordingly. Apica customers who apply pipeline filtering consistently see observability spending drop by 40%. The mechanism is not complicated: you stop paying to store data you were never going to use. The pipeline makes that choice deliberate instead of accidental.
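The savings math is simple enough to sketch. The volumes and the per-GB price below are placeholder assumptions for illustration, not Datadog list pricing; only the 40% reduction figure comes from the Apica customer outcomes cited above.

```python
# Back-of-envelope savings estimate. All inputs are assumptions
# except the 40% reduction, which is the cited customer figure.
daily_gb = 500          # pre-filter ingestion volume (assumed)
price_per_gb = 0.10     # illustrative per-GB ingestion price (assumed)
reduction = 0.40        # observed reduction from upstream filtering

monthly_before = daily_gb * price_per_gb * 30
monthly_after = monthly_before * (1 - reduction)
print(monthly_before - monthly_after)  # 600.0 saved per month
```

Swap in your own ingestion volume and contract rate; the structure of the calculation is the point.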

In Apica-observed deployments, teams using upstream filtering have reduced log volume anywhere from 40 to 70 percent depending on how aggressively they configure their rules and how noisy their original data stream was. Most teams start conservative and tighten from there as they build confidence in what they’re dropping.

Cleaner Data, Better Signals

Cost reduction is the obvious win, but cleaner data has a downstream effect worth naming. Noisy telemetry creates problems for any tool that sits on top of it — monitors, alerts, and dashboards all perform better when what they’re reading is accurate and relevant. Filtering upstream with Apica Flow improves the quality of what reaches Datadog, not just the volume. That means fewer false positives, faster root cause analysis, and less time tuning alerting thresholds against data that should never have been ingested in the first place.

Compliance and Data Ownership

One more factor that Datadog-heavy organizations increasingly raise: data sovereignty.

When you route all telemetry directly to a third-party platform, you lose visibility into what data left your environment and when. For organizations subject to the EU Data Act, GDPR, or internal data governance policies, that matters. Apica Flow keeps you in control of what gets forwarded and what stays internal. You own the pipeline configuration. You decide what crosses the boundary.

Compliance requirements around telemetry data are tightening, not loosening. Building a pipeline layer now means you have a governance checkpoint you can audit and adjust as requirements change, rather than scrambling to retrofit controls after the fact.

What Engineers Are Actually Doing With This

The practical use cases coming out of Apica Flow deployments follow a pattern.

Teams start by auditing their most expensive Datadog log sources. They identify which services generate high volume with low investigation value: cart service logs, background job outputs, verbose health check events, Kafka consumer logs for topics that rarely cause incidents. These become the first filter targets.

Then they drop labels. Not every field in a structured log is useful for every downstream purpose. If a label was added for debugging months ago and the issue is long resolved, it does not need to live in Datadog storage. Dropping it reduces per-log weight and ingestion cost without touching the underlying service.
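The per-log weight effect is easy to demonstrate. The snippet below measures the serialized size of a structured log entry before and after dropping a stale label; the entry and the `debug_ctx` label are made-up examples.

```python
import json

# A structured log entry with a stale debugging label (illustrative).
entry = {
    "service": "checkout",
    "severity": "INFO",
    "message": "payment authorized",
    "debug_ctx": "x" * 200,   # leftover from a long-resolved issue
}

before = len(json.dumps(entry).encode())
slim = {k: v for k, v in entry.items() if k != "debug_ctx"}
after = len(json.dumps(slim).encode())
print(before, after)  # the slim entry is hundreds of bytes lighter
```

Multiply that per-entry difference by millions of logs per day and the label you stopped shipping becomes a line item you stopped paying.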

After that, teams refine by severity and message type. Filtering out debug-level events from services that run cleanly is a straightforward way to reduce volume without any risk to incident detection.

The result is a leaner data stream, a lower Datadog bill, and a cleaner environment for every tool that depends on that data.

The Setup Is Not a Project

One thing worth saying directly: this is not a complex infrastructure initiative. Creating a pipeline in Apica Flow, applying filter rules, and routing results to Datadog takes minutes for a single log source. The visual interface does not require scripting. The forwarder setup is a form fill and an API key.

Teams that have been putting off telemetry pipeline work because it seemed like a platform migration are often surprised by how contained the initial implementation is. Start with one log source. See the ingestion drop in your Datadog dashboard. Then expand.

For Datadog users paying on ingestion volume, the math is immediate.

Citations

  1. Apica Flow product page (apica.io/flow/): 200+ integrations; 40% observability spend reduction.
  2. Volume reduction figures reflect Apica-observed customer outcomes, not third-party benchmarks.
  3. Gartner Magic Quadrant for Observability Platforms, 2025: Apica named a Visionary.