Extracting useful business insights from the massive volumes of data generated by modern systems is difficult. The challenge is compounded when doing so at scale while also bringing together data from scattered, disparate sources. Yet this process is crucial to ensuring that customers, employees, and partners have access to the data they need at the exact moment they need it.

At Apica, all our products are built with infinite scalability in mind. The system is designed to be flexible and to support infinite scale of ingestion, processing, and storage.

Apica’s architecture is based on the following pragmatic principles:

Infinite compute and storage support

Auto scale-out

Decoupled storage and compute



Auto scale-out

Apica’s auto scale-out architecture allows compute and storage resources to be added, rather than merely increasing the capacity or specifications of existing resources. This allows the system to adjust to increased data streams or data rates by proportionally adding bandwidth, compute, storage, and throughput.

Compute and storage decoupled

The decoupling of storage and compute is imperative for any modern architecture, particularly in observability systems, which must manage data along two dimensions: volume and retention.

Many systems, such as Splunk, Datadog, QRadar, and ArcSight, claim to decouple storage and compute, but they don’t truly apply it. Although they may decouple some of the storage used for long-term retention, the expensive storage required for indexing and processing remains tightly coupled to their compute resources. Consequently, this results in complications, inflexibility, and higher costs.

In contrast, Apica offers authentic decoupling that separates indexes and retention from compute resources. Our approach ensures that ingest capacity is entirely decoupled from storage capacity, providing greater flexibility, simplicity, and lower costs.

Infinite compute

Apica’s architecture also supports infinite compute, using Kubernetes containers as its compute layer, which offers the inherent benefits of autoscaling. The system can seamlessly ingest data at any scale, from GBs to PBs per second. Intelligent algorithms using AI/ML techniques take care of capacity planning, enabling infinite scalability.
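To illustrate the kind of autoscaling Kubernetes provides out of the box, a HorizontalPodAutoscaler can grow an ingestion deployment as load increases. This is a generic sketch, not Apica’s actual manifest; the resource names (`ingest-autoscaler`, `ingest-workers`) and the CPU target are hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingest-autoscaler      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingest-workers       # hypothetical ingestion deployment
  minReplicas: 3
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Because scale-out adds whole replicas rather than enlarging existing ones, ingest bandwidth and compute grow proportionally with the data rate.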

Infinite storage

The use of any object store or S3-compatible object store as the primary storage layer allows for infinite storage, handling any growth in data volumes due to ingestion or long-term retention requirements using simple API calls. This architecture provides unparalleled operational agility, and the end-user is not burdened with any storage overhead costs.
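One reason object storage makes long-term retention cheap to operate is that data can be laid out under time-partitioned key prefixes, so expiring old data is a prefix deletion (or a bucket lifecycle rule) rather than a compute-side reindex. The sketch below shows one such hypothetical key layout; it is an illustration of the general technique, not Apica’s actual storage scheme.

```python
from datetime import datetime, timezone

def object_key(stream: str, ts: datetime, chunk_id: int) -> str:
    """Build a time-partitioned object-store key for a chunk of ingested
    data. Partitioning by UTC day keeps retention simple: dropping old
    data is just expiring a date prefix via object-store API calls."""
    day = ts.astimezone(timezone.utc).strftime("%Y/%m/%d")
    return f"{stream}/{day}/chunk-{chunk_id:08d}"

key = object_key("app-logs", datetime(2023, 5, 1, 12, 30, tzinfo=timezone.utc), 42)
print(key)  # app-logs/2023/05/01/chunk-00000042
```

With this layout, ingest capacity and retention policy never compete: writers append new chunks while lifecycle rules independently expire old prefixes.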

Real-time performance

Apica’s distributed Kubernetes-based compute architecture enables the software to continuously ingest and process high-volume data streams in real time. We also use S3-compatible storage systems as our primary storage layer, with innovative engineering that enables real-time query performance on what is traditionally considered secondary or cold storage.
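One general technique that makes queries on object storage fast (not necessarily the one Apica uses) is to keep a small sparse index over time-sorted data, so a query touches only the blocks that can match, fetched with ranged GETs instead of whole-object downloads. The simulation below is a minimal, self-contained sketch of that idea.

```python
import bisect

BLOCK_SIZE = 4  # records per block (tiny, for illustration)

def build_sparse_index(timestamps):
    """Record the first timestamp of each fixed-size block; this tiny
    index is all a query planner needs to keep in memory."""
    return [timestamps[i] for i in range(0, len(timestamps), BLOCK_SIZE)]

def blocks_for_range(index, start, end):
    """Indices of blocks that may contain records in [start, end].
    In a real system each hit maps to a byte range fetched with a
    ranged GET, so cold storage is read selectively, not wholesale."""
    lo = max(bisect.bisect_right(index, start) - 1, 0)
    hi = bisect.bisect_right(index, end)
    return range(lo, hi)

timestamps = list(range(0, 100, 5))   # 20 sorted record timestamps
index = build_sparse_index(timestamps)  # [0, 20, 40, 60, 80]
hit = list(blocks_for_range(index, 42, 58))
print(hit)  # [2] -- only one of five blocks needs to be read
```

The index grows with the number of blocks, not records, so it stays small enough to sit with the compute layer while the data itself stays in cheap object storage.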


Our cloud-native stack is designed with the following principles in mind: scalability, resilience, security, automation, cost optimization, and reliability. Specifically, Apica’s architecture leverages Docker containers on Kubernetes to deploy and scale the processing infrastructure.

In a nutshell, Apica’s architecture supports infinite scalability, elasticity, simplicity, and low cost, making it an ideal platform for any modern technology that processes data. We are dedicated to building a platform that allows our customers to quickly and easily operationalize their data.

At a Glance:

  • Apica offers an auto scale-out architecture that is flexible and can support an infinite scale of data ingestion, processing, and storage.
  • Compute and storage are decoupled, allowing for increased flexibility when ingesting or retaining large volumes of data.
  • The system uses Kubernetes containers as its compute layer, allowing for infinite scalability.
  • S3-compatible object stores provide infinite storage capacity with low overhead costs.
  • Real-time query performance is available on secondary or cold storage.
  • Cloud-native stack supports scalability, resilience, security, automation, cost optimization, and reliability.