This document describes considerations that must be taken into account when allocating resources for the a9s Data Services.
You can find specific information for each service in the respective service's documentation.
Important: The list of concerns and limitations, both the general ones here and the service-specific ones, may grow in the future as the services continue to be developed and maintained and new topics are discovered.
a9s Data Services
As a general note, you must ensure your network and storage IO bandwidth can keep up with replication and service usage; both network and disk must sustain the read/write operation flow. Your high availability and user experience depend on it.
Every a9s Data Service service instance runs a Logstash process for metrics shipment. Each Logstash process is configured to use up to 512MB, so the total amount of memory available for the main process is at least total_memory - 512MB.
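As a back-of-the-envelope sketch (the 4096MB instance size is a hypothetical value, not an a9s default), the memory left for the main process can be computed like this:

```shell
# Hypothetical example: memory available to the main service process
# on an instance with 4096MB of RAM (the instance size is an assumption).
TOTAL_MEMORY_MB=4096
LOGSTASH_RESERVED_MB=512   # Logstash is configured to use up to 512MB

MAIN_PROCESS_MB=$((TOTAL_MEMORY_MB - LOGSTASH_RESERVED_MB))
echo "Memory available to the main process: at least ${MAIN_PROCESS_MB}MB"
```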
Every a9s Data Service service instance also runs an a9s Backup Agent process. This process typically consumes around 36MB of memory, but the services reserve 256MB of memory for it. During a big backup or restore, since the content is streamed, memory usage can reach higher peaks and consume all the available memory. Usually, the backup is made from a read-only/secondary node; e.g., on a9s PostgreSQL the backup is taken from a standby node.
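Combining this reserve with the Logstash reserve described earlier gives a rough sizing estimate (the 4096MB instance size is a hypothetical value):

```shell
# Hypothetical sizing: subtract the per-instance reserves from total RAM
# (the 4096MB instance size is an assumption, not an a9s default).
TOTAL_MEMORY_MB=4096
LOGSTASH_RESERVED_MB=512      # reserved for metrics shipment
BACKUP_AGENT_RESERVED_MB=256  # reserved for the a9s Backup Agent

MAIN_PROCESS_MB=$((TOTAL_MEMORY_MB - LOGSTASH_RESERVED_MB - BACKUP_AGENT_RESERVED_MB))
echo "Memory left for the main process: about ${MAIN_PROCESS_MB}MB"
```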
The same logic applies to the Consul agent processes, for which the data services also reserve a fixed amount of memory.
Every a9s Data Service service instance runs an a9s Parachute process. This process stops the node process when the persistent disk reaches 80% usage. This applies to all services.
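A minimal shell sketch of this kind of threshold check (the mount point and the `df` invocation are assumptions for illustration; the actual a9s Parachute implementation may differ):

```shell
# Sketch of an 80%-usage check similar to what a9s Parachute enforces.
# The default mount point "/" is an assumption; pass the persistent
# disk's mount point as the first argument.
THRESHOLD=80
MOUNT_POINT="${1:-/}"

# df --output=pcent prints e.g. "Use%\n 42%"; keep only the digits.
USAGE=$(df --output=pcent "$MOUNT_POINT" | tail -n 1 | tr -dc '0-9')

if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "Disk usage at ${USAGE}%: above the ${THRESHOLD}% stop threshold"
else
  echo "Disk usage at ${USAGE}%: below the ${THRESHOLD}% stop threshold"
fi
```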
The persistent disk is mounted using ext4. This file system reserves a portion of the disk (5% by default) for the root user to prevent, among other things, normal (non-root) users from filling the disk and crashing the system.
The calculation for the used disk space does not take this reserved space into account; the usage percentage is calculated over the total disk capacity.
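A worked example (the 100GB disk size is hypothetical) of how the ext4 root reserve and the 80% threshold interact:

```shell
# Hypothetical 100GB persistent disk: how much space the root reserve
# and the Parachute threshold each account for.
DISK_GB=100
ROOT_RESERVE_PCT=5    # ext4 default reserve for the root user
PARACHUTE_PCT=80      # a9s Parachute stop threshold

ROOT_RESERVE_GB=$((DISK_GB * ROOT_RESERVE_PCT / 100))   # space only root may use
USER_USABLE_GB=$((DISK_GB - ROOT_RESERVE_GB))           # space normal users can fill
PARACHUTE_STOP_GB=$((DISK_GB * PARACHUTE_PCT / 100))    # threshold over the whole disk

echo "Root reserve: ${ROOT_RESERVE_GB}GB, usable by users: ${USER_USABLE_GB}GB, node stops at: ${PARACHUTE_STOP_GB}GB used"
```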