Using a9s PostgreSQL
This chapter describes how to use a9s PostgreSQL.
Use a9s PostgreSQL with an App
To use the a9s PostgreSQL with an app, follow the procedures in this section to create a service instance and bind the service instance to your app. For more information on managing service instances, see Managing Service Instances with the cf CLI.
View the a9s PostgreSQL Service
After the service is installed, `a9s-postgresql` and its service plans appear in your CF marketplace. Run `cf marketplace` to see the service listing:
$ cf marketplace
Getting services from marketplace in org test / space test as admin...
OK
service          plans                                                                                              description
a9s-postgresql   postgresql-single-small, postgresql-cluster-small, postgresql-single-big, postgresql-cluster-big   This is the a9s PostgreSQL 13 service.
See the next section for instructions on creating PostgreSQL service instances based on the plans listed in the `cf marketplace` output.
Create a Service Instance
To provision a PostgreSQL database, run `cf create-service`. For example:
cf create-service a9s-postgresql postgresql-single-small my-postgresql-service
Depending on your infrastructure and service broker usage, it may take several minutes to create the service instance.
Check the creation status using `cf services`. This displays a list of all your service instances. To check the status of a specific service instance, run `cf service NAME-OF-YOUR-SERVICE`.
Bind an Application to a Service Instance
After you create your database, run `cf bind-service` to bind the service to your application:
cf bind-service a9s-postgresql-app my-postgresql-service
Restage or Restart Your Application
To enable your app to access the service instance, run `cf restage` or `cf restart` to restage or restart your app.
Obtain Service Instance Access Credentials
After a service instance is bound to an application, the credentials of your PostgreSQL database are stored in the environment variables of the application. Run `cf env APP-NAME` to display the environment variables.
You can find the credentials in the `VCAP_SERVICES` key.
$ cf env a9s-postgresql-app
Getting env variables for app a9s-postgresql-app in org test / space test as admin...
OK
System-Provided:
{
"VCAP_SERVICES": {
"a9s-postgresql": [
{
"credentials": {
"host": "EXAMPLE-HOST",
"hosts": ["EXAMPLE-HOST"],
"name": "d92e2bd",
"password": "EXAMPLE-PASSWORD",
"port": 5432,
"uri": "EXAMPLE-URI",
"username": "EXAMPLE-USERNAME"
},
"label": "a9s-postgresql",
"name": "my-postgresql-service",
"plan": "postgresql-cluster-small",
"tags": [
"sql",
"database"
]
}
]
}
}
...
You can use the `host`, `username`, and `password` values to connect to your database with a PostgreSQL client. The database is specified as `name`; this is the database that should be used to store the data.
WARNING: The `postgres` database is reserved for administrative use only and no data should be stored there.
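For example, a minimal connection check with the `psql` client, using the credential values from the example binding above (replace the placeholders with your actual values):
psql "postgresql://EXAMPLE-USERNAME:EXAMPLE-PASSWORD@EXAMPLE-HOST:5432/d92e2bd"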
Best Practices
Best practices for using service binding information in apps are described in a separate document.
Delete an a9s PostgreSQL Service Instance
WARNING: Before deleting a service instance, you must back up data stored in your database. This operation cannot be undone and all data is lost when the service is deleted.
Before you can delete a service instance, you must unbind it from all apps.
List Available Services
Run `cf services` to list your available services.
$ cf services
Getting services in org test / space test as admin...
OK
name service plan bound apps last operation
my-postgresql-service a9s-postgresql postgresql-single-small a9s-postgresql-app create succeeded
This example shows that `my-postgresql-service` is bound to the `a9s-postgresql-app` app.
Unbind a Service Instance
Run `cf unbind-service` to unbind the service from your app.
cf unbind-service a9s-postgresql-app my-postgresql-service
Delete a Service Instance
After unbinding the service, it is no longer bound to an application. Run `cf delete-service` to delete the service:
cf delete-service my-postgresql-service
It may take several minutes to delete the service. Deleting a service deprovisions the corresponding infrastructure resources.
Run the `cf services` command to view the deletion status.
Upgrade the Service Instance to another Service Plan
Once created, you can upgrade your service instance to another, larger service plan. A larger service plan provides more CPU, RAM, and storage. For more information, see the Update a Service Instance section of the Managing Service Instances with the cf CLI topic.
cf update-service my-postgresql-service -p a-bigger-plan
Configuring metrics intervals
It is possible to configure the collection of metrics groups to take place at specific intervals. Metric groups are collections of related metrics. Each metric group has a custom parameter that can be used to set its collection interval.
Intervals are given in seconds; if the interval is set to `0`, the metric group is disabled.
Metric groups
Custom parameter name | Suggested (interval in seconds) | Default value (interval in seconds) | Description |
---|---|---|---|
metric_pg_database_size | 300 | 300 | This group lets you see how quickly your database is growing. |
metric_pg_stat_all_indexes | 30 | 0 | This group provides status information on all indexes. |
metric_pg_statio_all_indexes | 60 | 0 | This group provides information on index usage for all indexes. |
metric_pg_statio_all_tables | 30 | 0 | This group provides information on table usage for all tables. |
metric_pg_stat_all_tables | 120 | 0 | This group provides the status of all tables. |
metric_pg_stat_archiver_table | 300 | 0 | This group provides the archival status. Useful with continuous archiving enabled. |
metric_pg_stat_database | 120 | 0 | This group provides the general status of each database. |
metric_pg_stat_database_conflicts | 300 | 0 | This group provides insights on database conflicts. |
metric_pg_replication | 10 | 10 | This group provides the status of replication. Useful when the service instance is a replica set. |
metric_pg_node_status | 10 | 10 | This group provides metrics based on node state. |
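For example, to collect the `metric_pg_stat_database` group every 120 seconds, you could set its custom parameter from the table above (a sketch; the instance name is an assumption):
cf update-service my-postgresql-service -c '{"metric_pg_stat_database": 120}'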
Metric group patterns
Each metric group has a set of patterns that describe the metrics contained in each group.
Note: These metrics vary between PostgreSQL versions, so some of them might differ depending on the PostgreSQL version currently running on the service instance. To be sure, follow the documentation link provided with each metric and confirm the information for your PostgreSQL version.
metric_pg_database_size
Metric Id | Type | Description |
---|---|---|
pg_database_size | Bigint (Bytes) | Database size. |
*.*.postgres.*.*.*.*.database.*.pg_database_size
metric_pg_stat_all_indexes
The `pg_stat_all_indexes` view contains one row for each index in the current database, showing statistics about accesses to that specific index. The `pg_stat_user_indexes` and `pg_stat_sys_indexes` views contain the same information, but filtered to only show user and system indexes respectively.
Metric Id | Type | Description |
---|---|---|
idx_scan | Bigint | Number of index scans initiated on this index. documentation |
idx_tup_read | Bigint | Number of index entries returned by scans on this index. documentation |
idx_tup_fetch | Bigint | Number of live table rows fetched by simple index scans using this index. documentation |
*.*.postgres.*.*.*.*.database.*.table.*.*.index.*.*.*.idx_scan
*.*.postgres.*.*.*.*.database.*.table.*.*.index.*.*.*.idx_tup_read
*.*.postgres.*.*.*.*.database.*.table.*.*.index.*.*.*.idx_tup_fetch
metric_pg_statio_all_indexes
The `pg_statio_all_indexes` view contains one row for each index in the current database, showing statistics about I/O on that specific index. The `pg_statio_user_indexes` and `pg_statio_sys_indexes` views contain the same information, but filtered to only show user and system indexes respectively.
Metric Id | type | Description |
---|---|---|
idx_blks_read | Bigint | Number of disk blocks read from all indexes on this table. documentation |
idx_blks_hit | Bigint | Number of buffer hits in all indexes on this table. documentation |
*.*.postgres.*.*.*.*.database.*.table.*.*.index.*.*.*.idx_blks_read
*.*.postgres.*.*.*.*.database.*.table.*.*.index.*.*.*.idx_blks_hit
metric_pg_statio_all_tables
The `pg_statio_all_tables` view contains one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The `pg_statio_user_tables` and `pg_statio_sys_tables` views contain the same information, but filtered to only show user and system tables respectively.
Metric Id | type | Description |
---|---|---|
heap_blks_read | Bigint | Number of disk blocks read from this table. documentation |
heap_blks_hit | Bigint | Number of buffer hits in this table. documentation |
idx_blks_read | Bigint | Number of disk blocks read from all indexes on this table. documentation |
idx_blks_hit | Bigint | Number of buffer hits in all indexes on this table. documentation |
toast_blks_read | Bigint | Number of disk blocks read from this table's TOAST table (if any). documentation |
toast_blks_hit | Bigint | Number of buffer hits in this table's TOAST table (if any). documentation |
tidx_blks_read | Bigint | Number of disk blocks read from this table's TOAST table indexes (if any). documentation |
tidx_blks_hit | Bigint | Number of buffer hits in this table's TOAST table indexes (if any). documentation |
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.heap_blks_read
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.heap_blks_hit
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.idx_blks_read
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.idx_blks_hit
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.toast_blks_read
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.toast_blks_hit
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.tidx_blks_read
*.*.postgres.*.*.*.*.database.*.io_table.*.*.*.tidx_blks_hit
metric_pg_stat_all_tables
The `pg_stat_all_tables` view contains one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The `pg_stat_user_tables` and `pg_stat_sys_tables` views contain the same information, but filtered to only show user and system tables respectively.
Metric Id | type | Description |
---|---|---|
seq_scan | Bigint | Number of sequential scans initiated on this table. documentation |
seq_tup_read | Bigint | Number of live rows fetched by sequential scans. documentation |
idx_scan | Bigint | Number of index scans initiated on this table. documentation |
idx_tup_fetch | Bigint | Number of live rows fetched by index scans. documentation |
n_tup_ins | Bigint | Number of rows inserted. documentation |
n_tup_upd | Bigint | Number of rows updated (includes HOT updated rows). documentation |
n_tup_del | Bigint | Number of rows deleted. documentation |
n_tup_hot_upd | Bigint | Number of rows HOT updated (i.e., with no separate index update required). documentation |
n_live_tup | Bigint | Estimated number of live rows. documentation |
n_dead_tup | Bigint | Estimated number of dead rows. documentation |
last_vacuum | Timestamp With Time Zone | Last time at which this table was manually vacuumed (not counting VACUUM FULL). documentation |
last_autovacuum | Timestamp With Time Zone | Last time at which this table was vacuumed by the autovacuum daemon. documentation |
last_analyze | Timestamp With Time Zone | Last time at which this table was manually analyzed. documentation |
last_autoanalyze | Timestamp With Time Zone | Last time at which this table was analyzed by the autovacuum daemon. documentation |
vacuum_count | Bigint | Number of times this table has been manually vacuumed (not counting VACUUM FULL). documentation |
autovacuum_count | Bigint | Number of times this table has been vacuumed by the autovacuum daemon. documentation |
analyze_count | Bigint | Number of times this table has been manually analyzed. documentation |
autoanalyze_count | Bigint | Number of times this table has been analyzed by the autovacuum daemon. documentation |
*.*.postgres.*.*.*.*.database.*.table.*.*.*.seq_scan
*.*.postgres.*.*.*.*.database.*.table.*.*.*.seq_tup_read
*.*.postgres.*.*.*.*.database.*.table.*.*.*.idx_scan
*.*.postgres.*.*.*.*.database.*.table.*.*.*.idx_tup_fetch
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_tup_ins
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_tup_upd
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_tup_del
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_tup_hot_upd
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_live_tup
*.*.postgres.*.*.*.*.database.*.table.*.*.*.n_dead_tup
*.*.postgres.*.*.*.*.database.*.table.*.*.*.last_vacuum
*.*.postgres.*.*.*.*.database.*.table.*.*.*.last_autovacuum
*.*.postgres.*.*.*.*.database.*.table.*.*.*.last_analyze
*.*.postgres.*.*.*.*.database.*.table.*.*.*.last_autoanalyze
*.*.postgres.*.*.*.*.database.*.table.*.*.*.vacuum_count
*.*.postgres.*.*.*.*.database.*.table.*.*.*.autovacuum_count
*.*.postgres.*.*.*.*.database.*.table.*.*.*.analyze_count
*.*.postgres.*.*.*.*.database.*.table.*.*.*.autoanalyze_count
metric_pg_stat_archiver_table
The `pg_stat_archiver` view always has a single row, containing data about the archiver process of the cluster.
Metric Id | type | Description |
---|---|---|
failed_count | Bigint | Number of failed attempts for archiving WAL files. documentation |
archived_count | Text | Number of WAL files that have been successfully archived. documentation |
last_archived_time_in_seconds | Integer (seconds) | Time of the last successful archive operation. documentation |
wal_files_count | Integer | Amount of WAL files. |
last_archive_successful | Boolean (0 = success/1 = failed) | Indicates if the last archive operation was successful. |
*.*.postgres.*.*.*.*.stat_archiver.failed_count
*.*.postgres.*.*.*.*.stat_archiver.archived_count
*.*.postgres.*.*.*.*.stat_archiver.last_archived_time_in_seconds
*.*.postgres.*.*.*.*.stat_archiver.wal_files_count
*.*.postgres.*.*.*.*.stat_archiver.last_archive_successful
metric_pg_stat_database
The `pg_stat_database` view contains one row for each database in the cluster, showing database-wide statistics.
Metric Id | type | Description | Versions |
---|---|---|---|
numbackends | Integer | Number of backends currently connected to this database. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset. documentation | all |
xact_commit | Bigint | Number of transactions in this database that have been committed. documentation | all |
xact_rollback | Bigint | Number of transactions in this database that have been rolled back. documentation | all |
blks_read | Bigint | Number of disk blocks read in this database. documentation | all |
blks_hit | Bigint | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache). documentation | all |
tup_returned | Bigint | Number of rows returned by queries in this database. documentation | all |
tup_fetched | Bigint | Number of rows fetched by queries in this database. documentation | all |
tup_inserted | Bigint | Number of rows inserted by queries in this database. documentation | all |
tup_updated | Bigint | Number of rows updated by queries in this database. documentation | all |
tup_deleted | Bigint | Number of rows deleted by queries in this database. documentation | all |
conflicts | Bigint | Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.) documentation | all |
temp_files | Bigint | Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting. documentation | all |
temp_bytes | Bigint | Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting. documentation | all |
deadlocks | Bigint | Number of deadlocks detected in this database. documentation | all |
blk_read_time | Double Precision | Time spent reading data file blocks by backends in this database, in milliseconds. documentation | all |
blk_write_time | Double Precision | Time spent writing data file blocks by backends in this database, in milliseconds. documentation | all |
session_time | Double Precision | Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included). documentation | ≥ 14 |
active_time | Double Precision | Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states active and fastpath function call in pg_stat_activity). documentation | ≥ 14 |
idle_in_transaction_time | Double Precision | Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states idle in transaction and idle in transaction (aborted) in pg_stat_activity). documentation | ≥ 14 |
sessions | Bigint | Total number of sessions established to this database. documentation | ≥ 14 |
sessions_abandoned | Bigint | Number of database sessions to this database that were terminated because connection to the client was lost. documentation | ≥ 14 |
sessions_fatal | Bigint | Number of database sessions to this database that were terminated by fatal errors. documentation | ≥ 14 |
sessions_killed | Bigint | Number of database sessions to this database that were terminated by operator intervention. documentation | ≥ 14 |
stats_reset | Timestamp With Time Zone | Time at which these statistics were last reset. documentation | ≥ 15 |
*.*.postgres.*.*.*.*.database.*.*.numbackends
*.*.postgres.*.*.*.*.database.*.*.xact_commit
*.*.postgres.*.*.*.*.database.*.*.xact_rollback
*.*.postgres.*.*.*.*.database.*.*.blks_read
*.*.postgres.*.*.*.*.database.*.*.blks_hit
*.*.postgres.*.*.*.*.database.*.*.tup_returned
*.*.postgres.*.*.*.*.database.*.*.tup_fetched
*.*.postgres.*.*.*.*.database.*.*.tup_inserted
*.*.postgres.*.*.*.*.database.*.*.tup_updated
*.*.postgres.*.*.*.*.database.*.*.tup_deleted
*.*.postgres.*.*.*.*.database.*.*.conflicts
*.*.postgres.*.*.*.*.database.*.*.temp_files
*.*.postgres.*.*.*.*.database.*.*.temp_bytes
*.*.postgres.*.*.*.*.database.*.*.deadlocks
*.*.postgres.*.*.*.*.database.*.*.blk_read_time
*.*.postgres.*.*.*.*.database.*.*.blk_write_time
*.*.postgres.*.*.*.*.database.*.*.session_time
*.*.postgres.*.*.*.*.database.*.*.active_time
*.*.postgres.*.*.*.*.database.*.*.idle_in_transaction_time
*.*.postgres.*.*.*.*.database.*.*.sessions
*.*.postgres.*.*.*.*.database.*.*.sessions_abandoned
*.*.postgres.*.*.*.*.database.*.*.sessions_fatal
*.*.postgres.*.*.*.*.database.*.*.sessions_killed
*.*.postgres.*.*.*.*.database.*.*.stats_reset
metric_pg_stat_database_conflicts
The `pg_stat_database_conflicts` view contains one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view only contains information on standby servers, since conflicts do not occur on master servers.
Metric Id | type | Description |
---|---|---|
confl_tablespace | Bigint | Number of queries in this database that have been canceled due to dropped tablespaces. documentation |
confl_lock | Bigint | Number of queries in this database that have been canceled due to lock timeouts. documentation |
confl_snapshot | Bigint | Number of queries in this database that have been canceled due to old snapshots. documentation |
confl_bufferpin | Bigint | Number of queries in this database that have been canceled due to pinned buffers. documentation |
confl_deadlock | Bigint | Number of queries in this database that have been canceled due to deadlocks. documentation |
*.*.postgres.*.*.*.*.conflicts.*.*.confl_tablespace
*.*.postgres.*.*.*.*.conflicts.*.*.confl_lock
*.*.postgres.*.*.*.*.conflicts.*.*.confl_snapshot
*.*.postgres.*.*.*.*.conflicts.*.*.confl_bufferpin
*.*.postgres.*.*.*.*.conflicts.*.*.confl_deadlock
metric_pg_replication
Metric Id | type | Description |
---|---|---|
replication.wal_receiver_count | Integer | Number of WAL receiver processes running. A WAL receiver runs on a standby node and receives data from the primary. |
replication.wal_sender_count | Integer | Number of WAL sender processes running on the primary node. Each standby has its own WAL sender. |
replication.replication_lag | Integer (seconds) | Time elapsed since pg_last_xact_replay_timestamp, the time of the last transaction replayed during recovery. |
replication.replication_slots_count | Integer | Number of replication slots that currently exist on the database cluster, as listed in the pg_replication_slots view. documentation |
replication.inactive_replication_slots_count | Integer | Number of replication slots in inactive state. |
replication.cluster_replication_slots_count | Integer | Number of replication slots that match the regex /^repmgr_slot_[0-9]+$/. |
ha.master | Integer (1 = true/0 = false) | Whether this node is a master node. |
ha.standby | Integer (1 = true/0 = false) | Whether this node is a standby node. |
ha.wal_dir_size | Integer (Bytes) | Size of the WAL directory. |
*.*.postgres.*.*.*.*.replication.wal_receiver_count
*.*.postgres.*.*.*.*.replication.wal_sender_count
*.*.postgres.*.*.*.*.replication.replication_lag
*.*.postgres.*.*.*.*.replication.replication_slots_count
*.*.postgres.*.*.*.*.replication.inactive_replication_slots_count
*.*.postgres.*.*.*.*.replication.cluster_replication_slots_count
*.*.postgres.*.*.*.*.ha.master
*.*.postgres.*.*.*.*.ha.standby
*.*.postgres.*.*.*.*.ha.wal_dir_size
metric_pg_node_status
*.*.postgres.*.*.*.*.postgresql_info_status.blocked
Description of Metrics:
`postgresql_info_status.blocked` indicates whether a node has the blocked status, which means that the node doesn't participate in cluster life.
This metric can take the following values:
1 - blocked
0 - unblocked
System Metrics
Aside from the metrics shown above, this Data Service also provides system metrics common to all a9s Data Services. They can be seen in the Graphite metrics documentation.
Configuring metrics databases
By setting the `metric_databases` custom parameter on a service instance, it is possible to control which databases metrics are generated for. The `metric_databases` parameter expects an array of database names. If the parameter is not set or the array is empty, metrics are collected for all databases.
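For example, to collect metrics for only two databases (the database names are hypothetical):
cf update-service my-postgresql-service -c '{"metric_databases": ["mydb1", "mydb2"]}'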
Add a Graphite Endpoint
Important: Streaming of logs and metrics might not be available for your plan! If unsure, please check your plan description.
If you want to monitor your service with Graphite, you can set the custom parameter `graphite`. It expects the host and port where the Graphite metrics should be sent to.
For example, in order to send Graphite metrics to the endpoint `yourspace.your-graphite-endpoint.com:12345`, you can use the following command:
cf update-service my-instance -c '{ "graphite": "yourspace.your-graphite-endpoint.com:12345" }'
The endpoint would then receive metrics in the format:
<service_guid>.<service_type>.<host>.<metric> <metric value> <metric timestamp>
Metrics Frequency
By default, metrics are emitted every `10` seconds. You can change the interval via the custom parameter `metrics_frequency`.
For example, in order to send Graphite metrics to an endpoint every minute, you would set the custom parameter `metrics_frequency` to `60` using the following command:
cf update-service my-instance -c '{ "metrics_frequency": 60 }'
You can also change the intervals of different metric groups and decide which databases you require metrics from. To do this, see the Configuring metrics intervals and Configuring metrics databases sections.
Metrics Prefix
Depending on your Graphite provider, you might need to prefix the metrics with a certain value, such as an API key. In this case you can leverage the custom parameter `metrics_prefix`.
cf update-service my-instance -c '{ "metrics_prefix": "my-api-key-for-a-certain-provider" }'
The resulting metric path would have the format:
<metrics_prefix>.<service_guid>.<service_type>.<host>.<metric>
Cloud Foundry Org and Space Guid
The platform operators can enable support on a global level to prefix the Graphite metrics with the CloudFoundry organization and space. This means that all metrics of all service instances (not only yours!) contain that information.
In this case the Graphite metric paths have the following format:
<organization_guid>.<space_guid>.<service_guid>.<service_type>.<host>.<metric>
When you additionally enable the `metrics_prefix` for your instance, you will end up with the metric path format:
<metrics_prefix>.<organization_guid>.<space_guid>.<service_guid>.<service_type>.<host>.<metric>
Add a Syslog Endpoint
The `cf update-service` command used with the `-c` flag lets you stream your syslog to a third-party service. In this case, the command expects a JSON string containing the `syslog` key.
cf update-service my-postgresql-service \
-c '{ "syslog": ["logs4.your-syslog-endpoint.com:54321"]}'
Cloud Foundry Application Security Groups
This topic describes how to check whether a security group was created.
Each a9s Data Service automatically creates and updates Cloud Foundry security groups in order to protect service instances from being accessed by applications not running in the same Cloud Foundry application space. To get a better understanding of security groups, have a look at the Understanding Application Security Groups topic.
Get Service Instance GUID
Run `cf service INSTANCE_NAME --guid` to get the GUID of the service instance.
$ cf service my-postgresql --guid
ca16f111-5073-40b7-973a-156c75dd3028
Check available Security Groups
To see all available security groups, use `cf security-groups`.
$ cf security-groups
Getting security groups as demo@anynines.com
OK
Name Organization Space
#0 public_networks
#1 dns
#2 tcp_open
#3 guard_432fb752-876d-443b-a311-a075f4df2237 demonstrations demo
#4 guard_ca16f111-5073-40b7-973a-156c75dd3028 demonstrations demo
There we can see that a security group named `guard_ca16f111-5073-40b7-973a-156c75dd3028` was successfully created.
NOTE: In some circumstances the connection between the application and the service instance is not possible. In this case, check whether a security group was created.
Backup and Restore Service Instances
a9s PostgreSQL provides an easy way to create backups and restore them if needed. For a more detailed description, see the a9s Service Dashboard documentation.
Caveats
- Logical backups (when PITR is disabled) use the tool `pg_dumpall`. This tool tries to acquire shared table locks. By default, it waits at most 1 hour to get those locks. If it fails to acquire the locks within that time, it fails. A failed backup attempt is repeated a few times before a backup is marked as failed.
Make a Service Instance Locally Available
It is possible to access any of the a9s Data Services locally. That means you can connect with a local client to the service for any purpose, such as debugging. CF provides a smart way to create SSH forward tunnels via a pushed application. For more information about this feature, see the Accessing Apps with SSH section of the CF documentation.
First of all, you must have an application bound to the service; to learn how, see Bind an Application to a Service Instance.
NOTE: `cf ssh` support must be enabled in the platform. Ask your administrator if you are not sure.
Get The Service Url and Credentials
When you follow the instructions in Obtain Service Instance Access Credentials, you will get the hostname of the service and the user credentials.
$ cf env a9s-postgresql-app
Getting env variables for app a9s-postgresql-app in org test / space test as admin...
OK
System-Provided:
{
"VCAP_SERVICES": {
"a9s-postgresql": [
{
"credentials": {
"host": "d67901c-psql-master-alias.node.dc1.consul",
"hosts": [
"d67901c-pg-0.node.dc1.consul",
"d67901c-pg-1.node.dc1.consul",
"d67901c-pg-2.node.dc1.consul"
],
"password": "a9s-password",
"username": "a9s-brk-usr"
},
"label": "a9s-postgresql",
"name": "my-postgresql-service",
"plan": "postgresql-cluster-small"
}
]
}
}
...
Notice the host `d67901c-psql-master-alias.node.dc1.consul`, the username `a9s-brk-usr`, and the password `a9s-password`. You will need these in the next step.
Create a Tunnel to The Service
With `cf ssh`, as mentioned before, you can create an SSH forward tunnel to the service instance. Use port 5432 to connect to the a9s PostgreSQL instance.
$ cf ssh a9s-postgresql-app -L 5432:d67901c-psql-master-alias.node.dc1.consul:5432
vcap@956aaf4e-6da9-4f69-4b1d-8e631a403312:~$
When the SSH tunnel is open, you can access the instance over the address `localhost:5432`.
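For example, a sketch of connecting through the open tunnel with `psql`, using the credentials from above (the database name is an assumption; take it from the `name` field of your binding):
psql -h 127.0.0.1 -p 5432 -U a9s-brk-usr -d <database-name>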
NOTE: Don't forget to close the session with `exit`.
PostgreSQL Extensions
Create or Drop Extensions
It is possible to install PostgreSQL extensions with the Cloud Foundry CLI and additional configuration parameters:
cf create-service a9s-postgresql PLAN_NAME INSTANCE_NAME -c '{"install_plugins": ["<plugin_name>"]}'
cf update-service INSTANCE_NAME -c '{"install_plugins": ["<plugin_name>"]}'
cf update-service INSTANCE_NAME -c '{"delete_plugins": ["<plugin_name>"]}'
Available Extensions
The following PostgreSQL extensions are available:
postgis
uuid-ossp
ltree
pgcrypto
citext
pg_stat_statements
fuzzystrmatch
pg_trgm
hstore
vector
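For example, to install the `postgis` extension on an existing instance (a sketch; the instance name is an assumption):
cf update-service my-postgresql-service -c '{"install_plugins": ["postgis"]}'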
Default Extensions
Some PostgreSQL extensions are always installed and cannot be deleted. The following PostgreSQL extensions are set as default extensions:
pgcrypto
citext
These two extensions are needed for functional replication.
PostgreSQL statistics collector
You can enable support for the statistics collector feature (please note that this feature is PostgreSQL version specific). This feature allows you to retrieve information about the current status of the PostgreSQL cluster, including currently active queries and other useful information. For each statistics collector view, another view is created by a PostgreSQL superuser (with the superuser's access privileges) that has the same name with a prefix prepended.
Example: For `pg_stat_activity` the associated view is called `ds_pg_stat_activity`.
You can set the value of the prefix as you prefer. Read-only access to the views created by this feature is given to every user allowed to log in to the cluster via the `public` schema.
You need to connect to the `postgres` database with your service bindings in order to use those views.
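For example, assuming the `ds_` prefix shown above and placeholder credentials from your binding, you could inspect the current activity like this (a sketch, not the authoritative interface):
psql -h EXAMPLE-HOST -U EXAMPLE-USERNAME -d postgres -c 'SELECT pid, usename, state FROM ds_pg_stat_activity;'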
This feature is disabled by default. You can ask your operator to enable this feature for your service instance.
Enable AppDynamics support
Enable AppDynamics support to create a view called `pg_stat_activity_allusers`, which is accessible to all users as described here.
This feature is disabled by default. You can ask your operator to enable this feature for your service instance.
PostgreSQL Custom Parameters
As an end user you have the opportunity to customize your service instance by using custom parameters.
Custom parameters are passed to a service instance using the `-c` switch of the cf CLI commands `cf create-service` and `cf update-service`. For example,
cf update-service mypg -c '{"temp_file_limit": "-1"}'
would set the temporary file limit to unlimited for the service instance `mypg`.
You don't have to use these settings; sane defaults are in place that fit your service plan well.
Every parameter corresponds to a property in the configuration file for the respective PostgreSQL version.
data_checksums
Use checksums on data pages to help detect corruption by the I/O system that would otherwise be silent. This option can only be set during creation of the service instance, and cannot be changed later. Enabling checksums may incur a noticeable performance penalty.
The default value is `enabled`. The allowed values are `enabled` and `disabled`.
max_connections
Determines the maximum number of concurrent connections to the database server.
Note: You cannot decrease `max_connections`, as this will break clustering.
If `max_connections` is not explicitly specified, a default value is used based on the service plan. This default value is `128` for most service plans. The Platform Operator should specify this parameter for each service plan.
This setup will be enough for most cases, but if a different value is needed for a service instance, you can configure it when creating a service:
cf create-service <service> <plan> <service-instance-name> -c '{"max_connections": <max_connections_value>}'
When updating an existing service instance, you can use the following:
cf update-service mypg -c '{"max_connections": <max_connections_value>}'
Note: Standby nodes can only connect to primary nodes when `max_connections` on the standby node is equal to or bigger than on the primary. If you increase `max_connections` for the service instance, a standby node may temporarily differ in its `max_connections` configuration and will not be able to connect to the primary.
Example: We have three nodes in our service instance:
Node-Name | Configuration | State |
---|---|---|
pg0 | max_connections = 128 | Primary |
pg1 | max_connections = 128 | Standby |
pg2 | max_connections = 128 | Standby |
While updating `max_connections` to `150`, the following happens:
- pg/0 is configured with `max_connections = 150`.
- pg/2 is still configured with `max_connections = 128`.
- pg/1 goes down to update.
At this moment, pg/2 isn't able to join the primary because its `max_connections` is less than on the primary. This causes a short downtime.
When pg/1 comes up again with `max_connections = 150`, it is able to join pg/0 and build a quorum. At this point, the a9s PostgreSQL database is reachable again.
An update to a smaller value than the current one is not allowed.
Once you set the value for `max_connections`, you cannot reset it to the default value using the `null` value.
The `max_connections` of an existing cluster deployment cannot be updated to a smaller value than the current one, because a standby node cannot connect to a master that has a greater value for this property.
effective_cache_size
Please note that our BOSH release includes an algorithm which calculates a usable value for this property on the fly, based on the resources of the running VM.
The property `effective_cache_size` can be specified (e.g. `2GB`) or omitted. If `effective_cache_size` is not set, its value is 75% of the total memory. If the total memory is less than 4GB, 50% of the total memory is used, with a floor of 256MB. When specifying this value, make sure you set a value that leads to a configuration higher than `shared_buffers`, because shared buffers are included in the PostgreSQL query execution planner.
An `integer` value greater than `0` is expected. The unit of the property is MB and no unit should be appended to the value. For example, to set 1GB of `effective_cache_size` the parameter value should be `1024`, and for 512MB the value would be `512`.
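For example, to set an effective cache size of 2GB (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"effective_cache_size": 2048}'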
shared_buffers
Sets the amount of memory the database server uses for shared memory buffers.
The property `shared_buffers` can be specified (e.g. `2048`, `"2048MB"`, or `"2GB"`) or omitted.
If `shared_buffers` is not set, its value is 25% of the total memory if the total memory is greater than 1GB. If the total memory is less than 1GB, `shared_buffers` is 15% of the total memory. It floors at 128MB.
`shared_buffers` can be set in two distinct formats: `integer` or `string`.
In `integer` format, an integer value greater than `0` is expected. The unit is MB and no unit should be appended to the value. For example, to set 1GB of `shared_buffers` the parameter value should be `1024`, and for 512MB the value would be `512`.
In `string` format, a string value with a number greater than `0` plus a unit (`MB` or `GB`) is expected. For example, to set 1GB of `shared_buffers` the parameter value should be `"1024MB"` or `"1GB"`, and for 512MB the value would be `"512MB"`.
When updating an existing service instance, you can use the following:
cf update-service mypg -c '{"shared_buffers": 1024}'
# or
cf update-service mypg -c '{"shared_buffers": "1024MB"}'
# or
cf update-service mypg -c '{"shared_buffers": "1GB"}'
work_mem
The default unit is kilobytes, but you can also specify MB or GB. The default value is 8 MB. For example, to set 1 MB of `work_mem` the parameter value can be `1024` or `"1 MB"`.
maintenance_work_mem
Specifies the maximum amount of memory to be used by maintenance operations, such as `VACUUM`, `CREATE INDEX`, and `ALTER TABLE ADD FOREIGN KEY`. See the PostgreSQL documentation.
The default unit is kilobytes, but you can also specify MB or GB. The default value is 64 MB. For example, to set 1 MB of `maintenance_work_mem` the parameter value can be `1024` or `"1 MB"`.
Note that when autovacuum runs, up to `autovacuum_max_workers` times this memory may be allocated. Therefore, be careful not to set the default value too high. Due to the default of three autovacuum workers, the memory could be allocated three times simultaneously.
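For example, to set `maintenance_work_mem` to 256 MB (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"maintenance_work_mem": "256 MB"}'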
temp_file_limit
An `integer` value greater than `0` is expected. Alternatively, `-1` means no limit. The default value is `-1`.
track_io_timing
Enables timing of database I/O calls.
By default this parameter is `off`.
WARNING: When it is `on`, PostgreSQL repeatedly queries the operating system for the current time, which may cause significant overhead on some platforms.
archive_timeout
Forces the server to switch to a new WAL segment file periodically.
By default the value is `1h`.
The following values are allowed:
Value | Meaning |
---|---|
null | Reset custom value and use default one, so 1h |
0 | Disables archive_timeout |
integer bigger than 59 seconds | Sets archive_timeout to the provided value |
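For example, to switch WAL segment files every 10 minutes (a sketch; the value is an integer in seconds, per the table above):
cf update-service mypg -c '{"archive_timeout": 600}'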
statement_timeout
The default value is `0` (off). Only `integer` values greater than or equal to `0` are allowed.
idle_in_transaction_session_timeout
The default value is `0` (disabled). Only `integer` values greater than or equal to `0` are allowed.
role_privileges
Configures default privileges for the credentials.
By default, it is not possible to specify a value for this parameter. However, default and permitted values are configured on the SPI by the platform operator. Check with your platform operator which privileges are available.
Possible configurable values are:
Privilege | Meaning | Default value |
---|---|---|
CREATEDB | Gives the user permission to create and drop new databases. | NOCREATEDB |
CREATEROLE | Gives the user permission to create, delete, and alter the attributes of a role. | NOCREATEROLE |
REPLICATION | Gives the role permission to create and drop replication slots and connect to the service in replication mode. | NOREPLICATION |
This parameter receives one of these values in the following format:
Value | Type | Meaning |
---|---|---|
null | - | Reset custom values to the default specified by the platform operator. |
[] | array | Removes all privileges. |
['CREATEDB', 'CREATEROLE', 'REPLICATION'] | array | The default and permitted values are defined by the platform operator. Any privilege not specified is disabled. |
This parameter can be specified during instance creation (`cf create-service` and `cf update-service`) and credentials creation (`cf bind-service` and `cf create-service-key`).
When specified during instance creation, the privileges are applied to the `cfuser` role. All other users inherit this role by default. For example:
cf create-service a9s-postgresql postgresql-single-small my-postgresql-service \
-c '{ "role_privileges": ["CREATEROLE"]}'
If specified during credentials creation, the privileges are applied to the created user only and are enabled when using the user's role instead of the `cfuser` role.
cf create-service-key my-postgresql-service \
-c '{ "role_privileges": ["CREATEROLE"]}'
You must ensure that for every new database the owner is the `cfuser` role (`ALTER DATABASE <dbname> OWNER TO cfuser;`) and that all objects stored in all databases belong to `cfuser`.
The `CREATEROLE` privilege automation was designed to grant a user the ability to use `ALTER ROLE` to change default values for run-time parameters for a user using `SET`.
New users must be created using the instructions above.
CAUTION: Be careful when configuring privileges in your service instances. `CREATEROLE`, for example, gives a user permission to create new users, possibly with weak passwords; avoid doing this in favor of using the cf CLI. `CREATEDB` also gives permission to delete databases, and misusing `REPLICATION` can cause issues with storage space, continuous archiving, and the node replication of your cluster.
To know more about the a9s PostgreSQL user configuration, see the Overview.
wal_level_logical
This parameter receives a boolean, where `true` configures the `wal_level` as `logical` and `false` falls back to the default configuration.
The default value is `false`; in this case, the `wal_level` is configured as `minimal` for single instances and `replica` for clusters. In either case, if the instance has continuous archiving enabled, the `wal_level` is configured as `replica`.
The allowed values are `true` and `false`.
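For example, to configure `wal_level` as `logical` (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"wal_level_logical": true}'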
wal_writer_delay
Configures how often the WAL writer flushes WAL.
This parameter receives an integer between `1` and `10000`, taken as milliseconds. The default is `200`.
Value | Meaning |
---|---|
null | Reset custom value and use the default value. |
integer between 1 and 10000 | Sets wal_writer_delay to the given number. |
Note: We do not allow setting the custom parameter with a value that includes the unit `ms`. We expect just an integer (in milliseconds).
max_replication_slots
Configures the maximum amount of replication slots.
This parameter receives an integer greater than or equal to `0`. The default is `4` for single instances; for clusters, the default is `6` for compatibility reasons.
Value | Meaning |
---|---|
null | Reset custom value and use the default value. |
integer bigger than or equal to 0 | Sets max_replication_slots to the given number, plus the number of standby nodes. |
When running a cluster, the number of replication slots necessary to ensure correct replication is added to the value given as parameter.
Note: Updating this property to a value lower than the current number of replication slots will fail.
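For example (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"max_replication_slots": 8}'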
max_wal_senders
Configures the maximum number of WAL senders for each node of the service instance.
This parameter receives an integer greater than or equal to `0`. The default is `0` for single instances. For clusters, the default is the number of standby nodes times `4`. In either case, if the instance has continuous archiving enabled, `4` more WAL senders are added.
Value | Meaning |
---|---|
null | Reset custom value and use the default value. |
integer bigger than or equal to 0 | Sets max_wal_senders to the given number, plus the number of WAL senders needed by the cluster. |
When running a cluster, the number of WAL senders needed to ensure correct replication and backup is added to the value given as parameter.
Note: The default `wal_level` for single instances is `minimal`; in this case, `max_wal_senders` cannot be configured, as PostgreSQL would fail to start. To change that, you can use wal_level_logical or enable continuous archiving.
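For example (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"max_wal_senders": 8}'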
synchronous_commit
Configures synchronous_commit.
This value specifies how much WAL processing must complete before the database server returns a 'success' indication to the client.
The default value is `on`. Possible configurable values are `on`, `off`, `local`, `remote_write`, and `remote_apply`.
On a9s PostgreSQL instances, `synchronous_standby_names` is always empty.
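For example, to set `synchronous_commit` to `local` (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"synchronous_commit": "local"}'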
Logging parameters
Logging in a9s PostgreSQL is reduced to a minimum by default. This gives most of the performance, and especially disk space, to the customer. Additionally, it reduces the problems that can appear when queries fail for any reason.
The parameters that can be configured are the following:
client_min_messages
Controls which message levels are sent to the client. Valid values (in order of decreasing verbosity) are:
Log-Level | Verbosity |
---|---|
DEBUG5 | Most |
DEBUG4 | |
DEBUG3 | |
DEBUG2 | |
DEBUG1 | |
LOG | |
NOTICE | default for a9s PostgreSQL |
WARNING | |
ERROR | |
FATAL | |
PANIC | Least |
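For example, to reduce the messages sent to the client to warnings and above (a sketch using the `mypg` instance from above):
cf update-service mypg -c '{"client_min_messages": "WARNING"}'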
pg_log_min_messages
Controls which message levels are written to the server log. Valid values are `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`. Each level includes all the levels that follow it. The later the level, the fewer messages are sent to the log.
The default for a9s PostgreSQL is `WARNING`.
pg_log_min_error_statement
Controls which SQL statements that cause an error condition are recorded in the server log. The current SQL statement is included in the log entry for any message of the specified severity or higher. Valid values are `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`.
The default for a9s PostgreSQL is `ERROR`, which means statements causing errors, log messages, fatal errors, or panics will be logged. To effectively turn off logging of failing statements, set this parameter to `PANIC`.
repmgr_loglevel
Log level; possible values are `DEBUG`, `INFO`, `NOTICE`, `WARNING`, `ERR`, `ALERT`, `CRIT`, or `EMERG`.
The default for a9s PostgreSQL is `NOTICE`.
loglevel
Configures the overall log level for the control scripts and components from anynines. Valid levels are `INFO` or `DEBUG`.
The default for a9s PostgreSQL is `DEBUG`.
temp_files
Controls logging of temporary file names and sizes. Temporary files can be created for sorts, hashes, and temporary query results. A log entry is made for each temporary file when it is deleted. A value of zero logs all temporary file information, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. The default for a9s PostgreSQL is `-1`, which disables such logging.
log_statement
Controls which SQL statements are logged. Valid values are `none` (off), `ddl`, `mod`, and `all` (all statements). `ddl` logs all data definition statements, such as `CREATE`, `ALTER`, and `DROP` statements. `mod` logs all `ddl` statements, plus data-modifying statements such as `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, and `COPY FROM`. `PREPARE`, `EXECUTE`, and `EXPLAIN ANALYZE` statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included (with any embedded single-quote marks doubled).
The default for a9s PostgreSQL is `none`.
log_min_duration_statement
Causes the duration of each completed statement to be logged if the statement ran for at least the specified amount of time.
An `integer` value greater than or equal to `-1` is expected. The value `-1` disables logging statement durations. The value `0` logs all statement durations. The default value is `-1`.
log_error_verbosity
Controls the amount of detail written in the server log for each message that is logged. Valid values are `TERSE`, `DEFAULT`, and `VERBOSE`, each adding more fields to displayed messages. `TERSE` excludes the logging of `DETAIL`, `HINT`, `QUERY`, and `CONTEXT` error information. `VERBOSE` output includes the `SQLSTATE` error code and the source code file name, function name, and line number that generated the error.
The default for a9s PostgreSQL is `TERSE`.
More information about logging can be found in the documentation.
Create a Fork of a Service Instance
Currently, a9s PostgreSQL supports two approaches to migrate data from a running instance to a new instance. Specifically, it supports the Download Backup and Apply Manually approach described below, and the copy-from feature explained in Migration.
Download Backup and Apply Manually
This procedure of forking a service instance involves creating a backup of a service instance, modifying the backup slightly, and restoring it to a different service instance.
This method should only be used to restore the data on the same PostgreSQL version as the source service. For migrating data to a different version, see Dump and Restore From Origin Instance to New Instance.
Having two service instances is a prerequisite for the process:
$ cf s
Getting services in org my_org / space my_space as user@domain.com...
name service plan bound apps last operation
psql1 a9s-postgresql11 postgresql-replica-small create succeeded
psql2 a9s-postgresql11 postgresql-replica-small create succeeded
Having a service key for each of the service instances comes in handy later, when we need info about the database and credentials to connect to it. Create one for each service instance, e.g.: `cf create-service-key psql1 mykey`.
You can then have a look at the key's contents:
$ cf service-key psql1 mykey
Getting key mykey for service instance psql1 as user@domain.com...
{
"host": "pcsd85301a-psql-master-alias.node.dc1.consul",
"hosts": [
"pcsd85301a-pg-0.node.dc1.consul",
"pcsd85301a-pg-1.node.dc1.consul",
"pcsd85301a-pg-2.node.dc1.consul"
],
"name": "pcsd85301a",
"password": "a9spass",
"port": 5432,
"uri": "postgres://a9suser:a9spass@pcsd85301a-psql-master-alias.node.dc1.consul:5432/pcsd85301a",
"username": "a9suser"
}
$ cf service-key psql2 mykey
Getting key mykey for service instance psql2 as user@domain.com...
{
"host": "pcsd59c00c-psql-master-alias.node.dc1.consul",
"hosts": [
"pcsd59c00c-pg-0.node.dc1.consul",
"pcsd59c00c-pg-0.node.dc1.consul",
"pcsd59c00c-pg-0.node.dc1.consul"
],
"name": "pcsd59c00c",
"password": "a9spass",
"port": 5432,
"uri": "postgres://a9suser:a9spass@pcsd59c00c-psql-master-alias.node.dc1.consul:5432/pcsd59c00c",
"username": "a9suser"
}
We need the database names for later use, as well as the credentials for `psql2`. The database name is the trailing part of the URI. For `psql1` the database name is `pcsd85301a`, and for `psql2` the database name is `pcsd59c00c`.
- BASH (some shell)
cat
sed
openssl
psql
(the command line client for psql)
Open the service dashboard of the service instance you want to fork. We use `psql1` for this example. You can find the dashboard URL like this:
$ cf service psql1
Showing info of service psql1 in org my_org / space my_space as user@domain.com...
name: psql1
service: a9s-postgresql11
bound apps:
tags:
plan: postgresql-replica-small
description: Dedicated PostgreSQL service instances and clusters powered by the anynines Service Framework
documentation:
dashboard: https://a9s-postgresql-dashboard.de.a9s.eu/service-instances/271ceaa0-eded-43ba-a305-aaecab326079
[...]
Make sure you set an encryption password for the backups using the service instance dashboard. Create a backup using the dashboard and download it to your local machine. The filename will be something like `racsd92baee-1522222422893`.
Decrypt the backup and write its contents to a file:
cat racsd92baee-1522222422893 | openssl enc -aes256 -md md5 -d -pass 'pass:mytopsecretpassword' | gunzip -c > backup.sql
We have to replace the database name of the original instance in the backup with the database name of the other instance:
cat backup.sql | sed -e 's/pcsd85301a/pcsd59c00c/g' > backup.fork.sql
Create a tunnel, as explained in Make a Service Instance Locally Available, to the PostgreSQL instance that will be the fork of the original instance. We use `psql1` as the original instance and `psql2` as the fork instance.
Assuming you're using something like this to set up the tunnel to `psql2`:
cf ssh someapp -L 127.0.0.1:5432:pcsd59c00c-psql-master-alias.node.dc1.consul:5432
Finally, restore the data:
psql -h 127.0.0.1 -p 5432 -U a9suser pcsd59c00c < backup.fork.sql
Don't be confused if you see error messages; the backup contains data that we lack the necessary permissions to restore. In the end, our data gets restored, and that's our primary goal.
Setup Disk Usage Alerts
Each service comes with the a9s Parachute. This component monitors ephemeral and persistent disk usage. See the a9s Parachute documentation for how to configure it.
Continuous Archiving and Point-in-Time Recovery (PITR)
Enable Continuous Archiving
When creating a service, you need to pass the custom parameter `continuous_archiving` in order to create an a9s PostgreSQL instance with Continuous Archiving enabled:
cf cs a9s-postgresql13 postgresql-replica-small mypsql \
-c '{"continuous_archiving": "enabled"}'
Currently, you cannot use `cf update-service` to change the custom parameter `continuous_archiving` on an existing instance.