Using a9s LogMe
This topic describes how to use a9s LogMe for PCF after it has been successfully installed.
Stream Application Logs to LogMe
To use a9s LogMe with an application, create a service instance and bind the service instance to your application. For more information on managing service instances, see Managing Service Instances with the cf CLI.
View the Service Offering
After the tile is installed, run cf marketplace to see the a9s-logme service offering and its service plans:
$ cf marketplace
Getting services from marketplace in org test / space test as admin...
OK

service     plans                                                                           description
a9s-logme   logme-single-small, logme-single-big, logme-cluster-small, logme-cluster-big    This is the a9s LogMe service.
Create a Service Instance
To provision a LogMe instance, run cf create-service SERVICE-NAME PLAN-NAME INSTANCE-NAME, where INSTANCE-NAME is any name you want to give the instance you create.
cf create-service a9s-logme logme-single-small my-logme-service
Depending on your infrastructure and service broker utilization, it may take several minutes to create the service instance.
Run the cf services command to view the creation status. This command displays a list of all your service instances. To view the status of a specific service instance, run cf service NAME-OF-YOUR-SERVICE.
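For example, to check the status of the instance created above:

cf service my-logme-service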
Bind the LogMe Service to Your Application
After the LogMe service instance is created, run cf bind-service APP-NAME INSTANCE-NAME to bind the LogMe service to any applications whose logs should be streamed:
cf bind-service my-app my-logme-service
Restage or Restart Your Application
To enable your application to access the service instance, restage or restart your application with cf restage or cf restart.
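For example, for the app bound above:

cf restage my-app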
See Your Application's Logs
Perform the following steps to see your application's logs:
- Grab the service instance's dashboard URL with cf service my-logme-service:
name: my-logme-service
service: a9s-logme
bound apps: my-app
plan: logme-single-small
description: This is the a9s LogMe service.
documentation url: https://www.anynines.com
dashboard: https://a9s-logme-dashboards.your-domain.com/service-instances/a89f3114-5e77-40a5-b3b0-34a9741f3cd5
service broker: logme-service-broker
- Extend the dashboard URL with the postfix /kibana. In the above example this results in the URL https://a9s-logme-dashboards.your-domain.com/service-instances/a89f3114-5e77-40a5-b3b0-34a9741f3cd5/kibana.
- Open the URL in a browser and then authenticate on the redirected page with your Cloud Foundry credentials:
- Click Authorize to approve the authorization request:
- In the Kibana dashboard that appears, specify the Index name or pattern and the Time-field name. For the Index name or pattern you can use the prefilled value. For the Time-field name use the only available value, @timestamp.
- Click Create to apply the settings.
Your application's logs appear on the Discover tab of the Kibana dashboard:
Best Practices
Best practices for using service binding information in apps are covered in a separate document.
Stream a9s Service Logs to LogMe
To use a LogMe service instance to monitor another service instance, follow the first two steps of Stream Application Logs to LogMe to create an a9s LogMe service instance.
Create a Service Key
After the LogMe service instance is created, run cf create-service-key my-logme-service SERVICE-KEY-NAME to create a new key for your service instance:
$ cf create-service-key my-logme-service key1
$ cf service-key my-logme-service key1
{
"host": "syslog://d37f7da-logstash.service.dc1.consul:514",
"password": "a9sfd6e0d814e78c903290ebb5a7207b20c5f0a2653",
"username": "a9sed20b19c769f0804bc68b97d02cba86e9c3a0379"
}
This Data Service does not support multiple unique service keys. Therefore, when creating service keys for this Data Service, every key has the same set of credentials.
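Because of this, creating a further key simply returns the same credentials as the first; for example (key2 is a hypothetical second key name, shown only for illustration):

cf create-service-key my-logme-service key2
cf service-key my-logme-service key2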
Update Your Service
The cf update-service command used with the -c flag lets you stream your syslog to a third-party service. In this case, the command expects a JSON string containing the syslog key. Give the endpoint returned by the cf service-key command as the value for the syslog key; note that, as the example below shows, the syslog:// scheme prefix is omitted.
$ cf update-service service-instance-to-monitor \
-c '{"syslog": ["d37f7da-logstash.service.dc1.consul:514"]}'
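As with instance creation, you can check that the update completed by running cf service with the name of the monitored instance:

cf service service-instance-to-monitor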
See Your Service's Logs
To see your service's logs, follow the instructions in See Your Logs below.
See Your Logs
Regardless of the origin of your logs, be it your application or your service instance, the process to see them on Kibana is as follows:
- Grab the service instance's dashboard URL with cf service my-logme-service:
(...)
name: my-logme-service
service: a9s-logme
tags:
plan: logme-single-small
description: This is the a9s LogMe service.
documentation: https://www.anynines.com
dashboard: https://a9s-logme-dashboards.your-domain.com/service-instances/a89f3114-5e77-40a5-b3b0-34a9741f3cd5
service broker: logme-service-broker
This service is not currently shared.
Showing status of last operation from service my-logme-service...
status: create succeeded
message:
started: 2021-12-08T20:18:13Z
updated: 2021-12-08T20:24:33Z
bound apps:
name binding name status message
my-app create succeeded
- Extend the dashboard URL with the postfix /kibana. In the above example this results in the URL https://a9s-logme-dashboards.your-domain.com/service-instances/a89f3114-5e77-40a5-b3b0-34a9741f3cd5/kibana.
- Open the URL in a browser and then authenticate on the redirected page with your Cloud Foundry credentials:
- Click Authorize to approve the authorization request:
- The Kibana dashboard appears. Specify the Index name or pattern and the Time-field name. For the Index name or pattern you can use the prefilled value. For the Time-field name use the only available value, @timestamp.
After this your logs are visible from the Discover tab of the Kibana dashboard:
Application's logs:
Service instance's logs:
Stop Streaming Application Logs to LogMe
Follow the steps below to stop streaming the logs of your applications to LogMe.
List Available Services
Run cf services to list available service instances and get the name of the service instance you want to delete.
$ cf services
Getting services in org test / space test as admin...
OK
name service plan bound apps last operation
my-logme-service a9s-logme logme-single-small a9s-logme-app create succeeded
This example shows that my-logme-service is bound to the a9s-logme-app application.
Unbind the Service Instance
If your LogMe service instance is bound to an app, you must unbind it first. Run cf unbind-service APP-NAME INSTANCE-NAME to unbind the service from your application:
cf unbind-service a9s-logme-app my-logme-service
Delete the Service Instance
Run cf delete-service INSTANCE-NAME to delete the service instance:
cf delete-service my-logme-service
It may take several minutes to delete the service. Deleting a service deprovisions the corresponding infrastructure resources.
Run the cf services command to view the deletion status.
Stop Streaming Service Logs to LogMe
Follow the steps below to stop streaming the logs of your services to LogMe.
Overwrite Your Service Configuration
If you want to stop streaming your service instance logs to your LogMe instance, you can overwrite the syslog key of your service configuration. Run cf update-service INSTANCE-NAME -c PARAMETERS-AS-JSON to update the syslog endpoint of the instance you no longer want to monitor. For this, you need to overwrite the value of the syslog key with an empty array.
cf update-service service-instance-to-monitor -c '{"syslog": []}'
Delete the Service Key and Instance
If you are not using them anymore, you may want to delete the service key and the service instance itself. Run cf delete-service-key INSTANCE-NAME SERVICE-KEY-NAME and cf delete-service INSTANCE-NAME to do so.
cf delete-service-key my-logme-service key1
cf delete-service my-logme-service
It may take several minutes to delete the service. Deleting a service deprovisions the corresponding infrastructure resources.
Run the cf services command to view the deletion status.
Custom Parameters
Tuning
As usage of an a9s LogMe instance grows, you may need to tune the Logstash component to meet demand. There are currently three custom parameters available for this.
| Name | Description | Min | Max | Default |
|---|---|---|---|---|
| logstash_heapspace | The amount of memory (in MB) allocated as heap by the JVM for Logstash | 256 | 50% of VM memory (check your plan details) | 256 |
| logstash_maxmetaspace | The amount of memory (in MB) used by the JVM to store metadata for Logstash | 256 | 1024 | 256 |
| logstash_file_open_limit | The limit on the number of file descriptors Logstash can have open concurrently | 1024 | 65535 (varies between stemcells) | 1024 |
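Like the other custom parameters in this document, these are presumably applied with the -c flag of cf create-service or cf update-service; for example (the values below are purely illustrative and must respect the limits above):

cf update-service my-logme-service -c '{"logstash_heapspace": 512, "logstash_file_open_limit": 2048}'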
Data Retention
In order to clean up old Elasticsearch indices, a tool called Curator runs periodically. By default it deletes indices older than 30 days.
You can overwrite that configuration using the custom parameters curator_retention_unit and curator_retention_period.
For example:
cf create-service a9s-logme logme-single-small my-logme-service -c '{ "curator_retention_unit": "days", "curator_retention_period": 90 }'
cf update-service my-logme-service -c '{ "curator_retention_unit": "hours", "curator_retention_period": 3 }'
For the curator_retention_unit you can use one of the following values:
seconds
minutes
hours
days
weeks
months
years
For the curator_retention_period you can use any integer value greater than zero.
Additional Groks
It is possible to define additional grok patterns, which are matched against the message part of the syslog. As soon as the first grok pattern matches, the remaining patterns are not applied.
To clarify that statement a little further, assume we have the following syslog message:
<133>Feb 25 14:09:07 webserver syslogd: 123 additional-information 456
This syslog message is preprocessed by the Logstash syslog input plugin. After that preprocessing it already has the structure:
{... syslog specific parts of the log ... "message" : "123 additional-information 456" }
The user-defined additional groks are now matched against the value of the message field, in the example above against "123 additional-information 456".
Assuming the following additional grok is defined (note that %{NOTSPACE} is used rather than %{WORD}, since the hyphenated token would not match %{WORD}):
"%{NUMBER:valueOne} %{NOTSPACE:someWord} %{NUMBER:valueTwo}"
The parsed result would be:
{
//... syslog specific parts of the log ...
"message": "123 additional-infomation 456",
"valueOne": "123",
"someWord": "additional-information",
"valueOne": "456"
}
How to Add Additional Groks
Additional groks are applied as the custom parameter {"additional_groks":[/* List of additional groks */]}.
Example:
cf cs <service-type> <service-plan> <service-name> \
-c '{"additional_groks":[{"pattern":"%{WORD:Some} %{WORD:Grok} %{WORD:Pattern}"}]}'
Each pattern is a JSON object with the fields "pattern", "additional_tags" and "additional_fields".
| Field | Optional? | Type | Description |
|---|---|---|---|
| pattern | mandatory | string | The grok pattern to match against |
| additional_tags | optional | array | Strings which will be added as tags if the grok matches. |
| additional_fields | optional | hash | Key-value pairs which will be added as fields if the grok matches. |
A minimal additional grok could look like this:
{"additional_groks":[{"pattern":"%{GREEDYDATA:data}"}]}
A more complex additional grok could look like this:
"additional_groks" : [
{
"pattern" : "%{GREEDYDATA:data}",
"additional_tags" : ["tagOne", "tagtwo"],
"additional_fields" : {
"fieldOne" : "fieldOneValue",
"fieldTwo" : "fieldTwoValue"
}
},
{
"pattern" : "%{GREEDYDATA:data}"
}
]
More information regarding the available grok patterns can be found in the Logstash grok documentation.
How to Remove Added Grok Patterns
If you want to remove all additional groks, just apply the custom parameter with an empty array:
cf update-service <service-name> -c '{"additional_groks":[]}'
If you want to keep some additional groks, apply the custom parameter with those grok definitions included, as shown below. In general, the groks are recreated from scratch with every application of the custom parameter {"additional_groks" : [... Some groks ...]}.
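For example, to keep only the minimal grok from above and drop everything else (using the same placeholder service name as before):

cf update-service <service-name> -c '{"additional_groks":[{"pattern":"%{GREEDYDATA:data}"}]}'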
Backup and Restore Service Instances
a9s LogMe provides an easy way to create backups and restore them if needed. For a more detailed description, please see the a9s Service Dashboard documentation.
- The backup process is based on Elasticsearch's Snapshot functionality, which means that:
  - Backups are not encrypted.
    - Due to this, the backup_manager_encryption_key property cannot be set.
  - Backups cannot be downloaded.
  - The storage backend is limited to AWS S3 and Azure; generic S3 API services are not supported.
- Elasticsearch's default filter plugins are not available.
Setup Disk Usage Alerts
Each service comes with the a9s Parachute. This component monitors ephemeral and persistent disk usage. See the a9s Parachute documentation for how to configure the component.
Handle Stack Traces
Java stack traces typically contain more than one line of content. Logstash and the syslog protocol, on the other hand, do not support multiline log events per se. It is, however, possible to encode the newline character \n as \u2028 (line separator). Logstash then reads the stack trace as one log event and saves it to Elasticsearch with the line separator converted back to a newline for readability. Without this fix, each line of the stack trace would show up as an event of its own in Elasticsearch.
The application needs to be configured to use the \u2028 line separator instead of the default \n line separator. This should be done by configuring the application logging accordingly.
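For illustration (schematic only; the class name is hypothetical), a stack trace such as:

java.lang.RuntimeException: boom
    at com.example.Demo.main(Demo.java:5)

would then be emitted by the application as a single log line:

java.lang.RuntimeException: boom\u2028    at com.example.Demo.main(Demo.java:5)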
An example for Java Spring applications follows.
Java Spring
To get your exceptions recognized by Logstash as one event instead of many one-line events, you need to change your log format.
application.properties
To do this, you can add the following snippet to logging.pattern.console in the application.properties file of your Java Spring application:
%replace(%xException){'\n','\u2028'}%nopex
With this snippet, every line feed \n in the stack trace is converted into \u2028 (line separator). The %nopex part means that the original exception text is not printed/logged.
If your logging config is just this:
logging.pattern.console=%replace(%xException){'\n','\u2028'}%nopex
Then all your logs will be exceptions. As you probably want to log other events too, you can add the above snippet to an existing config. This is shown in an example:
Assume that your logging config is currently the following:
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} - %msg%n
This just prints the date, time and then the log message. The %n at the end is of course the newline and probably also the last thing you'll output per logged event. To add the replace rule to this config, place it at the very end, but before the final %n, so that your exception will still be terminated by a newline. Like this:
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} - %msg%replace(%xException){'\n','\u2028'}%nopex%n
This causes your regular logging events to print as usual and only exceptions (or rather: their stack traces) will be manipulated.
YAML or XML Logging Configurations
In case you use a YAML- or XML-based format, the method is still the same: add the snippet above just before the final newline of your logging pattern.
Say this is your logback-spring.xml:
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss} - %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="STDOUT" />
</root>
</configuration>
You have to make the same kind of change as for application.properties, but you have to do it for every class that may log exceptions to stdout. For this example, the logback-spring.xml would look like this:
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss} - %msg%replace(%xException){'\n','\u2028'}%nopex%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="STDOUT" />
</root>
</configuration>
This solution will work for any type of logging configuration, as long as it supports the replace format of the code snippet %replace(%xException){'\n','\u2028'}%nopex.