Custom Parameters
As an end user, you can customize your Service Instance using custom parameters.
Custom parameters are passed to a Service Instance with the `-c` switch of the cf CLI commands `cf create-service` and `cf update-service`. For example, `cf update-service my-instance -c '{"my_custom_parameter": "value"}'` sets the custom parameter `my_custom_parameter` to the value `value` for the Service Instance `my-instance`.
You do not have to set any of these parameters; sane defaults are in place that fit your service plan well.
Every parameter corresponds to a property in the configuration file for the respective OpenSearch version.
Tuning
As usage of an a9s LogMe2 instance grows, you may need to tune the OpenSearch component to meet demand.
The following custom parameters are available for tuning the memory used:
Name | Description | Min | Max | Default |
---|---|---|---|---|
java_heapspace | The amount of memory (in MB) allocated as heap by the JVM for OpenSearch | 256 | 50% of VM memory (check your plan details) | not set; 46% of available memory is used |
java_maxmetaspace | The amount of memory (in MB) used by the JVM to store metadata for OpenSearch | 256 | 1024 | 512 |
Additionally, there is a custom parameter available to set the Garbage Collector used:
Name | Description | Default | Available Options |
---|---|---|---|
java_garbage_collector | The JVM Garbage Collector to be used for OpenSearch. | UseG1GC | UseSerialGC, UseParallelGC, UseParallelOldGC, UseG1GC |
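For instance, a heap size and garbage collector can be applied in a single call. This is a sketch; the Service Instance name `my-logme2-service` is a placeholder:

```shell
# Allocate a 1024 MB heap and select the G1 garbage collector
# for the OpenSearch JVM of an existing Service Instance.
cf update-service my-logme2-service -c '{ "java_heapspace": 1024, "java_garbage_collector": "UseG1GC" }'
```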
Data Retention
The Index State Management plugin is used to periodically clean up old OpenSearch indices. You can find more information in the official documentation.
By default, it deletes indices older than 30 days.
You can overwrite that configuration using the following custom parameters:
Name | Description | Default |
---|---|---|
ism_job_interval | Time between executions of the Index State Management in minutes | 5 |
ism_deletion_after | Combination of an integer and a time unit after which an index is considered "old" and can be deleted from OpenSearch. Possible values for the time unit are s, m, h and d | 30d |
ism_jitter | Jitter of the execution time (ism_job_interval). See the official Index State Management documentation for details | 0.6 |
For example:

```shell
cf create-service a9s-logme2 logme2-single-small my-logme2-service -c '{ "ism_deletion_after": "8h", "ism_job_interval": 60 }'
cf update-service my-logme2-service -c '{ "ism_jitter": 0.3 }'
```
When the ISM plugin configuration is set, it is saved in a dedicated OpenSearch index (`.opendistro-ism-config`). From then on, this index is included in all generated backups. This causes the following limitations regarding the state of the configuration:
- Depending on the chosen backup, you may lose settings that were set at a later date, until the Service Instance is restarted.
  - For example, if you change the value of the `ism_job_interval` parameter from 5 to 8 on 12.12.2022 but restore your instance using a backup from 10.12.2022, the value will be 5, not 8, until you restart the Service Instance.
TLS
opensearch-tls-protocols
You can specify the allowed TLS protocols via the custom parameter `opensearch_tls_protocols`.
It correlates with OpenSearch's configuration parameters `plugins.security.ssl.http.enabled_protocols` and `plugins.security.ssl.transport.enabled_protocols`; see Limiting TLS Protocols Used by the Server.
An array of protocol versions is expected. Only the Java format is supported.
The allowed protocol version values are `TLSv1.3` and `TLSv1.2`.
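As a sketch, restricting an instance to TLSv1.3 only might look like the following; the Service Instance name `my-logme2-service` is a placeholder:

```shell
# Allow only TLSv1.3 for OpenSearch's HTTP and transport layers.
cf update-service my-logme2-service -c '{ "opensearch_tls_protocols": ["TLSv1.3"] }'
```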
opensearch-tls-ciphers
You can limit the TLS ciphers via the custom parameter `opensearch_tls_ciphers`.
It correlates with OpenSearch's configuration parameters `plugins.security.ssl.http.enabled_ciphers` and `plugins.security.ssl.transport.enabled_ciphers`; see Configuring Cipher Suites.
An array of cipher names is expected. Only the Java format is supported.
WARNING: There is no validation of the user-provided value; existing instances can therefore break when this parameter is applied.
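A sketch of limiting the cipher suites follows. The Service Instance name is a placeholder, and the cipher names are illustrative Java-format TLSv1.3 suites; verify them against your OpenSearch version before applying, since misconfigured ciphers can break the instance:

```shell
# Restrict OpenSearch to two TLSv1.3 cipher suites (illustrative values).
cf update-service my-logme2-service -c '{ "opensearch_tls_ciphers": ["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"] }'
```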
fluentd-udp
This property specifies the port for the UDP endpoint of Fluentd. The default value is `514`.
fluentd-tcp
This property specifies the port for the unencrypted TCP endpoint of Fluentd. The default value is `0` (disabled).
The UDP and unencrypted TCP endpoints can use the same port, but they do not have to.
fluentd-tls
This property specifies the port for the encrypted TCP endpoint of Fluentd. The default value is `6514`.
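The three endpoint ports can be set together, as in the following sketch. The Service Instance name is a placeholder, and the parameter keys are assumed to match the section headings above:

```shell
# Keep UDP on 514, enable unencrypted TCP on 5140, keep TLS on 6514.
cf update-service my-logme2-service -c '{ "fluentd-udp": 514, "fluentd-tcp": 5140, "fluentd-tls": 6514 }'
```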
fluentd-tls-ciphers
This property specifies the allowed TLS ciphers for Fluentd. See the Fluentd documentation for more information.
This property is only type checked and not fully validated! The current validation only checks that the value is a string. Any misconfiguration will cause Fluentd to malfunction.
fluentd-tls-version
This property specifies the TLS version for Fluentd. See the Fluentd documentation for more information.
If you want to accept multiple TLS protocols, use `fluentd-tls-min-version` and `fluentd-tls-max-version` instead of `fluentd-tls-version`.
fluentd-tls-min-version
This property specifies the minimal TLS version for Fluentd. See the Fluentd documentation for more information.
If `fluentd-tls-min-version` is set, `fluentd-tls-max-version` must also be set.
fluentd-tls-max-version
This property specifies the maximal TLS version for Fluentd. See the Fluentd documentation for more information.
If `fluentd-tls-max-version` is set, `fluentd-tls-min-version` must also be set.
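A sketch of setting a protocol range follows. The Service Instance name and the version-string format are assumptions; consult the Fluentd documentation for the exact version identifiers your deployment expects:

```shell
# Accept TLS 1.2 up to TLS 1.3 on the encrypted Fluentd endpoint
# (version-string format is an assumption; check the Fluentd docs).
cf update-service my-logme2-service -c '{ "fluentd-tls-min-version": "TLS1_2", "fluentd-tls-max-version": "TLS1_3" }'
```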
Groks
It is possible to define additional grok patterns, which are matched against the message part of the syslog. As soon as the first grok pattern matches, the remaining patterns are not applied.
To clarify that statement a little further, assume we have the following syslog message:

```
<133>Feb 25 14:09:07 webserver syslogd: 123 additional-information 456
```

This syslog message is preprocessed by the Fluentd syslog input plugin. After that preprocessing, it already has the structure:

```
{... syslog specific parts of the log ... "message" : "123 additional-information 456" }
```
The user-defined additional groks are now matched against the value of the `message` field, in the example above against `"123 additional-information 456"`.
Assuming the following additional grok is defined:

```
"%{NUMBER:valueOne} %{WORD:someWord} %{NUMBER:valueTwo}"
```
The parsed result would be:

```
{
  //... syslog specific parts of the log ...
  "message": "123 additional-information 456",
  "valueOne": "123",
  "someWord": "additional-information",
  "valueTwo": "456"
}
```
How to Add Additional Groks
Additional groks are applied via the custom parameter `{"groks":[/* List of additional groks */]}`.
Example:

```shell
cf cs <service-type> <service-plan> <service-name> \
  -c '{ "groks": [{ "pattern": "%{WORD:Some} %{WORD:Grok} %{WORD:Pattern}" }]}'
```
Each pattern is a JSON object with the field `"pattern"`.
Field | Optional? | Type | Description |
---|---|---|---|
pattern | mandatory | string | The grok pattern to match against |
A minimal additional grok could look like this:

```
{ "groks": [{ "pattern": "%{GREEDYDATA:data}" }] }
```
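Because only the first matching pattern is applied, order catch-all patterns last. The following sketch (Service Instance name is a placeholder) defines a specific pattern first and a `GREEDYDATA` fallback second; reversing the order would cause the fallback to shadow the specific pattern:

```shell
# Specific pattern first, catch-all fallback last.
cf update-service my-logme2-service -c '{ "groks": [{ "pattern": "%{NUMBER:valueOne} %{WORD:someWord} %{NUMBER:valueTwo}" }, { "pattern": "%{GREEDYDATA:data}" }] }'
```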
How to Remove Added Grok Patterns
If you want to remove all additional groks, apply the custom parameter with an empty array:

```shell
cf update-service <service-name> -c '{"groks": []}'
```