fluent-plugin-sumologic_output, a plugin for Fluentd
This plugin has been designed to output logs or metrics to Sumo Logic via an HTTP collector endpoint.
Released under Apache 2.0 License.
gem install fluent-plugin-sumologic_output
Configuration options for fluent.conf are:
- `data_type` - The type of data that will be sent to Sumo Logic, either `logs` or `metrics` (default is `logs`)
- `endpoint` - SumoLogic HTTP Collector URL
- `verify_ssl` - Verify the SSL certificate (default is `true`)
- `source_category`* - Set the `_sourceCategory` metadata field within SumoLogic (default is `nil`)
- `source_name`* - Set the `_sourceName` metadata field within SumoLogic; overrides `source_name_key` (default is `nil`)
- `source_name_key` - Set as `source::path_key`'s value so that the `source_name` can be extracted from Fluentd's buffer (default is `source_name`)
- `source_host`* - Set the `_sourceHost` metadata field within SumoLogic (default is `nil`)
- `log_format` - Format to post logs into Sumo (default is `json`)
  - `text` - Logs will appear in SumoLogic in text format (taken from the field specified in `log_key`)
  - `json` - Logs will appear in SumoLogic in JSON format
  - `json_merge` - Same as `json`, but merges the content of `log_key` into the top level and strips `log_key`
- `log_key` - Used to specify the key when merging JSON or sending logs in text format (default is `message`)
- `open_timeout` - Timeout in seconds to wait until the connection is opened
- `receive_timeout` - Timeout in seconds to wait for a response from SumoLogic. Don't modify unless you see `HTTPClient::ReceiveTimeoutError` in your Fluentd logs
- `send_timeout` - Timeout in seconds for sending to SumoLogic. Don't modify unless you see `HTTPClient::SendTimeoutError` in your Fluentd logs (default is `120`)
- `add_timestamp` - Add a `timestamp` (or `timestamp_key`) field to logs before sending to SumoLogic (default is `true`)
- `timestamp_key` - Field name used when `add_timestamp` is on (default is `timestamp`)
- `proxy_uri` - Add the `uri` of the `proxy` environment if present
- `metric_data_format` - The format of metrics you will be sending, either `graphite`, `carbon2`, or `prometheus` (default is `graphite`)
- `disable_cookies` - Option to disable cookies on the HTTP client (default is `false`)
- `compress` - Option to enable compression (default is `true`)
- `compress_encoding` - Compression encoding format, either `gzip` or `deflate` (default is `gzip`)
- `custom_fields` - Comma-separated list of `key=value` fields to apply to every log
- `custom_dimensions` - Comma-separated list of `key=value` dimensions to apply to every metric
- `use_internal_retry` - Enable the plugin's own retry mechanism. As this is `false` by default for backward compatibility, we recommend enabling it and configuring the following parameters (`retry_min_interval`, `retry_max_interval`, `retry_timeout`, `retry_max_times`)
- `retry_min_interval` - Minimum interval to wait between send attempts (default is `1s`)
- `retry_max_interval` - Maximum interval to wait between send attempts (default is `5m`)
- `retry_timeout` - Time after which the data is dropped (default is `72h`; `0s` means there is no timeout)
- `retry_max_times` - Maximum number of retries (default is `0`; `0` means there is no maximum, retries will happen forever)
- `max_request_size` - Maximum request size, before compression is applied (default is `0k`, which means no limit)
NOTE: parameters marked with * support placeholders.
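For example, a minimal match section that enables the internal retry mechanism could look like the following (the endpoint token is a placeholder):

```
<match **>
  @type sumologic
  endpoint https://collectors.sumologic.com/receiver/v1/http/XXXXXXXXXX
  log_format json
  use_internal_retry true
  retry_min_interval 1s
  retry_max_interval 5m
  retry_timeout 72h
</match>
```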
Reading JSON-formatted log files with in_tail and wildcard filenames:
<source>
  @type tail
  format json
  time_key time
  path /path/to/*.log
  pos_file /path/to/pos/ggcp-app.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag appa.*
  read_from_head false
</source>
<match appa.**>
 @type sumologic
 endpoint https://collectors.sumologic.com/receiver/v1/http/XXXXXXXXXX
 log_format json
 source_category prod/someapp/logs
 source_name AppA
 open_timeout 10
</match>
Sending metrics to Sumo Logic using in_http:
<source>
  @type http
  port 8888
  bind 0.0.0.0
</source>
<match test.carbon2>
	@type sumologic
	endpoint https://endpoint3.collection.us2.sumologic.com/receiver/v1/http/ZaVnC4dhaV1hYfCAiqSH-PDY6gUOIgZvO60U_-y8SPQfK0Ks-ht7owrbk1AkX_ACp0uUxuLZOCw5QjBg1ndVPZ5TOJCFgNGRtFDoTDuQ2hzs3sn6FlfBSw==
	data_type metrics
	metric_data_format carbon2
	flush_interval 1s
</match>
<match test.graphite>
	@type sumologic
	endpoint https://endpoint3.collection.us2.sumologic.com/receiver/v1/http/ZaVnC4dhaV1hYfCAiqSH-PDY6gUOIgZvO60U_-y8SPQfK0Ks-ht7owrbk1AkX_ACp0uUxuLZOCw5QjBg1ndVPZ5TOJCFgNGRtFDoTDuQ2hzs3sn6FlfBSw==
	data_type metrics
	metric_data_format graphite
	flush_interval 1s
</match>
Assuming the following input is coming from a log file named /var/log/appa_webserver.log
{"asctime": "2016-12-10 03:56:35+0000", "levelname": "INFO", "name": "appa", "funcName": "do_something", "lineno": 29, "message": "processing something", "source_ip": "123.123.123.123"}
Then the output within SumoLogic becomes:
{
    "timestamp":1481343785000,
    "asctime":"2016-12-10 03:56:35+0000",
    "levelname":"INFO",
    "name":"appa",
    "funcName":"do_something",
    "lineno":29,
    "message":"processing something",
    "source_ip":"123.123.123.123"
}
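The `timestamp` field holds Fluentd's event time in milliseconds since the Unix epoch (it is assigned at ingestion, so it need not exactly match the `asctime` field inside the record). A minimal sketch of the same conversion in plain Python:

```python
from datetime import datetime

# Convert the record's asctime string to epoch milliseconds,
# the same unit the plugin uses for its "timestamp" field.
asctime = "2016-12-10 03:56:35+0000"
dt = datetime.strptime(asctime, "%Y-%m-%d %H:%M:%S%z")
timestamp_ms = int(dt.timestamp() * 1000)
print(timestamp_ms)  # 1481342195000
```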
The plugin supports overriding the SumoLogic metadata and `log_format` parameters within each log message by attaching a `_sumo_metadata` field to the log message.
NOTE: the `_sumo_metadata` field is stripped before posting to SumoLogic.
Example
{
  "name": "appa",
  "source_ip": "123.123.123.123",
  "funcName": "do_something",
  "lineno": 29,
  "asctime": "2016-12-10 03:56:35+0000",
  "message": "processing something",
  "_sumo_metadata": {
    "category": "new_sourceCategory",
    "source": "override_sourceName",
    "host": "new_sourceHost",
    "log_format": "merge_json_log"
  },
  "levelname": "INFO"
}
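One way to attach `_sumo_metadata` from within Fluentd is the standard `record_transformer` filter with `enable_ruby` (a sketch; the tag pattern and values are illustrative):

```
<filter appa.**>
  @type record_transformer
  enable_ruby
  <record>
    _sumo_metadata ${{:category => "new_sourceCategory", :source => "override_sourceName", :host => "new_sourceHost", :log_format => "merge_json_log"}}
  </record>
</filter>
```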
`retry_min_interval`, `retry_max_interval`, `retry_timeout`, and `retry_max_times` are not Fluentd's buffer retry parameters.
For technical reasons, this plugin implements its own exponential back-off retry mechanism.
It is disabled by default, but we recommend enabling it by setting `use_internal_retry` to `true`.
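As a rough illustration of exponential back-off (assumptions: doubling intervals, no jitter; this is not the plugin's actual code), the wait times grow from `retry_min_interval` up to the `retry_max_interval` cap:

```python
def retry_intervals(min_interval=1, max_interval=300, attempts=10):
    """Yield exponential back-off wait times in seconds, capped at max_interval.

    Defaults mirror retry_min_interval=1s and retry_max_interval=5m;
    `attempts` bounds the loop for this sketch only.
    """
    interval = min_interval
    for _ in range(attempts):
        yield interval
        interval = min(interval * 2, max_interval)

print(list(retry_intervals()))  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 300]
```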
Sumo Logic only accepts connections from clients using TLS version 1.2 or greater. To use this plugin, ensure that it runs in an execution environment configured to use TLS 1.2 or greater.