Author: Rodrigo De Castro <rdc@google.com>
Date: 2013-09-20
Copyright 2013 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Summary: plugin to upload log events to Google Cloud Storage (GCS), rolling files based on the date pattern provided as a configuration setting. Events are written to files locally and, once a file is closed, this plugin uploads it to the configured bucket.
For more info on Google Cloud Storage, please go to: https://cloud.google.com/products/cloud-storage
To use this plugin, a Google service account is required. For more information, please refer to: https://developers.google.com/storage/docs/authentication#service_accounts
Recommendation: experiment with the settings based on how much log data you generate, so the uploader can keep up with the generated logs. Enabling gzip output is a good option to reduce network traffic when uploading the log files, and it reduces storage costs as well.
USAGE: This is an example of logstash config:
```
output {
  google_cloud_storage {
    bucket => "my_bucket"                                   # (required)
    key_path => "/path/to/privatekey.p12"                   # (required)
    key_password => "notasecret"                            # (optional)
    service_account => "1234@developer.gserviceaccount.com" # (required)
    temp_directory => "/tmp/logstash-gcs"                   # (optional)
    log_file_prefix => "logstash_gcs"                       # (optional)
    max_file_size_kbytes => 1024                            # (optional)
    output_format => "plain"                                # (optional)
    date_pattern => "%Y-%m-%dT%H:00"                        # (optional)
    flush_interval_secs => 2                                # (optional)
    gzip => false                                           # (optional)
    uploader_interval_secs => 60                            # (optional)
  }
}
```
Improvements TODO list:
- Support logstash event variables to determine filename.
- Turn Google API code into a Plugin Mixin (like AwsConfig).
- There's no recover method, so if logstash/plugin crashes, files may not be uploaded to GCS.
- Allow user to configure file name.
- Allow parallel uploads for heavier loads (+ connection configuration if exposed by Ruby API client).
This plugin supports the following configuration options:
Required configuration options:
```
google_cloud_storage {
    bucket => ...
    key_path => ...
    service_account => ...
}
```
Available configuration options:
| Setting | Input type | Required | Default value |
|---|---|---|---|
| bucket | string | Yes | |
| codec | codec | No | "plain" |
| date_pattern | string | No | "%Y-%m-%dT%H:00" |
| flush_interval_secs | number | No | 2 |
| gzip | boolean | No | false |
| key_password | string | No | "notasecret" |
| key_path | string | Yes | |
| log_file_prefix | string | No | "logstash_gcs" |
| max_file_size_kbytes | number | No | 10000 |
| output_format | string, one of ["json", "plain"] | No | "plain" |
| service_account | string | Yes | |
| temp_directory | string | No | "" |
| uploader_interval_secs | number | No | 60 |
| workers | number | No | 1 |
bucket

- This is a required setting.
- Value type is string
- There is no default value for this setting.

GCS bucket name, without "gs://" or any other prefix.
codec

- Value type is codec
- Default value is "plain"

The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output, without needing a separate filter in your Logstash pipeline.
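For example, a minimal sketch (assuming the standard json_lines codec, which writes each event as one JSON line) looks like:

```
output {
  google_cloud_storage {
    bucket => "my_bucket"
    key_path => "/path/to/privatekey.p12"
    service_account => "1234@developer.gserviceaccount.com"
    codec => json_lines
  }
}
```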
date_pattern

- Value type is string
- Default value is "%Y-%m-%dT%H:00"

Time pattern for log files; defaults to hourly files. Must use Time.strftime patterns: http://www.ruby-doc.org/core-2.0/Time.html#method-i-strftime
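For example, to roll files daily instead of hourly:

```
date_pattern => "%Y-%m-%d"
```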
exclude_tags

- DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
- Value type is array
- Default value is []

Only handle events without any of these tags. Note this check is in addition to type and tags.
flush_interval_secs

- Value type is number
- Default value is 2

Flush interval in seconds for flushing writes to log files. 0 will flush on every message.
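For example, to flush on every message (at the cost of more disk I/O):

```
flush_interval_secs => 0
```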
gzip

- Value type is boolean
- Default value is false

Gzip output stream when writing events to log files.
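For example, to write gzipped files, as suggested in the recommendation above:

```
gzip => true
```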
key_path

- This is a required setting.
- Value type is string
- There is no default value for this setting.

GCS path to private key file.
log_file_prefix

- Value type is string
- Default value is "logstash_gcs"

Log file prefix. Log files will follow the format: <prefix>_hostname_date<.part?>.log
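For instance, with the default prefix on a hypothetical host named web-1 and the default hourly date pattern, an uploaded file would be named along the lines of the following (illustrative only; the exact part suffix follows the <.part?> placeholder above):

```
logstash_gcs_web-1_2013-09-20T14:00.part000.log
```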
max_file_size_kbytes

- Value type is number
- Default value is 10000

Sets max file size in kbytes. Set to 0 to disable the max file size check.
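For example, to start a new file roughly every 100 MB of written logs:

```
max_file_size_kbytes => 102400
```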
output_format

- Value can be any of: "json", "plain"
- Default value is "plain"

The event format you want to store in files. Defaults to plain text.
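For example, to store the full event as JSON instead of plain text:

```
output_format => "json"
```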
service_account

- This is a required setting.
- Value type is string
- There is no default value for this setting.

GCS service account.
tags

- DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
- Value type is array
- Default value is []

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.
temp_directory

- Value type is string
- Default value is ""

Directory where temporary files are stored. If left empty, it defaults to /tmp/logstash-gcs-<random-suffix>.
type

- DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
- Value type is string
- Default value is ""

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's type attribute for more. Optional.
uploader_interval_secs

- Value type is number
- Default value is 60

Uploader interval when uploading new files to GCS. Adjust time based on your time pattern (for example, for hourly files, this interval can be around one hour).
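For example, a sketch that pairs hourly rolled files with an uploader that wakes up about once per hour:

```
date_pattern => "%Y-%m-%dT%H:00"
uploader_interval_secs => 3600
```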
workers

- Value type is number
- Default value is 1

The number of workers to use for this output. Note that this setting may not be useful for all outputs.