Loggly can automatically retrieve new log files added to your S3 bucket(s). Our service supports logs from ELB, ALB, and CloudFront, as well as any uncompressed line-separated text files. Our script configures all of the settings automatically. Alternatively, you can configure your AWS account yourself by following the manual setup instructions.
It works by listening for events from Amazon that a new object has been created in your bucket. To make the process of sending events reliable, we send them through Amazon’s Simple Queue Service (SQS), which saves the event until we can retrieve it. When we receive the notification, we will download the log file and ingest it into Loggly.
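Under the hood, Amazon delivers each S3 event notification to the SQS queue as a JSON message. A simplified sketch of such a message is shown below; the bucket name, key, and size are illustrative values, not part of your configuration:

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "example-log-bucket" },
        "object": { "key": "loggly/2017/01/access.log.gz", "size": 10485760 }
      }
    }
  ]
}
```

Loggly reads messages like this from the queue, then downloads the referenced object from the bucket and ingests it.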
Note: S3 ingestion has a maximum file size of 1 GB. Files larger than 1 GB are skipped.
Supported file formats: .txt, .gz, .json.gz, .zip, and .log. Any other plain text or zipped file is also supported, provided its S3 object metadata (Content-Type) is text/plain, text/rtf, application/x-gzip, or application/zip.
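As an illustration, the format and size rules above can be expressed as a small check. The function name and constants are our own, not part of Loggly's service:

```python
# Sketch of the supported-format rules described above.
# Names here are illustrative, not part of Loggly's API.
SUPPORTED_SUFFIXES = (".txt", ".gz", ".json.gz", ".zip", ".log")
SUPPORTED_CONTENT_TYPES = {
    "text/plain", "text/rtf", "application/x-gzip", "application/zip",
}
MAX_SIZE_BYTES = 1 * 1024 ** 3  # files over 1 GB are skipped


def is_ingestible(key: str, content_type: str, size_bytes: int) -> bool:
    """Return True if an S3 object would be picked up for ingestion."""
    if size_bytes > MAX_SIZE_BYTES:
        return False
    # Either a supported file extension or a supported Content-Type suffices.
    return (key.lower().endswith(SUPPORTED_SUFFIXES)
            or content_type in SUPPORTED_CONTENT_TYPES)
```

For example, `is_ingestible("app.log", "application/octet-stream", 2048)` returns `True` because of the `.log` extension, while a 2 GB file is rejected regardless of its type.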
The configuration page is located at "Source Setup" -> "S3 Sources". Here you will be able to create new AWS S3 log sources and manage existing ones:
The image above shows the details of the configured AWS sources. The following information is available on this page:
- Indicates that the configuration is successful.
- Indicates that there is a problem and urgent attention is needed.
- Toggle the check box to enable or disable AWS source(s).
- Path: Location of the S3 logs.
- Tags: Tag(s) related to the S3 bucket.
- Actions: Edit or Delete the corresponding source entry.
Setup can be accomplished using the script option described below. If you prefer to do it manually, click the "Manual" tab and follow the instructions for the manual setup.
To set up S3 log Ingestion, proceed to the "Source Setup" -> "S3 Sources" tab and click on the "Add New" button.
Enter the name of the S3 bucket from which you would like to send logs. Optionally, you can also provide a prefix. A prefix operates like a folder: if you add one here, only keys (files) within that folder will be ingested by Loggly. The prefix can also contain multiple folders separated by slashes, for example "loggly/2017/01":
Note: Only one prefix per bucket is allowed. If you change the prefix, only keys with the new prefix will be ingested.
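To check which keys fall under a given prefix before saving, you can list them with the AWS CLI. The bucket name below is a placeholder for your own:

```shell
# List only the keys under the "loggly/2017/01" prefix;
# "example-log-bucket" is a placeholder for your bucket name.
aws s3 ls s3://example-log-bucket/loggly/2017/01/ --recursive
```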
Enter the AWS account ID for the account containing the bucket. Your account ID will be used to set up the queue to send notifications about the new objects in your bucket. You can find your account ID on the Security Credentials page:
If you have multiple active tokens, please choose the customer token you would like to use to send logs to Loggly. For example, select the appropriate token from the drop-down field:
If you have only one active token, that token is used by default, and this step will not appear on the page.
The script requires administrator access to configure your account. Install the AWS CLI tool if you haven't already, then run the aws configure command using administrator credentials:
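For example (the key values shown are placeholders, and pip is just one of several ways to install the AWS CLI):

```shell
# Install the AWS CLI, then configure it with administrator credentials.
pip install awscli
aws configure
# AWS Access Key ID [None]: AKIA................
# AWS Secret Access Key [None]: ****************************************
# Default region name [None]: us-east-1
# Default output format [None]: json
```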
Run the script below to give Loggly access to read from your chosen S3 log bucket and to set up an SQS queue so that we receive notifications when new objects are added to your S3 bucket:
s3bucket: The name of the bucket from which you would like to send logs.
acnumber: Your AWS account number, which you can get from your AWS account page in the console.
user (optional): The IAM username that Loggly should use when accessing the queue and bucket. Please use a dedicated user for Loggly. The default is loggly-s3-user.
admin (optional): If you have more than one user set up with the AWS CLI, provide the name of the profile with administrative privileges from `.aws/credentials` in your home directory. The default is "default".
sqsname (optional): The name of the SQS queue that Loggly will receive notifications from when objects are added to the bucket. Please use a dedicated queue for Loggly. The default is loggly-s3-queue.
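Putting the parameters above together, an invocation might look like the following. The script filename is a placeholder here; use the actual script provided on the setup page, along with your own bucket name and account number:

```shell
# Hypothetical invocation; substitute the actual script name from the
# setup page and your own bucket/account values.
python configure-s3.py --s3bucket example-log-bucket \
                       --acnumber 123456789012 \
                       --user loggly-s3-user \
                       --admin default \
                       --sqsname loggly-s3-queue
```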
Enter SQS Queue name:
Enter Access key ID and Secret access key:
You may optionally provide one or more comma-separated tags that describe your data to make it easier to search:
Click Save after you have entered the information. You will be returned to the AWS Sources page, and if the configuration was successful, you will see a green checkmark for this source.
If you don’t see any data show up in the search tab, then check for these common problems.
- Wait a few minutes in case indexing needs to catch up.
- Try the manual method if the script method doesn’t help.
- Check if the AWS source is enabled under the AWS Sources tab.
- Check the log files to make sure they exist and you have the right path.
- Objects added before setup will not be sent to Loggly, so test by sending new logs only.
- Check the Account overview page to see if you are exceeding the data volume limit per your subscription.
- Check for errors on the page and correct them.
- Search or post your own Amazon S3 Ingestion questions in the community forum.
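To verify that notifications are actually reaching the queue while troubleshooting, you can inspect its message count with the AWS CLI. The queue URL below is a placeholder for your own:

```shell
# Check whether S3 event notifications are accumulating in the queue;
# replace the queue URL with your own.
aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/loggly-s3-queue \
    --attribute-names ApproximateNumberOfMessages
```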
Learn how Loggly can help with all your AWS Log Management
When the APM Integrated Experience is enabled, Loggly shares a common navigation and enhanced feature set with the other integrated experiences' products. How you navigate the product and access its features may vary from these instructions. For more information, go to the APM Integrated Experience documentation.