Documentation for Loggly

Amazon S3 Ingestion (Manual Setup)

Loggly can automatically retrieve new log files added to your S3 bucket(s). The service supports logs from ELB, ALB, and CloudFront, as well as any uncompressed line-separated text files. Loggly provides a script that configures your account for S3 ingestion through the Amazon SQS service automatically. This guide is for people who prefer to configure their Amazon account manually. It takes a little more work to set up, but you can review and control each step yourself.

It works by listening for events from Amazon indicating that a new object has been created in your bucket. To make event delivery reliable, the events are sent through Amazon's Simple Queue Service (SQS), which stores each event until we can retrieve it. When we receive a notification, we download the log file and ingest it into Loggly.

Note: S3 ingestion has a maximum S3 file size of 1 GB. Files larger than 1 GB are skipped.

Supported file formats: .txt, .gz, .json.gz, .zip, .log.

Adding a new AWS source

To set up S3 ingestion using the Amazon SQS service, go to the "Source Setup" -> "S3 Sources" tab and click the "Add New" button.

Screen Shot 2016-07-14 at 11.12.16 AM
Now click the "Manual" tab to see the form that needs to be completed. The instructions below will help you fill it in. In essence, you need to allow Loggly to read from your chosen S3 bucket and to be notified of new objects created in that bucket.

Screen Shot 2016-06-19 at 5.16.07 PM

Step 1

Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.

This is how it works: whenever a new object is created in the S3 bucket, S3 fires an ObjectCreated event to the SQS queue. Loggly then retrieves that notification, which contains the bucket and key of the S3 object, and downloads the object from S3 using the access key and secret access key that you will provide. Note that objects added before the notification is configured will not be sent to the SQS queue.
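To illustrate the flow, the ObjectCreated notification delivered to the queue is a JSON document naming the bucket and object key. The sketch below parses a message of that shape; the function name and the abbreviated sample payload are our own illustration, not part of Loggly's service:

```python
import json

def parse_s3_event(message_body: str):
    """Extract (bucket, key) pairs from an S3 ObjectCreated event message."""
    event = json.loads(message_body)
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

# Sample payload in the shape S3 sends to SQS (heavily abbreviated).
sample = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-log-bucket"},
            "object": {"key": "logs/2017/01/app.log.gz"},
        },
    }]
})

print(parse_s3_event(sample))  # [('my-log-bucket', 'logs/2017/01/app.log.gz')]
```

A consumer like Loggly would receive the message body from SQS, parse it this way, then download each (bucket, key) from S3 and delete the message from the queue.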

A. Create an SQS queue manually

Go to the AWS console and select SQS from the Services drop-down:


Create a New Queue or select an existing queue that is dedicated to Loggly:

Screen Shot 2016-05-24 at 8.36.17 AM

A default region will be selected automatically. The SQS queue needs to be in the same region as the S3 bucket. You can check the S3 bucket region in bucket properties as shown below:

Screen Shot 2016-07-06 at 3.23.24 PM

You can change the region of the SQS queue from the drop-down menu located on the right side of the toolbar:

Screen Shot 2016-07-06 at 3.18.42 PM

B. Add permissions to the SQS queue

After creating the queue, select it from the table, go to the Permissions tab, and click Edit Policy Document:

Screen Shot 2016-07-11 at 11.25.21 AM

An editor window will open; copy and paste the JSON below:

{
  "Version": "2008-10-17",
  "Id": "PolicyExample",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "SQS:SendMessage",
      "Resource": "<SQS ARN>",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:*:*:<S3 bucket name>"
        }
      }
    },
    {
      "Sid": "Sid1458662829373",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Your account number>:root"
      },
      "Action": "SQS:*",
      "Resource": "<SQS ARN>"
    }
  ]
}


Replace the placeholders:

  • <Your account number>: your AWS account number.
  • <S3 bucket name>: your S3 bucket name.
  • <SQS ARN>: found under the Details tab once you have highlighted the SQS queue.

Screen Shot 2016-07-06 at 3.33.18 PM

  • If you want to use multiple buckets, the Condition block would look like the following:
"Condition": {
  "ArnLike": {
    "aws:SourceArn": [
      "arn:aws:s3:*:*:<First S3 bucket name>",
      "arn:aws:s3:*:*:<Second S3 bucket name>",
      "arn:aws:s3:*:*:<Third S3 bucket name>"
    ]
  }
}
Replace <First S3 bucket name>, <Second S3 bucket name>, and so on with your S3 bucket names.
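If you script your setup, the same policy document can be generated with the placeholders filled in. The function below is a plain-Python sketch of our own (not part of any AWS SDK); it produces a single SourceArn string for one bucket and a list for several, matching the two variants shown above:

```python
import json

def sqs_policy(account_number: str, queue_arn: str, bucket_names: list) -> str:
    """Build the SQS access policy from this guide with placeholders filled in."""
    source_arns = ["arn:aws:s3:*:*:%s" % b for b in bucket_names]
    return json.dumps({
        "Version": "2008-10-17",
        "Id": "PolicyExample",
        "Statement": [
            {
                "Sid": "example-statement-ID",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "SQS:SendMessage",
                "Resource": queue_arn,
                "Condition": {
                    "ArnLike": {
                        # A single string for one bucket, a list for several.
                        "aws:SourceArn": source_arns[0] if len(source_arns) == 1 else source_arns
                    }
                },
            },
            {
                "Sid": "Sid1458662829373",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::%s:root" % account_number},
                "Action": "SQS:*",
                "Resource": queue_arn,
            },
        ],
    }, indent=2)

print(sqs_policy("123456789012",
                 "arn:aws:sqs:us-east-1:123456789012:loggly-queue",
                 ["my-log-bucket"]))
```

The resulting JSON can be pasted into the Edit Policy Document window exactly as in the manual steps above.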

C. Configure your S3 bucket to send ObjectCreated events to the SQS queue

Select S3 from the Services drop-down in the AWS console:

Screen Shot 2016-07-11 at 11.28.50 AM

Select the bucket you put in the SQS policy, right-click it, and select Properties:

Screen Shot 2016-07-11 at 11.32.24 AM

Expand the Events section. Under Events, select "ObjectCreated (All)" from the drop-down, then select the SQS queue you just created from the SQS queue drop-down:

Screen Shot 2016-07-11 at 11.35.18 AM
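If you automate this step instead of using the console, the notification configuration S3 expects has the following shape. The sketch only builds the configuration document; the function name and queue ARN are our own illustration, and applying it (for example via boto3's put_bucket_notification_configuration) is left as a comment:

```python
def objectcreated_to_sqs(queue_arn: str) -> dict:
    """Notification configuration asking S3 to send all ObjectCreated
    events for the bucket to a single SQS queue."""
    return {
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    }

config = objectcreated_to_sqs("arn:aws:sqs:us-east-1:123456789012:loggly-queue")

# With boto3 this would be applied as (requires AWS credentials):
#   boto3.client("s3").put_bucket_notification_configuration(
#       Bucket="my-log-bucket", NotificationConfiguration=config)
print(config)
```

"s3:ObjectCreated:*" corresponds to the "ObjectCreated (All)" choice in the console drop-down.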

D. Grant permissions to Loggly to read from your S3 bucket

Loggly will need permission to pull the log data from your S3 bucket. The easiest way to accomplish this is by creating a new IAM user on your account. The new user will only have permission to read from the S3 bucket.

Go to your AWS dashboard and select "Identity & Access Management" from the Security & Identity section:

Screen Shot 2016-07-11 at 11.38.18 AM

From your IAM dashboard, select Users from the left-hand menu. Then, create a new user and make sure to download the credentials. (You’ll need to provide these to Loggly in a later step):

Screen Shot 2016-05-22 at 3.39.20 AM
Screen Shot 2016-05-22 at 3.44.43 AM

Once the user is created, select the user from your user list. Under the "Permissions" tab, click on "Inline Policies" and select "click here":


To configure the permissions for the new user, select "Custom Policy". Loggly needs access to list the contents of the bucket, get the bucket location, and get objects from within the bucket:


In the editor window, give your custom policy a name and paste the following:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "Sidtest",
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": [
        "<SQS ARN>"
      ]
    }, {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<S3 bucket name>/*",
        "arn:aws:s3:::<S3 bucket name>"
      ]
    }
  ]
}

Replace the placeholders:

  • <S3 bucket name>: your S3 bucket name.
  • <SQS ARN>: found under the Details tab once you have highlighted the SQS queue.
  • If you want to use multiple buckets, the Resource block would look like the following:
"Resource": [
  "arn:aws:s3:::<First S3 bucket name>/*",
  "arn:aws:s3:::<First S3 bucket name>",
  "arn:aws:s3:::<Second S3 bucket name>/*",
  "arn:aws:s3:::<Second S3 bucket name>",
  "arn:aws:s3:::<Third S3 bucket name>/*",
  "arn:aws:s3:::<Third S3 bucket name>"
]

Replace <First S3 bucket name>, <Second S3 bucket name> and so on with your S3 bucket names.

Double check that you’ve added the necessary permissions, then click "Continue".

Screen Shot 2016-05-22 at 5.11.56 AM

Name the policy, e.g., loggly-aws-policy, and click "Apply Policy":

Screen Shot 2016-05-22 at 5.15.04 AM
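The read-only policy for the Loggly IAM user can also be generated for any number of buckets. This is a sketch of our own: the S3 actions match the list/get/location access described in this step, and the SQS actions are our assumption of the receive-and-delete access Loggly needs on the queue:

```python
import json

def loggly_read_policy(queue_arn: str, bucket_names: list) -> str:
    """Build the IAM user policy: receive/delete on the SQS queue, plus
    list/get access on each S3 bucket (both the bucket ARN and its objects)."""
    resources = []
    for b in bucket_names:
        resources += ["arn:aws:s3:::%s/*" % b, "arn:aws:s3:::%s" % b]
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Sidtest",
                "Effect": "Allow",
                "Action": ["sqs:GetQueueUrl", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
                "Resource": [queue_arn],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject"],
                "Resource": resources,
            },
        ],
    }, indent=2)

print(loggly_read_policy("arn:aws:sqs:us-east-1:123456789012:loggly-queue",
                         ["my-log-bucket"]))
```

Note that each bucket contributes two Resource entries: the `/*` ARN covers the objects, while the bare bucket ARN is needed for listing and getting the bucket location.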

Step 2

Enter the access credentials for the user you just created: the AWS access key and secret key:
Screen Shot 2016-05-21 at 10.53.58 PM

Step 2.1

Enter the AWS account number:

Screen Shot 2016-07-13 at 1.39.49 PM

Step 3

Choose the customer token you would like to use to send logs to Loggly.

If you have multiple active tokens, select the appropriate token from the drop-down field. If you have only one active token, that token is used by default and this step will not be presented on the page:
Screen Shot 2016-05-21 at 10.57.08 PM

Step 4

Enter the name of your SQS queue if you would like Loggly to receive notifications of new objects added to the bucket:

Screen Shot 2016-05-21 at 10.59.23 PM

Step 4.1

Enter the S3 bucket name. Optionally, you can also provide a prefix. A prefix operates like a folder: if you add one here, only keys (files) under that folder will be ingested by Loggly. The prefix can contain multiple folders separated by slashes, for example "loggly/2017/01":

Screen Shot 2017-03-03 at 11.16.26 AM

Note: Only one prefix per bucket is allowed. If you change the prefix, only keys with the new prefix will be ingested.
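The prefix acts as a simple leading-string filter on object keys. A plain-Python illustration (not Loggly code; the sample keys are made up):

```python
def keys_under_prefix(keys, prefix):
    """Return only the object keys that fall under the given prefix."""
    return [k for k in keys if k.startswith(prefix)]

keys = [
    "loggly/2017/01/app.log",
    "loggly/2017/02/app.log",
    "other/app.log",
]
print(keys_under_prefix(keys, "loggly/2017/01"))  # ['loggly/2017/01/app.log']
```

With the prefix "loggly/2017/01" configured, only the first key would be ingested; the other two would be ignored.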

Step 5

You may optionally provide one or more comma-separated tags that describe your data and make it easier to search in Loggly:

Screen Shot 2016-05-21 at 11.03.11 PM

Click Save after you have entered the information. You will then return to the AWS Sources page, where a green checkmark under the Status column indicates the configuration was successful.

Troubleshooting S3 Ingestion (Manual Setup)

If you don’t see any data show up in the search tab, then check for these common problems.

  • Wait a few minutes in case indexing needs to catch up.
  • Try our script if the manual method doesn’t help.
  • Check that the AWS source is enabled under the AWS Sources tab.
  • Check the log files to make sure they exist and you have the right path.
  • Check the Account overview page to see if you are exceeding the data volume limit as per your plan.
  • Check for errors on the page and correct them.

Still Not Working?

  • Search or post your own Amazon S3 Ingestion (Manual Setup) questions in the community forum.

When the APM Integrated Experience is enabled, Loggly shares a common navigation and enhanced feature set with the other integrated experiences' products. How you navigate the product and access its features may vary from these instructions. For more information, go to the APM Integrated Experience documentation.

The scripts are not supported under any SolarWinds support program or service. The scripts are provided AS IS without warranty of any kind. SolarWinds further disclaims all warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The risk arising out of the use or performance of the scripts and documentation stays with you. In no event shall SolarWinds or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the scripts or documentation.