Derived Fields
Navigation Notice: When the APM Integrated Experience is enabled, Loggly shares a common navigation and enhanced feature set with other integrated experience products. How you navigate Loggly and access its features may vary from these instructions.
Loggly Derived Fields let you define custom parsing rules for log events. With derived fields, you can add structure to logs, create new fields, and index events in ways that matter to you. It’s a way to extend Loggly’s powerful built-in parsing engine by adding your own unique parsing rules.
Like normal parsed fields, derived fields are parsed as Loggly receives a log message. This gives you instant access to the information in your logs as soon as you save a new rule. This also opens up your previously unparsed logs to other Loggly features, such as advanced filtering, aggregation, calculations, graphing, and alerting. You can also add your own parsing rules to complement the automatic rules.
Derived fields make it easy to extract information from unparsed fields, or even parse formats that are not currently recognized by Loggly. The derived fields are automatically indexed and appended to the original log event, allowing you to index and search the event through the Loggly Dynamic Field Explorer™.
Creating Derived Fields
The fastest way to get started with derived fields is by clicking Create Derived Fields when viewing a log event. This takes you to the Rule Manager, where you can define your new derived fields rule.
You can also create a new rule by clicking Derived Fields at the top of the screen.
Derived Field Rules
Derived field rules contain the logic necessary to extract your new fields from incoming log events. Almost any kind of log event can be broken into fields that can be indexed by Loggly. The four supported rule types are key-value pairs, anchors, regular expressions (RegEx), and insert tags.
Derived Field Rule Types

Type | Description | Unparsed Example | Parsed Result
Key-Value | Key-value fields consist of a field name and a value separated by a separator character such as =. Loggly supports only single-character delimiters. | status_code=700 | status_code: 700
Anchor | Anchor fields extract a value by matching against the surrounding text. In the example, the preceding anchor is [client and the following anchor is ]. | [client 127.0.0.1] | host: 127.0.0.1
RegEx | RegEx rules extract log data using regular expressions. This example uses the following RegEx to extract the until date: ^.*until=(\d{4}-\d{2}-\d{2}).*$ | from=2015-10-01T18%3A34%3A03.390ZJW235&until=2015-10-02T18%3A34%3A03.390Z | until: 2015-10-02
Insert Tag | Insert tag rules insert a tag into a matching event. You can even use multiple rules to mark different events with a common tag. This example uses the following RegEx to tag events containing Exception: ^.*Exception.*$ | "Sqsqueue_Exception": "No messages received" | Tag: Exception
While each rule can use only one rule type, each type provides a number of options for greater control. You can also run multiple derived field rules against the same event.
Key-Value Rule Definition
Key-value rules identify sets of two related data points by searching for a separator (a character that links a key and its value) and a delimiter (a character that separates two sets).
Key-value rules are particularly powerful because a single rule definition can create multiple fields. This allows you to extract several fields without having to define multiple rules.
Key-Value Rule Configuration

Parameter | Description | Example
Separator | A character that separates the key from the value. | user=sim27
Delimiter | A character used after a key-value pair that signifies the end of the pair. Loggly supports only single-character delimiters. | key=value, key2=value2
Strip surrounding quotes | When checked, any single quotes or double quotes surrounding the key or value are removed. If your keys or values have spaces, it is best to surround them with quotes to clearly identify the parts. | "name"="John Doe"
Do not extract key-value pair if key is: | The name of a particular key you don’t want to parse. | user
Extract Field From | Lets you select a previously parsed field to parse further. | syslog.appName
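To see how these options interact, here is a minimal Python sketch, not Loggly’s actual implementation, that assumes a separator of =, a space as the delimiter, strip surrounding quotes enabled, and the key user excluded; the sample log line is made up for illustration.

```python
# Minimal sketch of key-value extraction (not Loggly's implementation).
def extract_key_values(event, separator="=", delimiter=" ",
                       strip_quotes=True, excluded_keys=("user",)):
    fields = {}
    for pair in event.split(delimiter):        # the delimiter ends each pair
        if separator not in pair:
            continue                           # ignore tokens without a separator
        key, value = pair.split(separator, 1)  # the separator splits key from value
        if strip_quotes:
            key = key.strip("'\"")
            value = value.strip("'\"")
        if key in excluded_keys:
            continue                           # "Do not extract key-value pair if key is"
        fields[key] = value
    return fields

print(extract_key_values('status_code=700 user=sim27 "app"="billing"'))
# {'status_code': '700', 'app': 'billing'}
```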
Anchor Rule Definition
Anchor rules extract values from events based on their context. With an anchor rule, you define the patterns that precede and follow the desired value. For instance, the following example shows how to extract the IP address by searching for [client_ip and ].
Unlike a key-value rule, an anchor rule extracts only the first match it finds.
Anchor Rule Configuration

Parameter | Description | Example
Field preceded by: | The string that precedes the field of interest. | [client_ip 127.0.0.1]
Field followed by: | The string that follows the field of interest. | [client_ip 127.0.0.1]
Extract Field From: | Lets you select a previously parsed field to parse further. | syslog.appName
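As a rough illustration, again not Loggly’s implementation, an anchor rule behaves like taking the text between the preceding string and the following string, returning only the first match; the request text after the IP address below is made up.

```python
# Minimal sketch of an anchor rule (not Loggly's implementation): return the
# text between the preceding anchor and the following anchor, first match only.
def extract_anchor(event, preceded_by, followed_by):
    start = event.find(preceded_by)
    if start == -1:
        return None                          # preceding anchor not found
    start += len(preceded_by)
    end = event.find(followed_by, start)
    if end == -1:
        return None                          # following anchor not found
    return event[start:end]

print(extract_anchor("[client_ip 127.0.0.1] GET /index.html", "[client_ip ", "]"))
# 127.0.0.1
```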
RegEx Rule Definition
RegEx rules let you use regular expressions to match against your logs. RegEx rules must match the entire event by starting with ^ and ending with $. Like an anchor rule, a RegEx rule extracts the first match that it finds.
RegEx Rule Configuration

Parameter | Description | Example
Event matches RegEx: | The regular expression that you want to match, including capture groups. | ^.*until=(\d{4}-\d{2}-\d{2}).*$ (applied to from=2015-10-01T18%3A34%3A03.390ZJW235&until=2015-10-02T18%3A34%3A03.390Z)
Extract Field From: | Lets you select a previously parsed field to parse further. | syslog.appName
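For comparison, here is a small Python sketch of the same idea: the expression matches the whole event, and the first capture group becomes the derived field’s value. This only approximates Loggly’s behavior.

```python
import re

# Approximation of a RegEx rule (not Loggly's implementation): the expression
# must match the entire event, and the first capture group supplies the value.
rule = re.compile(r"^.*until=(\d{4}-\d{2}-\d{2}).*$")

event = "from=2015-10-01T18%3A34%3A03.390ZJW235&until=2015-10-02T18%3A34%3A03.390Z"
match = rule.match(event)
if match:
    print("until:", match.group(1))   # until: 2015-10-02
```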
Insert Tag Rule Definition
Insert tag rules are somewhat different from key-value, anchor, or RegEx rules. Rather than extracting data from an event, an insert tag rule lets you tag the event based on its contents. The rule matches events using a regular expression, then adds the specified tag to the event. Like the RegEx rule, the regular expression used in an insert tag rule must match the entire event by starting with ^ and ending with $.
Insert Tag Rule Configuration

Parameter | Description | Example
Tags to insert | Name of the tag that you want to insert. | access_log
Event matches RegEx | The regular expression that you want to match, including capture groups. | ^apache-access$
Look in: | Lets you select a field to match the RegEx against. | json.basicinfo
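The following Python sketch shows the concept: every event whose content matches the whole-event RegEx gets the configured tag appended. The event structure here is hypothetical, not Loggly’s internal representation.

```python
import re

# Conceptual sketch of an insert tag rule (not Loggly's implementation).
TAG_RULE = (re.compile(r"^.*Exception.*$"), "Exception")

# Hypothetical event records for illustration only.
events = [
    {"message": '"Sqsqueue_Exception": "No messages received"', "tags": []},
    {"message": "GET /index.html 200", "tags": []},
]

pattern, tag = TAG_RULE
for event in events:
    if pattern.match(event["message"]):
        event["tags"].append(tag)       # tag only the matching events

print([e["tags"] for e in events])      # [['Exception'], []]
```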
Limiting Rule Scope
You can limit the scope of all derived field rules by specifying the source or contents of the event. Loggly recommends limiting your rules as much as possible to reduce the chances of extracting unnecessary or unwanted data. For instance, you might want to extract the IP address from logs originating from your load balancer, but you probably don’t want the same rule to apply to logs originating from a test or staging server.
You can limit your rule’s scope using four parameters: hosts, applications, tags, and events containing text. If you entered the derived field editor through a log event, these fields auto-populate with any parameters available in the event.
Limit Rule Scope Configuration

Parameter | Description
Host | The name(s) of one or more hosts that the logs will originate from. Note that host names are only available with syslog events.
Application | The name(s) of one or more applications or services that the logs will originate from. Note that application names are only available with syslog events.
Tag | The name(s) of one or more Loggly tags applied to the log event.
Events containing text | As an alternative to the above fields, you can filter on events that contain specific text.
Each parameter lets you enter one or more values for filtering. As you type, Loggly provides suggestions based on previous events. You can click on a suggestion to replace the current text with the suggested text. Clicking out of the text box or pressing Enter also converts the typed value into a valid target.
You can also use wildcards to expand the range of your derived fields. For example, you can limit a rule’s scope to logs originating from Docker or from any daemon running on the system. Note that Loggly displays Multiple Applications instead of 2 Applications, which it would display if you specified two distinct values. Also note that wildcards can only be used for hosts, applications, and tags.
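As a loose analogy for the wildcard behavior described above, the sketch below uses shell-style pattern matching to show how a scope of docker plus *d could cover Docker and any daemon whose name ends in d. The application names and matching semantics are illustrative assumptions, not Loggly’s exact wildcard rules.

```python
from fnmatch import fnmatch

# Illustrative only: shell-style matching stands in for Loggly's wildcard
# handling, and the application names below are assumptions.
scope_applications = ["docker", "*d"]       # Docker plus any name ending in "d"

for app in ["docker", "sshd", "crond", "nginx"]:
    in_scope = any(fnmatch(app, pattern) for pattern in scope_applications)
    print(app, "->", "in scope" if in_scope else "out of scope")
# docker, sshd, and crond are in scope; nginx is not.
```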
Testing Your Rule
Before you can save your rule, you need to test it against existing log events to verify that it correctly filters new events. The test runs against a set of sample logs displayed beneath the rule editor. If you want to specify your own set of test logs, click Modify Search to open search controls. Enter your parameters, and then click Search to populate the set with a new list of events.
After the new set is loaded, click Test Rule on Events to test your rule. If the test was successful and you are satisfied with the results, click Continue to move to the next step.
Saving Your Rule
The final step is to save your new rule. Here, you name your field rule, and specify the name and type of the field that contains the extracted data. For key-value rules, the field name is automatically set to the extracted key and the data type is always string. This option is unavailable for insert tag rules, because no new fields are created. When you are finished, click Save Rule.
After the rule is saved, Loggly inspects and automatically parses any new logs that match the requirements for your rule. When an incoming log matches the rule, the new fields appear in the Dynamic Field Explorer. The Dynamic Field Explorer also contains a new Derived Fields category. Clicking the category allows you to browse through all of your derived fields. You can also search on derived fields by entering derived.<fieldname> in the search bar.
Frequently Asked Questions About Derived Fields
1. How do I force my derived fields to be numeric?
When saving a RegEx or anchor rule, there is an option to define the field as double, long, or timestamp. Key-value rules can only be stored as strings.
2. How do I force a derived field to be treated as the searchable timestamp?
Before saving your rule, select timestamp from the drop-down menu next to the desired field. Loggly then automatically parses out the timestamp from the field as long as it is in one of the supported formats:
yyyy-MM-dd'T'HH:mm:ss
yyyy-MM-dd'T'HH:mm:ssZ
yyyy-MM-dd'T'HH:mm:ss.SSS
yyyy-MM-dd'T'HH:mm:ss.SSSZ
yyyy-MM-dd HH:mm:ss,SSS
yyyy-MM-dd HH:mm:ss,SS
yyyy-MM-dd HH:mm:ss,SSZ
yyyy-MM-dd HH:mm:ss,SSSZ
yyyy-MM-dd HH:mm:ss,SS z
yyyy-MM-dd HH:mm:ss,SSS z
yyyy-MM-dd'T'HH:mm:ss.SSSSSS
yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ
yyyy-MM-dd'T'HH:mm:ss.SSSSSSS
yyyy-MM-dd'T'HH:mm:ss.SSSSSSSZ
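These are Java-style date patterns. As a loose illustration only, the Python snippet below parses two of the supported shapes; Loggly’s own parser handles all of the formats listed above natively.

```python
from datetime import datetime

# Loose illustration only: the Java-style patterns above translate roughly to
# Python strptime directives, where %f accepts 1 to 6 fractional-second digits.
print(datetime.strptime("2015-10-02T18:34:03.390", "%Y-%m-%dT%H:%M:%S.%f"))
print(datetime.strptime("2015-10-02 18:34:03,390", "%Y-%m-%d %H:%M:%S,%f"))
```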
3. Do you recommend any tools for learning about RegExes?
There are many great resources online for learning and experimenting with regular expressions. SolarWinds recommends regexr.com. It offers a live testing sandbox as well as cheat sheets.
4. How do I search for the events on which a derived rule was applied?
When Loggly receives an event that matches a derived field rule, it automatically adds a logtype to the event in the following format:
logtype: name_of_the_derived_rule
You can then find all events to which a derived rule was applied by searching for:
logtype:"name_of_the_derived_rule"