Add logs from GCP
You can configure SolarWinds Observability SaaS to collect and monitor Google Cloud Platform (GCP) logs by deploying a log forwarding pipeline in your GCP environment. This pipeline captures selected GCP logs from Cloud Logging and forwards them securely to SolarWinds Observability using an OTLP-compatible endpoint. The deployment is automated with Terraform and relies on native GCP services.
How GCP log monitoring works
This integration deploys an event-driven, buffered pipeline to capture GCP logs and forward them to SolarWinds Observability SaaS using OTLP/JSON over HTTPS.
To ensure high reliability and cost efficiency, this solution uses a buffered pipeline pattern: instead of triggering a function for every individual log entry, which can be costly during log spikes, data is batched before processing.
- Ingestion: A `google_logging_project_sink` captures logs based on your defined filter and routes them to a Pub/Sub topic.
- Buffering (cost optimization): Pub/Sub streams data into Google Cloud Storage (GCS). The system finalizes a new .json file in the bucket when the file reaches 50 MB or when 10 minutes elapse, whichever comes first.
- Processing: When a batch is ready, Eventarc detects the `object.v1.finalized` event and triggers the Go-based Cloud Function.
- Transformation and forwarding: The function reads the GCS batch, transforms entries into OpenTelemetry Protocol (OTLP) format, GZIPs the payload for egress efficiency, and sends it to the SolarWinds HTTPS endpoint.
Infrastructure components
- Cloud Logging sink: Filters and exports selected GCP logs.
- Pub/Sub topic: Acts as the entry point for log exports.
- Google Cloud Storage (GCS): High-durability buffer that batches logs to reduce function execution costs.
- Eventarc: Listens for GCS object finalization events to trigger the forwarder.
- Cloud Functions (Gen 2): A Go-based service that handles OTLP mapping and delivery.
- IAM bindings: Grant the permissions required by Cloud Build, Eventarc, Pub/Sub, and Cloud Run.
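For orientation, the ingestion side of these components roughly corresponds to Terraform resources like the following. This is a simplified sketch, not the module's actual source; variable names mirror the `terraform.tfvars` settings described later, and the filter value is illustrative:

```hcl
# Pub/Sub topic that receives exported log entries
resource "google_pubsub_topic" "logs" {
  name = var.topic_name
}

# Log sink that routes matching entries to the topic
resource "google_logging_project_sink" "solarwinds" {
  name        = var.sink_name
  destination = "pubsub.googleapis.com/${google_pubsub_topic.logs.id}"
  filter      = "resource.type = \"gce_instance\"" # illustrative filter
}
```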
Data governance and security
- Minimal retention: GCS buckets are configured with a 2-day lifecycle management policy to automatically delete processed logs and minimize storage costs.
- Internal ingress: The Cloud Function's ingress is set to `ALLOW_INTERNAL_ONLY` to ensure it can only be triggered by the internal Eventarc service.
- Compression: All data sent to SolarWinds Observability SaaS is GZIP-compressed, significantly reducing network egress charges.
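The 2-day retention policy corresponds to a standard GCS lifecycle rule. A minimal Terraform sketch (illustrative, not the module's actual source; `var.bucket_name` and `var.region` mirror the configuration settings below):

```hcl
# Buffer bucket; objects are deleted 2 days after creation
resource "google_storage_bucket" "buffer" {
  name     = var.bucket_name
  location = var.region

  lifecycle_rule {
    condition {
      age = 2 # days since object creation
    }
    action {
      type = "Delete"
    }
  }
}
```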
What data is collected
You control which logs are forwarded by configuring the Cloud Logging sink filter. Examples include:
- Compute Engine VM lifecycle events
- GCP service audit logs
- Project-level operational logs
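As an illustration, a sink filter covering the first two examples above could be written in the Cloud Logging query language as follows (values are illustrative; adjust to your environment):

```
resource.type = "gce_instance"
OR logName : "cloudaudit.googleapis.com"
```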
Prerequisites
Before you begin, ensure you have:
- A GCP project with billing enabled
- An account with a Project Editor or Owner role to deploy the pipeline. If you are using a custom role, it must include the following permissions:
  - `roles/logging.configWriter` to create the sink
  - `roles/pubsub.admin` to create topics and subscriptions
  - `roles/cloudfunctions.developer` and `roles/run.admin` for Gen 2 Cloud Functions
  - `roles/resourcemanager.projectIamAdmin` to assign the service agent roles
- Service account requirements: The deployment automatically configures permissions for the following system-managed service accounts using your GCP project number:
  - Pub/Sub service agent: Authorized to write to GCS
  - Eventarc service agent: Authorized to trigger the function
  - Cloud Build service account: Authorized to write to Artifact Registry
- Terraform 1.6.0 or later
- Google Cloud SDK (gcloud) installed
- A SolarWinds Observability OTLP ingestion token
Authenticate to Google Cloud
Terraform uses Application Default Credentials (ADC); no service account keys are required.

1. Clone the SolarWinds GCP poller repository:
   `git clone https://github.com/solarwinds/cloud-observability-integration`
2. Navigate to the log forwarding directory:
   `cd gcp/gcp-log-forwarder/`
3. Install the Google Cloud SDK. For more information, see Install the Google Cloud CLI.
4. Authenticate using ADC:
   `gcloud auth application-default login`
5. Set your active GCP project:
   `gcloud config set project YOUR_PROJECT_ID`
Configure log forwarding
Edit the `terraform.tfvars` file to match your environment:

```hcl
project_id    = "my-gcp-project"
region        = "us-central1"
topic_name    = "solarwinds-gcp-events"
sink_name     = "solarwinds-gcp-events-sink"
function_name = "ForwardLogs"
otlp_endpoint = "https://otel.collector.na-01.cloud.solarwinds.com:443/v1/logs"
api_token     = "SOLARWINDS_OTEL_INGESTION_TOKEN"
bucket_name   = "SET BUCKET NAME"
```
Configuration options
| Setting | Description |
|---|---|
| `project_id` | GCP project ID |
| `region` | GCP region for deployed resources |
| `topic_name` | Pub/Sub topic receiving logs |
| `sink_name` | Cloud Logging sink name |
| `function_name` | Cloud Function name |
| `otlp_endpoint` | SolarWinds OTLP ingestion endpoint |
| `api_token` | SolarWinds ingestion token |
| `bucket_name` | GCS buffer bucket name; must be globally unique |
Deploy the integration
From the repository directory, run:
```shell
terraform init -reconfigure
terraform plan
terraform apply
```
Terraform will provision all required GCP resources automatically.
Verify the deployment
After deployment, Terraform prints the following outputs:
- The Pub/Sub topic ID receiving log entries
- The Logging sink writer identity
- The Cloud Run service URI backing the Cloud Function
To verify log ingestion, generate activity in GCP that matches your logging filter, and confirm logs appear in the SolarWinds Observability Logs Explorer.
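One quick way to generate activity (assuming your sink filter matches the log name and severity used here; both are illustrative) is to write a test entry with the gcloud CLI:

```shell
# Write a test log entry; adjust the log name and severity to match your sink filter
gcloud logging write solarwinds-test "Test entry for SolarWinds forwarding" --severity=INFO
```

Because of the GCS buffering stage, entries can take up to the 10-minute batch window (plus processing time) to appear in the Logs Explorer.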
Remove GCP log monitoring
To remove all deployed resources, run `terraform destroy`.
Notes
- The Cloud Function uses internal-only ingress.
- Invocation occurs through Eventarc.
- Log volume depends on your logging filter.
- Updating `terraform.tfvars` redeploys the Cloud Function.