Sending audit trail logs to Amazon S3 using Fluent Bit

Summary

LucidLink's Audit Trail feature creates Newline Delimited JSON (NDJSON) formatted log files that record filesystem operations. This guide demonstrates how to ship these logs automatically to an Amazon S3 bucket using Fluent Bit.

The setup provides:

  1. Automated installation of required components
  2. Configuration of Fluent Bit for log shipping
  3. AWS credentials management
  4. Systemd service for automatic startup

Prerequisites

  • LucidLink filespace mounted
    • Administrator access required for /.lucid_audit directory
    • Audit Trail feature must be enabled for the filespace
  • Ubuntu host with root/sudo access
  • AWS S3 bucket for log storage
  • AWS credentials (optional)
    • Can be provided during setup or configured later
    • Requires S3 write permissions
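
As a sketch, and assuming the PUT object upload method described under Configuration Details below, a minimal IAM policy for the log-shipping credentials could look like this (my-audit-logs is a placeholder bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-audit-logs/*"
    }
  ]
}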

Instructions

Copy the deployment files to your Ubuntu host:

  fs-audit-trail-s3
  ├── fluent-bit-setup.yml
  └── setup-fluent-bit.sh
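
For example, assuming the files sit in a local fs-audit-trail-s3 directory and the host is reachable over SSH (hostname and paths below are illustrative):

# Copy the deployment files to the Ubuntu host
scp -r fs-audit-trail-s3 ubuntu@your-ubuntu-host:~/
# Make the setup script executable
ssh ubuntu@your-ubuntu-host 'chmod +x ~/fs-audit-trail-s3/setup-fluent-bit.sh'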

Run the setup script with your S3 bucket details:

./setup-fluent-bit.sh <s3-bucket-name> <s3-region> --mount-point <mount-point> [--access-key ACCESS_KEY --secret-key SECRET_KEY]

Required Arguments

  • s3-bucket-name: Name of your S3 bucket
  • s3-region: AWS region (e.g., us-east-1)
  • --mount-point: Path where your LucidLink filespace is mounted (e.g., /media/lucidlink)

Optional Arguments

  • --access-key: AWS access key ID
  • --secret-key: AWS secret access key

Note: If AWS credentials are not provided via the script arguments, the Fluent Bit service falls back to any AWS credentials already configured on the host (Fluent Bit's AWS plugins follow the standard AWS credential resolution chain, so environment variables, the shared credentials file, or an EC2 instance profile all work).
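
For reference, host-level credentials are commonly kept in the standard AWS shared credentials file; a minimal example with placeholder values:

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXX
aws_secret_access_key = xxxxxxxx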

Examples:

# With AWS credentials
./setup-fluent-bit.sh my-audit-logs us-east-1 --mount-point /media/lucidlink --access-key AKIAXXXXXXXX --secret-key xxxxxxxx

# Without AWS credentials (uses credentials already configured on the host)
./setup-fluent-bit.sh my-audit-logs us-east-1 --mount-point /media/lucidlink

Note: The mount point should be the root directory where your LucidLink filespace is mounted; the script automatically looks for audit logs in the .lucid_audit subdirectory beneath it. The examples here use /media/lucidlink. Adjust to your specific setup.
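
Before running the script, you can confirm that audit logs exist under the mount point (the directory may require administrator access, as noted in the prerequisites):

# Verify the audit directory exists under the mount point
ls /media/lucidlink/.lucid_audit
# Audit logs are matched four levels deep, as in the source path shown below
ls /media/lucidlink/.lucid_audit/*/*/*/*.log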

Consult this guide for creating a LucidLink filespace systemd service.

Configuration Details

Log Source

  • Source path: <mount-point>/.lucid_audit/*/*/*/*.log (e.g., /media/lucidlink/.lucid_audit/*/*/*/*.log)
  • Parser: JSON format
  • State DB: /fluent-bit/db/logs.db

These input settings are combined with the S3 output settings in the sketch after the next list.

S3 Output

  • S3 key format: /logs/%Y/%m/%d/%H/%M/%S-$UUID.log
  • Buffer directory: /fluent-bit/s3-buffer
  • Upload settings:
    • Total file size: 5MB
    • Upload timeout: 1 minute
    • Preserve data ordering: enabled
    • Using PUT object method
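
Taken together, the settings above correspond roughly to the following Fluent Bit configuration sketch (classic-format config assumed; the bucket, region, and mount point mirror the earlier examples, and all option names are standard tail-input and s3-output options):

[INPUT]
    Name                    tail
    Path                    /media/lucidlink/.lucid_audit/*/*/*/*.log
    Parser                  json
    DB                      /fluent-bit/db/logs.db

[OUTPUT]
    Name                    s3
    Match                   *
    bucket                  my-audit-logs
    region                  us-east-1
    s3_key_format           /logs/%Y/%m/%d/%H/%M/%S-$UUID.log
    store_dir               /fluent-bit/s3-buffer
    total_file_size         5M
    upload_timeout          1m
    use_put_object          On
    preserve_data_ordering  On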

Monitoring

Check service status:

systemctl status fluent-bit-s3

View logs in real-time:

journalctl -u fluent-bit-s3 -f
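
To confirm that objects are actually reaching the bucket, you can list the key prefix used by the S3 output (bucket name from the earlier examples):

aws s3 ls s3://my-audit-logs/logs/ --recursive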

Troubleshooting

If logs aren't appearing in S3:

  • Check Fluent Bit logs: journalctl -u fluent-bit-s3 -f
  • Verify AWS credentials are correct
  • Ensure the S3 bucket exists and is accessible
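
A quick way to check both credentials and bucket access from the host is with the AWS CLI (bucket name is a placeholder; remove the test object afterwards):

# Confirm which identity the credentials resolve to
aws sts get-caller-identity
# Confirm that identity can write to the bucket
echo test | aws s3 cp - s3://my-audit-logs/connectivity-test.txt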

If the service fails to start:

  • Check service status: systemctl status fluent-bit-s3
  • Verify configuration files exist and are readable
  • Check system logs: journalctl -xe
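
For example, to inspect the unit definition and confirm that the state and buffer directories from the configuration details exist (paths assume the defaults shown above):

# Show the unit file and any drop-ins
systemctl cat fluent-bit-s3
# Verify the state DB and S3 buffer directories
ls -ld /fluent-bit/db /fluent-bit/s3-buffer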
