Summary
LucidLink's Audit Trail feature creates Newline Delimited JSON (NDJSON) formatted log files that record filesystem operations on each client system. Although admins can inspect these logs directly, a better user experience is achieved by importing the log data into a searchable database accessed through a web UI.
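Because each NDJSON record is a standalone JSON object on its own line, the files are easy to inspect and count with standard shell tools. A minimal illustration (the field names below are generic placeholders, not actual LucidLink audit-trail fields):

```shell
# NDJSON: one JSON object per line. The field names here are generic
# placeholders, not actual LucidLink audit-trail fields.
printf '%s\n' \
  '{"event":"open","path":"/docs/a.txt"}' \
  '{"event":"write","path":"/docs/a.txt"}' > sample.ndjson

# Each line is one complete record, so the record count equals the line count.
wc -l < sample.ndjson
```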
This article demonstrates a containerized setup of integrated services that:
- Collect and format audit trail log data.
- Index the log data in a database.
- Provide a web UI service for searching and interacting with the indexed data.
We will use the following tools:
- Fluent Bit for log shipping.
- Elasticsearch for the database index.
- Kibana for the web UI.
This example illustrates a basic local host installation for development and testing purposes. For production deployments, please refer to the Elasticsearch Docker documentation for security configuration and best practices.
Docker and Docker Compose will simplify the configuration and deployment of these services on a local host. Docker Desktop provides both tools in a single installer, and there are open source alternatives, such as podman.io, that can also run Docker Compose files.
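Before proceeding, you can confirm that the Docker CLI and the Compose plugin are on your PATH. A quick sanity check, guarded so it reports cleanly on systems where Docker is not installed:

```shell
# Quick sanity check: is the Docker CLI (and Compose plugin) on the PATH?
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker compose version 2>/dev/null || echo "Compose plugin not found"
else
  echo "Docker CLI not found - install Docker Desktop or an alternative such as Podman"
fi
```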
Prerequisites
- LucidLink client installed and filespace connected as an administrator user
  - An administrator user account is needed for access to the /.lucid_audit directory.
  - Consult this KB article on enabling the Audit Trail feature.
- Docker and Docker Compose (or an alternative) installed
  - Docker Desktop provides easy installation of the required binaries.
- Repository files (available in the attached .zip): .env, docker-compose.yml, Dockerfile-fluent-bit, fs-audit-trail.yaml, json-parser.conf, imports.ndjson, import-saved-objects.sh, setup.sh, start_docker_compose.sh, stop_docker_compose.sh
Instructions
Once you have Docker Desktop installed and the Docker runtime is running, download the collection of deployment files attached to this article and unzip it into a directory on your system.
├── fs-audit-trail-es
│   ├── Dockerfile-fluent-bit
│   ├── docker-compose.yml
│   ├── fs-audit-trail.yaml
│   ├── json-parser.conf
│   ├── imports.ndjson
│   ├── import-saved-objects.sh
│   ├── setup.sh
│   ├── start_docker_compose.sh
│   └── stop_docker_compose.sh
First, make all the shell scripts executable. In a Terminal session, change into the 'fs-audit-trail-es' directory and run:
chmod +x *.sh
The deployment requires configuring the local mount point of your LucidLink filespace, which is done with the provided setup script. Run the script, passing your local mount point as an argument:
./setup.sh --fsmount /Volumes/production
If you are unsure of the filespace mount point, refer to the LucidApp or run the CLI command lucid status.
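Before running setup.sh, it can also help to confirm that the mount point actually exposes the audit directory. A small check, using the example mount point from above (substitute your own):

```shell
# Sanity-check the filespace mount point before running setup.sh.
# /Volumes/production is the example mount point used in this article.
FSMOUNT=/Volumes/production
if [ -d "$FSMOUNT/.lucid_audit" ]; then
  echo "Audit Trail directory found at $FSMOUNT/.lucid_audit"
else
  echo "No .lucid_audit under $FSMOUNT - check the mount point and that Audit Trail is enabled"
fi
```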
Once the setup script has created your .env file, you can start the services using:
./start_docker_compose.sh
The deployment will take a minute or so to complete on the first launch while the container images are downloaded and started. The script checks that Docker is properly installed and prints helpful messages if it is not.
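Once the start script returns, you can check the state of the three services with the standard Compose status command (guarded so it reports cleanly where Docker is unavailable):

```shell
# Show the state of the compose services (run from the fs-audit-trail-es
# directory). Guarded so it reports cleanly when Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker compose ps 2>/dev/null || echo "Could not query the stack - is it running, and are you in the fs-audit-trail-es directory?"
else
  echo "Docker CLI not found"
fi
```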
To stop all services, use:
./stop_docker_compose.sh
Accessing the Web Interface
Once the services are running, you can access Kibana at: http://localhost:5601
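If the page does not load immediately, Kibana may still be initializing. You can poll its standard status endpoint from the shell (the port is the one used above; the bounded loop avoids hanging if the stack is down):

```shell
# Poll Kibana's status API a few times before giving up; useful right after
# start-up while the stack is still initializing.
i=0
while [ "$i" -lt 5 ]; do
  if curl -fsS http://localhost:5601/api/status >/dev/null 2>&1; then
    echo "Kibana is responding"
    break
  fi
  i=$((i + 1))
  sleep 2
done
```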
A pre-configured layout for viewing audit trail data is imported automatically. In Kibana, go to Analytics > Discover > Saved Searches and click the 'audit-trail-layout' link under the Title heading to load it. Adjust column widths to preference, and consult the Kibana guide for search query and visualization topics.
Note: This setup uses a development configuration with security disabled. For production environments, it is strongly recommended to enable security features and follow Elastic's security best practices.
Related to
- audit-trail-es.zip (10 KB)