Elastic Explained: How To Create a Cluster with Docker Compose

Overview

In this guide, we’ll walk through setting up and running an externally accessible three-node Elastic cluster using Docker Compose on Ubuntu Linux 22.04 that’s suitable for a home lab or developer / test environment.

Our Elastic deployment will include the following:

  • Elasticsearch (three node cluster)
  • Kibana
  • Fleet Server

To do this, we’ll use a forked and customized version of the fantastic Elastic Container Project and walk through these steps:

  1. Install Docker to download and deploy our containers.
  2. Git clone our custom repo to your server.
  3. Create an environment (.env) variables file with specific settings.
  4. Create a kibana.yml file with custom configurations.
  5. Create a docker-compose.yml file that enables us to configure and run multiple containers.
  6. Create a Linux service to easily start and stop our containers.
  7. Troubleshoot issues by reviewing logs.
  8. Upgrade to a new Elastic release.

If you aren’t interested in or don’t need the customizations included in my forked version, feel free to use the Elastic Container Project as is. It’s great for quickly standing up a locally accessible cluster and learning the basics of Elastic and Docker.

Note: Do not use this guide to deploy a containerized Elastic cluster for use in a production environment, as that is beyond the scope of this guide.

 

Install Docker

You may already have Docker installed on your server.  If not, install Docker and all the required components by following Docker’s official installation guide.  Additionally, install the following packages:

sudo apt-get install jq git curl
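Once everything is installed, a quick sanity check confirms the tools are on your PATH (version numbers will vary; depending on how Compose was installed, the command may be `docker compose` or `docker-compose`):

```shell
# Confirm Docker and the supporting packages are available
docker --version
docker compose version   # or: docker-compose --version
jq --version
git --version
curl --version
```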

 

Git clone the repo onto your server

Change to the directory where you want to store the Docker container files (e.g., /data/docker-compose), then clone the repo onto your server.

git clone https://github.com/eric-ooi/elastic-container.git

 

Edit Docker environment variables file (.env)

  1. Change into the elastic-container directory.
    cd elastic-container
  2. Edit the .env file with your favorite text editor and modify the included variables as needed.  The defaults should work for most use cases, but the following is a list of variables that should be modified:
    • LOCAL_KBN_URL
      This is the URL you will use to access Kibana on your network.  This can be an IP address or hostname.  It should follow this format: https://<server-dns-name-or-ip>:5601.
    • LOCAL_ES_URL
      This is the URL you will use to access Elasticsearch on your network.  This can be an IP address or hostname.  It should follow this format: https://<server-dns-name-or-ip>:9200.
    • ELASTIC_PASSWORD
      You must change this from the default or the cluster will not start.
    • KIBANA_PASSWORD
      You must change this from the default or the cluster will not start.
    • STACK_VERSION
      Set the desired Elastic stack release version you want to use.
    • ELASTICSEARCH_MEM_LIMIT
      Set the maximum amount of memory (in bytes) each Elastic node can use. The default is 12 GB.
    • KIBANA_MEM_LIMIT
      Set the maximum amount of memory (in bytes) Kibana can use. The default is 4 GB.
    • FLEET_MEM_LIMIT
      Set the maximum amount of memory (in bytes) Fleet Server can use. The default is 2 GB.
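Putting the settings above together, a completed .env might look like this (the hostname, passwords, and version are placeholder values; substitute your own):

```shell
# Hypothetical example values -- replace with your own
LOCAL_KBN_URL=https://elastic.example.lan:5601
LOCAL_ES_URL=https://elastic.example.lan:9200
ELASTIC_PASSWORD=ChangeMeToAStrongPassword
KIBANA_PASSWORD=ChangeMeToAnotherStrongPassword
STACK_VERSION=8.12.2
ELASTICSEARCH_MEM_LIMIT=12884901888   # 12 GB in bytes
KIBANA_MEM_LIMIT=4294967296           # 4 GB in bytes
FLEET_MEM_LIMIT=2147483648            # 2 GB in bytes
```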

 

Edit kibana.yml file

Edit the kibana.yml file with your favorite text editor and modify the settings as needed.  The defaults should work for most use cases, but the following is a list of settings that should be modified:

  • xpack.encryptedSavedObjects.encryptionKey
    This is the encryption key Kibana will use to secure saved objects and should be at least 32 characters.
  • server.publicBaseUrl
    This is the URL you will use to access Kibana on your network and cannot end in a slash (/).  This can be an IP address or hostname.  It should follow this format: https://<server-dns-name-or-ip>:5601.
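As an example, these two settings might end up looking like this (the key and hostname are placeholders; generate your own random key):

```yaml
# kibana.yml (excerpt) -- placeholder values, use your own
xpack.encryptedSavedObjects.encryptionKey: "a-random-string-of-at-least-32-chars"
server.publicBaseUrl: "https://elastic.example.lan:5601"
```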

 

Edit docker-compose.yml file

Edit the docker-compose.yml file with your favorite text editor and modify the configuration as needed.  The defaults should work for most use cases, but the following is a list of configurations that should be modified:

  • If you intend to access your Elastic instance from outside the system you’re installing Elastic on (e.g., accessing Kibana externally or deploying Elastic Agents), you’ll want to add your custom DNS hostnames or IP addresses to the “Creating certs” section.  You can view the existing entries and add your own by following the same format.  To find the section to edit, search for server-dns-name under the elasticsearch-security-setup container configuration, where the certificates are generated.
    ...
    if [ ! -f config/certs/certs.zip ]; then
      echo "Creating certs";
      echo -ne \
      "instances:\n"\
      "  - name: es01\n"\
      "    dns:\n"\
      "      - es01\n"\
      "      - server-dns-name\n"\
      "      - localhost\n"\
      "    ip:\n"\
      "      - 127.0.0.1\n"\
    ...
  • You’ll also want to edit the following variables in the fleet-server container configuration:
      • FLEET_URL=https://<server-dns-name-or-ip>:8220
      • FLEET_SERVER_ELASTICSEARCH_HOST=https://<server-dns-name-or-ip>:9200
      • KIBANA_HOST=https://<server-dns-name-or-ip>:5601
    ...
          environment:
            - FLEET_ENROLL=1
            - FLEET_SERVER_POLICY_ID=fleet-server-policy
            - FLEET_SERVER_ENABLE=1
            - FLEET_URL=https://<server-dns-name-or-ip>:8220
            - FLEET_SERVER_ELASTICSEARCH_HOST=https://<server-dns-name-or-ip>:9200
            - FLEET_CA=/certs/ca/ca.crt
            - FLEET_SERVER_CERT=/certs/fleet-server/fleet-server.crt
            - FLEET_SERVER_CERT_KEY=/certs/fleet-server/fleet-server.key
            - FLEET_SERVER_ELASTICSEARCH_CA=/certs/ca/ca.crt
            - KIBANA_FLEET_USERNAME=elastic
            - KIBANA_FLEET_PASSWORD=${ELASTIC_PASSWORD}
            - KIBANA_FLEET_SETUP=1
            - KIBANA_HOST=https://<server-dns-name-or-ip>:5601
            - KIBANA_FLEET_CA=/certs/ca/ca.crt
    ...
  • If you plan to use Elastic snapshots for backing up your cluster, create a local path on your server and map it to /usr/share/elasticsearch/backup.  In the volumes section of each Elasticsearch node, add the following entry, replacing /path/to/your/elastic-snaps with the local directory on your system: /path/to/your/elastic-snaps:/usr/share/elasticsearch/backup
    ...
      es01:
        depends_on:
          setup:
            condition: service_healthy
        image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
        container_name: es01
        volumes:
          - certs:/usr/share/elasticsearch/config/certs
          - logs:/usr/share/elasticsearch/logs
          - ./log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:Z
          - ./gc.options:/usr/share/elasticsearch/config/jvm.options.d/gc.options:Z
          - esdata01:/usr/share/elasticsearch/data
          - /path/to/your/elastic-snaps:/usr/share/elasticsearch/backup
    ...
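For example, adding a hypothetical hostname elastic.example.lan and LAN IP 192.168.1.50 to the es01 certificate entry would look like this:

```shell
# Excerpt from the "Creating certs" section -- elastic.example.lan and
# 192.168.1.50 are placeholders for your own hostname and IP
"instances:\n"\
"  - name: es01\n"\
"    dns:\n"\
"      - es01\n"\
"      - server-dns-name\n"\
"      - elastic.example.lan\n"\
"      - localhost\n"\
"    ip:\n"\
"      - 127.0.0.1\n"\
"      - 192.168.1.50\n"\
```

Repeat the same dns/ip additions for each of the other node entries so every node’s certificate covers your external name.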

 

Edit elastic-container.service file

Edit the elastic-container.service file with your favorite text editor and modify the following variables, replacing /path/to/your/elastic-container with the local directory path where your container files are stored:

  • WorkingDirectory=/path/to/your/elastic-container
  • ExecStart=-/path/to/your/elastic-container/elastic-container.sh start
  • ExecStop=-/path/to/your/elastic-container/elastic-container.sh stop

So assuming your elastic-container files are located in /data/docker-compose/elastic-container, your service file would look something like:

[Unit]
Description=Elastic Container
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/data/docker-compose/elastic-container
ExecStart=-/data/docker-compose/elastic-container/elastic-container.sh start
ExecStop=-/data/docker-compose/elastic-container/elastic-container.sh stop
[Install]
WantedBy=multi-user.target

Once you’re finished, copy this file to /etc/systemd/system/elastic-container.service.

 

Start your Elastic container and set it to start at system boot

  1. Reload service files.
    sudo systemctl daemon-reload
  2. Start the elastic-container service.
    sudo systemctl start elastic-container
  3. Enable the service to start at boot.
    sudo systemctl enable elastic-container

 

Upgrade to a new Elastic version

In the world of Docker containers, there’s no concept of “upgrading” an existing container.  Instead, we’ll create a new container running new images of the Elastic stack that will access the persistently stored Elastic data in our Docker volumes.

  1. Shut down the elastic-container service.
    sudo systemctl stop elastic-container
  2. Navigate to where you stored your elastic-container files (e.g., /opt/docker/elastic-container), edit the .env file in your favorite text editor, and update the STACK_VERSION variable (e.g., STACK_VERSION=8.12.2) to the new Elastic version.  Save the file when you’re done.
    # Version of Elastic products
    STACK_VERSION=8.12.2
  3. Start the elastic-container service again.  It will take longer than normal as new images must be downloaded and prepared.
    sudo systemctl start elastic-container
  4. Once the service is back up and running, you can remove the old Elastic images.  If the Elastic images are the ONLY images you have in Docker, you can do this quickly with the following command.  Do not run it if you have other images that are not currently in use but that you’d like to keep.
    docker image prune -a
  5. Update any Elastic Agents you may have to the new version through the Fleet UI.
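The version bump in step 2 can also be done non-interactively. A quick sketch, run from your elastic-container directory (8.12.2 is just an example release):

```shell
# Update STACK_VERSION in .env in place, then confirm the change
sed -i 's/^STACK_VERSION=.*/STACK_VERSION=8.12.2/' .env
grep '^STACK_VERSION=' .env
```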

 

Troubleshooting

If you run into issues starting or running your Elastic containers, you can check a couple of logs to determine what the problem might be.

View the docker-compose logs.  The -f flag will “tail” or follow the logs as they come in.

docker-compose logs -f

View the elastic-container.service logs.  The -f flag will “tail” or follow the logs as they come in.

journalctl -u elastic-container.service -f
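If the containers are running but Kibana still won’t load, checking cluster health directly can help narrow things down. A sketch, assuming the defaults from this guide (self-signed certs, the elastic superuser, and Elasticsearch on port 9200 of localhost):

```shell
# Query Elasticsearch cluster health and pull out the status field.
# -k skips TLS verification (self-signed cert); ELASTIC_PASSWORD should
# match the value in your .env file.
curl -sk -u "elastic:${ELASTIC_PASSWORD}" \
  "https://localhost:9200/_cluster/health" | jq -r '.status'
```

A status of green means all shards are allocated; yellow or red warrants a closer look at the docker-compose logs above.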

 

What’s next

That’s it!  If you’re new to Docker it might be challenging at first to get things going, but the configuration files are generally straightforward and there is plenty of Docker documentation coupled with a strong community that should help you figure it out.  Once you’re up and running, you can start deploying Elastic Agents to your Windows and macOS systems and configuring integrations to collect and analyze your data from your Zeek sensor and your Microsoft 365 environments.

To be clear, this guide is meant for quickly setting up a home lab or developer / test environment.  Do not use it for a production environment.
