Streaming Logs From ECS Fargate Containers to Elasticsearch and Viewing Them in Kibana

Step-by-step guide to ingesting Fargate container logs into Elasticsearch with Fluent Bit

Bukola Johnson
7 min read · Oct 1, 2020

Recently I started working on ECS to deploy some containers. I have always been a big fan of Kubernetes; working with it for the past two years made me love how seamless it is and how easy it makes deploying applications. A few weeks ago I had to work on ECS for the first time, and as always I loved the challenge: it was an opportunity to learn something new and add to my IT skills.

This article was born out of the fact that, after deploying my containers on ECS Fargate, I needed my container logs ingested into a self-managed Elasticsearch (ES) cluster. I spent some time figuring this out from bits and pieces of information, so I decided to put it all together in this article so that you can have a seamless deployment of your container logs to ES if you are using Fargate on ECS.

If you are using AWS ECS as an alternative to Kubernetes for your container orchestration, you might find, as I did, that you have fewer options when it comes to getting container logs out to a central place.

Please follow the steps below and let me know if you find them helpful.

PRE-REQUISITES

This guide assumes that you have the following already installed and configured (a couple of quick sanity checks follow the list):

  1. An Elasticsearch cluster, already deployed and accessible.
  2. Kibana
  3. AWS CLI installed and configured on your laptop.
  4. An AWS ECS cluster (Fargate)
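
If you want to sanity-check these prerequisites first, two quick commands help. The ES URL is a placeholder, and I am assuming your cluster listens on the default port 9200 without authentication; adjust as needed:

# Confirm the AWS CLI is configured with valid credentials
aws sts get-caller-identity

# Confirm Elasticsearch is reachable (should return cluster name and version info)
curl -s http://YOUR_ES_DOMAIN_URL:9200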

CONCEPT EXPLAINED

With the AWS for Fluent Bit container image, you can route logs to Amazon CloudWatch and other destinations such as Elasticsearch.
In this post I will show you how to get your logs into Elasticsearch using Fluent Bit (a log collector and forwarder) in an Amazon ECS Fargate cluster.

AWS FireLens
A log driver for ECS tasks that lets you deploy a Fluent Bit (or Fluentd) sidecar with the task and route logs to it. Using AWS FireLens, we can direct container logs to storage and analytics tools without modifying our deployment scripts. With a few configuration updates on AWS Fargate, you select the destination and optionally define filters, and FireLens sends the container logs where they are needed.

In the rest of this post, I am going to demonstrate using FireLens with Fluent Bit to collect container logs in Amazon ECS (Fargate) and forward them to a self-hosted Elasticsearch cluster.

STEPS

  1. Configure a task definition
    The first step is to create a task definition where we define the configuration of our containers.
{
  "family": "nginx-firelens-test",
  "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
  "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
  "cpu": "512",
  "memory": "1024",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "log_router",
      "image": "docker.io/amazon/aws-for-fluent-bit:latest",
      "essential": true,
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "enable-ecs-log-metadata": "true"
        }
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "firelens-container",
          "awslogs-region": "us-east-2",
          "awslogs-stream-prefix": "firelens"
        }
      }
    },
    {
      "name": "nginx-test",
      "image": "nginx",
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "essential": true,
      "environment": [],
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "secretOptions": [],
        "options": {
          "Name": "es",
          "Host": "YOUR_ES_DOMAIN_URL",
          "Port": "YOUR_ES_DOMAIN_URL_PORT",
          "Index": "INDEX_NAME",
          "Type": "TYPE"
        }
      }
    }
  ]
}

Explanation of some fields in the task definition above

  • family: The family name for the task definition; all revisions share it
  • taskRoleArn: An IAM role with the AmazonEC2ContainerServiceforEC2Role policy
  • executionRoleArn: The ecsTaskExecutionRole IAM role with the AmazonECSTaskExecutionRolePolicy
  • requiresCompatibilities: 'FARGATE', because we are using Fargate in this deployment
  • networkMode: The network mode for Fargate has to be 'awsvpc'
  • containerDefinitions: Defines our application image and a Fluent Bit sidecar image. We send the sidecar's own logs to CloudWatch (this is optional), while the application container uses awsfirelens as its log driver; FireLens translates that driver's options map into the matching Fluent Bit output configuration, which routes the application logs to our self-hosted Elasticsearch

Remember to replace the IAM role ARNs with your own taskRoleArn and executionRoleArn values.
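
Before registering the task definition, you can optionally confirm the roles exist. A quick sketch using the role names from the example above; substitute your own:

# Print the ARN of each role referenced in the task definition
aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.Arn'
aws iam get-role --role-name ecsInstanceRole --query 'Role.Arn'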

2. Save the file as nginx_task_def.json

3. Register the task definition using the AWS command line interface

aws ecs register-task-definition --cli-input-json file://nginx_task_def.json

4. Confirm the task definition deployed
Log in to your AWS console and browse to the ECS console to confirm that the task definition has been created.
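
If you prefer the command line to the console, the same check works with the AWS CLI (family name taken from the task definition above):

# Show the registered task definition's family, revision, and status
aws ecs describe-task-definition --task-definition nginx-firelens-test --query 'taskDefinition.[family,revision,status]'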

5. Create a service
Create a service in the cluster from the task definition we just registered

aws ecs create-service --cluster test-cluster --service-name nginx-service --task-definition nginx-firelens-test --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxxxxxxxxxxx],assignPublicIp=ENABLED}" --region=us-east-2

The command in step 5 creates a service called nginx-service in the test-cluster cluster, using the task definition we registered earlier (nginx-firelens-test), with a desired count of 1, the FARGATE launch type, and the stated subnet, security group, and public IP assignment.

The network configuration option is required because we are using 'awsvpc' as the network mode in the task definition.

6. Confirm the service is running in the cluster.
We now have one service running inside the test cluster
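
You can also confirm this from the CLI (cluster, service, and region names from the steps above):

# Check the service status and its running vs desired task count
aws ecs describe-services --cluster test-cluster --services nginx-service --region us-east-2 --query 'services[0].[serviceName,status,runningCount,desiredCount]'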

7. Confirm that the task is running

8. Check the containers' status
Click on the task and confirm that the two containers defined in the task definition are running.
At this stage we should have the nginx-test container and the log_router sidecar container
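
The same check from the CLI; the describe-tasks call lists each container with its status (replace TASK_ARN with an ARN from the first command's output):

# List the running task ARN(s) for the service
aws ecs list-tasks --cluster test-cluster --service-name nginx-service --region us-east-2

# Show each container's name and lastStatus for that task
aws ecs describe-tasks --cluster test-cluster --tasks TASK_ARN --region us-east-2 --query 'tasks[0].containers[*].[name,lastStatus]'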

9. Check the log_router logs in the Logs tab, or view them in CloudWatch, to be sure everything is working as expected
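
With AWS CLI v2 you can also tail the sidecar's CloudWatch log group directly (group name and region from the task definition above):

# Stream new log events from the FireLens sidecar's log group
aws logs tail firelens-container --follow --region us-east-2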

10. Confirm from a browser that the test app is accessible using the public IP assigned to the task. (Please note that you must have allowed the port in the security group attached to the service.)
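
The same check from the terminal (TASK_PUBLIC_IP is a placeholder for the public IP shown on the task's details page):

# An HTTP 200 with an nginx Server header means the app is reachable
curl -I http://TASK_PUBLIC_IP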

Confirm the application logs in Elasticsearch

Once all the steps above are completed, we will start getting our nginx application logs in Elasticsearch.

Now the two containers are up and running: the application container running nginx and the sidecar container running Fluent Bit.
Our goal is to get the application logs from nginx streamed into our self-managed Elasticsearch.

A fast way to confirm that your data reaches Elasticsearch is a Chrome extension called ElasticSearch Head. Add the extension to Chrome and open the endpoint of your Elasticsearch cluster.
Click on Indices and search for nginx-test, the index name defined in the log configuration options of the nginx container in our task definition file.
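
If you prefer the command line to a browser extension, curl gives the same confirmation (again assuming ES on the default port 9200 without authentication; nginx-test is the index name this walkthrough uses for INDEX_NAME):

# List indices and look for the one named in the task definition
curl -s 'http://YOUR_ES_DOMAIN_URL:9200/_cat/indices?v' | grep nginx-test

# Pull one sample document to inspect the log fields FireLens shipped
curl -s 'http://YOUR_ES_DOMAIN_URL:9200/nginx-test/_search?size=1&pretty'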

Congrats, our application logs have been ingested into Elasticsearch.

View the logs in Kibana

In the Kibana UI, click on Management > Kibana Index Patterns to create an index pattern for the nginx-test index we confirmed in Elasticsearch in the previous step.
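
If you would rather script this step, Kibana's saved objects API can create the index pattern too. A sketch assuming Kibana 7.x reachable at the placeholder KIBANA_URL; @timestamp is a common time field, so adjust it to your mapping:

# Create an index pattern matching the nginx-test index via Kibana's API
curl -X POST "http://KIBANA_URL/api/saved_objects/index-pattern" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"attributes":{"title":"nginx-test*","timeFieldName":"@timestamp"}}'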

Browse to the Discover pane in Kibana to view the logs of the nginx-test application

Yaay, we can now view our logs in Kibana!

Conclusion

If you have made it this far, congratulations on getting your Fargate container logs streamed to Elasticsearch!

If you have any questions, feel free to ask; I am happy to help. Also, if you spot any errors or have suggestions, please let me know!

Thank you for taking the time to read my post. You can also connect with me on Twitter @DevOpArena, where I share DevOps best practices and everything DevOps.
