Wednesday, October 19, 2022

Easy Kibana Nginx reverse proxy setup with Ansible

Motivation

An Amazon OpenSearch Service cluster instance runs inside a virtual private cloud. If you want to access the Kibana instance dedicated to that cluster, you have two options. The first is tunneling through an EC2 bastion host, which is relatively straightforward; one disadvantage of this approach is that you need to share your bastion host keys with clients. The second is a reverse proxy on the bastion host to the private OpenSearch Kibana. In this example, we are going to show how you can set up access to Kibana using an Nginx reverse proxy and provision it with Ansible.

This example represents a basic setup which can serve as a basis for future improvements. It does not include secure access configuration (certificates, authentication): it uses plain HTTP between the client and the proxy server, whereas for a production environment HTTPS is recommended in this context. It is an easier setup, but on the other hand also a less secure one.

Example

Inventory

First, you need to have an inventory defined with one variable (open_search_endpoint), which should point to the Kibana instance. Notice that we have two EC2 instances in our inventory; one can be for the production environment and the second for staging, for example.

[ec2s]
{host1} open_search_endpoint={open_search_endpoint1}
{host2} open_search_endpoint={open_search_endpoint2}
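
For illustration only, a filled-in inventory might look like this (the hostnames and endpoint values below are invented):

[ec2s]
ec2-prod.example.com open_search_endpoint=vpc-prod-domain-abc123.eu-west-1.es.amazonaws.com
ec2-staging.example.com open_search_endpoint=vpc-staging-domain-def456.eu-west-1.es.amazonaws.com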

Main playbook

Next, we are going to define the main Ansible playbook, which is pretty straightforward. For it to work, you need to have the configuration files (default.conf and nginx.conf) available at the relative paths used in the copy tasks.

# Can be run multiple times (idempotent), main playbook
---
- hosts: ec2s
  # The tasks below touch system packages and files under /etc,
  # so escalate privileges for the whole play
  become: yes
  tasks:
    - name: Update all packages
      yum:
        name: "*"
        state: latest
        update_only: yes

    - name: Enable nginx for amazon linux 2
      shell: "amazon-linux-extras enable nginx1.12"

    - name: Install nginx
      yum:
        name: nginx
        state: latest

    - name: Delete existing default site config
      file:
        path: "/etc/nginx/conf.d/default.conf"
        state: absent

    - name: Start nginx
      service:
        name: nginx
        state: started
        enabled: yes

    - name: Copy website default config
      copy:
        src: ../default.conf
        dest: /etc/nginx/conf.d/default.conf
        owner: root
        group: root
        mode: 0644

    - name: Copy nginx default config
      copy:
        src: ../nginx.conf
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: 0644

    - name: Set correct open search endpoint
      lineinfile:
        dest: /etc/nginx/conf.d/default.conf
        regexp: open_search_endpoint
        line: "        proxy_pass https://{{ open_search_endpoint }};"

    - name: Restart nginx
      service:
        name: nginx
        state: restarted
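
With the inventory and the playbook saved (the gist names above suggest inventory and main.yml; adjust to your layout), a run could look like this:

ansible-playbook -i inventory main.yml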

Default configuration

For default.conf, we are using a simple proxy pass; the placeholder in it is rewritten with the real endpoint by the lineinfile task above. If you want a more secure connection, this is where you would configure HTTPS; a sketch of that follows below the config.

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;

    # Send the root URL straight to the OpenSearch Dashboards app
    location = / {
        rewrite ^ /_dashboards/ redirect;
    }

    location / {
        # Placeholder: the lineinfile task in the playbook above matches this
        # line and rewrites it to the real proxy_pass URL
        proxy_pass open_search_endpoint;
        proxy_set_header Authorization "";
        proxy_hide_header Authorization;
    }
}
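
The setup above stops at plain HTTP. As a rough sketch of the HTTPS variant mentioned earlier (not part of the original setup; the server name and certificate paths are placeholders, and port 443 would also need to be open on the host):

server {
    listen 443 ssl default_server;
    server_name kibana.example.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location = / {
        rewrite ^ /_dashboards/ redirect;
    }

    location / {
        # Same placeholder trick as in the HTTP config above
        proxy_pass open_search_endpoint;
        proxy_set_header Authorization "";
        proxy_hide_header Authorization;
    }
}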

Nginx configuration

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

Thursday, August 4, 2022

Serverless React, AWS Lambda and API Gateway example

Here is a small Terraform example showing how you can create a serverless React app that uses AWS API Gateway to call an AWS Lambda function.

API Gateway proxies the GET request to the Lambda, and the Lambda simply returns "Hello World".
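
Once deployed, a quick smoke test could look like this (the URL is a placeholder; the real invoke URL comes from the deployed stage, e.g. via the Terraform output suggested further down):

curl https://<api-id>.execute-api.<region>.amazonaws.com/<stage-name>/hello-world
# {"message":"Hello, World!"}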


Below is the actual Terraform infrastructure definition. This definition includes the CORS configuration.

locals {
  function_name = "hello_world"
  handler       = "index.handler"
  runtime       = "nodejs14.x"
  zip_file      = "hello_world.zip"
}

data "archive_file" "zip" {
  source_dir  = "${path.module}/lambdas/hello-world"
  type        = "zip"
  output_path = local.zip_file
}

resource "aws_lambda_function" "this" {
  description = "${var.config.team}-lambda-stream-es"

  // Function parameters we defined at the beginning
  function_name = local.function_name
  handler       = local.handler
  runtime       = local.runtime
  timeout       = 15

  // Upload the .zip file Terraform created to AWS
  filename         = local.zip_file
  source_code_hash = data.archive_file.zip.output_base64sha256

  // Connect our IAM resource to our lambda function in AWS
  role = var.config.es_lambda_role_arn
}

resource "aws_apigatewayv2_api" "this" {
  name          = "${var.config.team}-${var.config.env}-lambda-gw"
  protocol_type = "HTTP"

  cors_configuration {
    allow_origins = ["https://www.first.com", "https://www.second.com"]
    allow_methods = ["GET"]
    allow_headers = ["content-type"]
    max_age       = 300
  }
}

resource "aws_apigatewayv2_stage" "this" {
  api_id      = aws_apigatewayv2_api.this.id
  name        = "${var.config.team}-${var.config.env}-lambda-gw-stage"
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.this.arn
    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      resourcePath            = "$context.resourcePath"
      routeKey                = "$context.routeKey"
      status                  = "$context.status"
      responseLength          = "$context.responseLength"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }
}

resource "aws_apigatewayv2_integration" "hello_world_integration" {
  api_id             = aws_apigatewayv2_api.this.id
  integration_uri    = aws_lambda_function.this.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "hello_world_route" {
  api_id    = aws_apigatewayv2_api.this.id
  route_key = "GET /hello-world"
  target    = "integrations/${aws_apigatewayv2_integration.hello_world_integration.id}"
}

resource "aws_cloudwatch_log_group" "this" {
  name              = "/aws/api_gw/${aws_apigatewayv2_api.this.name}"
  retention_in_days = 30
}

resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.this.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.this.execution_arn}/*/*"
}
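
Not part of the original gist, but handy for wiring up a client: an output exposing the endpoint served by the route above (hello_world_url is a made-up name):

output "hello_world_url" {
  value = "${aws_apigatewayv2_stage.this.invoke_url}/hello-world"
}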


The Lambda that will return "Hello World":

// AWS Lambda handler that simply returns "Hello, World!"
module.exports.handler = async (event) => {
  console.log('Event: ', event);

  const responseMessage = 'Hello, World!';

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: responseMessage,
    }),
  };
};
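
The React side is not shown in the original post. A minimal sketch of a component calling the route above (API_URL is a placeholder; in practice you would use the stage invoke URL, e.g. from the Terraform output suggested earlier):

// Hypothetical React component; replace API_URL with the real invoke URL
import { useEffect, useState } from 'react';

const API_URL = 'https://<api-id>.execute-api.<region>.amazonaws.com/<stage-name>/hello-world';

export default function HelloWorld() {
  const [message, setMessage] = useState('');

  useEffect(() => {
    fetch(API_URL)
      .then((res) => res.json())
      .then((data) => setMessage(data.message))
      .catch((err) => console.error('Request failed: ', err));
  }, []);

  return <p>{message}</p>;
}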

Thursday, April 28, 2022

ALB rule based routing with terraform

In 2017, Amazon added support for host-based routing on the Application Load Balancer. Content-based (path-based) routing was supported before, so the Application Load Balancer now supports both host- and path-based rules. You can combine both routing types to create complex rules for your services. If you are looking for how to combine both routing types, see this Stack Overflow answer: https://stackoverflow.com/a/46304567; a sketch follows below.
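
As a hedged sketch of what combining both condition types could look like in Terraform (the hostname, path pattern, and priority below are placeholder values; the resource follows the same aws_lb_listener_rule shape used later in this post):

# Both condition blocks must match (logical AND)
resource "aws_lb_listener_rule" "host_and_path" {
  listener_arn = aws_lb_listener.this.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.this.arn
  }

  condition {
    host_header {
      values = ["api.example.com"]
    }
  }

  condition {
    path_pattern {
      values = ["/v1/*"]
    }
  }
}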

In this example, we are going to show how you can use a single Application Load Balancer (ALB) for separate ECS services. Imagine you have 30 ECS services running on Fargate and want 10 of those to be exposed. Using a separate ALB for every single ECS service would be very inefficient and expensive. To avoid this, AWS supports routing traffic on an ALB to target groups (an ECS service being one example of a target group) based on rules. This way, you can have one ALB which routes traffic to the right target group (in our example, an ECS service).

Below is one example of how you can do that, with comments about the specific details.

# First create a target group for each of your services; this target group points
# to a Docker ECS service running on AWS Fargate
resource "aws_lb_target_group" "this" {
  name        = "${local.this_name}-alb-tg"
  port        = var.app_port
  protocol    = "HTTP"
  vpc_id      = var.config.vpc_id
  target_type = "ip"

  health_check {
    healthy_threshold   = "3"
    interval            = "120"
    protocol            = "HTTP"
    matcher             = "200-299"
    timeout             = "119"
    path                = var.health_check_path
    unhealthy_threshold = "2"
  }

  tags = {
    Team        = var.config.team
    Environment = var.config.env
    Application = var.app_name
  }
}

# Then, create the ALB. This is an internal ALB.
# Hint: always use tags. It's much easier to do cost management and resource tracking.
resource "aws_lb" "this" {
  name               = "${local.wd_app_name}-alb"
  internal           = true
  subnets            = var.config.lb_subnets_ids
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lb.id]
  idle_timeout       = 300

  tags = {
    Team        = var.config.team
    Environment = var.config.env
    Application = local.wd_app_name
  }
}

# This is how you route traffic to a specific target group based on the host header.
# For each of your services, you need to create specific rules for each target group.
# In this example, we have only one target group and one rule.
resource "aws_lb_listener_rule" "this" {
  listener_arn = aws_lb_listener.this.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.this.arn
  }

  condition {
    host_header {
      values = var.aws_lb_listener_rules
    }
  }
}

# This is the security group for the ALB.
# Be aware, this is a very permissive security group for an internal ALB.
# Tailor it to your needs.
resource "aws_security_group" "lb" {
  name        = "${local.this_name}-lb-sg"
  description = "Access to the Application Load Balancer (ALB)"
  vpc_id      = var.config.vpc_id

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"] # All IP ranges
  }

  egress { # All traffic is allowed
    protocol    = "-1" # -1 is equivalent to "All"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${local.this_name}-lb-sg"
    Team        = var.config.team
    Environment = var.config.env
    Application = var.app_name
  }
}
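
One thing the snippet above references but never defines is aws_lb_listener.this. A minimal sketch of what that listener could look like, assuming an ACM certificate ARN is available (var.config.certificate_arn is an assumed variable, not from the original):

resource "aws_lb_listener" "this" {
  load_balancer_arn = aws_lb.this.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.config.certificate_arn # assumption, not in the original

  # Fallback when no listener rule matches
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.this.arn
  }
}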