Install RabbitMQ and Minimal Erlang on Amazon Linux

The RabbitMQ website provides instructions for installing the service on CentOS and Ubuntu Elastic Compute Cloud (EC2) instances. While the Amazon Linux distro uses CentOS as a base, it differs enough to make installing RabbitMQ tricky for system admins. I have identified and addressed the challenges here, and provide instructions for installing RabbitMQ on Amazon Linux without difficulty.

  1. Determine the init system
  2. Set up a simple RPM build environment
  3. Build and install the minimal Erlang runtime
  4. Install and configure RabbitMQ
  5. Create and deploy a RabbitMQ Security Group

1. Determine the init system

I can boil all of the confusion down to one fact: CentOS changed its init system between CentOS 6 and CentOS 7. If you are not a rabid CentOS follower, you would not know this, and would not realize that this one change is the root cause of the installation pain. Amazon Linux currently derives from CentOS 6 and therefore uses the original sysvinit system, while the current CentOS 7 runs systemd. You do not need to know the difference between the two, only which one Amazon Linux uses.

Run the following command. It checks whether PID 1 is /sbin/init (sysvinit) or systemd; the sed at the end suppresses the PID that pidof prints, leaving only the answer.

[ec2-user@ip-172-31-4-69 ~]$ if (pidof /sbin/init) ; then echo "sysvinit"; elif (pidof systemd); then echo "systemd"; fi | sed -n '1!p'
sysvinit
[ec2-user@ip-172-31-4-69 ~]$
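If you prefer a check that does not depend on pidof, systemd creates the directory /run/systemd/system at boot, so testing for that directory works as well. This is an alternative sketch, not the command I used above:

```shell
# Print "systemd" if /run/systemd/system exists (systemd creates it at
# boot); otherwise assume the older sysvinit system.
if [ -d /run/systemd/system ]; then
  echo "systemd"
else
  echo "sysvinit"
fi
```

On a May 2017 Amazon Linux instance, this should print sysvinit.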

 

As of May 2017, Amazon Linux uses sysvinit. To accommodate sysvinit, you need to download RPMs built for CentOS 6 (i.e. RPMs with el6 in the name).

2. Set up a simple RPM build environment

First, install the tools you need to build an RPM.

[ec2-user@ip-172-31-4-69 ~]$ sudo yum -y install rpm-build redhat-rpm-config
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                              | 2.1 kB  00:00:00
amzn-updates                                                                           | 2.3 kB  00:00:00
Resolving Dependencies

...

Installed:
  rpm-build.x86_64 0:4.11.3-21.75.amzn1              system-rpm-config.noarch 0:9.0.3-42.28.amzn1

Dependency Installed:
  elfutils.x86_64 0:0.163-3.18.amzn1 elfutils-libs.x86_64 0:0.163-3.18.amzn1   gdb.x86_64 0:7.6.1-64.33.amzn1
  patch.x86_64 0:2.7.1-8.9.amzn1     perl-Thread-Queue.noarch 0:3.02-2.5.amzn1

Complete!
[ec2-user@ip-172-31-4-69 ~]$

 

Now, create the build environment. Here, you create the needed subdirectories for a build environment. For details, see https://wiki.centos.org/HowTos/SetupRpmBuildEnvironment

[ec2-user@ip-172-31-4-69 ~]$ cd
[ec2-user@ip-172-31-4-69 ~]$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
[ec2-user@ip-172-31-4-69 ~]$ echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
[ec2-user@ip-172-31-4-69 ~]$ cat .rpmmacros
%_topdir %(echo $HOME)/rpmbuild
[ec2-user@ip-172-31-4-69 ~]$ ls rpmbuild/
BUILD  RPMS  SOURCES  SPECS  SRPMS
[ec2-user@ip-172-31-4-69 ~]$

 

Now install the development tools.

[ec2-user@ip-172-31-4-69 ~]$ sudo yum -y install autoconf gcc git ncurses-devel openssl-devel
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                              | 2.1 kB  00:00:00
amzn-updates                                                                           | 2.3 kB  00:00:00
Resolving Dependencies
--> Running transaction check

...


Installed:
  autoconf.noarch 0:2.69-11.9.amzn1                   gcc.noarch 0:4.8.3-3.20.amzn1
  git.x86_64 0:2.7.4-1.47.amzn1                       ncurses-devel.x86_64 0:5.7-4.20090207.14.amzn1
  openssl-devel.x86_64 1:1.0.1k-15.99.amzn1


Dependency Installed:
  cpp48.x86_64 0:4.8.3-9.111.amzn1                       gcc48.x86_64 0:4.8.3-9.111.amzn1
  glibc-devel.x86_64 0:2.17-157.169.amzn1                glibc-headers.x86_64 0:2.17-157.169.amzn1
  kernel-headers.x86_64 0:4.9.27-14.31.amzn1             keyutils-libs-devel.x86_64 0:1.5.8-3.12.amzn1
  krb5-devel.x86_64 0:1.14.1-27.41.amzn1                 libcom_err-devel.x86_64 0:1.42.12-4.40.amzn1
  libkadm5.x86_64 0:1.14.1-27.41.amzn1                   libselinux-devel.x86_64 0:2.1.10-3.22.amzn1
  libsepol-devel.x86_64 0:2.1.7-3.12.amzn1               libgomp.x86_64 0:4.8.3-9.111.amzn1
  libmpc.x86_64 0:1.0.1-3.3.amzn1                        libverto-devel.x86_64 0:0.2.5-4.9.amzn1
  m4.x86_64 0:1.4.16-9.10.amzn1                          mpfr.x86_64 0:3.1.1-4.14.amzn1
  perl-Data-Dumper.x86_64 0:2.145-3.5.amzn1              perl-Error.noarch 1:0.17020-2.9.amzn1
  perl-Git.noarch 0:2.7.4-1.47.amzn1                     perl-TermReadKey.x86_64 0:2.30-20.9.amzn1
  zlib-devel.x86_64 0:1.2.8-7.18.amzn1 
  
  
Complete!
[ec2-user@ip-172-31-4-69 ~]$

 

Pull the source code for minimal Erlang from GitHub.

[ec2-user@ip-172-31-4-69 ~]$ git clone https://github.com/rabbitmq/erlang-rpm.git
Cloning into 'erlang-rpm'...
remote: Counting objects: 258, done.
remote: Total 258 (delta 0), reused 0 (delta 0), pack-reused 258
Receiving objects: 100% (258/258), 55.33 KiB | 0 bytes/s, done.
Resolving deltas: 100% (147/147), done.
Checking connectivity... done.
[ec2-user@ip-172-31-4-69 ~]$

 

3. Build and install the minimal Erlang runtime

Change directories to erlang-rpm to start the build.

[ec2-user@ip-172-31-4-69 ~]$ cd erlang-rpm/
[ec2-user@ip-172-31-4-69 erlang-rpm]$

 

Execute make to build the RPM. If you encounter any errors, 99.99% of the time they are due to missing packages. Simply read the error to identify the missing package, install that package, and execute make once more.

[ec2-user@ip-172-31-4-69 erlang-rpm]$ make
rm -rf BUILDROOT BUILD SOURCES SPECS SRPMS RPMS tmp FINAL_RPMS dist
mkdir -p BUILD SOURCES SPECS SRPMS RPMS tmp dist
wget -O dist/OTP-19.3.4.tar.gz https://github.com/erlang/otp/archive/OTP-19.3.4.tar.gz#
--2017-05-26 17:30:16--  https://github.com/erlang/otp/archive/OTP-19.3.4.tar.gz
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/erlang/otp/tar.gz/OTP-19.3.4 [following]
--2017-05-26 17:30:16--  https://codeload.github.com/erlang/otp/tar.gz/OTP-19.3.4
Resolving codeload.github.com (codeload.github.com)... 192.30.253.120, 192.30.253.121
Connecting to codeload.github.com (codeload.github.com)|192.30.253.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘dist/OTP-19.3.4.tar.gz’

dist/OTP-19.3.4.tar.gz          [                <=>                       ]  32.42M  7.73MB/s    in 4.2s

...

 

For example, the first time I tried to build the erlang-rpm, I got the following error about not finding crypto libraries.

RPM build errors:
    bogus date in %changelog: Thu Oct 13 2015 Michael Klishin <michael@rabbitmq.com> - 18.1
    Directory not found by glob: /home/ec2-user/erlang-rpm/BUILDROOT/erlang-19.3.4-1.amzn1.x86_64/usr/lib64/erlang/lib/crypto-*/
    Directory not found by glob: /home/ec2-user/erlang-rpm/BUILDROOT/erlang-19.3.4-1.amzn1.x86_64/usr/lib64/erlang/lib/ssl-*/
    File not found by glob: /home/ec2-user/erlang-rpm/BUILDROOT/erlang-19.3.4-1.amzn1.x86_64/usr/lib64/erlang/lib/ssl-*/ebin
    File not found by glob: /home/ec2-user/erlang-rpm/BUILDROOT/erlang-19.3.4-1.amzn1.x86_64/usr/lib64/erlang/lib/ssl-*/src
make: *** [erlang] Error 1

 

A quick Google search for “rpm build errors file not found buildroot crypto” pointed me to the solution: during my first attempt, I neglected to install openssl-devel. To fix the error, I installed openssl-devel.

[ec2-user@ip-172-31-4-69 erlang-rpm]$ sudo yum -y install openssl-devel
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                              | 2.1 kB  00:00:00
amzn-updates                                                                           | 2.3 kB  00:00:00
Resolving Dependencies
--> Running transaction check

...


Installed:
  openssl-devel.x86_64 1:1.0.1k-15.99.amzn1

Dependency Installed:
  keyutils-libs-devel.x86_64 0:1.5.8-3.12.amzn1            krb5-devel.x86_64 0:1.14.1-27.41.amzn1
  libcom_err-devel.x86_64 0:1.42.12-4.40.amzn1             libkadm5.x86_64 0:1.14.1-27.41.amzn1
  libselinux-devel.x86_64 0:2.1.10-3.22.amzn1              libsepol-devel.x86_64 0:2.1.7-3.12.amzn1
  libverto-devel.x86_64 0:0.2.5-4.9.amzn1                  zlib-devel.x86_64 0:1.2.8-7.18.amzn1

Complete!
[ec2-user@ip-172-31-4-69 erlang-rpm]$

 

…and run make again (from the erlang-rpm directory).

After a while, the build will succeed. (The “are the same file” messages from mv at the end of the output are harmless.)

Wrote: /home/ec2-user/erlang-rpm/RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm
Wrote: /home/ec2-user/erlang-rpm/RPMS/x86_64/erlang-debuginfo-19.3.4-1.amzn1.x86_64.rpm
Executing(%clean): /bin/sh -e /home/ec2-user/erlang-rpm/tmp/rpm-tmp.ekgXf8
+ umask 022
+ cd /home/ec2-user/erlang-rpm/BUILD
+ cd otp-OTP-19.3.4
+ rm -rf /home/ec2-user/erlang-rpm/BUILDROOT/erlang-19.3.4-1.amzn1.x86_64
+ exit 0
find RPMS -name "*.rpm" -exec sh -c 'mv {} `echo {} | sed 's#^RPMS\/noarch#FINAL_RPMS#'`' ';'
mv: ‘RPMS/x86_64/erlang-debuginfo-19.3.4-1.amzn1.x86_64.rpm’ and ‘RPMS/x86_64/erlang-debuginfo-19.3.4-1.amzn1.x86_64.rpm’ are the same file
mv: ‘RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm’ and ‘RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm’ are the same file

 

Before you install Erlang, delete any old versions.

[ec2-user@ip-172-31-4-69 erlang-rpm]$ sudo yum -y remove erlang-*
Loaded plugins: priorities, update-motd, upgrade-helper
No Match for argument: erlang-*
No Packages marked for removal
[ec2-user@ip-172-31-4-69 erlang-rpm]$

 

Now, install the Erlang RPM you just built. You will find it in the RPMS/x86_64/ directory. It will most likely have a different name than the one I use below. Either way, notice that the RPM includes amzn1 in its filename.

[ec2-user@ip-172-31-4-69 erlang-rpm]$ sudo yum -y install RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm
Loaded plugins: priorities, update-motd, upgrade-helper
Examining RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm: erlang-19.3.4-1.amzn1.x86_64
Marking RPMS/x86_64/erlang-19.3.4-1.amzn1.x86_64.rpm to be installed
Resolving Dependencies

...

Running transaction
  Installing : erlang-19.3.4-1.amzn1.x86_64                                                               1/1
  Verifying  : erlang-19.3.4-1.amzn1.x86_64                                                               1/1

Installed:
  erlang.x86_64 0:19.3.4-1.amzn1

Complete!
[ec2-user@ip-172-31-4-69 erlang-rpm]$

 

4. Install and configure RabbitMQ

You can follow the instructions on the RabbitMQ website to install the service. Remember, in step one we discovered that the current version of Amazon Linux uses sysvinit. We therefore need to download the CentOS 6 / EL6 RPM.

 

If you run sysvinit, then download the RabbitMQ RPM with el6 in the name. If you run systemd, download the RabbitMQ RPM with el7 in the name.

 

Change directories and then wget the RPM. Your URL may differ from the one in this blog post; go to https://www.rabbitmq.com/install-rpm.html to fetch the most recent RPM URL.

 

 

[ec2-user@ip-172-31-4-69 erlang-rpm]$ cd
[ec2-user@ip-172-31-4-69 ~]$ wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el6.noarch.rpm
--2017-05-26 18:21:28--  https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el6.noarch.rpm
Resolving www.rabbitmq.com (www.rabbitmq.com)... 192.240.153.117
Connecting to www.rabbitmq.com (www.rabbitmq.com)|192.240.153.117|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4931483 (4.7M) [application/x-redhat-package-manager]
Saving to: ‘rabbitmq-server-3.6.10-1.el6.noarch.rpm’

rabbitmq-server-3.6.10-1.el 100%[=========================================>]   4.70M  3.58MB/s    in 1.3s

2017-05-26 18:21:30 (3.58 MB/s) - ‘rabbitmq-server-3.6.10-1.el6.noarch.rpm’ saved [4931483/4931483]

[ec2-user@ip-172-31-4-69 ~]$

 

Now install the signing key. Go to https://www.rabbitmq.com/install-rpm.html to ensure you use the most recent URL.

 

 

[ec2-user@ip-172-31-4-69 ~]$ sudo rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
[ec2-user@ip-172-31-4-69 ~]$

 

Now install the RPM you just downloaded.

 

[ec2-user@ip-172-31-4-69 ~]$ sudo yum -y install rabbitmq-server-3.6.10-1.el6.noarch.rpm
Loaded plugins: priorities, update-motd, upgrade-helper
Examining rabbitmq-server-3.6.10-1.el6.noarch.rpm: rabbitmq-server-3.6.10-1.el6.noarch
Marking rabbitmq-server-3.6.10-1.el6.noarch.rpm to be installed
Resolving Dependencies
amzn-main/latest                                                                       | 2.1 kB  00:00:00
amzn-updates/latest                                                                    | 2.3 kB  00:00:00

...

Installed:
  rabbitmq-server.noarch 0:3.6.10-1.el6

Dependency Installed:
  compat-readline5.x86_64 0:5.2-17.3.amzn1                  socat.x86_64 0:1.7.2.3-1.10.amzn1

Complete!

 

Use chkconfig to configure RabbitMQ to start on system boot. Then, use the service command to start the service now. Since Amazon Linux runs sysvinit, we use the “chkconfig” and “service” commands. For systemd operating systems, we would use “systemctl.”

 

[ec2-user@ip-172-31-4-69 ~]$ sudo chkconfig rabbitmq-server on
[ec2-user@ip-172-31-4-69 ~]$ sudo service rabbitmq-server start
Starting rabbitmq-server: SUCCESS
rabbitmq-server.
[ec2-user@ip-172-31-4-69 ~]$

 

Once we have RabbitMQ up and running, we can configure it as needed. The commands below create a user and a vhost, tag the user, grant it configure, write and read permissions on the vhost (the three “.*” regular expressions passed to set_permissions), and enable the management plugin:

 

[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmqctl add_user myserver myserver123
Creating user "myserver"
[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmqctl add_vhost myserver_vhost
Creating vhost "myserver_vhost"
[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmqctl set_user_tags myserver myserver_tag
Setting tags for user "myserver" to [myserver_tag]
[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmqctl set_user_tags myserver monitoring
Setting tags for user "myserver" to [monitoring]
[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmqctl set_permissions -p myserver_vhost myserver ".*" ".*" ".*"
Setting permissions for user "myserver" in vhost "myserver_vhost"
[ec2-user@ip-172-31-4-69 ~]$ sudo rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@ip-172-31-4-69... started 6 plugins.
[ec2-user@ip-172-31-4-69 ~]$ sudo service rabbitmq-server restart
Restarting rabbitmq-server: SUCCESS
rabbitmq-server.
[ec2-user@ip-172-31-4-69 ~]$

 

5. Create and deploy a RabbitMQ Security Group

To use the service, punch a hole in the EC2 firewall via a custom security group.

First, on the AWS GUI, select EC2 under compute.

 

 

Next,  select Security Groups under NETWORK & SECURITY.

 

Click Create Security Group.

 

 

Edit the name to read rabbit_mq, set the TCP port range to 5672, and set the network that can access your new RabbitMQ service.  In the example below, I set it to the address of my RabbitMQ server’s Local Area Network (LAN).

 

 

In the EC2 console, click your rabbit_mq server, click Actions, click Networking and then Change Security Groups.

 

 

Attach the rabbit_mq security group.  If you don’t see the security group, ensure you configured the correct VPC when you created the security group.

 

You now have a dedicated RabbitMQ service and are ready to try a simple “hello world” program.

Connect AWS Lambda to Elasticsearch

Amazon Web Services’ (AWS) Lambda provides a serverless architecture framework for your web applications.  You deploy your application to Lambda, attach an API Gateway and then call your new service from anywhere on the web.  Amazon takes care of all the tedious, boring and necessary housekeeping.

In this HOWTO I show you how to create a proxy in front of the AWS Elasticsearch service using a Lambda function and an API Gateway.  We use Identity and Access Management  (IAM) policies to sign and encrypt the communication between your Lambda function and  the Elasticsearch service.  This HOWTO serves as a simple starting point.  Once you successfully jump through the hoops to connect Lambda to Elasticsearch, you can easily grow your application to accommodate new features and services.

The agenda for this HOWTO follows:

  1. Deploy and configure an AWS Elasticsearch endpoint
  2. Configure your Chalice development environment
  3. Create an app that proxies/protects your Elasticsearch endpoint
  4. Configure an IAM policy for your Lambda function
  5. Use Chalice to deploy your Lambda function and create/attach an API gateway
  6. Test drive your new Lambda function

1. Deploy an AWS Elasticsearch Instance

Amazon makes Elasticsearch deployment a snap.  Just click the Elasticsearch Service icon on your management screen:

 

 

If you see the “Get Started” screen, click “Get Started.”

 

 

Or, if you’ve used the Elasticsearch service before and see the option for “New Domain,” click “New Domain.”

 

 

Name your domain “test-domain” (or whatever you like).

 

 

Keep the defaults on the next screen “Step 2: Configure Cluster.”  Just click “next.”   On the next screen, select: “Allow or deny access to one or more AWS accounts or IAM users”.  

 

 

Amazon makes security easy as well.  On the next menu they list your ARN.  Just copy and paste it into the text field and hit “next.”

 

 

AWS generates the JSON for your Elasticsearch service:
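The generated policy follows this general shape — a sketch for illustration only, since the account ID, principal ARN, region and domain name shown here are placeholders that AWS fills in with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/your-user"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/test-domain/*"
    }
  ]
}
```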

 

 

Click “Next” and then “confirm and create.”

Expect about ten (10) minutes for the service to initialize.  While you wait for the service to deploy, you should set up your Chalice development environment.

 

2. Configure your Chalice development environment

 

As a convenience, I summarize the instructions from the authoritative Chalice HOWTO here.

First, create a Python virtual environment for development.

 

[ec2-user@ip-172-31-4-69 ~]$ virtualenv chalice-demo
New python executable in chalice-demo/bin/python2.7
Also creating executable in chalice-demo/bin/python
Installing setuptools, pip...done.

 

Change directories to your new sandbox and then activate the virtual environment.

 

[ec2-user@ip-172-31-4-69 ~]$ cd chalice-demo/
[ec2-user@ip-172-31-4-69 chalice-demo]$ . bin/activate

 

Now upgrade pip.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$ pip install -U pip
You are using pip version 6.0.8, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting pip from https://pypi.python.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#md5=297dbd16ef53bcef0447d245815f5144
  Using cached pip-9.0.1-py2.py3-none-any.whl
Installing collected packages: pip
  Found existing installation: pip 6.0.8
    Uninstalling pip-6.0.8:
      Successfully uninstalled pip-6.0.8

Successfully installed pip-9.0.1
(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$

 

Finally, install Chalice.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$ pip install chalice
Collecting chalice
  Downloading chalice-0.8.0.tar.gz (86kB)
    100% |████████████████████████████████| 92kB 6.6MB/s 
Collecting click==6.6 (from chalice)
  Downloading click-6.6-py2.py3-none-any.whl (71kB)
    100% |████████████████████████████████| 71kB 6.9MB/s 
Collecting botocore<2.0.0,>=1.5.0 (from chalice)
  Downloading botocore-1.5.45-py2.py3-none-any.whl (3.4MB)
    100% |████████████████████████████████| 3.5MB 335kB/s 
Collecting virtualenv<16.0.0,>=15.0.0 (from chalice)
  Downloading virtualenv-15.1.0-py2.py3-none-any.whl (1.8MB)
    100% |████████████████████████████████| 1.8MB 648kB/s 
Collecting typing==3.5.3.0 (from chalice)
  Downloading typing-3.5.3.0.tar.gz (60kB)
    100% |████████████████████████████████| 61kB 9.3MB/s 
Collecting six<2.0.0,>=1.10.0 (from chalice)
  Downloading six-1.10.0-py2.py3-none-any.whl
Collecting jmespath<1.0.0,>=0.7.1 (from botocore<2.0.0,>=1.5.0->chalice)
  Downloading jmespath-0.9.2-py2.py3-none-any.whl
Collecting docutils>=0.10 (from botocore<2.0.0,>=1.5.0->chalice)
  Downloading docutils-0.13.1-py2-none-any.whl (537kB)
    100% |████████████████████████████████| 542kB 2.2MB/s 
Collecting python-dateutil<3.0.0,>=2.1 (from botocore<2.0.0,>=1.5.0->chalice)
  Downloading python_dateutil-2.6.0-py2.py3-none-any.whl (194kB)
    100% |████████████████████████████████| 194kB 5.7MB/s 
Installing collected packages: click, jmespath, docutils, six, python-dateutil, botocore, virtualenv, typing, chalice
  Running setup.py install for typing ... done
  Running setup.py install for chalice ... done
Successfully installed botocore-1.5.45 chalice-0.8.0 click-6.6 docutils-0.13.1 jmespath-0.9.2 python-dateutil-2.6.0 six-1.10.0 typing-3.5.3.0 virtualenv-15.1.0
(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$ 

 

The quickstart is pretty clear about how to configure credentials.  Here are their instructions verbatim…

Before you can deploy an application, be sure you have credentials configured. If you have previously configured your machine to run boto3 (the AWS SDK for Python) or the AWS CLI then you can skip this section.

If this is your first time configuring credentials for AWS you can follow these steps to quickly get started:

$ mkdir ~/.aws
$ cat >> ~/.aws/config
[default]
aws_access_key_id=YOUR_ACCESS_KEY_HERE
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region=YOUR_REGION (such as us-west-2, us-west-1, etc)

If you want more information on all the supported methods for configuring credentials, see the boto3 docs.

 

From the chalice-demo directory, create a new Chalice project.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$ chalice new-project eslambda

 

You have set up your development environment.

 

3. Create an app that proxies/protects your Elasticsearch endpoint

 

At this point, your Elasticsearch endpoint should be up and running.  Copy the fully qualified domain name (FQDN) for your new endpoint.  You will copy this FQDN into the application below.

 

 

The following application uses the boto library to access an authorized IAM role to sign and encrypt calls to  your Elasticsearch endpoint.  Be sure to configure the host parameter with your Endpoint address.
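The full app.py appears in the embedded gist, which is not reproduced here. To illustrate the signing step that IAM-based access depends on, here is a minimal standard-library sketch of AWS Signature Version 4 key derivation; the secret key, date, region and service values are placeholders, and in the real application boto performs this for you:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive an AWS Signature Version 4 signing key.

    Each HMAC-SHA256 step scopes the key further: date, then region,
    then service, then the fixed "aws4_request" terminator.
    """
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Placeholder credentials -- never hard-code real keys.
key = sigv4_signing_key("EXAMPLE_SECRET_KEY", "20170526", "us-east-1", "es")
print(len(key))  # 32-byte HMAC-SHA256 digest
```

The derived key signs each request to the "es" service, which is how IAM authenticates your Lambda function to the Elasticsearch endpoint.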

 

 

Change directories to the new eslambda project.  You will see two automatically created files:  app.py and requirements.txt

 

(chalice-demo)[ec2-user@ip-172-31-4-69 chalice-demo]$ cd eslambda/
(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ ls
app.py  requirements.txt
(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$

 

Overwrite app.py with the app.py code above.  Then, pip install boto.  Use the pip freeze | grep boto command to populate requirements.txt with the proper version of boto.  requirements.txt tells Lambda which Python packages to install.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ pip install boto
Collecting boto
  Downloading boto-2.46.1-py2.py3-none-any.whl (1.4MB)
    100% |████████████████████████████████| 1.4MB 851kB/s 
Installing collected packages: boto
Successfully installed boto-2.46.1
(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ pip freeze | grep boto >> requirements.txt 

4. Configure an IAM policy for your Lambda function

 

Create a document called policy.json in the hidden .chalice directory and add the following JSON. This will let Lambda use the Elasticsearch service.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ vim .chalice/policy.json
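The JSON I added follows this shape — a sketch, since the account ID and domain ARN below are placeholders you must replace with your own. The es:ESHttpGet and es:ESHttpPost actions cover the HTTP verbs the proxy needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPost"
      ],
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/test-domain/*"
    }
  ]
}
```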

 

 

5. Use Chalice to deploy your Lambda function and create/attach an API gateway

 

Cross your fingers, this should work.  Deploy your Chalice application with the following command.  Take note of the endpoint that Chalice returns.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ chalice deploy
Initial creation of lambda function.
Creating role
Creating deployment package.
Initiating first time deployment...
Deploying to: dev
https://keqpeva3wi.execute-api.us-east-1.amazonaws.com/dev/
(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ 

6. Test drive your new Lambda function

 

Enter the URL of the service endpoint in your browser.  In my case, I will go to https://keqpeva3wi.execute-api.us-east-1.amazonaws.com/dev/

 

 

At this point, the request fails.  For some reason, the steps in the Chalice quickstart do not seem to work.  If you take a look at policy.json you’ll see that Chalice overwrote it.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ cat .chalice/policy.json 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$

 

Chalice created a policy to allow our Lambda function to log.  Let’s keep that action and add the Elasticsearch verbs.  Edit .chalice/policy.json once more, this time using the enriched JSON encoded policy.
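The enriched policy keeps the logging statement Chalice generated and adds the Elasticsearch actions — something along these lines (again a sketch; substitute your own account ID and domain ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPost"
      ],
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/test-domain/*",
      "Effect": "Allow"
    }
  ]
}
```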

 

 

Redeploy, this time turning off automatic policy generation.

 

(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$ chalice deploy --no-autogen-policy
Updating IAM policy.
Updating lambda function...
Regen deployment package...
Sending changes to lambda.
API Gateway rest API already found.
Deploying to: dev
https://keqpeva3wi.execute-api.us-east-1.amazonaws.com/dev/
(chalice-demo)[ec2-user@ip-172-31-4-69 eslambda]$

 

It may take a few minutes for the new Lambda function to bake in.  Be sure to hit Control+F5 to make sure you’re not hitting a cached version of your new application.  Alternatively, you can pip install httpie.

From the command line, use httpie to access your new proxy.

 

 

Congratulations!  Your Lambda function can hit your Elasticsearch service!

Add @Timestamp to your Python Elasticsearch DSL Model

The Python Elasticsearch Domain Specific Language (DSL) lets you create models via Python objects.

Take a look at the model Elastic creates in their persistence example.

 

#!/usr/bin/env python
# persist.py
from datetime import datetime
from elasticsearch_dsl import DocType, Date, Integer, Keyword, Text
from elasticsearch_dsl.connections import connections

class Article(DocType):
    title = Text(analyzer='snowball', fields={'raw': Keyword()})
    body = Text(analyzer='snowball')
    tags = Keyword()
    published_from = Date()
    lines = Integer()

    class Meta:
        index = 'blog'

    def save(self, ** kwargs):
        self.lines = len(self.body.split())
        return super(Article, self).save(** kwargs)

    def is_published(self):
        return datetime.now() > self.published_from

if __name__ == "__main__":
    connections.create_connection(hosts=['localhost'])
    # create the mappings in elasticsearch
    Article.init()

 

I wrapped their example in a script and named it persist.py.  To initiate the model, execute persist.py from the command line.

 

$ chmod +x persist.py
$ ./persist.py

 

We can take a look at these mappings via the _mapping API. In the model, Elastic names the index blog. Use blog, therefore, when you send the request to the API.

 

$ curl -XGET 'http://localhost:9200/blog/_mapping?pretty'

 

The Article.init() call generated the following mapping (schema) automatically.

 

{
  "blog" : {
    "mappings" : {
      "article" : {
        "properties" : {
          "body" : {
            "type" : "text",
            "analyzer" : "snowball"
          },
          "lines" : {
            "type" : "integer"
          },
          "published_from" : {
            "type" : "date"
          },
          "tags" : {
            "type" : "keyword"
          },
          "title" : {
            "type" : "text",
            "fields" : {
              "raw" : {
                "type" : "keyword"
              }
            },
            "analyzer" : "snowball"
          }
        }
      }
    }
  }
}

 

That’s pretty neat! The DSL creates the mapping (schema) for you, with the right Types. Now that we have the model and mapping in place, use the Elastic provided example to create a document.

 

#!/usr/bin/env python

# create_doc.py
from datetime import datetime
from persist import Article
from elasticsearch_dsl.connections import connections

# Define a default Elasticsearch client
connections.create_connection(hosts=['localhost'])

# create and save an article
article = Article(meta={'id': 42}, title='Hello world!', tags=['test'])
article.body = ''' looong text '''
article.published_from = datetime.now()
article.save()

 

Again, I wrapped their code in a script.  Run the script.

 

$ chmod +x create_doc.py
$ ./create_doc.py

 

If you look at the mapping, you see the published_from field maps to a Date type. To see this in Kibana, go to Management –> Index Patterns as shown below.

 

 

Now type blog (the name of the index from the model) into the Index Name or Pattern box.

 

 

From here, you can select published_from as the time-field name.

 

 

If you go to Discover, you will see your blog post.

 

 

Logstash, however, uses @timestamp for the time-field name. It would be nice to use the standard name instead of a one-off, custom name. To use @timestamp, we must first update the model.

In persist.py (above), change the save stanza from…

 

def save(self, ** kwargs):
        self.lines = len(self.body.split())
        return super(Article, self).save(** kwargs)

 

to…

 

def save(self, ** kwargs):
        self.lines = len(self.body.split())
        self['@timestamp'] = datetime.now()
        return super(Article, self).save(** kwargs)

 

It took me a ton of trial and error to finally realize we need to update @timestamp as a dictionary key. I just shared the special sauce recipe with you, so you’re welcome! Once you update the model, run create_doc.py (above) again.

 

$ ./create_doc.py

 

Then, go back to Kibana –> Management –> Index Patterns and delete the old blog pattern.

 

 

When you re-create the index pattern, you will now have a pull down for @timestamp.

 

 

Now go to discover and you will see the @timestamp field in your blog post.

 

 

You can go back to the _mapping API to see the new mapping for @timestamp.

 

$ curl -XGET 'http://localhost:9200/blog/_mapping?pretty'

 

This command returns the JSON encoded mapping.

 

{
  "blog" : {
    "mappings" : {
      "article" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "body" : {
            "type" : "text",
            "analyzer" : "snowball"
          },
          "lines" : {
            "type" : "integer"
          },
          "published_from" : {
            "type" : "date"
          },
          "tags" : {
            "type" : "keyword"
          },
          "title" : {
            "type" : "text",
            "fields" : {
              "raw" : {
                "type" : "keyword"
              }
            },
            "analyzer" : "snowball"
          }
        }
      }
    }
  }
}

 

Unfortunately, we still may have a problem. If you notice, @timestamp here is in the form of “April 1st 2017, 19:28:47.842.” If you’re sending a Document to an existing Logstash doc store, it most likely will have the default @timestamp format.

To accommodate the default @timestamp format (or any custom format), you can update the model’s save stanza with a string-format-time (strftime) command.

 

def save(self, ** kwargs):
        self.lines = len(self.body.split())
        t = datetime.now()
        self['@timestamp'] = t.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
        return super(Article, self).save(** kwargs)
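To see what this format string produces, run strftime against a fixed datetime. Note that %f emits six-digit microseconds, so trim the result if your pipeline expects millisecond precision:

```python
from datetime import datetime

# A fixed timestamp so the output is reproducible.
t = datetime(2017, 4, 1, 19, 28, 47, 842000)
print(t.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))  # 2017-04-01T19:28:47.842000Z
```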

 

You can see the change in Kibana as well (view the raw JSON).

 

 

That’s it!  The more you use the Python Elasticsearch DSL, the more you will love it.

Pass Bootstrap HTML attributes to Flask-WTForms

Flask-WTF helps us create and use web forms with simple Python models. WTForms takes care of the tedious, boring, and necessary security work required when we accept data submitted to our web app by users on the Internet. WTForms makes data validation and Cross-Site Request Forgery (CSRF) protection a breeze. Out of the box, however, WTForms creates ugly forms with ugly validation messages. Flask-Bootstrap adds a professional layer of polish to our forms, with shading, highlights and pop-ups.

Flask-Bootstrap also provides a “quick_form” macro, which tells Jinja2 to render an entire web form based on our form model with one line of template code.

In the real world, unfortunately, customers have strong opinions about their web pages, and may ask you to tweak the default appearance that “quick_form” generates. This blog post shows you how to do that.

In this blog post you will:

  • Deploy a web app with a working form, to include validation and polish
  • Tweak the appearance of the web page using a Flask-WTF macro
  • Tweak the appearance of the web page using a Flask-Bootstrap method

The Baseline App

The following code shows the baseline application, which uses “quick_form” to render the form’s web page. Keep in mind that this application doesn’t actually do anything with the submitted data, although you can easily extend it to persist data using an ORM (for example). I based the web app on the following architecture:


The web app consists of models.py (the form model), take_quiz_template.html (renders the web page) and application.py (the web app, which routes requests based on URL and parses the form data).

[ec2-user@ip-192-168-10-134 ~]$ tree flask_bootstrap/
flask_bootstrap/
├── application.py
├── models.py
├── requirements.txt
└── templates
    └── take_quiz_template.html

1 directory, 4 files
[ec2-user@ip-192-168-10-134 ~]$ 

Install the three files into your directory. As seen in the tree listing above, be sure to create a directory named templates for take_quiz_template.html.

Create and activate your virtual environment and then install the required libraries.

[ec2-user@ip-192-168-10-134 ~]$ virtualenv flask_bootstrap/
New python executable in flask_bootstrap/bin/python2.7
Also creating executable in flask_bootstrap/bin/python
Installing setuptools, pip...done.
[ec2-user@ip-192-168-10-134 ~]$ . flask_bootstrap/bin/activate
(flask_bootstrap)[ec2-user@ip-192-168-10-134 ~]$ pip install -r flask_bootstrap/requirements.txt

  ...

Successfully installed Flask-0.11.1 Flask-Bootstrap-3.3.7.0 Flask-WTF-0.13.1 Jinja2-2.8 MarkupSafe-0.23 WTForms-2.1 Werkzeug-0.11.11 click-6.6 dominate-2.3.1 itsdangerous-0.24 visitor-0.1.3
(flask_bootstrap)[ec2-user@ip-192-168-10-134 ~]$ 

Start your Flask application and then navigate to your IP address. Since this is just a dev application, you will need to access port 5000.

(flask_bootstrap)[ec2-user@ip-192-168-10-134 ~]$ cd flask_bootstrap/
(flask_bootstrap)[ec2-user@ip-192-168-10-134 flask_bootstrap]$ ./application.py 
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 417-431-486

This application uses the quick_form method to generate a web page. Note that the application includes all sorts of goodies, such as CSRF protection, professional looking highlights and validation. Play around with the page to look at the different validation pop-ups and warnings.
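For reference, with quick_form the entire form rendering in take_quiz_template.html boils down to two template lines (this assumes the template follows the Flask-Bootstrap documentation):

{% import "bootstrap/wtf.html" as wtf %}
{{ wtf.quick_form(form) }}

Everything else (labels, the hidden CSRF field, validation markup) is generated for you.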

Now imagine that your customer wants to change the look of the submit button, or add some default text. In this situation, the quick_form does not suffice.

Attempt 1: Use a Flask-WTF Macro

The Flask-WTF docs include a macro named render_field, which allows us to pass HTML attributes through to Jinja2. We save this macro in a file named _formhelpers.html and place it in the same templates folder as take_quiz_template.html.

{% macro render_field(field) %}
  <dt>{{ field.label }}
  <dd>{{ field(**kwargs)|safe }}
  {% if field.errors %}
    <ul class="errors">
    {% for error in field.errors %}
      <li>{{ error }}</li>
    {% endfor %}
    </ul>
  {% endif %}
  </dd>
{% endmacro %}

Now, update the take_quiz_template.html template to use the new macro. Note that we lose the quick_form shortcut and need to spell out each form field.

When you go to your web page you will see the default text we added to the field:

{{ render_field(form.essay_question, class='form-control', placeholder='Write down your thoughts here...') }}

And an orange submit button that spans the width of the page:

{{ render_field(form.submit, class='btn btn-warning btn-block') }}

You can see both of these changes on the web page:

Unfortunately, if you click submit without entering any text, you will notice that we have reverted to ugly validation messages.

Attempt 2: Use Flask-Bootstrap

Although pretty much hidden in the Flask-Bootstrap documentation, it turns out you can pass extra HTML attributes directly to the template engine using form_field.

As before, we add default text with a “placeholder” attribute:

{{ wtf.form_field(form.essay_question, class='form-control', placeholder='Write down your thoughts here...') }}
{{ wtf.form_field(form.email_addr, class='form-control', placeholder='your@email.com') }}

We then customize the submit button. You can style the button however you would like; take a look here for more ideas.

{{ wtf.form_field(form.submit, class='btn btn-warning btn-block') }}

This gives us a bootstrap rendered page with pretty validation:

As you can see, we get a popup if we attempt to submit without entering text.

Conclusion

You now have a working web application that easily renders professional looking forms with validation and pop-ups. In the future, you can trade off ease of deployment against customizability.