Turn on TLS on your Postfix email server

So, as someone who runs their own mail server, one of the things I kept noticing was that when I sent mail to GMail, the message arrived, but it was flagged with a red padlock with a cross through it and the message “No encryption”.

From reading about it, it turns out GMail shows this warning for all messages coming from servers that don’t use TLS when sending.

This was a surprise to me as I had thought that TLS was already enabled on the server. It turned out that encryption was enabled on the “incoming” connection from the mail client, but not for any “upstream” message sending (i.e. when my server initiates the SMTP connection to deliver mail onwards).
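
In Postfix terms (and this is the detail that tripped me up), the smtpd_* parameters govern connections coming in to your server, while the smtp_* parameters govern the connections your server makes when sending mail onwards. So a main.cf along these lines (a sketch of the situation, not my exact configuration) will happily offer TLS to connecting clients while still delivering everything upstream in plain text:

# incoming: offer STARTTLS to clients connecting to us
smtpd_tls_security_level = may
# smtp_tls_security_level is not set, so it defaults to empty
# and outgoing ("upstream") mail is sent in the clear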

After a bit of research, I came upon these two links in the Postfix documentation:

http://www.postfix.org/postconf.5.html#smtp_tls_security_level

http://www.postfix.org/TLS_README.html

The key configuration line is:

smtp_tls_security_level = may

This is a single line in the configuration that turns on “opportunistic TLS”, where “opportunistic” means that our server will encrypt the connection as long as the recipient server supports TLS, as per the documentation:

The SMTP transaction is encrypted if the STARTTLS ESMTP feature is supported by the server. Otherwise, messages are sent in the clear

You can also define a table where you specify encryption settings at a “per recipient domain/server” level (e.g. always use encryption for GMail, but fall back to opportunistic encryption for everyone else).
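
As a rough sketch (the file path and lookup type below are just the conventional ones, not taken from my actual setup), that per-domain policy goes into smtp_tls_policy_maps in main.cf:

smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

with the policy file /etc/postfix/tls_policy containing something like:

# always require TLS when delivering to gmail.com
gmail.com    encrypt

followed by a postmap /etc/postfix/tls_policy to rebuild the lookup table.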

So, after setting the above configuration and reloading Postfix, you can run the test again and verify that the padlock and encryption warning are no longer present.
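
Applying and checking the change looks something like this (assuming the standard Postfix tools are on your path):

postconf -e "smtp_tls_security_level = may"
postfix reload
postconf smtp_tls_security_level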

Dart language

If you’re using Flutter for your mobile applications, you will have to use Dart as the language you code in. This was weird for me as I struggled a little bit with the syntax (seemingly some kind of mix of JavaScript and Java). As such, I went and did the “Dart for Java developers” tutorial here:

https://codelabs.developers.google.com/codelabs/from-java-to-dart/#0

My initial impressions of Dart as a language:

  • It is a lot like Java
  • It adds a whole bunch of “shortcuts” (e.g. the fat arrow “=>”) that make it terser to write
  • It uses a mixture of dynamic and static typing
  • It makes every class an implicit interface by default, side-stepping the inheritance vs composition argument to a large extent
  • It seems to be a superset of Java in a lot of ways


TIL Geolocation doesn’t work on mobile browsers

So, I’m working on a project to help people who are already “out” find a place to eat, and part of it is a feature to “find the nearest three places”. My initial thought was “that’s cool, I’ll just make a website and get it to use the JavaScript Geolocation API” and make it available as a “web app” on people’s phones. This was wrong. It seems like the only real way to get reliable location data is to go “native”.

My initial approach was to figure out the “unknown unknowns”, starting off with “can we reliably get the user’s geolocation through the browser?”. The “proof of concept” was a simple HTML page that asks the browser’s Geolocation API for the current position and displays the returned co-ordinates.

Firstly, I tried opening the file in Firefox and was greeted by the “allow this site to view location data” prompt. I clicked “Allow” and… no joy… the site did not display anything, and examining the JavaScript/Web console showed my “Allow” click registering as a “Don’t allow”.


After checking the network traffic tab as well as looking at and messing around with “about:config”, I came to the conclusion that Firefox is just disabling the location API (maybe because I’m running the Developer Edition or something). Running the same test in Chrome showed the same initial behaviour (a prompt to allow the website access to location), however the data did end up being displayed (albeit a bit slowly), so Geolocation was working in Chrome at least.

I thought that maybe the issue was with Firefox not allowing location sharing unless the page was served over HTTPS, so I moved on to the next stage, which was uploading the file to an S3 bucket with static website hosting enabled. Even when accessing the file over HTTPS, the same issue happened: I would click “Allow” and the error message would come back saying “user has denied geolocation prompt”.

Next up was the all-important mobile test (Safari on iOS). I tried loading the previously mentioned S3 website in Safari on iOS and… nothing… no co-ordinates. After a bit of thinking, I remembered a previous project where a company with a very usable web app had released a “native” app, with a pretty flimsy reason (so we know when you park up in our shop car park). The speculation there was that they were using the location data all the time, and having a native app is the only real way to do this. So… there it is, back to the drawing board.

If it is to be a mobile application, I’m inclined to try to build it using Flutter. But I don’t know… it’ll be my first go at mobile application development, so whatever it is, it’s going to be a learning curve.

Deep Learning, AI, Machine Learning etc…

So, every few years, the IT/Tech industry has some kind of shift. Looking at my blog posts, shortly after I finished university, I thought it was the move away from “desktop” applications to “web” applications. This was sort-of true, but failed to take into account the rise of mobile applications and also “the cloud” (a.k.a. fast, ubiquitous internet).

Similarly, I feel like there’s a new shift on now to AI/Machine Learning/Deep Learning (or whatever else you want to call it). Basically, the application of statistical methods to solve problems which previously required human judgement.

As such, I find myself angry that I didn’t concentrate more in Statistics class, and also scrambling to find a way to re-learn all about neural networks and Bayesian methods, as well as their practical implementations in terms of languages/frameworks/libraries/services etc… Not necessarily to completely change professions, but rather to be able to understand, at a theoretical (and practical, to the “hello world” stage) level, what a proposed method can/cannot do (and be able to call people up on their bullshit).

One of my first attempts was the Qwiklabs Machine Learning APIs “quest”. This was an excellent introduction to the Google AI APIs and what they could do (a sort of “state of the art” demo).

Next up, I wanted to go “under the hood” a bit more and ordered a copy of Deep Learning with Python, which has so far been a really good, but (for me) challenging book. The fact that it’s challenging is a good thing, and probably owes more to me having been somewhat lazy in terms of mental challenges for a while now.

I’m still making my way through the book, but have already started thinking about ways in which I could continue the AI learnings once I’ve gotten through it and thought I had better list them:

  • Read another book and try out examples
  • Do an AI/ML course on Coursera/Udemy/other MOOC
  • Do competitions/trainings on kaggle.com
  • Personal project where you collect/analyze data

Super Simple Swagger example – part 1

Introduction

Everyone has an opinion on the best way to do microservice architecture, including me. The following is a series of blog posts where I lay out an opinionated “template architecture” for how to do Microservices in a way that is scalable and sustainable.

The principles I find important are:

  • API first
  • Vendor agnostic
  • Cheap to operate
  • Standardised communication/service mesh layer
  • Language and framework agnostic
  • Black box testable
  • Unit of deployment agnostic

Outline

The first post will talk about how to make a super simple Swagger API definition and then use this to generate a Python Flask server.

The second post will talk about generating a simple JavaScript client for our API and how to host/run it.

The third post will talk about operational concerns, including costs, portability, traceability, deployment, scalability, backups, service discovery and registration etc…

The fourth post will talk about testing, both at the unit test level and at the “black box”/integration/end-to-end testing level.

The fifth post will talk about shared libraries and advanced topics

Defining a Trivial Swagger (OpenAPI) API

So, in order to get started, we can define a dummy API which lets us retrieve information about books. The below is a simple Swagger (also now the OpenAPI standard) API definition to do with books (the full definition is in the book.yml gist that the codegen command further down points at).
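
A minimal sketch of what such a definition looks like (illustrative only; the field names here may not match the gist exactly):

swagger: "2.0"
info:
  title: Book API
  version: "1.0.0"
basePath: /v2
paths:
  /book:
    get:
      summary: List all books
      produces:
        - application/json
      responses:
        "200":
          description: A list of books
          schema:
            type: array
            items:
              $ref: '#/definitions/Book'
    post:
      summary: Add a new book
      consumes:
        - application/json
      parameters:
        - in: body
          name: book
          required: true
          schema:
            $ref: '#/definitions/Book'
      responses:
        "201":
          description: Book created
definitions:
  Book:
    type: object
    properties:
      id:
        type: integer
      title:
        type: string
      author:
        type: string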

The example explicitly leaves out any authentication and only defines a single endpoint with a GET/POST verb and a single object type.

We can verify that our API definition is valid by opening the Swagger Online Editor and pasting it in the editor pane.

So, now we have an API which defines a “contract” between the client and the server, and in an ideal world we would put this into version control (either in its own repository or into a monorepo, depending on whether our CI/CD server can trigger off of paths).

Generating the Server Stub

Now that we have an API definition, we can use the swagger-codegen tool to generate a Flask “server stub” which we can use as a template for our application. We can install swagger-codegen on OS X with the Homebrew package manager and the following command:

brew install swagger-codegen

After we’ve installed the codegen tool, we can run commands to generate the Flask server:

mkdir book-server
cd book-server
swagger-codegen generate \
-i https://gist.githubusercontent.com/srkiNZ84/3a8f7deb11cf368e25607cf0a66bc140/raw/cac66ce550489538f415734ded075fea192ae94f/book.yml \
-l python-flask

The arguments passed to the command tell it to generate code, point it at our YML file containing the API definition and finally tell the command what kind of server code to generate. (For a list of all of the possible code/framework outputs have a look at the swagger-codegen documentation).

Assuming that the command runs successfully, swagger-codegen will have written the generated server code into the current directory.

We still need to install the Python Flask requirements and start up the server.

Looking at the contents of the directory, we can see that the generator has generated a Flask application as well as a Dockerfile and other code and configuration:

$ ll
total 88
drwxr-xr-x 15 srdan wheel 480B 10 Aug 22:03 .
drwxrwxrwt 11 root wheel 352B 10 Aug 22:02 ..
-rw-r--r-- 1 srdan wheel 885B 10 Aug 22:03 .dockerignore
-rw-r--r-- 1 srdan wheel 786B 10 Aug 22:03 .gitignore
drwxr-xr-x 3 srdan wheel 96B 10 Aug 22:03 .swagger-codegen
-rw-r--r-- 1 srdan wheel 1.0K 10 Aug 22:03 .swagger-codegen-ignore
-rw-r--r-- 1 srdan wheel 349B 10 Aug 22:03 .travis.yml
-rw-r--r-- 1 srdan wheel 246B 10 Aug 22:03 Dockerfile
-rw-r--r-- 1 srdan wheel 1.1K 10 Aug 22:03 README.md
-rw-r--r-- 1 srdan wheel 1.6K 10 Aug 22:03 git_push.sh
-rw-r--r-- 1 srdan wheel 66B 10 Aug 22:03 requirements.txt
-rw-r--r-- 1 srdan wheel 785B 10 Aug 22:03 setup.py
drwxr-xr-x 10 srdan wheel 320B 10 Aug 22:03 swagger_server
-rw-r--r-- 1 srdan wheel 90B 10 Aug 22:03 test-requirements.txt
-rw-r--r-- 1 srdan wheel 143B 10 Aug 22:03 tox.ini

We need to install the Python dependencies with “pip”:

pip3 install -r requirements.txt

To start the server, we can run:

python3 -m swagger_server

We should then be able to see our application running at the URL:

http://0.0.0.0:8080/v2/book

With the Swagger API definition available at:

http://0.0.0.0:8080/v2/swagger.json
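
As a quick sanity check (the generated stub typically just returns placeholder responses until we fill in the controller logic), you can hit both endpoints with curl from the same machine:

curl http://localhost:8080/v2/book
curl http://localhost:8080/v2/swagger.json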

We can then start filling out the logic of our application to make it behave like we want.

AWS IAM “InstanceProfiles” are the “who”

Recently, I was trying to create a launch configuration using an AWS IAM Role that I had created through CloudFormation but it was just not letting me, throwing this error:

$ aws autoscaling create-launch-configuration --launch-configuration-name serge-lc-with-instance-profile \
> --image-id ami-baba68d3 --instance-type t2.micro \
> --iam-instance-profile MyCloudWatchAgentRole

An error occurred (ValidationError) when calling the CreateLaunchConfiguration operation: Invalid IamInstanceProfile: MyCloudWatchAgentRole

After a bit of digging around the AWS Console, I realised you can only attach Roles that have an “instance profile” to EC2 instances. This was relatively straightforward to fix, but left me wondering “what’s an instance profile?” and “why do I need one?”. After a bit of searching around, I found this great example on Quora: https://www.quora.com/In-AWS-what-is-the-difference-between-a-role-and-an-instance-profile
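
For reference, the “fix” from the CLI looks roughly like this (a sketch; the profile here simply re-uses the role name, which is also what the AWS Console does when it creates an EC2 role for you):

aws iam create-instance-profile --instance-profile-name MyCloudWatchAgentRole
aws iam add-role-to-instance-profile \
  --instance-profile-name MyCloudWatchAgentRole \
  --role-name MyCloudWatchAgentRole

After that, the original create-launch-configuration command should accept the --iam-instance-profile argument.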

With the two parts of access control (authentication and authorization), the Role fills the “authz” bit and the “profile” fills the “authn” bit. I’m not sure why this matters, to be honest. I don’t think any services other than EC2 use profiles.

One guess is that without this, perhaps it’d be hard/impossible to figure out “which instance(s)” carried out a particular action, this being a problem that maybe doesn’t apply to other services? Wonder if Lambda has “profiles”?

Making PEX files (Python EXecutable)

I was in a situation where I needed to run some Python on a machine which didn’t have pip installed, and my script needed some packages from pip. I therefore had to work out how to use the pex tool, and I “documented” it in this repository. Most of it was based off of this tutorial, which is a really good starting point and describes what each of the pex options means.

What is PEX?

This video sums it up pretty well. The best way I can describe it, is that it’s a tool to create something like JAR files for Python.

Why shave this Yak?

My particular use case was that I had to figure out a way to copy files to a Windows host using the pywinrm library and execute a PowerShell script. My initial attempt was to run pex on my MacBook to generate the file, however as the pywinrm library requires the “cryptography” package, it all went a bit south, with Python trying to compile C extensions and failing due to the old version of OpenSSL on my Mac.

The “fix” was to build (compile?) it in an Ubuntu container, but this presented its own problems in how to actually get the binary out.

How to actually do this?

  • Install pex with “pip install pex”
  • Make a directory for your script
  • In that directory, make sure you have an “__init__.py”, a “setup.py” and your script (e.g. wingetmem.py)
  • Ensure that the setup file has the correct contents:
from distutils.core import setup
setup(name='wingetmem',
    version='1.0',
    scripts=['wingetmem.py'],
    py_modules=['wingetmem']
)
  • Run pex to make the binary, making sure that the module name and entry-point function name match what’s in your file:
pex wingetmem pywinrm -e wingetmem:wingetmem -o wingetmem.pex
  • Now, if you’re in the same boat as me and need to extract this out of a Docker image, you’ll need to use the “docker save” command and then untar the resulting file:
docker save --output="ubuntu.tar" 0004626ad875
tar xvf ubuntu.tar
[change into each layer and untar the "layer.tar" file]
[check whether the file is in there]

I’m really not happy about that last step, because it’s a pretty bad kludge. Ideally, we’d push the binary to something like Artifactory or Nexus (artifact repositories) rather than just leaving it on “disk”, but to be honest, by the time I got this working I had had enough.

The resulting “.pex” file runs fine in a Linux environment without pip, which is what we were after.
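
For what it’s worth, a less kludgy way to pull a single file out of an image (a sketch; the path /build/wingetmem.pex is hypothetical and depends on where the pex was built inside the image) would be to create a container from the image and use “docker cp”, which avoids the untar-every-layer dance:

docker create --name pex-build 0004626ad875
docker cp pex-build:/build/wingetmem.pex ./wingetmem.pex
docker rm pex-build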