Making PEX files (Python EXecutable)

I was in a situation where I needed to run some Python on a machine which didn’t have pip installed, but my script needed some packages from pip. So I had to work out how to use the pex tool, and I “documented” it in this repository. Most of it was based off of this tutorial, which is a really good starting point and describes what each of the pex options means.

What is PEX?

This video sums it up pretty well. The best way I can describe it is that it’s a tool to create something like JAR files for Python.

Why shave this Yak?

My particular use case was that I had to figure out a way to copy files to a Windows host using the pywinrm library and then execute a PowerShell script. My initial attempt was to run pex on my MacBook to generate the file; however, as pywinrm requires the “cryptography” package, it all went a bit south, with Python trying to compile C extensions and failing due to an old version of OpenSSL on my Mac.

The “fix” was to build (compile?) it in an Ubuntu container, but this presented its own problem: how to actually get the binary out.

How to actually do this?

  • Install pex with “pip install pex”
  • Make a directory for your script
  • In the directory, make sure you have a “setup.py”, an “__init__.py” and your script (e.g. “wingetmem.py”)
  • Ensure that the setup file has the correct contents:
from distutils.core import setup
setup(name="wingetmem", version="0.0.1", packages=["wingetmem"])  # illustrative values
  •  Run pex to make the binary, making sure that the script name and function name match what’s in your file:
pex wingetmem pywinrm -e wingetmem:wingetmem -o wingetmem.pex
  • Now, if you’re in the same boat as me and need to extract this out of a Docker image, you’ll need to use the “docker save” command and then untar the resulting file:
docker save --output="ubuntu.tar" 0004626ad875
tar xvf ubuntu.tar
[change into each layer and untar the "layer.tar" file]
[check whether the file is in there]
I’m really not happy about that last step, because it’s a pretty bad kludge. Ideally, we’d push the binary to something like Artifactory or Nexus (artifact repositories) rather than just leaving it on “disk”, but to be honest, by the time I got this working I’d had enough.
The resulting “.pex” file runs fine in a Linux environment without pip, which is what we were after.
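That last extraction step can at least be scripted. Here’s a rough Python sketch that walks a “docker save” archive and reports which layer contains the file; the archive and file names match the commands above, but everything else is my own assumption rather than part of the original workflow:

```python
import io
import tarfile

def find_in_layers(image_tar_path, filename):
    """Search every layer.tar inside a 'docker save' archive for a file.

    Returns (layer_member, inner_path) for the first match, or None.
    """
    with tarfile.open(image_tar_path) as image:
        for member in image.getmembers():
            if not member.name.endswith("layer.tar"):
                continue
            # Read the layer tarball into memory and search its file listing
            layer_bytes = image.extractfile(member).read()
            with tarfile.open(fileobj=io.BytesIO(layer_bytes)) as layer:
                for inner in layer.getnames():
                    if inner.endswith(filename):
                        return (member.name, inner)
    return None

# e.g. find_in_layers("ubuntu.tar", "wingetmem.pex")
```

This only tells you which layer to untar, but that’s the tedious part of the manual search anyway.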


Writing Cucumber tests with Protractor

So, one of the things about ClearPoint that’s different to most other places I’ve worked is that there’s a big focus on testing, in particular on “end to end” or “black box” automated testing, meaning tests at the “highest” level feasible (e.g. browser, mobile UI, desktop UI).

In fact, I’d argue that there’s such a focus on creating and maintaining automated tests that whether a project will achieve its goals isn’t so much down to the strength of the programmers on the team, but rather the test coders.

However, good automated testers are relatively hard to come by and there are occasions where we’ve been caught out not having anyone with that particular skillset on a project. My own philosophy regarding cross-functional teams and work life in general is to assume that everyone is “smart enough” to do “my job” if they only apply themselves to learning and practicing the same skills. The other edge of that sword is that I believe that I’m smart enough to do theirs, given enough learning and practice.

In that spirit (and due to the current lack of testing talent) I took it upon myself to learn Protractor and Cucumber in order to be useful in maintaining and writing our automated tests.

I mostly followed this guide, which is good for getting a simple “hello world” demo of Protractor/Cucumber going, but is somewhat out of date (though this is probably the fault of the Node/Javascript community for moving so fast rather than any other reason).

What were the issues I hit? First, some NodeJS v10 issues, which I had to downgrade to NodeJS v8 to fix.

Also, the guide was written for Cucumber 1.3 while I was using 4.x, which has a different syntax (I was getting scenarios/steps reported as “undefined”). I figured this out after reading through the comments on the article and finding a more up-to-date version of the code here.

Another issue that was a bit annoying was using NPM and trying to work out where to use “local” packages vs “global” ones.

Overall though, I really like the BDD/Cucumber approach to writing tests: being able to write them in “business” language and generate readable reports is amazing for ensuring everyone uses the same language and knows the state of the system at any point in time.


Testing NodeJS K8s graceful shutdown

There’s an excellent article talking about how to do graceful shutdown in Kubernetes here that we used to explain to people developing services how to implement graceful shutdown, the differences between “readiness” and “liveness” probes and about signal handling and IPC.

While it’s an excellent article, to be honest, I never got around to trying it out until today.

The code is provided and I tried it out with Minikube and ab.

The results were as expected, though I did hit one issue where something would sporadically reset the TCP connection:

Benchmarking (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
apr_socket_recv: Connection reset by peer (54)
Total of 20055 requests completed

I haven’t verified it, but I think this might be something to do with the Minikube networking implementation. The issue only came up when I was running the benchmark tests during the deploy.

Finished my first MOOC!

NOTE: MOOC = Massive Open Online Course

I signed up to Coursera and completed my first online course, Learning How to Learn and got my certification:

[screenshot: course completion certificate, 2018-04-05]

The course is very good in the way it’s presented. The videos are easy to watch, though I had to switch to later sessions because I kept on falling behind due to not being able to find time to watch them.

The material is interesting and relevant, and the way it’s presented is approachable. The tests are quite easy, but it’s clear that they’re placed after each video as a way of reinforcing what you’ve learned (this also reinforces one of the ideas in the course: that testing is a great way to retain information/ideas).

For me, the parts of the course that I liked the most were learning about procrastination and “zombies” and techniques to counteract them. I really liked the bit about “focusing on the process, not the product” as a way to get started on things.

GoCD Kubernetes support released

So, I saw this announcement on my twitter:

I’ve just had a chance to go through the tutorial and man! does it tick a lot of boxes!

After spending so much time messing around with Jenkins, trying to get it into a Helm chart and then trying to get elastic K8s agents that can build Docker images, having GoCD just provide this “out of the box” is amazing.

Auto creating JIRA issues

One idea I’ve always thought would be a good way to keep on top of technical debt, and to make fixing bugs in Production easy, is to automatically create a JIRA ticket every time an application throws an error.

(Background: Previously we had hooked up ElastAlert to our Slack channel to send a message whenever an application threw an error, however, this was considered “too noisy” by the client)

Now, people immediately balk at this idea, generally for two main reasons (though there are probably more):

  • The risk that a single bug could trigger many times (so JIRA ends up with lots of duplicates)
  • The fact that the application throws errors that aren’t bugs (are not actionable)

The first of these is very much a valid concern and the code should have some way of checking whether the error/log message is already in the system and if so, either do nothing or add a comment and/or attach extra information (stack traces, logs, memory dumps etc…) to the existing issue.

The second one is actually a feature of this system, as it very quickly leads to better logging/error messages. My reasoning is that when the errors going to Slack/email/JIRA get directed to the developers, you very quickly end up in a situation where the exception handling improves (in my opinion). The error log messages which “aren’t really errors” get downgraded to WARN, INFO or DEBUG, and the messages that are actually logged get more specific (allowing you to more quickly attribute them to a root cause), e.g. “Failed to start as can’t resolve database DNS” as opposed to just “Error on startup”.
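As a toy illustration of the kind of message upgrade I mean, here’s a hypothetical startup check in Python (the function and message are mine, not from any real codebase):

```python
import logging
import socket

log = logging.getLogger("app")

def check_db_dns(host):
    """Startup check: log a *specific*, actionable error if the DB host won't resolve."""
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        # "Failed to start as can't resolve database DNS" beats a bare
        # "Error on startup": a ticket created from this message points
        # straight at the root cause.
        log.error("Failed to start as can't resolve database DNS for %r", host)
        return False
```

A vague `log.error("Error on startup")` in the same place would generate a JIRA ticket nobody can act on.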

So, in that spirit, I started having a play with the JIRA REST API. I used the documentation at these links:

Based on this, automatically creating JIRA issues with your credentials becomes as easy as running the following curl command:

curl -D- -u srdan:YOURPASSWORDHERE -X POST --data "@sample_issue.txt" -H "Content-Type: application/json"

With a sample issue file contents looking like:

{
  "fields": {
    "project": {
      "key": "PROJ"
    },
    "summary": "REST ye merry gentlemen.",
    "description": "Creating of an issue using project keys and issue type names using the REST API",
    "customfield_12600": {"value": "4-Low"},
    "issuetype": {
      "name": "Bug"
    }
  }
}

Now, you might get an error back complaining about missing/invalid fields and so forth. Fortunately, there’s a “createmeta” API to tell you which fields are needed by which issue types. You can access this at:

curl -D- -u srdan:YOURPASSWORDHERE

This gets us started and lets us hook up the curl command to something like ElastAlert to create a JIRA for any errors. We still have work to do to use the “Search” API to de-duplicate issues at creation time, which I might cover in a separate blog post.
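As a starting point for that de-duplication, the helpers below build the create payload (mirroring the sample file above) and a JQL query for the Search API to check for an existing open issue. The JQL clause and field names are illustrative, not tested against a real JIRA instance:

```python
import json

def build_issue_payload(project_key, summary, description, issue_type="Bug"):
    """Assemble the JSON body for JIRA's 'create issue' REST endpoint."""
    return json.dumps({
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    })

def dedup_jql(project_key, summary):
    """JQL for the Search API: is there already an open issue with this summary?"""
    # Escape embedded quotes so the summary can be used inside a JQL string
    escaped = summary.replace('"', '\\"')
    return 'project = %s AND summary ~ "%s" AND statusCategory != Done' % (project_key, escaped)
```

The idea is to run the search first and, if it returns a hit, add a comment (or do nothing) instead of creating a duplicate.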



GPG with Thunderbird and GMail (in browser with Mailvelope)

So, PGP gets a lot of criticism for being hard to use and easy to mess up. In fact, it’s recently become popular to advocate centralized encryption services (under the control of one company, e.g. WhatsApp, Signal, Telegram) that don’t allow users to use their own keys.

Personally, I think that this attitude is short sighted and defeatist. Instead of working to make strong personal encryption user friendly, people have given up, and started attacking PGP as being “too hard”.

Personally, I think that deferring to third-party companies is a crutch, and the best way to fix the “PGP is too hard” issue is to make PGP easier rather than throw the baby out with the bathwater.

In that spirit, I’m going to be talking about how to set up GPG keys and send encrypted mail between two mail accounts. One of the accounts is a GMail account and will be using the Mailvelope browser plugin to encrypt mail “in the browser”. The other account is an IMAP account and will be using the Thunderbird email client with the Enigmail plugin.

Generating GPG keys for the two email addresses (and a revocation key)

So, for the purposes of this article, I’m going to be using the OS X GPG Keychain tool to generate an encryption key for each of two email addresses: one a GMail address and one using a custom domain.

We’ll be using the GMail email address from the browser and the custom-domain address from the Thunderbird email client (connecting to the server using IMAP).

To generate the keys, open up GPG Keychain and click on the “New” button and enter some details as below:

After hitting the “Generate Key” button, there’s a screen about generating entropy for the keys (which can take some time), and then finally you should see the “success” message for each key generation and the option to publish the keys publicly (NOTE: for the purposes of this tutorial, I’m only going to be publishing the custom-domain key, as the GMail address probably belongs to someone else).

Next we can generate some revocation certificates (in case our keys get stolen/lost or when we want to rotate them):

Save the revocation certificate somewhere you won’t lose it (e.g. USB drive, write it to a “single write” CD etc…)

Installing Thunderbird and Enigmail and adding the keys

Thunderbird Mail client can be downloaded and installed from here and the Enigmail plugin (for use with Thunderbird) can be downloaded and installed by going to the Enigmail website.

The key can be added by going to the “Enigmail” menu option, selecting “Key Management” and then the “Import from file” option in the menu:

For our purposes, we’re going to be adding the custom-domain private key to Enigmail, along with the public key for the GMail account. This is very important: we’re simulating two parties talking across the internet without knowing each other’s private keys, and therefore we need to make sure that Enigmail only has the private key for one of the accounts and the public key for the other.

Installing Mailvelope and adding the keys

Mailvelope can be downloaded and installed from its website here. It’s a browser plugin for Firefox and Chrome that allows you to encrypt data “in the browser”.

NOTE: I haven’t researched how secure this is from other plugins (e.g. Google’s Widevine plugin), but you’d hope that the browser sandbox model is strong enough to at least protect plugin data from other plugins.

Next, click on the “Mailvelope” icon in the browser bar (Firefox shown) and select the “Keyring” option:

Then click on the “Import keys” section on the left and import the key files we’ve generated previously:

Again, make sure to only import the private key for the GMail account and the public key for the custom-domain account.

Sending an encrypted email between the two accounts

Now that the keys are setup, we’re going to send an email from Thunderbird/Enigmail (encrypted with our “…” key) to GMail/Mailvelope and then reply with an encrypted response (encrypted with our GMail key).

Click “Write” in Thunderbird, type your Subject and message and make sure that the Enigmail encryption icons show that it’s going to encrypt the message:

When you hit the “Send” button, you’ll be prompted as to whether you’d like to encrypt the Subject line:

For this, click the “Leave subject unprotected” button as Mailvelope currently doesn’t support this functionality. Send the message and make sure it is sent without errors.

NOTE: Currently there is no simple, foolproof way to send group email (CC, BCC, multiple recipients etc…)

Receiving, decrypting and replying to the email message

So, now we can log into our GMail account through the web interface as per normal and (assuming the mail has come through) we should see it in our Inbox.

Clicking on the message, you’ll be presented with an “overlay” over the encrypted message, with an envelope icon and your mouse pointer will be a key:

Click on the envelope with a padlock on it, enter the private key password (if prompted) and Voila! You should now see the message sent from Thunderbird.

Hit the “Reply” button and then select the Mailvelope icon:

This will pop up a secure Mailvelope browser window allowing you to write a response back.

Once you’ve written your response, hit “Send” and wait for it to arrive in the other mailbox. Once there, Enigmail should decrypt it and there you have it, secure communications.

What’s still hard about PGP?

  • No support for multiple recipients/BCC/CC.
  • Decryption in a mail thread doesn’t decrypt “in depth”.
  • Subject not encrypted (there’s a fix for this, but it’s not universal)
  • Metadata is still in the open (who emailed who and when)
  • Not clear when sending whether you’re doing “encrypt” or “encrypt and sign” or “double encrypt and sign”.

What are the alternatives?

S/MIME, plus third-party services which “do” do end-to-end encryption. That’s about it really…


This took me ages to write, mostly because the steps I thought were relatively straightforward turned out not to be straightforward at all (experience bias). However, I still hold that ultimately the best way to solve the problem of “software X is hard” is to write about it, improve it where you can and help others. Ultimately, time will likely show that you can’t really trust any company/organization, but that you can trust each other.