Auto creating JIRA issues

One idea I've long thought would be a good way to keep on top of technical debt (and make fixing bugs in Production easier) is to automatically create a JIRA ticket every time an application throws an error.

(Background: we had previously hooked up ElastAlert to our Slack channel to send a message whenever an application threw an error; however, this was considered “too noisy” by the client.)

Now, people immediately balk at this idea, generally for two main reasons (though there are probably more):

  • The risk that a single bug could trigger many times (so JIRA ends up with lots of duplicates)
  • The fact that the application throws errors that aren’t bugs (are not actionable)

The first of these is very much a valid concern: the code should have some way of checking whether the error/log message is already in the system and, if so, either do nothing or add a comment and/or attach extra information (stack traces, logs, memory dumps etc…) to the existing issue.
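As a sketch of that de-duplication check (the JIRA “search” REST endpoint is documented by Atlassian; the host, project name and error signature below are placeholder assumptions):

```shell
# Build the JQL used to ask JIRA "is this error already ticketed?"
build_jql() {
  printf 'project = %s AND summary ~ "%s" AND resolution = Unresolved' "$1" "$2"
}

# The create step then becomes conditional (sketch; host/credentials are placeholders):
#   total=$(curl -s -u srdan:YOURPASSWORDHERE -G "https://YOURJIRAHOST/rest/api/2/search" \
#       --data-urlencode "jql=$(build_jql PROJ 'cannot resolve database DNS')" \
#       | grep -o '"total":[0-9]*' | cut -d: -f2)
#   if [ "$total" -eq 0 ]; then
#     curl ... -X POST --data "@sample_issue.txt" ...     # create a new issue
#   else
#     curl ... -X POST .../rest/api/2/issue/KEY/comment   # or attach extra detail
#   fi
```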

The second one is actually a feature of this system, as it very quickly leads to better logging/error messages. My reasoning is that once the errors go to Slack/email/JIRA and get directed to the developers, the exception handling improves very quickly (in my opinion). The log messages which “aren’t really errors” get downgraded to WARN, INFO or DEBUG, and the messages that remain get more specific, allowing you to attribute them to a root cause faster, e.g. “Failed to start as can’t resolve database DNS” as opposed to just “Error on startup”.

So, in that spirit, I started having a play with the JIRA REST API, working from Atlassian’s REST API documentation.

Based on this, automatically creating JIRA issues with your credentials becomes as easy as running the following curl command (substituting your own JIRA host):

curl -D- -u srdan:YOURPASSWORDHERE -X POST --data "@sample_issue.txt" -H "Content-Type: application/json" https://YOURJIRAHOST/rest/api/2/issue/

With a sample issue file contents looking like:

{
  "fields": {
    "project": {
      "key": "PROJ"
    },
    "summary": "REST ye merry gentlemen.",
    "description": "Creating of an issue using project keys and issue type names using the REST API",
    "customfield_12600": {"value": "4-Low"},
    "issuetype": {
      "name": "Bug"
    }
  }
}

Now, you might get an error back complaining about missing/invalid fields and so forth. Fortunately there’s a “createmeta” API to tell you what fields are needed by which issue types. You can access this at:

curl -D- -u srdan:YOURPASSWORDHERE https://YOURJIRAHOST/rest/api/2/issue/createmeta

This gets us started and lets us hook up the curl command to something like ElastAlert to create a JIRA for any errors. We still have work to do to use the “Search” API to de-duplicate issues at creation time, which I might cover in a separate blog post.
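As a sketch of that hookup, ElastAlert’s “command” alerter can shell out to a script wrapping the curl call (the rule fields follow the ElastAlert docs, but the index, filter and script path here are assumptions for illustration):

```yaml
# Fires a JIRA-creating script for every matching error log event.
name: create-jira-on-error
type: any
index: logstash-*            # assumption: logs land in a logstash-* index
filter:
  - term:
      level: "ERROR"
alert:
  - command
# %(message)s is substituted with the matching document's "message" field
command: ["/usr/local/bin/create_jira_issue.sh", "%(message)s"]
```

ElastAlert also ships a native “jira” alerter which creates issues directly, and may be a better fit than shelling out to curl.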



GPG with Thunderbird and GMail (in browser with Mailvelope)

So, PGP gets a lot of criticism for being hard to use and easy to mess up. In fact, it’s recently become popular to advocate centralized encryption services (under the control of one company, e.g. WhatsApp, Signal, Telegram) that don’t allow the user to use their own keys.

Personally, I think that this attitude is short-sighted and defeatist. Instead of working to make strong personal encryption user friendly, people have given up and started attacking PGP as being “too hard”.

Deferring to third-party companies is a crutch; the best way to fix the “PGP is too hard” issue is to make PGP easier, rather than throw the baby out with the bathwater.

In that spirit, I’m going to be talking about how to set up GPG keys and send encrypted mail between two mail accounts. One of the accounts is a GMail account and will be using the Mailvelope browser plugin to encrypt mail “in the browser”. The other account is an IMAP account and will be using the Thunderbird email client with the Enigmail plugin.

Generating GPG keys for the two email addresses (and a revocation key)

So, for the purposes of this article, I’m going to be using the OS X GPG Keychain tool to generate an encryption key for two email addresses: one a GMail address and one on a custom domain.

We’ll be using the GMail address from the browser and the custom-domain address from the Thunderbird email client (connecting to the server using IMAP).

To generate the keys, open up GPG Keychain and click on the “New” button and enter some details as below:

After hitting the “Generate Key” button, there’s a dialog about generating entropy for the keys (which can take some time) and then finally you should see the “success” message for each key generation and the option to publish the keys publicly (NOTE: for the purposes of this tutorial, I’m only going to be publishing the custom-domain key as the GMail address probably belongs to someone else).

Next we can generate some revocation certificates (in case our keys get stolen/lost or when we want to rotate them):

Save the revocation certificate somewhere you won’t lose it (e.g. USB drive, write it to a “single write” CD etc…)
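For reference, the same generation and revocation steps can be done with plain GnuPG on the command line (KEYID here is a placeholder for the ID shown after generation):

```shell
gpg --full-generate-key                      # interactive key generation (GnuPG 2.1+)
gpg --list-keys                              # find the key ID / fingerprint
gpg --output revoke.asc --gen-revoke KEYID   # write a revocation certificate
```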

Installing Thunderbird and Enigmail and adding the keys

Thunderbird Mail client can be downloaded and installed from here and the Enigmail plugin (for use with Thunderbird) can be downloaded and installed by going to the Enigmail website.

The key can be added by going to the “Enigmail” menu option, selecting “Key Management” and then the “Import from file” option in the menu:

For our purposes, we’re going to be adding the custom-domain private key to Enigmail and the public key for the GMail account. This is very important: we’re simulating two parties talking across the internet without knowing each other’s private keys, and therefore we need to make sure that Enigmail only has the private key for one of the accounts and the public key for the other.

Installing Mailvelope and adding the keys

Mailvelope can be downloaded and installed from its website here. It’s a browser plugin for Firefox and Chrome that allows you to encrypt data “in the browser”.

NOTE: I haven’t researched how secure this is from things like other plugins (e.g. Google’s Widevine plugin), but you’d hope that the browser sandbox model is strong enough to at least protect plugin data from other plugins.

Next, click on the “Mailvelope” icon in the browser bar (Firefox shown) and select the “Keyring” option:

Then click on the “Import keys” section on the left and import the key files we’ve generated previously:

Again, make sure to only import the private key for the GMail account and the public key for the custom-domain account.

Sending an encrypted email between the two accounts

Now that the keys are set up, we’re going to send an email from Thunderbird/Enigmail (encrypted with our custom-domain key) to GMail/Mailvelope and then reply with an encrypted response (encrypted with our GMail key).

Click “Write” in Thunderbird, type your Subject and message and make sure that the Enigmail encryption icons show that it’s going to encrypt the message:

When you hit the “Send” button, you’ll be prompted as to whether you’d like to encrypt the Subject line:

For this, click the “Leave subject unprotected” button as Mailvelope currently doesn’t support this functionality. Send the message and make sure it is sent without errors.

NOTE: Currently there is no simple, foolproof way to send group email (CC, BCC, multiple recipients etc…)

Receiving, decrypting and replying to the email message

So, now we can log into our GMail account through the web interface as normal and (assuming the mail has come through) we should see it in our Inbox.

Clicking on the message, you’ll be presented with an “overlay” over the encrypted message showing an envelope icon, and your mouse pointer will turn into a key:

Click on the envelope with a padlock on it, enter the private key password (if prompted) and Voila! You should now see the message sent from Thunderbird.

Hit the “Reply” button and then select the Mailvelope icon:

This will pop up a secure Mailvelope browser window allowing you to write a response back.

Once you’ve written your response, hit “Send” and wait for it to arrive in the other mailbox. Once there, Enigmail should decrypt it and there you have it, secure communications.
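Under the hood, what Enigmail and Mailvelope are doing for us is roughly equivalent to these GnuPG commands (the file names and recipient address are placeholders):

```shell
gpg --armor --encrypt --recipient them@example.com --output message.asc message.txt   # sender
gpg --decrypt message.asc    # recipient side; prompts for the private key passphrase
```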

What’s still hard about PGP?

  • No support for multiple recipients/BCC/CC.
  • Decryption in a mail thread doesn’t decrypt “in depth”.
  • Subject not encrypted (a fix exists, but it’s not universally supported)
  • Metadata is still in the open (who emailed who and when)
  • Not clear when sending whether you’re doing “encrypt” or “encrypt and sign” or “double encrypt and sign”.

What are the alternatives?

S/MIME. That’s about it really…

Third-party services which do offer end-to-end encryption.


This took me ages to write, mostly due to the steps which I thought were relatively straightforward not being straightforward at all (experience bias). However, I still hold that ultimately the best way to solve the problem of “software X is hard” is to write about it, improve it where you can and help others. Time will likely show that you can’t really trust any company/organization, but that you can trust each other.



SSH audit and secure settings

So, there’s a tool called ssh-audit which is like the SSL Labs of SSH. The first run against some servers showed a whole bunch of “fails” due to issues with use of weak Key Exchange algorithms, Host Key Algorithms and MACs (Message Authentication Code algorithms).

After a bit of fiddling around, you can get a much more secure setup using the config below:


HostKeyAlgorithms ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-ed25519
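The weak Key Exchange and MAC warnings are addressed by similar lines; the sets below are common hardening recommendations, but exact algorithm availability depends on your OpenSSH version, so treat them as a starting point:

```
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
```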


There are still a couple of warnings around the use of SHA1 and a potentially weak/bad modulus size with SHA256, but it’s a lot better than the default configuration.

After adding the lines to /etc/ssh/sshd_config, you can test the config with:

sshd -t

and then restart sshd and voila! You should have a much more secure SSH server.

I put the above into a bunch of tasks in an Ansible playbook:

- name: Ensure SSH settings are in config file
  tags: ['ssh-audit','ssh']
  become: true
  blockinfile:
    path: /etc/ssh/sshd_config
    block: |
      HostKeyAlgorithms ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-ed25519

- name: Verify settings are not going to break SSHd
  tags: ['ssh-audit','ssh']
  become: true
  command: sshd -t

- name: Restart SSHd
  tags: ['ssh-audit','ssh']
  become: true
  systemd: name=sshd state=restarted

- name: Run the ssh-audit against the server
  tags: ['ssh-audit','ssh']
  connection: local
  shell: "./ssh-audit.py -n -b -l warn {{ ansible_ssh_host }}"
  register: sshauditoutput

- name: Output the ssh-audit results
  tags: ['ssh-audit','ssh']
  debug: msg="{{ sshauditoutput.stdout_lines }}"



Vulnerability scanning Docker images with CoreOS Clair and Klar

What and motivation

This post is going to talk about how to get the CoreOS Clair container security tool running from the command line, with a view to integrating it into a CI/CD workflow using the Klar CLI tool.

Most people are using Docker as a “better VM”: they take Ubuntu LTS releases and deploy on top of them, as opposed to using a stripped-down/minimal image like Alpine Linux. The advantage of minimal images is that they remove a whole bunch of software that’s not needed in a container, reducing the potential attack surface, and they also make the images smaller and easier to work with.

One of the thorny issues that people tend not to think about in this scenario is how to upgrade/patch the “OS” (libraries etc…) running in Docker, as it’s still an issue, and how to check whether a Docker image is missing patches.

To this end, CoreOS released a tool called Clair to allow someone to “scan” Docker images in order to ensure that all patches/upgrades have been applied.


In summary, the steps are:

  • Install CoreOS Clair onto Minikube using the Helm chart, ignoring the PostgreSQL part of the chart (its PersistentVolume doesn’t work with Minikube) and ensuring Clair can talk to the DB.
  • Expose Clair to the cluster.
  • Download and install Klar, run it pointing at the Clair service and pass it an image to scan (it may need Docker locally).
  • Ignore the v1 API error message in the Clair logs.

Install CoreOS Clair onto Minikube

Installing Clair onto Minikube is fairly straightforward thanks to Clair providing a Helm chart in GitHub, which you can find here. Personally, I hit an issue with the “PostgreSQL” container within this Helm chart, which caused it to fail to start, namely something to do with PersistentVolumes:

Warning Failed 12h (x675 over 2d) kubelet, minikube Error: lstat /tmp/hostpath-provisioner/pvc-16c78bd3-12c1-11e8-8c48-080027fa3e9c: no such file or directory
Normal SuccessfulMountVolume 9m kubelet, minikube MountVolume.SetUp succeeded for volume "pvc-16c78bd3-12c1-11e8-8c48-080027fa3e9c"
Normal SuccessfulMountVolume 9m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-5v2tq"
Warning Failed 7m (x12 over 9m) kubelet, minikube Error: lstat /tmp/hostpath-provisioner/pvc-16c78bd3-12c1-11e8-8c48-080027fa3e9c: no such file or directory

Therefore, I just ran the “vanilla” PostgreSQL image and exposed it as a service with:

kubectl run postgres --image postgres
kubectl expose deployment postgres --port 5432 --target-port 5432 --type NodePort

and pointed Clair at the PostgreSQL instance using this bit of configuration in the custom values Helm YAML:

postgresURI: "postgres://postgres@postgres:5432/postgres?sslmode=disable"

(the full file can be found here)

Next up, we can install Clair itself from the Helm chart with our custom values:

helm dependency update clair
helm install clair -f mycustomvalues.yaml

NOTE: This assumes you’ve got Helm and Tiller installed and setup (have run helm init etc…)

Assuming it’s all gone well, you should see the following output when running “kubectl get pods”:

and the Clair pod logs should look something like the following:

Now we need to grab the port that the Clair service is running on in our Minikube cluster with the “kubectl get services” command:

The value we’re after is the port mapped onto 6060 (in the above screenshot 32527).

Assuming we’ve installed Klar (instructions here), we can now run Klar against a test image with the following command (update the port and Minikube IP for your config):
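Per the Klar README, Klar is configured via environment variables, so a run looks roughly like this (the 32527 NodePort is from the example above; the image and threshold values are illustrative):

```shell
CLAIR_ADDR=http://$(minikube ip):32527 CLAIR_OUTPUT=High CLAIR_THRESHOLD=10 klar ubuntu:16.04
```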


The output will currently likely be “Found 0 vulnerabilities”, which is a bit of an issue (it should be > 0). The reason for this false report is that the Clair “updaters” take a while to download the full vulnerability lists from the Ubuntu/Debian/Alpine etc… databases.

Integrating into Continuous Integration workflow

So, at this point, we can automate the above to fail our build if the newly built/pushed image has an unacceptable level of vulnerabilities. Klar’s exit code is determined by whether the vulnerabilities exceed the configured thresholds (0 = all good, 1 = fail), so it’s trivial to integrate into the build workflow.

We do need to sit down and work out where to set the thresholds (i.e. what constitutes an unacceptable level of risk).

Conclusion and Further work

It would have been great if Klar supported scanning images locally (before they were pushed to a registry), as it would further reduce the risk that an “insecure” image might be deployed.

Also, we didn’t cover any alerting or notifications when Clair does find security issues in an image (who should be alerted? what should they do?).

Finally, there was no discussion of scanning already deployed images or any kind of periodic “background” scanning of a deployed environment.

For other options for Clair integration instead of Klar, have a look at the Clair integrations page.


Using GitHub Pages with Hugo and TravisCI

What and Motivation

I recently setup a GitHub Pages page (see here) as a personal info/who am I type of page as I was curious how GH Pages works and also because I wanted to get familiar with TravisCI.


Source code is here. GH Pages only allows you to host off of the “master” branch and TravisCI deletes history on deploy, meaning you have to have a separate “long running” branch, which is annoying. Hugo is very easy to set up and use (probably due to being written in Go).


You need a GitHub account and a TravisCI account (both are free; TravisCI won’t charge unless you go over a certain number of builds or something).

You also need to be able to run Hugo locally.


Create a repository in GitHub with the name [YOURUSERNAME].github.io (e.g. srdan.github.io). By using this naming convention, the repository is automatically flagged as a GitHub Pages repo.

Then go to the “Settings” for the repo and ensure that Pages is enabled:

Then, if you want, you can commit an “index.html” to master and see it show up when you hit the URL (just to verify it’s working).

Create a simple Hugo page

So, if you go through the Hugo Quick Start you should get a fairly good idea how to use it to generate web pages/blog posts etc… If you’re just after a simple website, with similar content to mine, have a look at the index template in my source. If you’re after something more complicated you can have a look at the Hugo documentation page.
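For reference, the quick start boils down to roughly these commands (the site and post names are examples):

```shell
hugo new site mysite && cd mysite    # scaffold a new site
# add a theme under themes/ and reference it in config.toml, per the quick start
hugo new posts/my-first-post.md      # create a content page
hugo server -D                       # local preview (including drafts)
hugo                                 # generate the static site into ./public
```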

Create a “dev” branch (can name it whatever you like)

So, in order to get around the limitation that you have to publish to the “master” branch and that when you publish it’ll nuke the “history” of that branch, we need to create a separate “long lived” branch. I’ve called mine “dev” but really, you can call it whatever you like.

This is the branch that we’ll be deploying from and that TravisCI will be “listening on” for any changes to build and deploy.

I’d imagine that the workflow would be that when adding a feature/bugfix/content you’d branch off of the “dev” branch, do the work until it’s “ready” and then create a PR to merge the changes back into the “dev” branch.

Generate a GitHub OAuth token

As per the TravisCI Pages Deploy documentation, we have to generate a “Personal Access Token” in GitHub (it’s an OAuth token associated with your account) before TravisCI will be authorized to “commit” (also “deploy” in this context) to the repository. The instructions for generating a Personal Access Token can be found here.

Create TravisCI file

The description of how to configure the GitHub Pages deploy from TravisCI is taken from the documentation, which can be referred to for more details. The contents of the .travis.yml file (which is used to create and configure the build in TravisCI) are as follows:

install:
  - wget -O hugo.tar.gz
  - tar xvfz hugo.tar.gz
  - rm -rf public

# Build the website
script:
  - ./hugo version
  - ./hugo

# Deploy to GitHub Pages
deploy:
  provider: pages
  skip_cleanup: true
  local_dir: public
  github_token: $SERGE_GH_TOKEN
  target_branch: master
  on:
    branch: dev

The first “install” section deals with downloading the Hugo binary and extracting it.

The “script” section prints out the version of Hugo that we’re using and then executes it to generate the HTML/CSS/JS/XML.

Finally, the “deploy” section uses the “pages” deploy provider to deploy our “public” directory to the “master” branch (and the “on: branch” bit just tells TravisCI to only deploy off of the “dev” branch).

Commit, push and watch it build and deploy

Finally, after you’ve modified your content and tested it locally to ensure that it looks the way you want it to, commit your source (exclude the “public” directory) and push it to the “dev” branch.

Shortly after, you should see the build starting in the TravisCI interface:

Hopefully the build logs show that the build was successful and it “goes green”:

At this point, we should have a website that’s publicly viewable, in version control as well as having a build/deploy pipeline in place to automate the boring bits.

Further stuff to add in future

So, while this works, there are some things that would be nice to add, off the top of my head:

  • A “health check” to verify the site is “up” and is showing the correct version of our website
  • Verify there are no broken links on the website
  • A check to ensure that our website conforms to accessibility standards
  • A check to ensure that our website conforms to search engine optimization standards

Basically a “test” suite to ensure that not only did we build and deploy, but also that we tested that the website still met our requirements.