Implementing Conway’s Game of Life

There’s a phenomenon where, when a famous writer, musician or artist dies, their works experience a rise in popularity upon the news of their passing. I don’t know if I’m proud of it (or whether I should be proud of it), but I do this too.

Recently the mathematician John Conway passed away, succumbing to COVID-19, taken before his time: https://www.wired.com/story/the-legacy-of-math-luminary-john-conway-lost-to-covid-19/

At the same time, I had taken out the Python Playground book from the public library, with the ambitious intention of reading it cover to cover. This didn’t happen, and I had to return the book to the library, hopefully for some other soul with more time and motivation to read it. In an act of stubbornness, I then went and bought the ebook, with a secret promise to myself that I would read it some day.

One of the chapters that really caught my attention in the book was the one about implementing the “Game of Life”, a famous computer simulation that I had previously only briefly heard about and never tried to implement. The simulation was created by John Conway.

On hearing of his death, I put aside any thoughts of studiously reading every page and trying every example and instead focused on just completing the chapter implementing the game of life. By this time in my life, I had heard so much about this algorithm that it had reached mythical status in my mind. I had expected and mentally prepared myself for several days of trying to implement and understand the code and the algorithm.

Unsurprisingly to people who have implemented the “Game of Life” themselves, I read through and finished the code example in about an hour.

Conway’s Game of Life

Then it hit me why this was so special and interesting. This whole “Game of Life”, which had become ginormous in my mind, was actually just a relatively small set of rules applied a large number of times over an “unlimited” space. The beauty and the lesson weren’t in the complexity of the system, but in its simplicity and in how that simplicity could lead to complex behaviour.
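To give a sense of just how small that set of rules is, here’s a minimal sketch of a single generation update in Python. This isn’t the book’s code (the book’s version uses numpy and matplotlib, if I remember right), just the rules as I understand them: a live cell survives with two or three live neighbours, and a dead cell comes to life with exactly three.

from collections import Counter

def step(live_cells):
    """Compute the next generation from a set of (x, y) live-cell coordinates."""
    # Count how many live neighbours each candidate cell has
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours, survival on 2 or 3
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider" pattern, stepped forward a few generations
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
    print(sorted(cells))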

I feel bad that I had let it build up so much in my mind. I feel bad that it took John Conway dying for me to actually just try implementing the game. I am really glad I did, though. Life is full of things we know we “should” do, or things we’ve always wanted to know more about or look into. Unfortunately, we are not invincible and we don’t know when our time will be up. It’s important to find time for those seemingly small things before it’s too late.

What’s your “Game of Life”?

Super Simple Swagger example – part 1

Introduction

Everyone has an opinion on the best way to do microservice architecture, including me. The following is a series of blog posts where I lay out an opinionated “template architecture” for how to do microservices in a way that is scalable and sustainable.

The principles I find important are:

  • API first
  • Vendor agnostic
  • Cheap to operate
  • Standardised communication/service mesh layer
  • Language and framework agnostic
  • Black box testable
  • Unit of deployment agnostic

Outline

The first post will talk about how to make a super simple Swagger API definition and then use it to generate a Python Flask server.

The second post will talk about generating a simple JavaScript client for our API and how to host/run it.

The third post will talk about operational concerns, including costs, portability, traceability, deployment, scalability, backups, service discovery and registration etc…

The fourth post will talk about testing, both at the unit test level and at the “black box”/integration/end-to-end testing level.

The fifth post will talk about shared libraries and advanced topics

Defining a Trivial Swagger (OpenAPI) API

So, in order to get started, we can define a dummy API which lets us retrieve information about books. The below is a simple Swagger (now also the OpenAPI standard) API to do with books:
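The definition below is a minimal sketch along the lines of the book.yml gist used later in this post; the exact fields and property names here are illustrative rather than the original file.

swagger: "2.0"
info:
  title: Book API
  version: "1.0.0"
basePath: /v2
paths:
  /book:
    get:
      summary: List books
      operationId: book_get
      produces:
        - application/json
      responses:
        "200":
          description: A list of books
          schema:
            type: array
            items:
              $ref: "#/definitions/Book"
    post:
      summary: Add a book
      operationId: book_post
      consumes:
        - application/json
      parameters:
        - in: body
          name: body
          required: true
          schema:
            $ref: "#/definitions/Book"
      responses:
        "201":
          description: Book created
definitions:
  Book:
    type: object
    properties:
      id:
        type: integer
      title:
        type: string
      author:
        type: string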

The example explicitly leaves out any authentication and only defines a single endpoint with GET and POST verbs and a single object type.

We can verify that our API definition is valid by opening the Swagger Online Editor and pasting it in the editor pane.

So, now we have an API, which defines a “contract” between the client and the server, and in an ideal world we would put this into version control (either in its own repository or in a monorepo, depending on whether our CI/CD server can trigger off of paths).

Generating the Server Stub

Now that we have an API definition, we can make use of the swagger-codegen tool to use it to generate a Flask “server stub” which we can use as a template for our application. We can install swagger-codegen on OS X with the Homebrew package manager and the following command:

brew install swagger-codegen

After we’ve installed the codegen tool, we can run commands to generate the Flask server:

mkdir book-server
cd book-server
swagger-codegen generate \
-i https://gist.githubusercontent.com/srkiNZ84/3a8f7deb11cf368e25607cf0a66bc140/raw/cac66ce550489538f415734ded075fea192ae94f/book.yml \
-l python-flask

The arguments passed to the command tell it to generate code, point it at our YAML file containing the API definition and, finally, tell it what kind of server code to generate. (For a list of all of the possible code/framework outputs, have a look at the swagger-codegen documentation.)

Assuming that the command runs successfully, swagger-codegen will have written the generated project into the current directory.

We still need to install the Python Flask requirements and start up the server.

Looking at the contents of the directory, we can see that the generator has produced a Flask application as well as a Dockerfile and other code and configuration:

$ ll
total 88
drwxr-xr-x 15 srdan wheel 480B 10 Aug 22:03 .
drwxrwxrwt 11 root wheel 352B 10 Aug 22:02 ..
-rw-r--r-- 1 srdan wheel 885B 10 Aug 22:03 .dockerignore
-rw-r--r-- 1 srdan wheel 786B 10 Aug 22:03 .gitignore
drwxr-xr-x 3 srdan wheel 96B 10 Aug 22:03 .swagger-codegen
-rw-r--r-- 1 srdan wheel 1.0K 10 Aug 22:03 .swagger-codegen-ignore
-rw-r--r-- 1 srdan wheel 349B 10 Aug 22:03 .travis.yml
-rw-r--r-- 1 srdan wheel 246B 10 Aug 22:03 Dockerfile
-rw-r--r-- 1 srdan wheel 1.1K 10 Aug 22:03 README.md
-rw-r--r-- 1 srdan wheel 1.6K 10 Aug 22:03 git_push.sh
-rw-r--r-- 1 srdan wheel 66B 10 Aug 22:03 requirements.txt
-rw-r--r-- 1 srdan wheel 785B 10 Aug 22:03 setup.py
drwxr-xr-x 10 srdan wheel 320B 10 Aug 22:03 swagger_server
-rw-r--r-- 1 srdan wheel 90B 10 Aug 22:03 test-requirements.txt
-rw-r--r-- 1 srdan wheel 143B 10 Aug 22:03 tox.ini

We need to install the Python dependencies with “pip”:

pip3 install -r requirements.txt

To start the server, we can run:

python3 -m swagger_server

We should then be able to see our application running at the URL:

http://0.0.0.0:8080/v2/book

With the Swagger API definition available at:

http://0.0.0.0:8080/v2/swagger.json

We can then start filling out the logic of our application to make it behave like we want.
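The generated controllers are where that logic lives. As an illustration (the actual module and function names depend on the tags and operationIds in the definition, so treat these as placeholders), the stub for the GET endpoint could be filled in with a hard-coded response along these lines:

# swagger_server/controllers/default_controller.py (illustrative path and names)
from swagger_server.models.book import Book  # model class generated from the definition

def book_get():
    """Return a hard-coded list of books until real storage is wired up."""
    return [Book(id=1, title='Python Playground', author='Mahesh Venkitachalam')]

A quick request against http://0.0.0.0:8080/v2/book should then return the hard-coded book as JSON.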

Making PEX files (Python EXecutable)

I was in a situation where I needed to run some Python on a machine which didn’t have pip installed, but my script needed some packages from pip. I therefore had to work out how to use the pex tool, and I “documented” it in this repository. Most of it was based off of this tutorial, which is a really good starting point and describes what each of the pex options means.

What is PEX?

This video sums it up pretty well. The best way I can describe it is that it’s a tool for creating something like JAR files for Python.

Why shave this Yak?

My particular use case was that I had to figure out a way to copy files to a Windows host using the pywinrm library and execute a PowerShell script there. My initial attempt was to run pex on my MacBook to generate the file; however, as the pywinrm library requires the “cryptography” package, it all went a bit south with Python trying to compile C extensions and failing due to an old version of OpenSSL on my Mac.

The “fix” was to build (compile?) it in an Ubuntu container, but this presented its own problems in how to actually get the binary out.

How to actually do this?

  • Install pex with “pip install pex”
  • Make a directory for your script
  • Make sure the directory contains an “__init__.py”, a “setup.py” and your script (e.g. wingetmem.py — there’s a sketch of what such a script might look like at the end of this section)
  • Ensure that the setup file has the correct contents:
from distutils.core import setup
setup(name='wingetmem',
    version='1.0',
    scripts=['wingetmem.py'],
    py_modules=['wingetmem']
)
  •  Run pex to make the binary, making sure that the script name and function name match what’s in your file:
pex wingetmem pywinrm -e wingetmem:wingetmem -o wingetmem.pex
  • Now, if you’re in the same boat as me and need to extract this out of a Docker image, you’ll need to use the “docker save” command and then untar the resulting file:
docker save --output="ubuntu.tar" 0004626ad875
tar xvf ubuntu.tar
[change into each layer and untar the "layer.tar" file]
[check whether the file is in there]
I’m really not happy about that last step, because it’s a pretty bad kludge. Ideally, we’d push the binary to something like Artifactory or Nexus (artifact repositories) rather than just leaving it on “disk”, but to be honest, by the time I got this working I had had enough.
The resulting “.pex” file runs fine in a Linux environment without pip, which is what we were after.
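For completeness, here’s a rough sketch of what a wingetmem.py along these lines might look like. This is not the original script, just an illustration of its shape: the host name, credentials and PowerShell command are placeholders, and the wingetmem() function is the entry point referred to by the “-e wingetmem:wingetmem” flag.

import sys
import winrm  # provided by the pywinrm package bundled into the .pex


def wingetmem():
    """Connect to a Windows host over WinRM and print its memory information."""
    # Placeholder host and credentials; in reality these would come from arguments or config
    session = winrm.Session('windows-host.example.com', auth=('admin', 'password'))
    result = session.run_ps('Get-CimInstance Win32_OperatingSystem | '
                            'Select-Object TotalVisibleMemorySize,FreePhysicalMemory')
    print(result.std_out.decode())
    return result.status_code


if __name__ == '__main__':
    sys.exit(wingetmem())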


Quick guide to virtualenv

So, recently I’ve had a workmate start preaching about how we should all be using “virtualenv” for everything Python/Ansible related; however, they failed to explain the whys and hows of virtualenv.

A quick Google search brought up this how-to article, which does a pretty good job of explaining why you would want to use it (to separate project dependencies) and then goes on to detail how to use the tool.
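For anyone in the same position, the basic workflow is only a handful of commands (a rough sketch; the environment name “venv” and the Ansible install are just examples):

pip3 install virtualenv
virtualenv venv              # create an isolated environment in ./venv
source venv/bin/activate     # use the environment's python/pip in this shell
pip install ansible          # installed into ./venv, not system-wide
deactivate                   # drop back to the system Python when done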