Investing and intrinsic value

One of the things that comes with being part of the technology industry is the awareness that there’s a lot of money in it. Almost everyone knows someone working at a startup or making money off technology companies. Personally, I’ve invested small amounts (up to NZ$1,000) into NZ tech companies here and there. These have resulted in some gains and some losses; overall, not much either way (I’d probably be about “even” if I’d kept track of things).

One thing that always bothered me was that I had no idea how to calculate the “fair price” (also called “intrinsic value”) of a company that isn’t paying any dividends. (I roughly knew how to work out the value of a company that was paying dividends.)

A few months ago, I heard about Hatch Investments and that they let you buy US stocks directly from NZ (ASB only lets you buy NZX- and ASX-listed stocks, and Sharesies didn’t have US stocks at the time, though apparently that’s coming).

After signing up to Hatch, I first just bought some index funds, figuring it was a low-risk way to see how it all works. The process is roughly:

  • Transfer NZD to Hatch bank account
  • Wait a (working) day or two for the money to get converted to USD and made available in your account (not sure whether they take a cut here)
  • Money is there to invest!

Then, at some point, I started wondering about individual companies, but I still had my doubts about how to actually work out their “value”. With this question on my mind, I read one of the Hatch articles, which profiled a NZ investor by the name of Tom Botica. This led me to his YouTube video explaining his method of working out intrinsic value. Going through it, the method made a lot of sense to me: I could understand it and couldn’t see any obvious problems with it (it does come with the caveat that it only applies to well-established, large-capitalisation companies).

So, armed with a spreadsheet, I was kicked into action by the crash around March 20th, triggered by COVID-19. I set about working out the intrinsic value of Amazon and Google stock. Lo and behold, they were underpriced, and I bought some.
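For illustration, here’s a minimal sketch of one common way to estimate intrinsic value: discounting projected future earnings back to today. This isn’t necessarily Botica’s exact method, and every number below is a made-up assumption:

```python
def intrinsic_value(eps, growth_rate, discount_rate, years=10, terminal_pe=15):
    """Rough per-share value from discounted projected earnings.

    Every parameter here is an assumption: eps is current earnings per
    share, growth_rate the assumed annual EPS growth, discount_rate the
    return you require, and terminal_pe the P/E multiple you assume the
    stock sells at when the projection period ends.
    """
    value = 0.0
    projected = eps
    for year in range(1, years + 1):
        projected *= 1 + growth_rate                       # grow earnings
        value += projected / (1 + discount_rate) ** year   # discount back
    # Terminal value: assume the shares sell at terminal_pe times
    # final-year earnings, discounted back to today.
    value += projected * terminal_pe / (1 + discount_rate) ** years
    return value

# Made-up numbers -- not a valuation of any real company.
print(round(intrinsic_value(eps=5.0, growth_rate=0.10, discount_rate=0.08), 2))
```

The output is extremely sensitive to the growth and discount rates you pick, which is why the caveat about well-established, large-capitalisation companies matters: those are the only businesses where the assumptions are even halfway defensible.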

Will this bet pay off? I don’t know. In the short time since then (about six weeks?), a lot of people have started pointing out that the market recovery isn’t logical and that it resembles a “dead cat bounce”. This is probably true. What will the world look like in a year or two? Who knows.

Implementing Conway’s Game of Life

There’s a phenomenon where, upon the news of a famous writer, musician, or artist’s passing, their works experience a rise in popularity. I don’t know if I’m proud of it (or whether I should be proud of it), but I do this too.

Recently the mathematician John Conway passed away, succumbing to COVID-19, taken before his time.

At the same time, I had taken out the Python Playground book from the public library, with the ambitious intention of reading it cover to cover. This didn’t happen, and I had to return the book to the library, hopefully to some other soul with more time and motivation to read it. In an act of stubbornness, I then went and bought the ebook, with a secret promise to myself that I would read it some day.

One of the chapters that really caught my attention was the one about implementing the “Game of Life”, a famous computer simulation that I had previously only briefly heard about and had never tried to implement. The simulation was created by John Conway.

On hearing of his death, I put aside any thoughts of studiously reading every page and trying every example, and instead focused on just completing the chapter implementing the Game of Life. By this point in my life, I had heard so much about this algorithm that it had reached mythical status in my mind. I had expected, and mentally prepared myself for, several days of trying to implement and understand the code and the algorithm.

Unsurprisingly to people who have implemented the “Game of Life” themselves, I read through and finished the code example in about an hour.

Conway’s Game of Life

Then it hit me why this was so special and interesting. This whole “Game of Life”, which had become ginormous in my mind, was actually just a relatively small set of rules applied repeatedly over an “unlimited” space. The beauty and the lesson weren’t in the complexity of the system, but in its simplicity and in how that simplicity could lead to complex behaviour.
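To show just how small that rule set is, here’s a minimal sketch of one generation step on a wrapping grid (my own plain-Python sketch, not the book’s implementation, which uses numpy and matplotlib):

```python
def step(grid):
    """One generation of Conway's Game of Life on a wrapping grid.

    grid is a list of lists of 0/1; edges wrap around (a torus), which
    stands in for the "unlimited" space on a finite grid.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping at the edges.
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # The entire rule set: a live cell survives with 2 or 3
            # neighbours; a dead cell is born with exactly 3.
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return new

# A "blinker": three live cells in a row oscillate between
# horizontal and vertical with each step.
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
vertical = step(blinker)          # live cells now at (1,2), (2,2), (3,2)
assert step(vertical) == blinker  # and back again
```

That’s the whole thing: a neighbour count and two conditions, repeated.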

I feel bad that I let it build up so much in my mind. I feel bad that it took John Conway dying for me to actually try implementing the game. I am really glad I did, though. Life is full of things we know we “should” do or look into, or things that we want to know or look into. Unfortunately, we are not invincible, and we don’t know when our time will be up. It’s important to find time for the seemingly small things we’ve always wanted to do before it’s too late.

What’s your “Game of Life”?

Differences between MySQL and PostgreSQL with INSERT

So, recently, while investigating an issue with a MySQL-to-PostgreSQL migration, I found a surprising difference in behaviour between the two databases.

The difference has to do with how each behaves when explicitly passed NULL values in an INSERT statement for columns with defaults or auto-increments (called “serials” in Postgres).

What do I actually mean? To compare, in MySQL you can do this:

mysql> create table baz(whattimeisit TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
Query OK, 0 rows affected (0.02 sec)
mysql> show create table baz;
| Table | Create Table                                                                                                              |
| baz   | CREATE TABLE `baz` (
  `whattimeisit` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
1 row in set (0.00 sec)

mysql> INSERT INTO baz (whattimeisit) VALUES (NULL);
Query OK, 1 row affected (0.00 sec)

mysql> select * from baz;
| whattimeisit        |
| 2019-08-25 20:35:47 |
1 row in set (0.00 sec)

Here, MySQL reverts to the “DEFAULT” value when passed an explicit NULL (this is special behaviour for TIMESTAMP columns: assigning NULL sets them to the current timestamp unless the column is declared nullable).

In Postgres, we get the following error:

postgres=# \d baz
                                 Table "public.baz"
    Column    |           Type           | Collation | Nullable |      Default      
 whatsthetime | timestamp with time zone |           | not null | CURRENT_TIMESTAMP

postgres=# INSERT INTO baz(whatsthetime) VALUES (NULL);
ERROR:  null value in column "whatsthetime" violates not-null constraint
DETAIL:  Failing row contains (null).

Funnily enough, in PostgreSQL, if you don’t pass the column at all, or if you pass the DEFAULT keyword (INSERT INTO baz(whatsthetime) VALUES (DEFAULT);), the default is used and you don’t get the error.

Summary

It was an awesome conference; the march of open source continues, and technological progress has led us to the point where we can automatically identify predators in NZ’s bush as well as sequence DNA live during a conference talk.

The Internet of Things theme wasn’t as prominent as I thought it would be. The embedded space is still very much a mess, in my opinion. To get started, you need to make sure you have a ton of adapters, cables, and patience. It just feels like yak shaving all the way down at the moment. However, this opinion might be due to my biases coming from the software side of things. The stuff I’m finding hard might just be teething pains or the learning curve.

From an open source philosophy perspective, there was a bit of ignoring the elephant in the room, namely the rise of public clouds and the Apache vs GPL licence model. However, when it was addressed, it was quite inspirational. One of the main philosophical points was that the four freedoms are inherently tied to what users want, and therefore, on a long enough timeline, going against them will lead to giving users something inferior.

One of the most salient practical points was that using the Apache licence creates a lot of confusion and work for the legal team: it potentially means each commit has to go through a “shall we open source this or not” process.