So, every few years, the IT/tech industry goes through some kind of shift. Looking back at my blog posts, shortly after I finished university I thought it was the move away from “desktop” applications to “web” applications. This was sort of true, but failed to take into account the rise of mobile applications and also “the cloud” (a.k.a. fast, ubiquitous internet).
Similarly, I feel like there’s a new shift underway now towards AI/Machine Learning/Deep Learning (or whatever else you want to call it): basically, the application of statistical methods to solve problems which previously required human judgement.
As such, I find myself annoyed that I didn’t concentrate more in Statistics class, and scrambling to re-learn all about neural networks and Bayesian methods, as well as their practical implementations in terms of languages/frameworks/libraries/services etc… Not necessarily to completely change professions, but rather to understand, at a theoretical (and practical, to the “hello world” stage) level, what a proposed method can and cannot do (and to be able to call people up on their bullshit).
One of my first attempts was the Qwiklabs Machine Learning APIs “quest”. This was an excellent introduction to the Google AI APIs and what they can do (a sort of “state of the art” demo).
Next up, I wanted to go “under the hood” a bit more and ordered a copy of Deep Learning with Python, which has so far been a really good, if (for me) challenging, book. The challenge is probably a good thing, and owes more to the fact that I’ve gone without serious mental challenges for a while now than to the material itself.
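The book’s examples are built on Keras, but the core idea behind a neural network can be sketched with nothing more than numpy. The following is my own hand-rolled illustration (not an example from the book, and everything in it — the layer sizes, learning rate, epoch count — is an arbitrary choice): a tiny feedforward network trained by backpropagation to learn XOR, the classic “hello world” of neural nets.

```python
import numpy as np

rng = np.random.default_rng(42)

# The XOR truth table: four inputs, four target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, randomly initialised.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for epoch in range(5000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: gradient of mean squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

predictions = (out > 0.5).astype(int).ravel()
print(predictions)
```

With the fixed seed above this will usually recover the XOR pattern, though convergence isn’t guaranteed for every initialisation — which is itself a useful lesson about these methods.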
I’m still making my way through the book, but have already started thinking about ways to continue learning about AI once I’ve gotten through it, and thought I had better list them:
- Read another book and try out examples
- Do an AI/ML course on Coursera/Udemy/other MOOC
- Do competitions/tutorials on kaggle.com
- Start a personal project that involves collecting/analyzing data