Reading List: Jan to June 2021

The blog has been quiet for the last year, but to try and get things started again I’m sharing the list of books I’ve read in the first half of 2021:

  • Goliath’s Revenge: How Established Companies Turn the Tables on Digital Disruptors by Scott Andrew Snyder and Todd Hewlin
  • Ten Lessons for a Post-Pandemic World by Fareed Zakaria
  • The Name of the Wind by Patrick Rothfuss
  • The Rationalist’s Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity’s Future by Tom Chivers
  • The Wise Man’s Fear by Patrick Rothfuss
  • The Ghost Map: The Story of London’s Most Terrifying Epidemic – and How it Changed Science, Cities and the Modern World by Steven Johnson
  • Pattern Recognition by William Gibson
  • Working Backwards: Insights, Stories, and Secrets from Inside Amazon by Colin Bryar & Bill Carr
  • The Player of Games by Iain M. Banks
  • Nomadland by Jessica Bruder
  • Use of Weapons by Iain M. Banks
  • How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence by Michael Pollan
  • Excession by Iain M. Banks
  • Microserfs by Douglas Coupland

I’m trying to alternate fiction and non-fiction. Some of these are re-reads of SF/Fantasy books that I found very influential through my teens and early twenties. Non-fiction reads mostly come from recommendations and from walking through bookshops looking for topics at the intersection of technology, economics and history.

I have to recommend Goliath’s Revenge and Working Backwards as the most useful from a work perspective. Pattern Recognition and Microserfs are great fiction books.

First Thoughts on the Wolfram Language

A post on reawakening ideas from my past and modelling the future

Wolfram technology has always been one of those things I think I would like to know more about, but some inertia has stopped me from getting more deeply involved.

However, over the 2019/2020 holiday period I was able to catch up on some podcasts, and the TWIT network’s Triangulation came to the top of the list, including an interview with Stephen Wolfram. The whole interview is pretty cool and worth a listen. Critically, I learnt that there is a free tier that lets anyone use the Wolfram Language via the Wolfram Cloud. This is really enabling, as it lets you explore the language with nothing more than a browser and takes much of the inertia out of getting started. The free tier is limited in the amount of computation and support available, but it is pretty amazing anyway.

Over the next few days I was able to learn the basics of the language and tie that back to points that Stephen was highlighting in his interview. I will admit that I have not had so much fun in years. Two major things that I think are worth further exploration, because they resonated so much with me, are:

  • Symbolic Computing
  • Computational Thinking

Symbolic Computing

Why did I resonate so much with symbolic computing? For that I need to take a small digression into the past (apologies) …

I like to think of my first 10 professional years (1999-2009) as Career 1.0. I joined a startup company called InforSense in parallel to working on a PhD program. Although I started as a software engineer, I spent most of my time in product management working on the design of the platform. InforSense developed what we might today call a Data Science Workbench: our users built visual programs (we called them “Workflows”) from a toolbox of Data Access, Data Manipulation, Machine Learning (Supervised and Unsupervised) and Visualisation components. We subsequently developed extensions to this workbench to include Bioinformatics, Cheminformatics, Text Mining and Image Mining.

The history of InforSense and its underlying technology is another story, but it’s important to know that the semantics of our workflows were based on functional programming (Haskell, ML, etc.) principles. Each workflow was a function, typically over a collection of data such as a relational table; data was immutable and workflows had no side effects. This had some great advantages, such as allowing a workflow to be automatically partitioned across an HPC environment, but it made workflow construction complex, as there were subtle dependencies between components that needed to be managed. To solve this we developed a technology called “Metadata Propagation”. This allowed a workflow designer to use a symbolic representation of how each component would transform data to assist in the building of workflows. This symbolic representation of “Metadata” would “propagate” around the workflow and enable much more sophisticated control of those complex component dependencies.
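
A minimal sketch of the idea, in Python rather than the actual InforSense implementation (all the names below are hypothetical): each component declares how it transforms the table schema, so the schema can be propagated through the workflow and validated at design time, before any data flows.

    # Toy illustration of metadata propagation: each component declares how it
    # transforms the table schema, so downstream components can be checked
    # before any data is actually processed.
    def derive(new_column):
        """Component that adds a derived column."""
        def propagate(columns):
            return columns + [new_column]
        return propagate

    def select(keep):
        """Component that keeps only the named columns."""
        def propagate(columns):
            missing = [c for c in keep if c not in columns]
            if missing:
                raise ValueError(f"Workflow invalid: missing columns {missing}")
            return keep
        return propagate

    # Propagate the symbolic schema through the workflow at design time.
    workflow = [derive("log_activity"), select(["compound_id", "log_activity"])]
    columns = ["compound_id", "activity"]
    for component in workflow:
        columns = component(columns)
    print(columns)  # ['compound_id', 'log_activity']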

Why the long trip down memory lane? Because symbolic computing is at the heart of the Wolfram Language. For InforSense it was a highly powerful but under-appreciated part of the system. For Stephen Wolfram, symbolic computing became the core of everything he built in Mathematica and the Wolfram Language.

Why is symbolic computing powerful? For me it is because it allows top-down programming: I can write a program that models the logic of a process without having to define all of the components in the process a priori.

For example: recently I have been working on technology to optimize the control strategy for bioreactors for antibody production. You start by modelling the reactor control set points, the reactor transfer functions, and the reactor input and output materials. In most programming languages you would typically need to define some sort of object or type for each of your entities before you could program the logic between them. In Wolfram I can program the logic between them and just use a symbol to represent a concept that has not been fully thought through. Critically, the Wolfram Language will still evaluate code written with a symbol representing an uncoded idea, which means you can play and explore with the high-level computational logic without having to do lots of the detail upfront.
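
The closest analogue I can sketch outside Wolfram itself is symbolic placeholders in a library like SymPy (the bioreactor names below are hypothetical, purely for illustration): the undefined pieces stay as symbols, and the surrounding logic still evaluates and can be manipulated.

    # Rough Python/SymPy analogue of evaluating logic over not-yet-defined concepts.
    from sympy import symbols, Function

    feed_rate, temperature = symbols("feed_rate temperature")
    transfer = Function("transfer")        # a concept not yet fully thought through
    yield_model = Function("yield_model")  # likewise still just a symbol

    # The high-level logic can be composed and manipulated anyway...
    titre = yield_model(transfer(feed_rate, temperature))
    print(titre)                  # yield_model(transfer(feed_rate, temperature))
    print(titre.diff(feed_rate))  # the derivative also stays symbolic

    # ...and concrete definitions can be substituted in later.
    print(titre.subs(transfer(feed_rate, temperature), feed_rate * temperature))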

For complex system design this is hugely helpful, because you can really think through and get your computational logic right without having to write heaps of scaffolding code. It’s the type of work I love to do as a programmer: more thinking and less yak shaving.

Computational Thinking

Beyond the symbolic computing paradigm, there is another factor about both the interview and the associated API library of the Wolfram Language that really resonated with me: the approach to computational thinking that Stephen Wolfram was articulating. It started when he described the approach the Wolfram team use to develop new language features and how that increases their computational vocabulary. He views this as so useful that the design review meetings are actually broadcast live online (see an example here).

Stephen’s rationale for this is that when the language is designed, it forces the designers to create a computational model for the underlying topic. Essentially, defining an API creates a way of thinking and a vocabulary for the computational manipulation of an object. When hearing this I immediately started to reflect on the computational topics I can think very clearly about, and the types of computation I do not have such clarity about.

The answer again comes down to things I have either become an expert in or designed myself. Going back to InforSense, I am very comfortable thinking around the core InforSense components:

  • Relational Table Manipulation (Selects, Filters, Derives, Joins, Grouping, Pivoting; sketched in code after this list)
  • Unsupervised Learning (Various Clustering algorithms)
  • Supervised Learning (Various Classification and Regression algorithms)
  • Dimensionality Reductions (Algorithms like Principal Components Analysis)
  • Data Visualization Approaches
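
Taking the first of these as an example, that table-manipulation vocabulary maps fairly directly onto a modern data library; here is a quick sketch in pandas (the column names are made up purely for illustration).

    # The relational-table vocabulary expressed in pandas.
    import pandas as pd

    df = pd.DataFrame({
        "site": ["A", "A", "B"],
        "assay": ["x", "y", "x"],
        "value": [1.0, 2.0, 3.0],
    })
    sites = pd.DataFrame({"site": ["A", "B"], "country": ["UK", "US"]})

    selected = df[["site", "value"]]                     # select
    filtered = df[df["value"] > 1.0]                     # filter
    derived  = df.assign(scaled_value=df["value"] * 10)  # derive
    joined   = df.merge(sites, on="site")                # join
    grouped  = df.groupby("site")["value"].mean()        # group
    pivoted  = df.pivot_table(index="site", columns="assay", values="value")  # pivot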

I would say I am even more comfortable in component sets where I was responsible for the API design:

  • Cheminformatics
  • Bioinformatics
  • Translational Informatics
  • Statistical NLP
  • Image Mining

These are areas I have built a good computational vocabulary around. What I am finding is that there are areas I need to operate in during 2020, such as Tensors or Graphs, where I have not developed such a sophisticated vocabulary. These areas, however, are things that the Wolfram Language already has very good models for.

So simply by starting to internalise the Wolfram APIs (as I did with the core InforSense components 20 years ago), I am able to increase my vocabulary and my range for designing systems that base their computation on these types of structures.

Summary

I’ll wrap this post up at this point. Symbolic computing and rich APIs for complex computational structures are the things I like most about the Wolfram Language. I like them both for the elegance with which they model computation, but also because they resonate so deeply with ideas my colleagues and I used to play with every day 20 years ago.

They are unfamiliar to many programmers, and you’re not going to build a fancy app using these technologies. But you will design a much better application architecture or service back end if you have designed it in Wolfram first.

The Unicorn Project

A Novel about Developers, Digital Disruption, and Thriving in the Age of Data by Gene Kim https://itrevolution.com/the-unicorn-project

The Unicorn Project is a companion book to Kim’s previous work, “The Phoenix Project”. Unicorn covers the same timeline, the same company and the same business/DevOps transformation as Phoenix, with an overlapping cast of characters. In Unicorn, the focus is on the transformation from the perspective of the IT engineers at the coal face. Essentially, this is the “redshirt” version of Phoenix.

Kim uses the same novel-like style as The Phoenix Project to keep the material engaging and moving at a high tempo. From that perspective it works, but I do find some of the situations overly contrived and distracting.

Moving past the style, there are a lot of ideas and tools presented in the book. The two main ones are the Five Ideals and the Three Horizons. The Five Ideals are the main lessons of the book, and our redshirts embrace them step by step on their journey from frustration to DevOps mastery. The Five Ideals are:

  • The First Ideal: Locality and Simplicity
  • The Second Ideal: Focus, Flow, and Joy
  • The Third Ideal: Improvement of Daily Work
  • The Fourth Ideal: Psychological Safety
  • The Fifth Ideal: Customer Focus

By leaning into the ideals progressively through the book, our protagonists become more engaged, more value-creating members of the team. This allows them to start exploring the second idea, the Three Horizons from Geoffrey Moore. The Three Horizons help an organisation think about a business in terms of three time horizons:

  • Horizon 1: Your current core business
  • Horizon 2: Near term expansions to the core business
  • Horizon 3: Longer term innovations that allow you to transform the business

The group then try to make context-based decisions on how to rationalize their IT group to find headroom to invest in innovation.

Takeaways

After a first reading, what struck me most was an observation towards the end of the book that the three core metrics critical to any successful business are:

  1. Employee Engagement
  2. Customer Satisfaction
  3. Cash Flow

While all good ideas seem obvious once they have been pointed out, I particularly liked this one. The book did a good job of demonstrating how the Five Ideals could first build employee engagement and then deliver increased customer satisfaction.

I also really appreciated the idea of psychological safety. In a laboratory-based environment physical safety is taken extremely seriously, but psychological safety is so frequently overlooked in the workplace. This is really something to watch out for in everyday situations.

Living with AI – Some lessons from Chess

At the end of last year I finished reading Range by David Epstein.

One of the interesting things that came out of reading this was the discussion about how chess has changed since AI engines became strong enough that humans could no longer beat them directly.

There was an analysis from Garry Kasparov about how he had mastered a large collection of chess tactics. Essentially, he could map from a situation on the board to an appropriate play in response. As his “library” of tactics was better than other players’, he became the strongest player.

While not a chess player myself, this made me recall how the best players (well, better players than me) won in the Street Fighter 2 video game: find the best combo for a character (normally Guile, as he had a four-hit combo and everyone else had three) and then get tactically very good at applying those combos in a bout.

The best human players were beaten by chess AI engines once the engines became better at this tactics-led approach to playing the game, the famous games being against Deep Blue. A chess engine running on a smartphone today is as strong as the Deep Blue that beat Kasparov in the 90s. But a new form of advanced chess has now formed, with AI engines running the tactics and human “generals” guiding an overall strategic direction. This has led to a new type of specialist in the game: those who can direct their AIs better than their opponent.

In a world of automation and increased AI usage, this has to be the model we look at and train our people to operate in. How can the future of work be based around “centaurs” using AI to assist in solving complex problems?