
Building levers

Paul Golding: My Philosophy 


At the risk of baring my soul, I will share a few glimpses so that those who wonder about the kind of person they might do business with will have some clues. This is more a ramble of principles, or beliefs, than a well-structured defense, as that’s not especially important to most of my clients.

I can sum up my philosophy in a single idea: do what’s right, even when it goes against one’s own interests.

Keep in mind that I have maintained a career at the forefront of applied innovation, having to defend my ideas each and every time, often switching between supervisory and individual-contributor roles, which as far as I can tell is rare. I have not settled into some position in a bubble of corporate seniority, beyond critique. (Note: this is not an easy path and I am not sure I’d advise it for everyone.)

Creativity and curiosity are natural human instincts.

As far as is humanly possible, these instincts should be encouraged, developed and allowed to thrive. In work, this translates into a number of outlooks.

Firstly, I maintain a high degree of skepticism towards orthodoxy because it can easily stifle creativity. Authority is not self-justifying. As such, I believe that ideas should be judged on their own merits, not by career position, and no one should be excluded from proposing and defending an idea.

I never rely upon my position or past achievements as a justification for my own ideas. Every time I come to the table, I hold myself to the standard that I must defend my ideas and attempt to influence through information, not position, or even imposition.

Most heuristics are dumb.

But my pet hate is blind following, or the lazy-copying pattern – “So-and-so does this, so should we.”

No we should not!

A method should be defended on its merits within context. As such, whenever I have interviewed someone who parrots some heuristic yet cannot defend it with insights (and results) rather than platitudes, they are immediately disqualified.

Of course, we can be inspired. I have tried to adopt various techniques to see how they work in situ. Almost always, a method needs tuning to circumstances. Blind following can actually make things worse.

I don’t know why it isn’t obvious that a good way to interview someone is to identify the unique insights they have gained through experience. This will almost certainly tell you whether you are dealing with a thinker or a parrot.

I am a big believer in critical thinking and in trying to unpack ideas by their foundational first principles. Without knowing it, I had routinely adopted Richard Feynman’s principle of knowing the nature of things versus the names of things.

As an example, I recall when I first encountered the term “Growth Hacking” presented to me as if it were a novel principle. Perhaps because I am British, one understands from the history of empire, as mired as it is in morally dubious justifications, that meta-thinking is how the game is won – i.e. understanding the true nature of things in order to get results.

For example, when you remove all the jargon of “Growth Hacking”, it is, at its core, a method of optimization using feedback (data). Anyone with even a rudimentary understanding of control theory, systems thinking or design of experiments would wonder what all the fuss is about.
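Stripped of the jargon, that loop is easy to sketch. Below is a minimal, hypothetical example: an epsilon-greedy split test that allocates traffic between two page variants using nothing but observed conversions as feedback. The variant names and conversion rates are invented for illustration, not taken from any real product.

```python
import random

# Hypothetical "true" conversion rates, unknown to the optimizer.
TRUE_RATES = {"A": 0.05, "B": 0.08}

def run_epsilon_greedy(trials=20_000, epsilon=0.1, seed=42):
    """Feedback-driven optimization: mostly exploit the best-known
    variant, occasionally explore the alternative."""
    rng = random.Random(seed)
    shown = {"A": 0, "B": 0}      # impressions per variant
    converted = {"A": 0, "B": 0}  # conversions per variant
    for _ in range(trials):
        if shown["A"] == 0 or shown["B"] == 0 or rng.random() < epsilon:
            variant = rng.choice(["A", "B"])  # explore
        else:
            # Exploit: pick the variant with the best observed rate so far.
            variant = max(shown, key=lambda v: converted[v] / shown[v])
        shown[variant] += 1
        if rng.random() < TRUE_RATES[variant]:  # simulated user feedback
            converted[variant] += 1
    return shown, converted

shown, converted = run_epsilon_greedy()
```

Run it and the better-converting variant ends up with the bulk of the traffic: no special vocabulary required, just measurement and adjustment, which is exactly what control theory and design of experiments have always described.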

“But”, I hear you say: “It’s all about chunking into small parts, blah blah.”

Yes, but that had been well understood in the realms of planning for centuries: that the level of resolution of planning (i.e. the smallest unit of plan) determines the scale, and therefore cost, of predicting and pivoting. Agile proponents didn’t invent this, nor discover it – they applied it.

Often, an idea’s historical precedents go unnoticed out of ignorance, as when software architecture attempted to mimic actual architecture (with grand motifs and waterfall execution). Look at how the method behind back-propagation in AI, namely automatic differentiation, had been known to mathematicians for decades before its rediscovery by AI researchers.
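To make the point concrete, here is a minimal sketch of forward-mode automatic differentiation using dual numbers, bookkeeping that mathematicians had on the shelf long before deep learning (back-propagation is its reverse-mode cousin). This is an illustration of the general technique, not anyone’s production library.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """A dual number val + der*eps, where eps**2 == 0.
    Carrying (value, derivative) pairs through arithmetic
    applies the chain rule mechanically."""
    val: float  # function value
    der: float  # derivative w.r.t. the chosen input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and f' at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.der

# f(x) = 3x^2 + 2x + 1  =>  f'(x) = 6x + 2
val, der = derivative(lambda x: 3 * x * x + 2 * x + 1, 2.0)
# val == 17.0, der == 14.0
```

No neural networks in sight: the derivative falls out of ordinary arithmetic, which is precisely why the AI community’s “discovery” of it was really a rediscovery.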

Of course, entire swathes of jargon are often invented in order to justify positions and careers that cannot be defended otherwise. This is the pattern used by sophists, like post-modernist intellectuals or pseudo-scientists (per Feynman). It has been repeatedly adopted throughout history. Witness the rise of a community of “AI experts” who introduce phrases, like “Alignment”, without any productive definitions, only more word salad. A major feature of engineering has always been to anticipate, identify and handle side-effects, edge cases and failure modes. We didn’t need a new language to describe this for AI. If engineers are attempting to skip these disciplines, that is another matter, perhaps related to incentives and economics, but not AI.

Modern life in the realm of self-publicizing via social media has led to a massive onslaught of people desperately trying to be original, and so resorting to faux theories, frameworks and techniques heavily laden with the usual suspect: jargon!

For heaven’s sake, we have folks on LinkedIn whose main accolade is being a Top Voice. There is a giant industry in how to have a voice, so much so that the career-du-jour is having a voice about having a voice, etc.

But this principle – to avoid jargon and to appeal to basic ideas – is simply in line with my first one about justifying ideas by their merits, not authority (real or faux). When you come to the whiteboard to defend an idea, it must be defended critically, which essentially means resorting to first principles, not wielding a bunch of jargon as if it had meaning.

And this relates to my next principle, namely that one should strive to explain things in the simplest terms possible, which is the hallmark of a great educator. I recall studying with the great Lajos Hanzo, a world authority in mobile communication theory who had written books as thick as a mattress. Yet he always took time to explain things almost as if you knew nothing, constantly attempting to avoid jargon and striving to demystify potentially arcane ideas: “Oh, it’s just multiplexing, but a bit fancier using codes.”

This, too, is my approach.

It is also encouraged by my adherence to an idea expressed by Noam Chomsky in his contributions to philosophy of science: “Discovery is the ability to be puzzled by simple things.”

For this, one must cultivate an attitude of always questioning why things are the way they are, getting back to basics and fundamentals until the core principle is naked.

It is astounding that many students have little instinct that the alphabet (any one) is a human invention. They have yet to be puzzled by its existence, never mind its mysterious operation. Indeed, Descartes was so puzzled by language that he believed it ought not to work: nearly every sentence ever uttered is entirely novel in the history of mankind, yet requires almost no effort to say or hear (and decode). Only now, in the era of Large Language Models, are we beginning to understand how deep this mystery really is.

Indeed, despite recent disgraces, the philosopher John Searle is an expert at explaining why so many simple objects in the world, like money, work even though, on the face of it, they shouldn’t. We are so used to money that we assume it is somehow an objective feature of the world, yet it is a subjective reality brought into existence by conventions and beliefs, such as the belief that a piece of paper in our wallets can be “worth” 20 dollars, say.

When Newton wondered about gravity, he overcame the centuries-old assumption that objects move to their natural places (an idea established by Aristotle). He questioned the simple occurrence of the falling apple until it became obvious to him that the only explanation must be the presence of a force, as if someone or something had moved the apple, which is how objects ordinarily move (per F = ma).

The idea that the force might be invisible was completely preposterous. Indeed, Newton thought it so absurd that he later tried to disprove his own conclusions, believing them to be occult-like, not science.

Curiosity and creativity are in unlimited supply. Whilst we tend to think that “everything’s been solved”, we are far, far from any such state. Just look at how many of our tools are ridiculously lacking in innovation, like the dumb word processor. Putting aside the reasons for the lack of strategic innovation (e.g. skewed markets), the list of problems to solve is endless. Moreover, we live in a golden age where knowledge is abundant, tools are free, and so on. For my kids (whom we homeschooled), we forbade the phrase “I’m bored”.

As such, the notion of protecting knowledge and ideas is ridiculous. Anyone who works for me is encouraged to have ideas and my goal is always to mentor them to do their best work.

There is simply too much to be discovered, and one of the hardest achievements is to free oneself of the shackles of conventional wisdom. Why not amplify such efforts by encouraging intellectual freedom and the pursuit of excellence?

But there is another side to the encouragement and mentoring of others.

Moving into the spiritual realm, I have an unshakeable sense of duty to put good out into the world. This is why I strive to help others to do their best work regardless of how it benefits me. Of course, in work, I aim for a sweet spot whereby someone doing their best work overlaps with the goals of the team. And one has to be pragmatic.

This is a commitment I take seriously: doing the best work.

This includes doing the best work for the client.

The benchmark I always apply is that whatever I am doing, I could stand in front of the CEO and justify it as being in the interests of the company. This is the standard I try to keep at the front of my mind so as not to fall into some localized political trap of making moves in a career-advancing sense.

Put bluntly, I have never sought to advance my career. I stick to the principles outlined above and go wherever they take me.

I believe in polymathy as a wholesome approach to life. You’d think that, as a technologist, I spent most of my time teaching my kids how to code. I did not. Besides, most of the effort was teaching them how to teach themselves, and each other, especially via first-principles thinking. But in terms of hands-on interventions, we spent more time reading literature and building stuff together than we did doing tech stuff.

I saw the overarching goal as encouraging enthusiasm and creativity. As such, one should be able to find satisfaction in any subject through the lens of curiosity. When visiting the book store, we would pick up magazines about unfamiliar subjects and try to decode their patterns and motivations. This was not just a method of keeping an open mind, but a recognition that many of the greatest ideas in history came about via synthesis and applied pattern recognition.

Above all else, and perhaps as a conclusion to my philosophical outlook, the pursuit of first principles extends to an attitude that wherever possible, one should be attempting to build levers if one is privileged to have the opportunity to do so. Don’t build a single AI app when you can build a lever to build a thousand. That could be via inspiring someone else to take up the baton and become the next Newton, or via the exploration of first principles that reveal a lever design versus a point solution.

In life, we are often playing chess, so to speak. You can learn the moves, but you’ll never get far without learning strategies, which are a type of lever. If you build levers, you are always, in some sense, building for a scalable and better future, even if you spend most of your time, as I try to do, living in the moment, grateful for every day on this planet, every cup of tea shared with a loved one or friend, every new discovery that comes my way.

Above all else, I believe in caring: about each other, about craft, about opportunity, about life.

Everything worthwhile falls from caring and curiosity.

I care about your time, so thank you if you got this far. I hope my words provided something more than the time spent reading.