Paul Golding: Deep Mind
We all have a back story. Here’s mine, in case you’re interested.
Sharing them helps others to understand our motivations, circumstances and achievements in context. My story is below and is probably TL;DR, but let me tell it anyway for those who want to get to know me a little better. Of course, feel free to contact me, follow me on Twitter or connect with me on LinkedIn.
You might want to read my definition of technology before I tell you that I invent technologies.
I do this to help businesses innovate, often to achieve growth or bring about some substantial (rather than incremental) change.
Besides being a hardcore technologist, I am also an amateur philosopher, a composer and an artist. I am enthralled by philosophy of mind, but that is perhaps a natural interest for anyone fascinated by AI, a field in which I filed my first patent early in my career (1994) after persuading a colleague that we ought to explore neural networks.
My art is mostly digital art, but more about that later.
I believe in what you might call renaissance thinking, but not really in the classical sense. I mean that a person should aspire to “practical polymathy”, whereby it is advantageous to study and practice a range of topics besides one’s speciality. Whilst there is no “secret” to innovation, nor any single approach, it is a truism that innovation stems from the kind of creativity that relates to synthesis (conceptual blending).
Synthesis is aided by the ability to think abstractly, which polymathy cultivates, whilst innovation (“practical synthesis”) is aided by possession of the tools, resources and opportunities to act practically. Hence “practical polymathy” should span a range of subjects (horizontally) and a range of depths (vertically), meaning both theoretical and practical knowledge. It needs to be grounded in a desire to dig deep into subjects until the fundamentals become apparent.
Polymathy, especially knowledge of fundamentals, gives the inventor sufficient tools of speculation to see new meanings in things. For instance, an engineer might think of ways to improve the mobile address book, such as adding a method to search for contacts based upon time and location (“who did I speak with last week in Fresno?”). This was once a very novel idea, but a very natural one; a minimal sketch of the kind of query involved follows below.
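As a purely illustrative aside, here is a minimal sketch in Python of that kind of time-and-location contact query. The record fields and sample data are invented for this sketch; nothing here comes from a real phone API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical call-log records; fields and data are made up for illustration.
@dataclass
class CallRecord:
    contact: str
    timestamp: datetime
    location: str

def contacts_spoken_with(log, location, start, end):
    """Return the contacts from calls made in `location` between `start` and `end`."""
    return {r.contact for r in log
            if r.location == location and start <= r.timestamp <= end}

log = [
    CallRecord("Alice", datetime(2024, 5, 6, 10, 30), "Fresno"),
    CallRecord("Bob",   datetime(2024, 5, 7, 14, 0),  "Oakland"),
]

print(contacts_spoken_with(log, "Fresno",
                           datetime(2024, 5, 5), datetime(2024, 5, 12)))
# -> {'Alice'}
```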
However, these are improvements of the same idea. Can we do better?
Way before the social network was born, I attempted to persuade carriers to turn the ubiquitous address book app into an open social platform (using a modified form of FOAF, the Friend-of-a-Friend vocabulary, that I had invented). In other words, my proposal was to bring new meaning to the address book – i.e. a way to track friendships and relationships, not just record phone numbers.
[Note: carriers neither listened nor understood. But this taught me a lot about how incumbents view the world. This proved useful when I found myself, many times since, trying to innovate from within incumbents.]
I have been lucky enough to pursue a “career” at the frontier of technology. It has given me insights into innovation, or how it actually works (if we really know, which is doubtful when we step back from the convenient narratives that we love to devour, and mostly ignore).
My own “theory of innovation” is that it stems from emergence out of complex biological systems that tend to exhibit chaotic behavior (in the mathematical sense). In short, we try to use our collective minds to redistribute and convert information into meaningful outputs, but random (chaotic) processes often intervene to allow new ideas to emerge. If that is the case, then all one can do is be prepared for its emergence by creating the right environment. This might equate to golfer Nick Faldo’s aphorism: “I make my own luck.” (Indeed, if we believe that Silicon Valley is the peak of technological innovation activity, then we need look no further than the behavior of VCs, which is essentially “more swings at the bat” with optimized initial conditions.)
If what I am saying is true, then the focus of innovation should be on accepting uncertainty rather than trying to overcome it. It is my view that much of the business literature from the “giants” (e.g. Porter) has hamstrung innovation by using language that places too much emphasis upon strategy as if there is a single well-scoped endgame. This is probably a hangover from manufacturing. But it leads to all kinds of game-like language, such as “tactics” and so on, all of which deny the underlying reality of the average company, which is that it doesn’t really know what it’s doing – i.e. uncertainty dominates, yet is masked by false narratives and false accounts of progress, even with data!
[For a good discussion of just how far language affects our thinking, read something like Searle’s The Rediscovery of the Mind, Lakoff’s Metaphors We Live By, or Mitchell’s Less Than Words Can Say.]
The “realities” of delivering technology in a chaotic setting – i.e. the messy workplace and market – led me deep into the world of philosophy of mind. The biggest tension in innovation and work is between the stories we tell ourselves (or read in management books) and what we really think and do. In science, it is said that the advancement of ideas proceeds one funeral at a time – i.e. not at the speed of discovery of ideas, but at the speed of shedding dogmas (as the proponents of those dogmas die). Innovation within an org is somewhat similar – i.e. velocity is not just a matter of process, but of mindset. Hence I found it necessary to understand how the mind works.
I believe that this is the only real way to make progress with “innovation theory” rather than the dominant pseudo-behavioral approach (i.e. case studies that purport to tell us how successful organizations behave). To borrow from Noam Chomsky’s rebuttal of behaviorism, defining innovation as the science of organizational behavior is like defining physics as the science of meter reading.
Just as behavioral economics has reinvented our naive (mis)understanding of economic activity (if our theories were ever right to begin with – see Steve Keen), I expect that similar approaches are needed to shed light on how innovation really works at both the macro and micro levels. This work has yet to be done and I have found no real insights amongst the SV elite.
My view is that large corporations should probably be viewed as biological entities rather than “systems of economic production” per se – i.e. cognitive science and evolutionary psychology probably have more explanatory and predictive power than management science or economics. Indeed, our increasing reliance upon the god of data “science” is short-sighted. Actual science works by rethinking simple things.
Graduating with a First Class Honors degree in Electronic Engineering in the UK, where I also won the coveted IEE Prize, I started my career by designing silicon chips for cell phones at the dawn of the digital mobile era (GSM). Via a series of significant inventions, patents and accolades, I later qualified for a US “Extraordinary Ability” visa without any sponsor (which is very rare – i.e. I did not have a company to back me or fancy lawyers to write fanciful claims).
I initially wanted to work in the field of Digital Signal Processing (DSP) because it presented the perfect blend of engineering, silicon design and mathematics. I was lucky to get a job as a DSP engineer at Motorola back when it was a powerhouse of communications chip design (before it drifted off into Neverland).
[Note: as an aside, when I left Motorola in 1996 to start my own mobile company, I said to colleagues that Motorola’s days were coming to an end – an opinion that was received with deep skepticism. And yet, if we are to believe Collins’ account in How The Mighty Fall, then this period (mid to late 90s) was the peak of Motorola’s “hubris”, which was to become the cause of its demise.]
Engineering is deeply practical, even in its complex use of seemingly abstract mathematics. It should come as no surprise that I am also highly practical by habit, including a tendency to do most of my own construction projects, including electrical and plumbing, around my Oakland home. I still dream of constructing my own house (i.e. entirely using my own labor and a framing hammer), but this is perhaps an uneconomical use of my time. Then again, economics should not be the only measure of time. That, too, is a dogma.
I am also a hands-on solver in the workplace, beyond building software – i.e. I don’t mind getting my hands dirty to do whatever it takes.
As an example, when a client needed an electronics lab to design and prototype digital art devices, I did not hesitate to spec out and build the lab (i.e. benches and all) in order to get things moving. Similarly, when the CEO agreed to fund the hardware lab only if I provided a business case for digital art, I did not hesitate to open a spreadsheet and construct some business-case scenarios, somewhat out of my comfort zone.
I am no financial planner, but I just got on with it. I did whatever it took to get the job done.
This attitude has got me a lot further in life than if I had just sat around waiting for things to happen because “it’s not my job to do X”.
On another occasion, I struggled to get the O2 UK executive team to fund a proposal I had made to build a telephony application platform atop their network. So I flew to Madrid and met with every Telefonica R&D sponsor I could find until one of them gave me funding from some obscure fund. Things moved quickly after that.
I had to say goodbye to O2, one of my all-time favorite clients, when our family packed our bags and moved to Silicon Valley. Now, you might think that the Bay Area is the obvious destination for someone like myself. Indeed, I had planned to make the move many moons ago, but I met someone — and married her. So life meandered down a different path for a while.
Years later, I returned to the idea of becoming a “tech migrant” after having kids and wanting them to benefit from the Bay Area’s creative potential, even though, as I have come to discover, the region and its culture have a somewhat narrow vision of creativity.
I also came in search of counterculture, but struggled to find it in the way I was expecting. The valley is surprisingly conformist to a certain set of techno-culture rules. That said, my middle son is now running his own start-up without ever having set foot in a college. And my daughter is a technical co-founder after graduating from a top college. I doubt this would have happened so easily in my home town in semi-rural Wiltshire where, many years earlier, I had tried — and failed — to raise VC funds for a wireless email platform. I watched as far poorer solutions in SV raised millions during the dot-com boom and sold for hundreds of millions.
As an interesting anecdote that speaks to the Bay Area’s creative attitude, I took one of my sons to meet a science tutor in San Jose. She didn’t think it at all odd when my son suggested that he might design and build a bat suit (yes, as in Batman) as a genuine science project (to protect motorcyclists from injury). In my home town, such a proposal would have fallen well within the bounds of eccentricity, even though the British eccentric tradition has yielded plenty of great engineers. Indeed, our family built many eccentric contraptions in our garage in Swindon (yes — that Swindon of The Office fame).
On the other hand, when I went in search of a math tutor with the brief “teach my kids something interesting about math — anything — and forget about grades”, most of them were so entrenched in the culture of exam-passing that I failed quite miserably to find a tutor who could teach the magic of math as opposed to the mechanics of math.
In the early 90s I developed a 3D compression technique (for augmented reality) whilst studying for a PhD (sponsored by Motorola) at the prestigious Mobile Multimedia Lab at the University of Southampton. On the advice of my sponsor, I had to abandon 3D compression in favor of “something more practical.” So I dug into the much-lauded challenge of solving the canonical wireless interference problem (“co-channel interference”) that sits at the root of determining the capacity of a cellular system (other than shrinking the cell size). I did this in conjunction with a colleague at HP Labs using fuzzy logic AI (and later neural nets). It strengthened my interest in algorithms.
By 1996, it was already obvious to me that mobile apps (or “mobile multimedia” as we called them back then) were the future, even though the mobile app had yet to be invented. This is what motivated me to start my own company to build mobile apps, after leaving one of the most prestigious R&D centers in Europe and one of the most prestigious post-grad programs in mobile communications.
I wrote one of the world’s first books about mobile apps, a category that I helped to invent. I was one of the only individual expert members of the Java community that invented the first mobile app framework. The underlying tech was the foundation for Android years later. But before it existed, I had already built the world’s first wireless email system using the Wireless Application Protocol (WAP). And even before that, I built a wireless sync system on Windows CE. And — before that — I had built what was probably the world’s first mobile email gateway using text messaging, back when the total number of messages on the UK network (Vodafone) was in the thousands per day.
So yes, I really am one of the inventors of the mobile app. (For a more complete list of my inventions, see some of the entries in my work.)
By way of “confirmation” of my mobile-first status, I was also the first developer, some years later, to get an Apple staff pick for an iPhone productivity app. I had built a mobile web app (before native iPhone apps existed) to take notes on the phone. It was a simple first step of a more ambitious vision to build an AI that made sense of my notes. I have resurrected this ambition many times since and even postulated a “Gravitational Theory of Thought” that I have used to build a novel knowledge-based AI, mostly as a passion project.
Some of my best work in my “early career” was done via my start-up. It was Europe’s first ever mobile apps company, 11 years before the iPhone. As early as ’97, I invented and built the world’s first mobile portal (Zingo) for Lucent Technologies. It featured location-based services and later became the basis of a partnership between Netscape and Lucent. Ironically, or perhaps not, even the great Netscape failed to see the significance of mobile and dismissed the opportunity.
I designed one of the world’s first smartphone interfaces when I consulted for NTT DoCoMo (’98), and this led to my becoming consulting CTO for MetroWalker, Asia’s first location-services start-up (before mobiles had GPS), based out of Hong Kong.
Honestly, I had given up on mobile once it became a mainstream technology, but I was tempted back into mobile when asked to consult for O2 UK in 2008 with the brief: “help us to think like internet guys”. This effort took a number of forms, but mostly centered upon converting their network into an open platform and inculcating a kind of “agile” way of working using the latest web stack technology. We used this to completely reinvent their Bluebook product into a modern open API service called Hash Blue, built using agile methods for the first time inside of O2. [Note that my start-up Magic E Company had been using the precursor to agile, called Extreme Programming, since 1999.] (As an interesting loop, my wife, many years later, after re-inventing herself as a data scientist, ended up pioneering “Data Ops” (a kind of Agile approach to data) in the same company where the inventor of Extreme Programming now resides.)
In 2008, I helped to found a CEO-sponsored innovation lab (“O2 Labs”) where I invented a number of products, including an open telephony platform (connFu) with its own Ruby-flavored domain-specific language (DSL) that would allow anyone to build mobile services with just a few lines of code. I think this is probably the first and last time that any carrier in the world created a programming language!
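To give a flavor of what “a few lines of code” can mean for a telephony service, here is a purely hypothetical sketch in Python. It is not the actual connFu DSL (which was Ruby-flavored and is not reproduced here); the class and method names are invented solely to illustrate the idea of declaring a service with a couple of handlers.

```python
# Hypothetical telephony mini-DSL; NOT the actual connFu API, just an
# illustration of declaring a mobile service in a handful of lines.

class TelephonyApp:
    def __init__(self, name):
        self.name = name
        self.handlers = {}

    def on_sms(self, keyword):
        """Register a handler for inbound SMS messages that start with `keyword`."""
        def register(fn):
            self.handlers[keyword.lower()] = fn
            return fn
        return register

    def receive(self, sender, text):
        """Simulate an inbound SMS and dispatch it to the matching handler."""
        keyword = text.split()[0].lower()
        handler = self.handlers.get(keyword)
        return handler(sender, text) if handler else None


app = TelephonyApp("taxi-line")

@app.on_sms("taxi")
def book_taxi(sender, text):
    return f"Thanks {sender}, a taxi is on its way."

print(app.receive("+14155550123", "TAXI to the airport"))
# -> Thanks +14155550123, a taxi is on its way.
```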
“Programmable telephony” has since become a convention of the next-gen platform/API economy, with the likes of Twilio (which I recommended Telefonica acquire back in Twilio’s infancy).
Whilst consulting for O2, I strongly advocated that carriers needed to become “connected services” companies rather than “dumb pipes” and wrote a book to explain the concept. It was widely read, but mostly ignored, by carrier executives the world over. The essence of the concept was to reimagine carriers as platforms. The idea failed because carrier execs tried to view it through the lens of carrier economics versus software economics. Well, that’s a long story (TL;DR).
After moving to the US in 2011, I continued to invent strategies, products and technologies — and file patents — for various clients. For example, as consulting Chief Scientist at Art.com, I helped to create a number of technologies. One of them was a suite of color-matching technologies with the goal of creating a new web destination (i.e. something like “colors.com”) where users could create color schemes for their decor projects – i.e. a much bigger audience, potentially, than just reproduction art buyers.
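For a sense of what the simplest building block of color matching looks like, here is a minimal sketch assuming a nearest-neighbor match against a small palette. The palette names and values are invented, and this is not Art.com’s actual technology.

```python
import math

# Illustrative palette only; names and sRGB values are made up for this sketch.
PALETTE = {
    "dusty rose": (196, 142, 149),
    "sage":       (154, 175, 136),
    "slate blue": (106, 121, 163),
    "charcoal":   (54, 58, 64),
}

def nearest_color(rgb, palette=PALETTE):
    """Return the (name, value) palette entry closest to `rgb` by Euclidean distance.

    A production system would more likely convert to a perceptual space such as
    CIE Lab and use a delta-E metric; plain RGB distance keeps the sketch
    self-contained.
    """
    return min(palette.items(), key=lambda item: math.dist(rgb, item[1]))

print(nearest_color((200, 150, 140)))
# -> ('dusty rose', (196, 142, 149))
```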
Sadly, the sponsor for the project (Ivy Ross) left to work for Google and the project became a compromise: an attempt to insert the color technology into the existing art e-commerce business. These kinds of compromises happen often and are well documented by Christensen (e.g. “cramming”).
I subsequently led a far bolder innovation effort at Art.com — Klio, an attempt to establish digital décor as a new type of art experience.
I still believe that the “digital art” category has the potential to reinvent art and our relationship with it, but it requires some kind of “tipping point” around the notion of owning digital objects (NFTs?). Nonetheless, I filed various patents fundamental to digital art rendering and displays after I helped invent a novel file format (.art) to allow for long-playing art pieces that could evolve over the period of a year, or indefinitely! As far as I know, no other long-playing format has ever been invented.
Klio re-awakened a latent interest in art and I have since produced several “generative” works of my own that use algorithms to create an aesthetic. Most of these works were available on Klio (now defunct after art.com abandoned the project and sold itself to Walmart) but I plan to release others in due course once I figure out where best to publish them (and if I get time) — perhaps as NFTs. Many of my pieces explore our relationship with data and AI at a more philosophical level. (And there is plenty to say about that, but this isn’t the place.)
I have often found myself leading technological transformation in a kind of “intrapreneurial” setting inside existing companies, often under the rubric of an “innovation lab.” Back when I was consulting for O2, I had already pioneered the use of start-up techniques (such as a modified form of Lean/Agile) inside of large orgs, as contradictory as that sounds (and often is).
However, the bigger challenges I faced were in overcoming organizational cognitive illusions, which is why a good deal of my personal research has gravitated towards understanding the types of cognitive illusions that cause innovation to fail. This research has often proved far more valuable in my consulting work than writing code to experiment with the latest AI methods.
I am told that I am hard to pigeon-hole, but I’m fine with that. I believe in challenging established thought patterns and striving to achieve progress at the speed of thought rather than the rate of decay of dogma. I suspect that I have anarchic tendencies, in the classical anarchist tradition of bottom-up organization, and have often harnessed them to “disrupt from below” when the usual “top down” methods fail, which is often (and for too many reasons to elaborate here).
This attitude informs a personal philosophy about the power of directed bottom-up (or emergent) use of technology to transcend biological biases (e.g. misplaced tribalism) that might distract us from certain truths, if they really exist. That said, such a philosophy is riddled with problems and contradictions. But that’s the work of thinkers – to figure the hard stuff out and try, if we can, to illuminate a pathway.
Related to a passion for using tech to boost my productivity, my personal research interests include developing models of creative thinking with a view to using machines to “amplify thought.” I call this “augmented cognition” and it relates to the aforementioned obsession with trying to meaningfully digitize my notes and build an AI to “make sense” of them beyond mere NLP-type probabilistic classifications. In essence, I am fascinated by the prospect of so-called “Symbolic AI” and feel like it ought to become my full-time obsession at some point.
It is different from probabilistic AI, and it is certainly not an attempt to solve the “Hard AI” problem, which, on a good day, I tend to think is unsolvable, per Searle’s position (even though I am not entirely convinced by his arguments).
If AI is about making machines more “human”, I am excited by making humans more machine-like, at the risk of sounding dystopian (or even utopian). What I mean is finding ways to put digital interfaces directly into the cognitive loop. I had hoped to get a chance at this when I interviewed at Google (with Sergey himself) for the Google Glass R&D role. However, sadly, that project was killed in terms of any meaningful innovation.
I think the most sensible starting point for putting AI directly into our cognitive loop is to begin with education.
For example, I favor methods of teaching mathematics that assume the use of available computation wherever possible (e.g. Mathematica) instead of rote-learning useless facts like sin^2(a) + cos^2(a) = 1. In this regard, I consider the work of Conrad Wolfram worthy of attention. But I think it can go a lot further by building an AI that creates content dynamically – i.e. the ultimate in personalized education. That said, I think it will work best in a “classroom” setting where human interaction (e.g. peer learning) is mixed with computational approaches.
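As a tiny illustration of what “assume the use of available computation” can mean in practice, here is a sketch using Python’s SymPy (a stand-in for any computer algebra system, such as Mathematica) to verify that identity symbolically rather than memorize it:

```python
import sympy as sp

# Let the computer algebra system carry the identity, rather than rote memory.
a = sp.symbols('a')
identity = sp.sin(a)**2 + sp.cos(a)**2

print(sp.simplify(identity))           # -> 1
print(sp.simplify(identity - 1) == 0)  # -> True
```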
Homeschooling (“alt ed”) of my kids greatly informed these attitudes towards education. When you take education into your own hands, especially for renaissance reasons, many of its common forms and supposed agendas become quite puzzling (or not). In this regard, living in Silicon Valley isn’t what you might think. The schools here are obsessed with teaching to the grade, although there are glimmers of hope of alternative models.
I like helping people to improve themselves, and I hold strongly to the view that I can learn something from everyone I meet, and that no innovation problem is insurmountable, at least in the first degree. Of course, humans have limitations and scope, but that is another topic.
Perhaps I will learn from you one day, or get a chance to solve a pressing problem that keeps you awake at night. Life is full of problems, but that’s what makes it interesting.
If you got this far, then thank you for spending some of your valuable and irreplaceable Earth minutes reading this. I am in your debt.