Turing And The Increasingly Important Case For Theory

Editor’s note: Zavain Dar is an early-stage VC at Lux Capital and Lecturer at Stanford University. He invests and supports deep technology companies leveraging advancements in Artificial Intelligence, Infrastructure, and emerging data and has taught courses on cryptocurrency and the intersections of AI, philosophy and venture. 

Like many in Silicon Valley, I recently saw Morten Tyldum’s The Imitation Game. I have a soft spot for underdog academic narratives and actually teared up. However, I couldn’t shake the feeling the film pigeonholed the breadth and depth of Turing’s work into early cryptography and its mechanized instantiation during WWII.

Cryptography aside, Turing’s work, theory and models still underlie undergraduate curricula in computer science, mathematics and philosophy. His models of computation form the basis for how mathematicians and computer scientists characterize both which problems are solvable and how efficiently answerable questions can be solved algorithmically. His Church-Turing Thesis, coupled with Gödel’s Incompleteness Theorems, still has philosophers debating whether there are universal constraints on human knowledge.

Finally, and perhaps most pressing given the ongoing renaissance in machine learning, the Turing Test remains the de facto yardstick against which we measure progress and traction in artificial intelligence.

Whereas The Imitation Game focuses on cryptography and wartime technology, there is no doubt that Turing’s work also includes AI, theoretical computer science, mathematics and even epistemology.

The realization that Turing’s work still has high relevance in both academia and industry got me thinking about how and why this is the case. What lessons can today’s technology entrepreneurs and investors pull from Turing’s intellectual longevity?

I start with a few basic, broad-stroke assumptions. (The focus on high-level ‘Big-O’ approximations only seems fitting for this piece.)


  1. With time, the state of infrastructure grows linearly. That is, infrastructure layers of software and technology have pressure to innovate and mature yet they don’t necessarily compound. The development of SQL didn’t push the standardization of TCP/IP and didn’t accelerate the advent and growth of bitcoin. While certainly some are necessary precursors to others, we see linear growth and creation here: one building off the other at a roughly constant velocity.
  2. As infrastructure grows, we see an exponential increase in the pace of innovation. Innovators no longer need to build full-stack, proprietary solutions and hence are able to spend the majority of their cycles and bandwidth on the most important and differentiated components of their businesses. As such, latency in feedback loops drops, and the pace of innovation itself increases. Taking Nos. 1 and 2 into account, we get the following rule:
  3. The pace of innovation grows exponentially with respect to time. In turn, as the pace of innovation increases, the life expectancy, or expected window of relevancy, of the status quo decreases. This should intuitively check out: the more innovation there is, the more disruption occurs, and the shorter any one entity can retain hegemony without being thwarted by a disruptive upstart. Calculus aside, let’s take for granted that this too is a linear relationship.
  4. Life expectancy of the status quo decreases linearly as pace of innovation increases. And combining Nos. 3 and 4, we get our final result:
  5. Life expectancy of the status quo decreases exponentially with time. This is perhaps the most interesting result. We have an argument that the knowledge, hegemony and power associated with the current status quo are becoming increasingly less valuable and stable. For example, top software companies from the late 80s and 90s had less pressure to innovate because their technology was more difficult to disrupt and hence had longer shelf lives. Over time, infrastructure matured, the pace of innovation increased, the potential for disruption grew, and the inertia associated with the current state, or status quo, dropped.
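The five assumptions above can be sketched as a toy model. This is a minimal illustration, not anything from the argument itself: the constants `a`, `r` and `L0` are arbitrary, and the “linear” decrease in shelf life (No. 4) is read loosely as inverse proportionality to the pace of innovation, so that shelf life stays positive rather than crossing zero.

```python
import math

def infrastructure(t, a=1.0):
    # Assumption 1: infrastructure grows linearly with time.
    return a * t

def innovation_pace(t, r=0.5):
    # Assumptions 2-3: pace of innovation grows exponentially with time.
    return math.exp(r * t)

def shelf_life(t, L0=10.0, r=0.5):
    # Assumptions 4-5: shelf life of the status quo tracks the inverse of
    # the pace of innovation, so it decays exponentially with time.
    return L0 / innovation_pace(t, r)

# Print the three quantities over a few time steps to see the shapes:
# linear growth, exponential growth, exponential decay.
for t in range(5):
    print(t, round(infrastructure(t), 2),
          round(innovation_pace(t), 2),
          round(shelf_life(t), 2))
```

Run over a few time steps, the table makes the composed claim visible: infrastructure climbs at a constant rate, pace compounds, and the shelf life of the incumbent state shrinks by a constant factor per unit of time.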

How does Turing’s work relate to this pseudo-anthropologic and economic postulating? Well, etched in Turing’s work was his ability to cut through the engineering limitations of his time and grapple with the underlying theory. By decoupling the current state of the art from the theoretical ground truth, Turing produced work that hasn’t lost applicability and has shown a near infinite shelf life.

His work today carries just as much applicability as it did during his own time, if not more, given the engineering possibilities only now becoming real. This isn’t wholly dissimilar from how theoretical physicists working on chalkboards view their work in juxtaposition to applied physicists in cutting-edge linear accelerators.

As the window of relevancy for the status quo shrinks, investors and entrepreneurs alike must shift emphasis toward a Turing-like approach and hone a deeper understanding of the theory underlying the day-to-day applied engineering.

While technology and technological hegemony are increasingly vulnerable to disruption, we can use a sound grasp of baseline theory to start, build and invest in companies that aren’t fully dependent on the status quo but rather are aware of the accelerating state of the art.

As seen through the longevity of Turing’s work, the increasing pace of innovation highlights the increasing importance of theoretical understanding for entrepreneurs and investors alike.