Platt Perspective on Business and Technology

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 1

Usually, when setting out to write a posting, I know precisely where I would put it in this blog: which directories it might go into, and which series, if any, I might write it to. And if I am about to write a stand-alone posting that would not explicitly go into a series, new or already established, I generally know that too, once again including where I would place it at a directories level. And I generally know whether a given posting is going into an organized series, and even whether it would serve as a first installment there, or whether it is to be offered as a single stand-alone entry.

This posting is, to a significant degree, an exception to all of that. More specifically, I have been thinking about the issues that I would raise here, and have considered placing this in a specific ongoing series: Reconsidering Information Systems Infrastructure, as can be found at Ubiquitous Computing and Communications – everywhere all the time 2 (as its postings 374 and loosely following). But at the same time I have felt real ambivalence as to whether I should do that, or offer this as a separate line of discussion in its own right. And I began writing this while still deciding whether to write it as a single posting or as the start to a short series.

I decided to start this posting with this behind-the-scenes, editorial decision-making commentary because this topic and its presentation serve to highlight something of what goes on as I organize and develop this larger overall effort. And I end that orienting note, and turn to the topic that I would write of here, with one final thought. While I develop a number of the more central areas of consideration for this blog as longer series of postings, I have also offered some of my more significantly important organizing, foundational ideas and approaches in single postings or in very brief series. As a case in point example that I have referred back to many times in longer series, I cite my two-posting series: Management and Strategy by Prototype (as can be found at Business Strategy and Operations as postings 124 and 126). I fully expect this line of discussion to take on a similar role in what follows in this blog.

I begin this posting itself by pointing out an essential dynamic, and to be more specific here, an essential contradiction that is implicit in its title. Moore’s Law, as initially posited in 1965 by Gordon Moore, then at Fairchild Semiconductor and soon after a co-founder of Intel, proposed that there was a developmental curve-defining trend in place, according to which the number of transistors in integrated circuits was doubling approximately every year to two years – but without corresponding price increases. And Moore went out on a limb by his own reckoning and predicted that this pattern would persist for another ten years or so – roughly speaking, up to around 1975. And now it is 2018, and what began as a short-term prediction has become enshrined in the thinking of many, as if an all-but law of nature, from how it still persists in holding true. And those who work at developing next-generation integrated circuits are still saying the same things about its demise: that Moore’s Law will run its course and end as ultimate physical limitations are finally reached … in another few technology generations, and perhaps in ten years or so. This “law” is in fact eventually going to run its course, ending what has been a multi-decade golden age of chip development. But even acknowledging that hazy end-date limitation, it also represents an open-ended vision and yes, an open-ended expectation of what is essentially unencumbered, disruptively new growth and development, unburdened by any limitations of the past – or of the present, for that matter.
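As a rough sketch of the compounding arithmetic involved here (the 1965 baseline of roughly 64 components per chip and the two-year doubling period are illustrative assumptions on my part, not precise historical figures):

```python
def transistors(year, base_year=1965, base_count=64, doubling_period=2.0):
    """Project transistor count per chip under an idealized Moore's Law.

    base_count and doubling_period are illustrative assumptions; the
    point is the shape of the curve, not the exact numbers.
    """
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1965, 1975, 2000, 2018):
    # the idealized curve reaches into the billions by 2018, broadly in
    # line with the transistor counts of large contemporary chips
    print(year, f"{transistors(year):,.0f}")
```

Even under these deliberately simplified assumptions, fifty-plus years of doubling carries the count from tens of components to billions, which is why a ten-year prediction could come to feel like a law of nature.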

Technology lock-in does not deny the existence or the impact of a Moore’s Law, but it does force a reconsideration of what this still-ongoing phenomenon means. And I begin addressing this half of the dynamic that I write of here by at least briefly stating what lock-in is.

As technologies take shape, decisions are made as to precisely how they will be developed and implemented, and many of these choices are in fact small and at least seemingly inconsequential in nature – at least when they are first arrived at. But these at-the-time seemingly insignificant design and implementation decisions can and often do become enshrined in those technologies as they develop and take off, and as such take on lives of their own. That certainly holds true when they, usually by unconsidered default, become all but ubiquitous in their application and in the range of contexts that they are applied to, as the technologies that they are embedded in mature and spread. Think of this as the development and elaboration of what effectively amount to unconsidered standards for further development: arrived at as here-and-now decisions, often without consideration of scalability or other longer-term possibilities.

To cite a specific example of this, Jaron Lanier is a professional musician as well as a technologist and a founding developer of virtual reality technology. So the Musical Instrument Digital Interface (MIDI) coding protocol for digitally representing musical notes, with all of its limitations in representing music as actually performed live, is his personal bête noire, or at least one of them. See his book:

• Lanier, J. (2011) You Are Not a Gadget: a manifesto. Vintage Books,

for one of his ongoing discussion threads regarding that particular set-in-stone decision and its challenges.
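To make the nature of Lanier’s complaint concrete: MIDI represents pitch as one of 128 note numbers, each an equal-tempered semitone, with note 69 defined as A440. A minimal sketch of that standard mapping shows what gets rounded away (the 453 Hz example frequency is my own illustrative choice, a roughly quarter-tone “bent” pitch of the kind a live performer produces freely):

```python
import math

def freq_to_midi(freq_hz):
    """Map a frequency to the nearest MIDI note number (0-127).

    MIDI note 69 is A440; each step is one equal-tempered semitone.
    Anything falling between semitones is rounded to the nearest one --
    the quantization that lock-in then enshrines.
    """
    note = round(69 + 12 * math.log2(freq_hz / 440.0))
    return max(0, min(127, note))

def midi_to_freq(note):
    """Invert the mapping: the pitch MIDI can actually represent."""
    return 440.0 * 2 ** ((note - 69) / 12)

bent_pitch = 453.0  # ~quarter-tone above A4, easy on a violin or voice
note = freq_to_midi(bent_pitch)
print(note, round(midi_to_freq(note), 1))  # → 70 466.2
```

The performed 453 Hz pitch snaps to note 70 (A#4, 466.2 Hz), some 13 Hz away; the in-between pitch simply has no note-number representation. (The protocol’s pitch-bend messages offer a partial workaround, but the 128-semitone grid remains the baseline assumption built into decades of hardware and software.)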

My point here is that while open and seemingly open-ended growth patterns, as found in examples such as Moore’s Law, take place, and while software counterparts to it, such as the explosive development of new database technology and the internet, arise and become ubiquitous, they are all burdened with their own versions of “let’s just go with MIDI because we already have it and that would be easy” decisions, and their sometimes entirely unexpected long-term consequences. And there are thousands of these locked-in decisions, in every widespread technology (and not just in information technology systems per se).

The dynamic that I write of here arises as change and disruptive change take place, with so many defining and even limiting constraints put in place in their implementations and from their beginnings: quick and seemingly easy, simple decisions that these new overall technologies would then be built, elaborated and scaled up around. And to be explicitly clear here, I refer in this to what become functionally defining and even limiting constraints that were more backed into than proactively thought through.

I just cited a more cautionary-note reference to this complex of issues, and one side to how we might think about and understand it, with Lanier’s above-cited book. Let me balance that with a second book reference that sets aside the possibilities and the limitations of lock-in, to presume an evergreen, always newly forming future that is not burdened by that form of challenge:

• Kaku, M. (2018) The Future of Humanity. Doubleday.

Michio Kaku writes of a gloriously open-ended human future in which new technologies arise and develop without any such man-made limitations: only with the fundamental limitations of the laws of nature to set any functionally defining constraints. Where do I stand in all of this? I am neither an avowed optimist nor a pessimist there, and to clarify that I point out that:

• Yes, lock-in happens, and it will continue to happen. But one of the defining qualities of truly disruptive innovation is that it can in fact start fresh, sweeping away the old lock-ins of the technologies that it would replace – only to develop its own lock-ins that will in turn disappear, at least in part, as they are eventually supplanted too.
• In this, think of evolutionary change in technology as an ongoing effort to achieve greater effectiveness and efficiency while all of the current, basic constraints held within it remain intact.
• And think of disruptive new technology as breakaway development that can shed at least a significant measure of the constraints and assumptions that have proven no longer scalable, or no longer effectively so. But even there, at least some of the old lock-ins are still likely to persist. And this next revolutionary step will most likely bring its own suite of new lock-ins with it too.

Humanity’s technology is still new and young, so I am going to continue this narrative in the next posting to what will be a brief series, with an ancient example drawn from biology, and from the history of life at its most basic, biochemically speaking: the pentose shunt, or pentose phosphate pathway as it is also called. I will then proceed from there to consider the basic dynamic that I raise and make note of in this series, and this source of at least potential innovative development conflict, as it plays out in a software and artificial intelligence development context, as that is currently taking shape and as decisions (backed into or not) that would constitute tomorrow’s lock-ins are made.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.
