Platt Perspective on Business and Technology

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 3

This is my 3rd posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Part 1 and Part 2). And I begin this next installment in that progression by sharing a quotation that I find particularly relevant here:

“Nothing about computers is inevitable. But we’ve put such a massive number of bits into place that it is often too much work to remember how each brick of the edifice we live in is nothing but a peculiar obsession someone else put into place, once upon a time.”

You can find this quoted text in its original context on page 44 of:

• Lanier, Jaron. (2017) Dawn of the New Everything: encounters with reality and virtual reality. Henry Holt and Company.

But for purposes of this posting and the series-long narrative that it fits into, I cite the above for two reasons. The first is to highlight how complex systems such as computers (or essentially any complex systems designed to serve complex, far-reaching functions) evolve. The scope and range of what they are intended to do evolve too, with new functional goals and sub-goals added, and old ones shifted in their overall priority and perceived importance, or even dropped entirely. And second, complex systems are never designed, built, tested, and integrated by single individuals, or even by single businesses or other discrete organized groups. That holds for computers, the internet, and every other complex, far-reaching technological endeavor, and certainly when preconceived, predesigned standards and design elements have to be built in.

Individual components: the individual bricks in these edifices are conceived and built separately from each other, and not always with a preconceived goal of being used in the types of systems that they end up in, and certainly not for the full range of applications and contexts that they might be brought into. I wrote in Part 2 about how a particular brick in the edifice of electronic technology for recording, storing, and playing digitally formatted music has unfortunately restricting limitations built into it, when briefly addressing the MIDI musical note encoding standard and its issues. The principle raised there applies to complex, multi-sourced technologies in general. One of the biggest challenges in any really large-scale software design and development project is that of connecting all of the disparately sourced pieces that have to fit and work together, so that they can do so at least well enough to meet overall end user needs.

I have been writing in this series about a dynamic that arises within essentially any system of overall technological progress, or in the context of evolutionary change per se for that matter, that in its most simply stated form pits two understandings of innovation advancement against each other:

• One of these competing visions presumes that new technologies (to continue to pose this in human-artifact terms) can and will develop to the outer limits of what is possible for them, given the fundamental physical constraints imposed by nature on what can be done: the limitations imposed by the laws of nature as empirical reality and its observation have shaped them. Think of innovation advancement in this context as being open to achieving what amounts to absolute perfectibility, at least within the outermost constraints of what is physically possible, and with that serving as an all but inevitable outcome, given enough time.
• Moore’s law, and the still-ongoing race to build progressively smaller and more compactly interconnected integrated circuit elements, and to pack more and more such elements into a single chip without corresponding cost increases, stands out for its apparent significance precisely because it seems to fit this race-to-perfection pattern. For decades now, it has seemed that only the ultimate physical laws of nature and their constraints, as set by the size of individual atoms and by the properties that emerge for them at a quantum mechanical level, can set any real limit on how far this scaling prediction can be pushed. But Moore’s law is an exception to the more standard pattern of innovative development, judging from what we actually see in most innovation contexts and certainly when looking at them in detail. And that conclusion holds even though this basic vision (this basic paradigm of physical-laws-limited perfection) is commonly presumed to be more generally, empirically valid than it ever actually is or can be.
• The other paradigm that I would cite here offers a competing vision of what is, and of what can be developed technologically. And it is one that can be summarized by a simple statement: history has impact, and what comes next depends on what came before. Next-round innovation is constrained, shaped, and limited by what has come before it in the innovation progression that it arises within and from.
• This is a vision and understanding in which what can be developed through a progression of innovations is at least as much constrained by the historical pathway of design and implementation decisions already made along the way, as by any outer limits of attainability that would ultimately be imposed by the laws of nature of the first vision above. And in fact this vision of development, when realized in practice, essentially always leads to innovation dead ends that fall far short of what might theoretically be possible, at least as assumed by the first of these two bullet-pointed understandings.
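The scaling pattern behind the first vision can be sketched numerically. As a rough illustration only: the two-year doubling period is the classic formulation of Moore's law, and the 1971 Intel 4004 baseline of roughly 2,300 transistors is a commonly cited figure, used here purely as an assumed starting point.

```python
# Moore's law as a naive doubling schedule, for illustration only.
# Baseline: Intel 4004 (1971), ~2,300 transistors -- a commonly cited figure.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2  # the classic "every two years" formulation

def projected_transistors(year: int) -> int:
    """Transistor count projected by simple doubling from the 1971 baseline."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return int(BASELINE_TRANSISTORS * 2 ** doublings)

for year in (1971, 1991, 2011):
    print(year, projected_transistors(year))
```

Exponential growth of this kind is exactly what makes the atomic-scale limits noted above loom so large: each doubling halves the feature budget available per transistor.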

I just recapitulated this distinction here in terms of human artifice and the flow of human-sourced design that leads to it. One of my goals in Part 2 of this series was to argue a case for how the additional set of constraints on what innovation can lead to, as noted in the second of the above scenarios, has directly comparable counterparts in nature, and certainly in biologically evolved and evolving systems.

Returning to my biological systems examples of Part 2, I note that they can readily be divided into two basic forms, both of which have direct counterparts in the human-developed technological arena too:

• Design and implementation decisions that can become grandfathered in, and that with time can become system limiting, even as they continue to be used as entrenched building block elements in standardized designs, and
• Design and implementation decisions that can persist even long after they no longer functionally apply at all, and that are simply maintained and replicated into new designs just because they have always been there.

And this leads me directly to the issues that I said I would address here, at the end of Part 2:

• A point of biological evolutionary understanding that, I would argue, is crucially important in understanding the development of technology in general, and of more specific capabilities such as artificial intelligence in particular: the concepts of fitness landscapes as a way to visualize systems of natural selection, and of adaptive peaks as they arise in these landscapes.

Let’s begin addressing that by reframing the basic terminology of this biological systems-oriented statement into more business systems-familiar terms. I begin with fitness and natural selection, making two fundamental points about them from a biological systems perspective, points that translate nicely into a business and technology innovation setting too:

• Fitness is a measure of the relative likelihood that a given genetically (plus environmentally) determined trait will survive into the next generation, as the organisms that bear that trait compete for representation in that generation through more successful reproduction. But fitter in this sense is no guarantee of greater success there. A species might live and thrive in an ecological niche where the ability to run and dodge faster can significantly improve the chances both of avoiding being eaten and of eating more reliably and fully. But a faster individual, with a trait variation that should give it an advantage there, still might die before successfully reproducing, with its “less fit” counterparts succeeding anyway. So this is all probabilistic, not strictly deterministic, even for what should in principle be larger fitness-defining differences.
• And fitness is often considered as it arises in individual traits or qualities, and particularly where that means simple single-gene determination. But it is entire organisms that live to successfully reproduce and pass their genes on to a next generation, or do not. So ultimately it is the overall cumulative gene-plus-environment package, and chance, that determine who does and does not help to populate that next generation.

From a technology development perspective, this means:

• Better brick-by-brick, component-by-component design can improve the odds for a new innovation as it seeks to compete in the marketplace, particularly where this means features and qualities that really catch consumer attention, and positively so. But better, and even much better, and even for critically important features, does not absolutely guarantee long-term success. At most it can improve the odds of success.
• And it is the entire package that succeeds there or does not, so even great design and implementation features can be drowned out by less inspiring ones, particularly if those are what catch public attention. The complete innovation package makes it or not, even if technologists focus on what they see as the top, key, defining features when they build, and when the business that they work for markets and sells.

And this brings me to fitness landscapes. My goal for the balance of this posting is to briefly introduce this concept, as a starting point for the next installment in this series:

Picture in your mind a mountainous landscape with high peaks and with slopes leading up to them. All of those mountains are surrounded by areas of significantly lower elevation: deep valleys and canyon-like depths in the more extreme cases, but with some peaks connected by mountain passes that might dip significantly below the high peaks, but that are still fairly high in elevation, certainly when compared to the valleys and canyons of this region. The higher the elevation of any given spot in this complex terrain, the higher the fitness value for whatever traits or combinations of them are being modeled by this graphical display. There, different traits and combinations of them can be seen as uniquely assigned their own specific latitudes and longitudes: their own specific points on this topographic map.
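The adaptive-peak idea in this topographic picture can be made concrete in a toy model: a greedy "climber" that only ever takes small incremental steps settles on whatever peak is nearest, even when a much higher peak exists elsewhere on the landscape. The one-dimensional landscape, peak positions, and step size below are illustrative assumptions of mine, not anything drawn from the biology.

```python
import math

def fitness(x: float) -> float:
    """A toy one-dimensional fitness landscape with two peaks:
    a lower local peak near x = 2 and a higher one near x = 8."""
    return 3.0 * math.exp(-(x - 2) ** 2) + 5.0 * math.exp(-(x - 8) ** 2)

def hill_climb(x: float, step: float = 0.1, iterations: int = 1000) -> float:
    """Greedy incremental improvement: move only if a small step helps."""
    for _ in range(iterations):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break  # no small step improves fitness: we are on a local peak
        x = best
    return x

# Starting near the lower peak, incremental change never crosses the valley,
# even though the peak near x = 8 is higher.
print(round(hill_climb(1.0), 1))
```

The valley between the peaks plays the role that lethally low fitness plays in the biological discussion below: a lineage (or an entrenched technology) cannot pass through it by small viable steps.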

One trait that I have been discussing in this series is the biological systems example of the pentose shunt: a short but crucial biochemical pathway, of ancient evolutionary lineage, found in a vast array of species. And to cite a technological example that I have also touched upon in this series, I make note again of the MIDI digital encoding format for representing musical notes. I begin addressing those specific examples, and the pentose shunt in particular, by reframing my topographic model with one additional detail: the mountains in this representation are surrounded by water, and for purposes of this narrative an elevation of zero or lower (sea level or lower) represents zero fitness and is fatal.

The peak representing the pentose shunt, and the gene-based biochemical pathway that it forms, might not be anywhere near the highest possible elevation that could in principle be attained for carrying out the same basic functions that the current pathway carries out. But it would be impossible to reach any other, perhaps better peak (any better alternative to the pentose shunt as we know it), regardless of the new benefits that would be gained from that, because any attempted move away from the current peak would be lethal.

I am going to continue this narrative and its examples in the next series installment, where I will consider the issues and challenges of moving from a current MIDI standard trait to a new and presumably better alternative. And I will explore and discuss a set of cost/benefit issues in that context that serve to explain both how an early solution such as MIDI encoding can be arrived at and adopted, and how such a technical solution can become entrenched.
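To make the MIDI example concrete ahead of that discussion: the standard encodes pitch as a 7-bit integer note number (0 through 127), with note 69 defined as A4 = 440 Hz under 12-tone equal temperament. Pitches that fall between those discrete steps have no direct note-number representation, which is exactly the kind of baked-in design decision that can become entrenched. A minimal sketch of that mapping:

```python
# MIDI encodes pitch as a 7-bit integer note number (0-127), with note 69
# defined as A4 = 440 Hz in 12-tone equal temperament. Pitches between these
# discrete steps have no direct note-number representation.
def midi_note_to_frequency(note: int) -> float:
    """Equal-tempered frequency in Hz for a given MIDI note number."""
    if not 0 <= note <= 127:
        raise ValueError("MIDI note numbers are 7-bit: 0 through 127")
    return 440.0 * 2.0 ** ((note - 69) / 12)

print(midi_note_to_frequency(69))  # 440.0 (A4)
print(midi_note_to_frequency(60))  # ~261.63 (middle C)
```

The 7-bit range and the fixed equal-tempered grid are the "peak" the standard sits on: serviceable, widely built upon, and hard to leave by small compatible steps.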

Looking ahead to what I will include here: one of my core goals in this series is to consider the development of artificial intelligence-based systems, and of hoped-for artificial general intelligence systems, in light of the two-vision dynamic that I have been discussing here. I am leading this overall series and its narrative in that direction, and will turn to that complex of issues as soon as I have completed developing a foundation for that line of discussion.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.
