Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 7

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on December 4, 2018

This is the 7th posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-6. And also see two benchmark postings that I initially wrote just over six years apart, but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I began Part 6 of this series with a continuation of a line of discussion that I began in Part 4, on emergent properties and how they might, as a concept, be at least semi-mathematically defined (with a first take on that offered in Part 5). I then continued on in Part 6 to add further to an also-ongoing discussion of structured, functionally interconnected simple artificial intelligence agents, focusing there for the most part on what might best be thought of as the functional specifications explosion that can arise when seeking to parse complex overall task goals into simpler constituent building block parts, and particularly where the overall problems faced have not been fully thought through for their functionally implementable detail.

Quite simply, if you have not fully thought through a complex and far-reaching problem for what it actually involves, that lack of clarity and focus as to what has to be done to resolve it can make it difficult at best even to know what does and does not have to be included in any real-world solution to it. And that can lead to a seemingly open-ended, analytically driven inclusion of ever more finely distinguished “probably essential parts” that are themselves at most only partly understood from an implementation perspective, with that open-endedness stemming from the flood of task boundary and scope uncertainties that this overall lack of clarity and focus brings. Simplified conceptual understanding and design, and certainly as captured in the basic model offered by Occam’s razor, can only work when you have a good idea of what you actually seek to do, and in sufficient depth and detail so as to know what “simplest” and “most direct” even mean for the overall problem at hand.

In Part 6 I chose meaningful general conversational capability, as might be carried out by an artificial intelligence agent, or rather by an information technology system-backed array of them, as a working example of this, where the goal of such a technological development effort would be to arrive at a system that could pass a traditionally framed Turing test. And I intentionally chose one of the oldest, long-standing, and only partly understood general problems that we face there, one going back to the dawn of what we now know as artificial intelligence research, and certainly in an electronic computer age context.

As a reminder here, the Turing test in effect seeks to define intelligence in terms of what it is not: a conversational capability that does not seem to have come from an algorithm-driven computer source, and that as a result must be coming from an actual person, and from a presumed true intelligence. That leaves fuzzy and undefined gray area boundaries as to the What and How of any functional resolution to this problem, and that is a predominant feature of artificial intelligence-based conversational capability in general. And that lack of clarity seeps into every other context that has been worked upon to date for validating or achieving true artificial general intelligence, too.

And with at least the core of that set of points offered in Part 6, I concluded that posting by stating that I would follow it in this series installment by addressing:

• “Neural networks as they would enter into and arise in artificial intelligence systems, in theory and in emerging practice – and their relationship to a disparaging term and its software design implications that I first learned in the late 1960s when first writing code: spaghetti code. (Note: spaghetti code was also commonly referred to as GoTo code, for the opprobrium that was placed on the GoTo command for how it redirected the logical flow and execution of software that included it, e.g. in programs with lines in them such as: ‘if the output of carrying out line A in this program is X, then GoTo some perhaps distantly positioned line Z of that program next and carry that out and then proceed from there, and wherever that leads to and until the program finally stops running.’) Hint: neural network designs find ways to create lemonade out of that, while still carefully and fully preserving all of those old flaws too.”
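
To make that old design pattern concrete before continuing: here is a minimal and purely illustrative sketch in Python. Python has no GoTo statement, so a label-dispatch loop, with hypothetical line_a, line_b and line_z “program lines,” stands in for one here; the point it is meant to show is how any line can redirect execution to any other, which is exactly what makes spaghetti code’s logical flow so hard to trace.

```python
# A toy "GoTo-style" program, emulated with a label-dispatch loop.
# Each "line" of the program is a function that returns the label of
# the next line to execute - the spaghetti code problem in miniature:
# control can be redirected anywhere, at any point.

def line_a(state):
    state["x"] += 1
    # 'if the output of carrying out line A is X, then GoTo line Z'
    return "line_z" if state["x"] % 2 == 0 else "line_b"

def line_b(state):
    state["x"] *= 3
    return "line_a"  # jump backwards, re-entering earlier logic

def line_z(state):
    print("result:", state["x"])
    return None  # halt

PROGRAM = {"line_a": line_a, "line_b": line_b, "line_z": line_z}

def run(start="line_a"):
    state = {"x": 0}
    label = start
    while label is not None:           # keep jumping until a line halts
        label = PROGRAM[label](state)  # each line names its successor
    return state

run()  # prints: result: 4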

And I further added that I would “proceed from there to pick up upon and more explicitly address at least some of the open issues that I have raised here up to now in this series, but mostly just to the level of acknowledging their significance and the fact that they fit into and belong in its narrative. And as part of that, I will reconsider the modest proposal artificial intelligence example scenario that I began this series with.”

I am in fact going to continue this narrative from here by more thoroughly discussing a few issues that I have already been addressing, as promised in the immediately preceding paragraph. And then I will discuss neural networks and other implementation level issues that would go into actualizing what I write of here in real systems. And then after more fully developing a background for further discussion of it, I will finally turn back to my initial “modest proposal” example that I effectively began this series with.

That editorial update noted, and to put what follows into perspective, I offer an alternative way of thinking about intelligence, and about general intelligence in particular, one that seeks to capture the seemingly open-ended functional requirements that I so briefly touched upon in Part 6 by way of its automated, but nevertheless realistic and natural-seeming conversational task goal:

• General intelligence can be thought of as beginning when information processing reaches a level of flexibility that cannot readily be descriptively or predictively captured in any identifiable set algorithm. Intelligence in this sense begins with the emergence of the organized and goal-oriented unpredictable.

I begin readdressing issues already touched upon in this series, in order to advance its discussion as a whole, by reconsidering precisely what emergent properties are. And I begin that by drawing a line of distinction between two basic forms of emergence, which I refer to as structural and ontological.

Structural emergence is task performance emergence that arises when currently available hardware, software, firmware, and data resources become repurposed through the assembly of what amounts to virtual agents from them, or rather from their component elements.

Think of virtual agents as artificial intelligence constructs that can carry out new functions and that are formed out of pre-existing systems resources, and parts thereof. And think of this functionality emergence as arising in a manner analogous to how a software emulator can allow a computer with one type of central processing unit chip design to run an operating system designed to run only on another type of central processing unit with its differing design – but as if that software were running on the chip design that it was built to be native to. Capacity for this type of emergence would arise from the repurposing of underlying resource sets and elements in novel ways and in novel combinations, from what is already in place in the overall artificial intelligence-supportive system. And at least in retrospect, the actual emergence of such a structurally arrived at capability might seem to arise essentially all at once as a capable functionality is achieved, with all that precedes that explicable as a collectively organized example of what biologists call preadaptation (or exaptation).
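
As a minimal sketch of that idea, and nothing more: the following Python fragment composes pre-existing, unmodified components into a “virtual agent” with a capability that none of them was individually designed to provide. The transcribe, translate and summarize functions here are hypothetical stand-ins invented for the example, not real system resources.

```python
# Hypothetical pre-existing system resources, reused entirely as-is.
def transcribe(audio: str) -> str:           # stand-in speech-to-text
    return audio.lower()

def translate(text: str, lang: str) -> str:  # stand-in translator
    return f"[{lang}] {text}"

def summarize(text: str) -> str:             # stand-in summarizer
    return text[:40] + "..."

# The "virtual agent": none of the components above is modified, but
# their novel composition yields a briefing capability that none of
# them was individually built to provide.
def briefing_agent(audio: str, lang: str) -> str:
    return summarize(translate(transcribe(audio), lang))

print(briefing_agent("QUARTERLY NUMBERS ARE UP ACROSS ALL REGIONS", "fr"))
```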

Viewed from that preadaptation perspective, structural emergence can be seen as a time-independent form of new agent emergence, at least as functionally realized. To connect this with earlier discussion, the original semi-mathematical definition of emergence as offered in Part 5 can be viewed as having been framed in a conceptually simpler, time-independent manner that would be consistent with structural emergence per se. (Note: I am going to discuss this set of issues from a more practical implementation perspective in future postings to this series, when discussing neural networks, and when doing so in contrast to the object-oriented programming paradigm, as the issues of object encapsulation and its leakage and cross-connection alternatives are considered.)

That de novo emergence vision, of an at least seemingly sudden appearance of new properties and functionalities out of an ostensibly fixed set of preexisting resources in a system, raises the question: what of change and its role here? And that leads me to a second form of emergence that I would consider in this series and its context:

Ontological emergence: this is a form of new agent formation that would arise through the repurposing and evolution of existing underlying elements, rather than from their reuse essentially as-is on a component-by-component basis, as would be presumed in the case of structural emergence.

Let me rephrase and repeat that for greater clarity. Structural emergence occurs when what are in effect off-the-shelf resources are brought together in novel ways, creating new and unplanned-for capabilities in the process. Ontological emergence can also involve bringing unexpected combinations of functionally supportive elements together, but the emphasis here is on those elements changing: evolving, to meet and serve those new needs.
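
To illustrate that contrast in the same toy terms as above, and again only as a sketch under invented assumptions: here the reused element itself changes, with a single internal parameter (a hypothetical firing threshold) evolving in place to serve a new role, rather than being reused as-is.

```python
import random

class Thresholder:
    """A reusable element whose one defining parameter is free to drift."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def fire(self, signal: float) -> bool:
        return signal >= self.threshold

    def adapt(self, signal: float, should_fire: bool, rate: float = 0.01):
        # Nudge the internal parameter toward whatever the new role demands.
        if should_fire and not self.fire(signal):
            self.threshold -= rate
        elif not should_fire and self.fire(signal):
            self.threshold += rate

# Structural reuse would keep the threshold frozen at 0.9; here the
# element evolves in place to serve a new role: firing at 0.4 instead.
unit = Thresholder(threshold=0.9)
for _ in range(2000):
    s = random.uniform(0.0, 1.0)
    unit.adapt(s, should_fire=(s >= 0.4))   # the new target behavior
print(round(unit.threshold, 2))             # has drifted toward ~0.4
```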

Ontological emergence, in contrast to structural emergence, is time-dependent, and as such it calls for a new and updated formulation of the semi-mathematical definition of emergence that I initially offered in Part 5. First, let’s review the initial time-independent definition and its more predictably deterministic, non-emergent properties and elements:

• F_A({the set of all possible input values}) → {the set of all possible output values}_A

where F_A can be said to map specific input values that this process can identify and respond to, as they arise in a pre-action state A, one-to-one to specific counterpart output values (outcomes) that the agent F can produce. Emergent functionalities are then construed as (generally simplest model, Occam’s razor compliant) descriptors of how unexpected, additional outcomes could have arisen too, along with the more predictably expected outcomes of the above input-to-output mapping.
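
A minimal sketch of that time-independent picture, with a hypothetical three-entry input space standing in for “the set of all possible input values”: F_A here is nothing more than a fixed, fully enumerable lookup, and any organized outcome not predictable from that enumeration would be the mark of an emergent functionality.

```python
# F_A as a fixed, fully enumerable input-to-output mapping for the
# pre-action state A (a hypothetical three-entry input space).
F_A = {"ping": "pong", "hello": "world", "stop": "halted"}

def agent(input_value: str) -> str:
    return F_A[input_value]

# Every outcome here is predictable from the table itself; an emergent
# functionality would be any organized, goal-directed output that this
# enumeration could not have predicted.
for x in F_A:
    print(x, "->", agent(x))
```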

I would propose the following refined and expanded version of that basic formula as a starter definition of what time-dependent emergent properties are, once again starting with the non-emergent, predictable systems framework in place:

• F_A({the set of all possible input values}, t_0) → ({the set of all possible output values}, t_1)

where t_0 and t_1 represent, respectively, a last significantly pre-emergent time point and a time point at which a new capability has effectively, functionally first emerged.

And more generally, this becomes:

• F_A({the set of all possible input values}, t) → ({the set of all possible output values}, t)

where t simply represents time as a whole, treated as a separate dimension, and not just as a source of specific incident-defining points along that dimension.
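
And as a correspondingly minimal sketch of that time-dependent form, again purely illustrative: here the mapping itself is a function of t, so that what F_A does, and not just what it receives, can change. The transition time and both behaviors are invented for the example.

```python
# F_A with time as an explicit argument: the input-to-output law
# itself changes, with an assumed emergence transition at t = 10.0.
def F_A(input_value: float, t: float) -> float:
    if t < 10.0:                  # pre-emergent regime (the t_0 side)
        return input_value * 2.0  # the original, predictable behavior
    return input_value ** 2       # post-emergence regime (the t_1 side):
                                  # a qualitatively new input/output law

for t in (0.0, 5.0, 10.0, 20.0):
    print(t, F_A(3.0, t))
```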

I begin discussing this time-dependent approach to emergence by offering a specific, in this case biological systems-based example: gene duplication, with new functions evolving in the extra, duplicated copy through the accumulation of mutations and through natural selection. See for example these two research papers from Cold Spring Harbor’s Perspectives in Biology:

Evolution of New Functions De Novo and from Preexisting Genes and
Recurrent Tandem Gene Duplication Gave Rise to Functionally Divergent Genes in Drosophila

In these biological examples, novel de novo functionalities arise, at least in large part, during generation-to-generation reproduction. In contrast to that, I write here of emergent functionality as arising in an artificial intelligence context, in a more ontological, self-evolving manner and within single artificial intelligence systems: as would arise, for example, as a neural network-based system learns, and redesigns and rebuilds itself as an implementation of that process.
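
The gene duplication mechanism just cited can be caricatured in a few lines of Python. This is a toy model under loudly stated assumptions, not biology: a “gene” is just a parameter vector, the fitness target for the duplicate’s new role is invented for the example, and selection is simple hill climbing. The point is only the division of labor: the original copy stays conserved in its old function while the duplicate is free to drift toward a new one.

```python
import random

def fitness_new_role(gene):
    # Invented target for the duplicate's new function: values near 1.0.
    return -sum((g - 1.0) ** 2 for g in gene)

original = [0.0, 0.0, 0.0]    # conserved copy: keeps doing the old job
duplicate = list(original)     # the duplicated, expendable copy

for generation in range(500):
    mutant = [g + random.gauss(0.0, 0.05) for g in duplicate]
    if fitness_new_role(mutant) > fitness_new_role(duplicate):
        duplicate = mutant     # selection retains useful mutations

print("original: ", original)                          # unchanged
print("duplicate:", [round(g, 2) for g in duplicate])  # drifts toward 1.0
```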

I am going to continue this discussion of emergent properties, in their (here stated) two forms, in my next installment to this series. Then I will return to the issue of the functional specifications explosion as initially discussed in Part 6 and as noted again here at the start of this posting. I will proceed from there to complete a foundation for explicitly discussing neural network and other possible approaches to actually developing a true artificial general intelligence.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
