Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 5

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on July 13, 2018

This is the 5th posting to a series that I am developing here, with the goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2, postings 374 and loosely following for Parts 1-4. And also see two benchmark postings, written just over six years apart, that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

This is a series, as just stated, that addresses how artificial intelligence will inform, and in time fundamentally reshape, information management systems: both more localized computer-based hardware and software systems, and the larger networked contexts in which they interconnect, the global internet included. Up to here I have been focusing on the core term that enters into that goal: what artificial intelligence is, with a particular focus on the possibilities and potential of the holy grail of AI research and development: the development of what can arguably be deemed true artificial general intelligence.

• You cannot even begin to address how artificial intelligence, and artificial general intelligence agents in particular, would impact our already all but omnipresent networked information and communications systems until you at least offer a broad-brushstroke understanding of what you mean by, and include within, the AI rubric, and outline at least minimally what such actively participating capabilities and agencies would involve, particularly as we move beyond simple, single-function artificial intelligence agents.

So I continue here with my admittedly preliminary, first-step discussion of what artificial general intelligence is, moving past the purely conceptual framing of a traditionally stated Turing test to include at least a starting-point discussion of how this might be operationally defined too.

I have been developing my line of argument, analysis and discussion here around the possibilities inherent in more complex, hierarchically structured systems, as constructed out of what are individually just single-task, specialized, limited-intelligence artificial agent building blocks. And I acknowledge here a point that is probably already obvious to many if not most readers: in beginning from that starting point I am modeling my approach to artificial general intelligence, at least at the level of analogy, on the basic architecture of the human brain, and on how a mature brain develops ontogenetically out of simpler, more specialized components and sub-systems of specialized, single-function units that both individually and collectively show experience-based developmental plasticity.

Think bird wings versus insect wings there, where different-in-detail building block elements come together to achieve essentially the same overall functional result, while doing so by means of distinctly different functional elements. While those differently evolved flight solutions differ in the details of structure and form, they still bear points of notable similarity, if for no other reason than that they address the same functional need under the same basic physical constraints. Or if you prefer, consider bird wings and the wings found on airplanes, where bird wings combine airfoil capability for developing lift and forward propulsive capability in the same mechanism, while fixed-wing aircraft separate airflow and the resulting lift from engine-driven propulsion. This difference highlights, among other details, how even the basic functional organization of the underlying components of complex systems can differ, even as essentially the same overall functional results are achieved.

And this brings me to the note that I offered at the end of Part 4, in anticipation of this posting and its discussion. I offered a slightly reformulated version of the Turing test in that installment, in which I referred to an artificial intelligence-based agent under review as a black box entity, where a human observer and tester of it can only know what input they provide and what output the entity they are in conversation with offers in response. But what does this mean when looking past the purely conceptual, and when peering into the black box of such an AI system? I stated at the end of Part 4 that I would at least begin to address that question here, by discussing two fundamentally distinct sets of issues that I would argue enter into any valid answer to it:

• The concept of emergent properties as they might be more operationally defined: emergent properties that could lead to functional capabilities that disruptively go beyond the basic functional limits of simpler systems, and
• The concept of awareness as a process of preemptive and even anticipatory information sorting, filtering and analysis (e.g. with pre- or non-directly empirical simulation and conceptual model building carried out prior to any actualized physical testing or empirical validation of the approaches and solutions considered, and the emergence of what can be considered analogous to human forethought).

And as part of that, I added that I will address the issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might in effect be hardwired to address, as for example in a Turing test context. What would more properly be developed as hardware, software, and I add here, firmware? And what would best be developed and included as database resources that would serve as counterparts to human long-term and short-term memory? I will also discuss error correction as a dual problem, with at least a significant proportion of it arising and carried out, at least conceptually, within the simpler agents that make up an overall intelligent, hierarchically structured system, and the rest carried out at a higher level within such a system, as a function of properties and capabilities that are emergent to that larger system.

I begin addressing the first of those topic points, and with it the rest of the above-stated to-address issues, by specifically delving into one two-word phrase, offered there, that all else here hinges upon: "emergent properties."

• What is an emergent property? Property, as that word is used here, refers to functionalities: mechanisms that directly cause, prevent, or modify an outcome step in a process or flow of them. So I use this term here in a strictly functional, operational manner.
• What makes a specific property, as so defined and considered here, emergent?
• Before you can convincingly cite the action of a putative emergent property as a significant factor in a complex system, it has to meet two specific criteria:
• You have to be able to represent it as the direct and specific causal consequence of an empirically testable and verifiable process or flow of them, and
• This process or process flow should not arise at any simpler organizational level within the overall system under consideration than the lowest level at which you claim it to hold significant or defining impact.

Let me explain that with an admittedly gedanken experiment-level "working" example (a code sketch following the criteria list below makes the setup concrete). Consider a three-tiered, hierarchically structured system built of what could best be individually considered simple, single-task, specialized artificial intelligence agents. And suppose you observe a type of measurable, functionally significant intermediate-step outcome arising within that system as you track its processing, both overall and for how this system functions internally (within its black box). Is this outcome the product of an emergent property in this system? The answer would likely be yes if the following criteria can be demonstrably met, assuming for purposes of this example that this outcome appears in the second, middle level of the network architecture hierarchy here:

• The outcome in question does not and cannot arise from any of the lowest, first-level agent components in place in this system, functioning on their own.
• The outcome only arises at the second level of this hierarchy if one or more of the agents operating at that level receive input from the correct combination of agents functioning at the lowest, first level, along with appropriate input from one or more other agents at the second level, as they communicate together as part of their feedback control capabilities.
• And this outcome, while replicable and reliably so given the right inputs, is not simply a function of one of the second-level agents. It is not a non-emergent property of the second level and its agents.
• And here, this emergent property would most likely become functionally important at the third, highest level of this hierarchical stack, where its functional presence would serve as meaningfully distinct input to the agents operating there.
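To make that gedanken setup concrete, here is a minimal Python sketch of such a three-tiered stack. Everything in it is invented for illustration: the agent functions, the thresholds, and the "act"/"wait" outcome are hypothetical stand-ins, not a claim about how such a system would actually be engineered.

```python
# A minimal, hypothetical sketch of the three-tiered stack described above.
# Every name and threshold here is invented for illustration only.

# Tier 1: single-task, limited-intelligence agents, each mapping one raw
# input to one output.
def sensor_a(x):
    # Detects whether one signal exceeds a threshold.
    return x > 5

def sensor_b(y):
    # Classifies a second, independent signal.
    return y % 2 == 0

# Tier 2: an agent that combines tier-1 outputs with lateral feedback from a
# peer at its own level. The combined outcome cannot be produced by sensor_a
# or sensor_b on its own; it only appears given the right combination of
# first-level inputs plus second-level feedback, per the criteria above.
def combiner(a_out, b_out, peer_feedback):
    return a_out and b_out and peer_feedback

# Tier 3: a top-level agent that treats the tier-2 outcome as meaningfully
# distinct input.
def top_level(combined):
    return "act" if combined else "wait"

# Trace one pass through the stack.
mid = combiner(sensor_a(7), sensor_b(4), peer_feedback=True)
print(top_level(mid))  # -> "act"
```

The point of the sketch is structural rather than behavioral: the middle-tier outcome only arises from the right combination of first-level inputs plus second-level feedback, mirroring the criteria just listed.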

Think of emergent properties here as capabilities that add to the universe of possible functional output states, and the conditions that would lead to them, beyond what would predictably be expected from simply reviewing the overall functional range of this system by looking at its element agents and their expectable outcomes: individually and collectively, as if they were independently functioning simple elements of a simple aggregation.

To express that at least semi-mathematically, consider a networked array, A, of what begin as a connected set of simple artificially intelligent agents. And think of this assemblage as being functionally defined in terms of what it does and can do: by its output states, coupled with the set of input states that it can act upon in achieving those outputs. As such it could be represented as a mapping function:

• F_A: {the set of all possible input values}_A → {the set of all possible output values}_A

where F_A can be said to map the specific input values that this process can identify and respond to, in the simplest case as a one-to-one mapping, to the specific counterpart output values (outcomes) that it can produce. This formulation addresses the functioning of agents that consistently carry out one precisely characterizable response to any given separately identifiable input they can respond to. And when the complete ensemble of inputs and responses found across the complete set of elemental, simple-function agents in a hierarchical array system of this type is considered at this simplest organizational level, where no agents in the system duplicate the actions of any others (as backups, for example), their collective signal recognition and reaction system: their collective function space (or universe, if considered in set theory terms) of inputs recognized and input-specific actions taken, would also fit a one-to-one, input-to-output mapping pattern.
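That tabular, function-space view is easy to make concrete in code. The Python sketch below, with purely invented inputs and outputs, represents each simple agent as a lookup table from the inputs it recognizes to the outputs it produces, takes the ensemble's function space as the union of those tables, and checks the one-to-one property:

```python
# A toy, purely illustrative encoding of the function-space view above: each
# simple agent is tabulated as a dict from the inputs it recognizes to the
# outputs it produces. All inputs and outputs are invented placeholders.

agent_1 = {"s1": "o1", "s2": "o2"}
agent_2 = {"s3": "o3", "s4": "o4"}

# With no duplicated responsibilities, the ensemble's collective function
# space is just the union of the individual tables.
ensemble = {**agent_1, **agent_2}

def is_one_to_one(mapping):
    # True if distinct inputs always map to distinct outputs.
    return len(set(mapping.values())) == len(mapping)

print(is_one_to_one(ensemble))  # -> True in this simple, non-emergent case
```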

That simplest case assumes a complete absence of emergent process and response activity. And one measure of the level of emergent process activity in such a system would be the level of deviation from that basic one-to-one, input-signal-to-output-response pattern, as observed when studying the system as a whole.

The more extra activity is observed that cannot be accounted for in this type of systems analysis, the more emergent property activity must be taking place, and by inference, the more emergent properties there must be, or the more centrally important those that are there must be to the system.
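One crude way to operationalize that deviation measure, continuing the toy tables above, is to compare the assembled system's observed black-box behavior against the aggregated per-agent table and count the input-to-output pairs that the simple model cannot account for. Again, all of the data here is invented for illustration, including the unexplained output:

```python
# Compare the assembled system's observed black-box behavior against the
# aggregated per-agent table, and count the input -> output pairs the simple
# model cannot account for. The observed data, including the unexplained
# "o9", is invented for illustration.

predicted = {"s1": "o1", "s2": "o2", "s3": "o3", "s4": "o4"}
observed = {"s1": "o1", "s2": "o2", "s3": "o3", "s4": "o9"}

unexplained = {k: v for k, v in observed.items() if predicted.get(k) != v}
emergence_score = len(unexplained) / len(observed)

print(unexplained)      # -> {'s4': 'o9'}
print(emergence_score)  # -> 0.25: the fraction of behavior the simple model misses
```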

Note that I also exclude output convergence in this simplest-case model, at least for the single simple agents included in these larger systems, where several or even many inputs processed by at least one of the simple agents in such a system would lead to the same output from it. That, however, is largely a matter of how specific inputs are specified. Consider, for example, an agent that is set up algorithmically to group any input signal X that falls within some value range (e.g. greater than or equal to 5) as functionally identical for processing and output purposes. If a value for X of 5, or 6.5, or 10 is registered, all else remaining the same as far as pertinent input is concerned, the same output would be expected; for purposes of this discussion, such input value variation in X would be treated as moot, and the above type of one-to-one, input-to-output mapping would still be deemed valid.
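Such a range-grouping agent is trivially small in code. In this hypothetical sketch, the whole value range at or above 5 collapses into a single equivalence class, so the mapping from binned input to output stays one-to-one:

```python
# A hypothetical range-grouping agent: any input at or above 5 falls into one
# equivalence class, so 5, 6.5 and 10 all yield the same output and the
# binned input -> output mapping remains one-to-one.
def binning_agent(x):
    return "high" if x >= 5 else "low"

assert binning_agent(5) == binning_agent(6.5) == binning_agent(10) == "high"
```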

I am going to continue this narrative in the next series installment, where I will conclude my discussion of emergent properties per se, at least for purposes of this phase of this series. And then I will move on to consider the second major to-address bullet point as offered above:

• The concept of awareness as a process of preemptive and even anticipatory information sorting, filtering and analysis (e.g. with pre- or non-directly empirical simulation and conceptual model building carried out prior to any actualized physical testing or empirical validation of the approaches and solutions considered, and the emergence of what can be considered analogous to human forethought).

And I will also, of course, discuss the issues that I listed above, immediately after first offering this bullet point:

• The issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might in effect be hardwired to address, as for example in a Turing test context. What would more properly be developed as hardware, software, and I add here, firmware? And what would best be developed and included as database resources that would serve as counterparts to human long-term and short-term memory? I will also discuss error correction as a dual problem, with at least a significant proportion of it arising and carried out, at least conceptually, within the simpler agents that make up an overall intelligent, hierarchically structured system, and the rest carried out at a higher level within such a system, as a function of properties and capabilities that are emergent to that larger system.

And my overall goal in all of this is to use this developing narrative as a foundation point for addressing how such systems will enter into, and fundamentally reshape, our computer and network-based information management and communications systems.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
