Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 9

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on April 18, 2019

This is the 9th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-8. And also see two benchmark postings, written just over six years apart, that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I stated toward the beginning of Part 8 of this series that I have been developing a foundation here for thinking about neural networks and their use in artificial intelligence agents. That has in fact been one of my primary goals: a means of exploring and analyzing more general issues regarding artificial agents and their relationships to humans and to each other, particularly in a communications and information-centric context and where artificial agents can change and adapt. Then at the end of Part 8, I said that I would at least begin to specifically discuss neural network architectures per se, and systems built according to them, in this complex context, starting here.

The key area of consideration that I will at least begin to address in this posting is flexibility in range and scope for adaptive ontological change: something that artificial intelligence agents would need if they are to self-evolve new types of, or at least expanded levels of, functional capability in order to more fully realize the overall functional goals that they carry out. I have been discussing natural conversation as a working, artificial general intelligence-validating example of this type of goal-directed activity in this series. And in Part 8 I raised the issues and challenges of chess playing excellence, with its race to create the best chess playing agent in the world as an ongoing computational performance benchmark-setting goal, and with a goal beyond that of continued improvement in chess playing performance per se. See in that regard my Part 8 discussion of the software-based AlphaZero artificial intelligence agent: the best chess player on the planet as of this writing.

Turning to explicitly consider neural networks and their emerging role in all of this: at a hardware level these are more generically wired systems that can flexibly adapt themselves on a task performance basis, determining which of their possible circuit paths are actually developed and used, and which are downgraded and in effect functionally removed. These are self-learning systems that in effect rewire themselves to carry out their targeted data processing flows more effectively, developing and improving circuit paths that work for them and culling out and eliminating ones that do not: at a software level, and de facto at a hardware level too.

While this suggestion is cartoonish in nature, think of these systems as blurring the lines between hardware and software, and as at least analogous to self-directed, self-evolving software-based hardware emulators. At any given point in time and stage in their ongoing development, they emulate, through the specific pattern of preferred hardware circuitry used and the specific software in place, an up-to-that-point most optimized “standard” hardware and software computer for carrying out their assigned task-oriented functions. It is just that neural networks can continue to change and evolve, testing and refining themselves, instead of being locked into a single fixed overall solution, as would be the case for a more conventional, “standard” design computer, certainly when run between software upgrades.
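
To make that develop-and-cull dynamic a bit more concrete, here is a minimal toy sketch: my own illustration, with every size, learning rate and threshold arbitrarily assumed, and not a description of any production system. A small network learns a simple task and then prunes away its weakest connection weights, a crude software stand-in for the circuit path development and removal just described.

```python
# A toy sketch of "develop useful circuit paths, cull the rest":
# train a tiny network on XOR, then zero out its weakest weights.
# All sizes, rates and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# XOR task: four input patterns and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; individual weights play the role of "circuit paths."
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on mean squared error.
for _ in range(5000):
    h = sigmoid(X @ W1)            # hidden activations
    out = sigmoid(h @ W2)          # network output
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

# "Cull" the weakest quarter of input-to-hidden connections: paths that
# contributed little are functionally removed from the circuit.
threshold = np.quantile(np.abs(W1), 0.25)
W1[np.abs(W1) < threshold] = 0.0

print("post-pruning outputs:", sigmoid(sigmoid(X @ W1) @ W2).ravel().round(2))
```

A real system of the kind discussed here would interleave growth, pruning and retraining continuously rather than pruning once after the fact; the point of the sketch is only the develop-and-cull cycle itself.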

• I wrote in Part 8 of human-directed change in artificial agent design, both for overall systems architecture and for component-and-subsystem by component-and-subsystem scaling. A standard, fixed-design paradigmatic approach, as found in the more conventional computers just noted here, fits into and fundamentally supports the evolution of fixed, standard systems, and in its pure form cannot in general self-change, either ontologically or evolutionarily.
• And I wrote in Part 8 of self-directed, emergent capabilities in artificial intelligence agents, citing how these might arise as preadapted capabilities: capabilities developed without regard to a particular task or functional goal now faced, but directly usable for such a functional requirement now, or readily adaptable for such use with more targeted adjustment of the type noted here. And I note that this approach only becomes fundamentally possible in a neural network or similar context of self-directed ontological development, taking place within the hardware and software system under consideration.

Exaptation (pre-adaptation) is an evolutionary development option that would specifically arise in neural network or similarly self-changing and self-learning systems. And with that noted, I invoke a term that has been running through my mind as I write this, and that I have been directing this discussion toward reconsidering here: an old software development term that in a strictly human-programmer context is something of a pejorative: spaghetti code. See Part 6 of this series, where I wrote about this phenomenon in terms of a loss of comprehensibility as to the logic flow of whatever underlying algorithm a given computer program is actually running, as opposed to the algorithm that the programmer intended it to run.

I reconsider spaghetti code and its basic form here for a second reason, this time positing it as an alternative to lean code: code that seeks to carry out specific programming tasks in very specific ways, as quickly and as efficiently as possible as far as specific hardware architecture, system speed as measured in clock signals per unit time, and other resource usage requirements and metrics are concerned. Spaghetti code and its similarly more loosely structured counterparts are what you should expect, and what you get, when you set up self-learning neural network-based or similar artificial agent systems and let them change and adapt without outside guidance, or interference if you will.

• These systems do not specifically, systematically seek to ontologically develop as lean systems, as that would most likely mean locking in less than optimal hardware and software solutions, short of what they could otherwise achieve.
• They self-evolve with slack and laxity in their systems while iteratively developing toward next-step improvements in what they are working on now, and in ways that can create pre-adaptation opportunities: particularly as these systems become larger and more complex, and as the tasks that they carry out and optimize toward become more complex and even open-endedly so. (That emerges when addressing problems such as chess, but would come fully into its own for tasks such as developing a natural conversation capability.) See the sketch after this list.
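
Staying in that same toy register, the following sketch (again my own, with invented numbers throughout) contrasts a “lean” agent that aggressively culls its weights with one that retains slack: dormant, currently useless connections that remain available for recruitment if the task shifts, loosely mirroring the pre-adaptation opportunity just described.

```python
# A toy contrast (invented numbers throughout) between "lean" and "slack"
# versions of the same weight matrix, and what each can still bring to
# bear on a task direction that neither was optimized for.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, (16, 16))   # stand-in for a trained weight matrix

lean = W.copy()
cutoff = np.quantile(np.abs(lean), 0.75)
lean[np.abs(lean) < cutoff] = 0.0    # aggressively cull 75% of connections

slack = W.copy()                     # keeps every connection, useful or not

# A "new task" arrives that rewards a direction nothing was optimized for;
# the response norm is a crude proxy for how much raw material each
# network still has available to adapt with.
new_task_direction = rng.normal(0.0, 1.0, 16)
print("lean response: ", float(np.linalg.norm(lean @ new_task_direction)))
print("slack response:", float(np.linalg.norm(slack @ new_task_direction)))
```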

If more normative, step-by-step ontological development of incremental performance improvements in task completion can be compared to gradual evolutionary change within some predictable-in-outline pattern, then the type of slack allowance that I write of here, with its capacity for creating fertile ground for pre-adaptation opportunity, can perhaps best be compared to disruptive change, or at least to the opportunity for it: at least as judged by the visible outcome consequences observed when a pre-adapted capability that had not proven particularly relevant up to now is converted from a possibility into a realized, currently functionally significant actuality.

And with this noted, I raise a tripartite point of distinction that I will at least begin to flesh out and discuss as I continue developing this series (see the schematic sketch after the list):

• Fully specified systems goals (e.g. chess, as touched upon in Part 8: an at least somewhat complex example, but one with fully specified rules defining a win, a loss and so on),
• Open-ended systems goals (e.g. natural conversational ability, as more widely discussed in this series and certainly in its more recent installments, with its lack of any corresponding fully characterized performance end point or similar parameter-defined success constraint), and
• Partly specified systems goals (as in self-driving cars, which can be programmed with the legal rules of the road, but not with a correspondingly detailed, algorithmically definable understanding of how real people in their vicinity actually drive: sometimes according to, and sometimes in spite of, the traffic laws in place).
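
Expressed schematically, and purely as my own illustrative framing rather than any formal taxonomy, the three categories and this posting’s examples of them might be captured as follows:

```python
# A schematic data structure for the tripartite distinction above.
# The category names and notes follow the posting's own examples.
from dataclasses import dataclass
from enum import Enum, auto

class GoalSpecification(Enum):
    FULLY_SPECIFIED = auto()   # complete rules; win/loss formally defined
    OPEN_ENDED = auto()        # no parameter-defined success end point
    PARTLY_SPECIFIED = auto()  # formal rules plus unmodeled real-world behavior

@dataclass
class SystemGoal:
    name: str
    spec: GoalSpecification
    note: str

GOALS = [
    SystemGoal("chess play", GoalSpecification.FULLY_SPECIFIED,
               "rules fully define legal moves and outcomes"),
    SystemGoal("natural conversation", GoalSpecification.OPEN_ENDED,
               "no fully characterized performance end point"),
    SystemGoal("self-driving", GoalSpecification.PARTLY_SPECIFIED,
               "rules of the road are codified; nearby human behavior is not"),
]

for g in GOALS:
    print(f"{g.name}: {g.spec.name} - {g.note}")
```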

I am going to discuss partly specified systems goals and agents, and the overall systems that would include them and seek to carry out those tasks, in my next series installment. I will at least start that discussion with self-driving cars, as a source of working examples and as an artificial intelligence agent goal that is still in the process of being realized as of this writing. In anticipation of that discussion to come, this is where stochastic modeling enters this narrative.
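
As a deliberately toy first hint of why, with every probability here an invented assumption: the legal rules of the road can be codified exactly, but whether any given nearby driver follows them is a random variable that a self-driving agent can only estimate, and stochastic simulation is a natural tool for that.

```python
# A toy Monte Carlo sketch of a partly specified goal: the merge rules are
# fixed, but nearby human compliance is random. The 0.93 figure is an
# invented assumption, not an empirical driving statistic.
import random

random.seed(42)

P_DRIVER_YIELDS = 0.93  # assumed chance a human yields as the law requires

def merge_is_clean(n_nearby_drivers: int) -> bool:
    """True if every nearby driver behaves lawfully during one merge."""
    return all(random.random() < P_DRIVER_YIELDS for _ in range(n_nearby_drivers))

trials = 100_000
clean = sum(merge_is_clean(3) for _ in range(trials))
print(f"merges with fully lawful neighbors: {clean / trials:.1%}")
# With three nearby drivers, roughly 0.93 ** 3, about 80%, of merges
# involve only lawful behavior; the agent must still plan for the rest.
```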

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
