Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 8

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on February 11, 2019

This is the 8th posting in a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following, for Parts 1-7. And also see two benchmark postings, written just over six years apart, that together provided much of the specific impetus for this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have been developing and offering a foundational discussion for thinking about neural networks and their use through most of this series so far, and more rigorously so since Part 4, when I began discussing and at least semi-mathematically defining and characterizing emergent properties. I continue that narrative here, with a goal of discussing the neural network approach to artificial intelligence per se more cogently and meaningfully.

I have been pursuing what amounts to a dual-track discussion in that, as I have proceeded, I have simultaneously addressed both the emergence of new capabilities in functionally evolving systems, and the all too often seemingly open-ended, explosive growth of perceived functional building block needs that might arguably have to be included in any system that would effectively carry out more complex intelligence-based activities (e.g. realistic, human-like speech in a two-way conversational context: the natural speech and conversation first discussed here in Part 6).

Let’s proceed from that point in this overall narrative to consider a significant difference between the new emergent capabilities that putative artificial intelligence agents develop within themselves, as a general mechanism for expanding their functional reach, and the new presumed-required functional properties and capabilities that keep being added, through scope creep in systems design if nothing else, to overall tasks such as meaningfully open-ended, two-way natural conversation.

• When new task requirements are added to the design and development specifications of a human-directed and managed artificial intelligence agent, they are added in a directed and goal-oriented manner, both in their selection and in their component-by-component design.
• But when a system develops and uses new, internally developed emergent capabilities on its own, that development is not necessarily end-goal directed in anything like the same way. (The biological systems term exaptation, which has effectively replaced an older and presumably loaded term, pre-adaptation, comes immediately to mind in this context, though I would argue that the serendipitous and unplanned-for quality connoted by pre-adaptation might make that the better term here.)

Let me take that out of the abstract by citing and discussing a recent news story, one that I will return to in other contexts in future writings too, and that I cite here with three closely related references:

One Giant Step for a Chess-Playing Machine,
A General Reinforcement Learning Algorithm that Masters Chess, Shogi, and Go Through Self-Play and
Chess, a Drosophila of Reasoning (where the title of this Science article refers to how Drosophila genetics and its study opened up our understanding of higher-organism genetics, with the realistic prediction that chess will serve a similar role for artificial intelligence systems and their development too.)

The artificial intelligence in question here is named AlphaZero. And to quote from the third of those reference articles:

• “Based on a generic game-playing algorithm, AlphaZero incorporates deep learning and other AI techniques like Monte Carlo tree search to play against itself to generate its own chess knowledge. Unlike top traditional programs like Stockfish and Fritz, which employ many preset evaluation functions as well as massive libraries of opening and endgame moves, AlphaZero starts out knowing only the rules of chess, with no embedded human strategies. In just a few hours, it plays more games against itself than have been recorded in human chess history. It teaches itself the best way to play, reevaluating such fundamental concepts as the relative values of the pieces. It quickly becomes strong enough to defeat the best chess-playing entities in the world, winning 28, drawing 72, and losing none in a victory over Stockfish.” (N.B. Until it met AlphaZero, Stockfish was the most powerful chess player, human or machine, on Earth.)

Some of the details of this innovative advance, as noted there, are of fundamental, game-changing significance. To cite an obvious example: AlphaZero taught itself, in a matter of just a few hours of self-development time, to become by far the most powerful chess player in the world. And it did this without the “benefit” of any expert systems database support, as would be based in this case on knowledge sourced from human chess grandmasters. I put “benefit” in quotes because all prior best-in-class computer-based chess players have been built around such pre-developed database resources, even when they have also included self-learning capabilities that would take them beyond that type of starting point.
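The learning loop described above can be illustrated at toy scale. The sketch below is emphatically not AlphaZero (no neural network, no Monte Carlo tree search); it is a minimal tabular self-play learner for tic-tac-toe, chosen because it exhibits the same essential property: the agent is given only the rules of the game and a definition of winning, and improves purely by playing against itself. All names and parameters here are illustrative assumptions.

```python
import random

# The only human-supplied knowledge: the rules of the game.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Legal moves: the indices of empty cells."""
    return [i for i, cell in enumerate(board) if cell == " "]

def self_play_train(episodes=5000, alpha=0.2, epsilon=0.1, seed=0):
    """Learn a state-value table purely from self-play, starting from rules alone."""
    rng = random.Random(seed)
    value = {}  # board (as a string) -> estimated value from X's perspective
    for _ in range(episodes):
        board = [" "] * 9
        history = []
        player = "X"
        while True:
            legal = moves(board)
            if rng.random() < epsilon:
                move = rng.choice(legal)  # occasional exploration
            else:
                def score(m):  # value of the position reached by move m
                    board[m] = player
                    v = value.get("".join(board), 0.0)
                    board[m] = " "
                    return v
                # X seeks high values, O seeks low ones
                move = max(legal, key=score) if player == "X" else min(legal, key=score)
            board[move] = player
            history.append("".join(board))
            w = winner(board)
            if w or not moves(board):
                # Back the final outcome up through the visited states.
                target = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
                for state in reversed(history):
                    v = value.get(state, 0.0)
                    value[state] = v + alpha * (target - v)
                    target = value[state]
                break
            player = "O" if player == "X" else "X"
    return value
```

The point of the exercise: nothing in `self_play_train` encodes any strategy, yet a useful evaluation of positions accumulates simply because the rules supply a conclusive terminal test to learn against.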

I will cite this Science article in an upcoming series installment here, when I turn to address issues such as system opacity and the growing degradation and loss of human programmers’ understanding of what emerging, self-learning artificial intelligence systems do, and how. My goal here is to pick up on the one human-sourced information resource that AlphaZero did start its learning curve from: a full set of the basic rules of chess – what moves are allowed and by what types of pieces, and what constitutes a win or a draw as a game proceeds. Think of that as a counterpart to a higher level but nevertheless effectively explanatory functional description of what meaningful conversation is: a well defined functional endpoint that such a system would be directed to achieve, to phrase this in terms of my here-working example.

Note that AlphaZero is defined by its developer – DeepMind, an Alphabet, Inc. company – strictly as software, and as software that should be considered platform-independent as long as the hardware it runs on has sufficient memory, storage and computational power to support it, along with its requisite operating system and related supporting software.

But for purposes of this discussion, let’s focus on the closed and inclusive starting point that a well defined and conclusive set of rules of the game provides for chess (or for Shogi or Go, in AlphaZero’s case), and on the open-ended and ultimately less informative situation that would-be natural conversation-capable artificial intelligence agents face now.
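That contrast – a closed rule set versus an open-ended goal – can be made concrete in a few lines. The class names and method signatures below are hypothetical, invented purely for illustration; they are not any real library’s API:

```python
class ChessLikeRules:
    """A closed, complete contract: every question a self-learning
    agent can ask about the game is conclusively answerable."""

    def legal_moves(self, state):
        ...  # a finite, enumerable set of moves for any position

    def outcome(self, state):
        """+1 for a win, -1 for a loss, 0 for a draw, or None while play continues."""
        return None  # placeholder: a real rule set decides this conclusively


class ConversationGoal:
    """The would-be analogue for open-ended natural conversation -
    the contract a conversational agent would need, but does not have."""

    def legal_moves(self, state):
        raise NotImplementedError(
            "any utterance is a 'move'; the set is effectively unbounded")

    def outcome(self, state):
        raise NotImplementedError(
            "no closed terminal test: when is a conversation 'won', or even over?")
```

A self-play learner can be pointed at the first interface and left to run; pointed at the second, it has nothing conclusive to learn against. That is the gap this discussion turns on.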

This is where my above-made point regarding self-learning systems really enters this narrative:

• … when a system develops and uses new, internally developed emergent capabilities on its own, that development is not necessarily end-goal directed in the same way.

That type of self-learning can work and tremendously effectively so and even with today’s human-designed and coded starting-point self-learning algorithms and with a priori knowledge bases in place for them – if an overall goal that this self-learning would develop towards: a clear counterpart to the rules of chess here, is clearly laid out for it, and when such an agent can both learn new and question the general validity of what it has built into it already as an established knowledge base. When that is not possible, and particularly when a clear specification is not offered as to the ultimate functional goal desired and what that entails … I find myself citing an old adage as being indicative of what follows:

• “If you don’t know where you are going, any road will do.”

And with that offered, I will turn in my next series installment to offer some initial thoughts on neural network computing in an artificial intelligence context, where that means self-learning and ontological-level self-evolution. And yes, with that noted, I will also return to consider more foundational issues here, as well as longer-term considerations as all of this comes to impact upon and shape the human experience.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

2 Responses


  1. Alan Singer said, on February 11, 2019 at 7:13 am

    You need to watch more dystopian movies. Beware the Matrix!

    • Timothy Platt said, on February 18, 2019 at 9:28 pm

      I have to assume that you offered your comment here at least partly in jest from how you ended it with a Matrix movies reference. But you also at least touch on a more serious issue that I have decided to respond to.

      My intention in a posting and in a series of this type, is to offer a basically positive perspective on new and emerging technologies for what they can become. This means my attempting to find and pursue a middle ground between what can become blind enthusiasm and blind fear. Selecting the types of technology solutions to pursue and deciding the where and how of that, calls for a balanced judgment as sought after from both a short-term and a longer-term perspective.

      To take this out of the abstract, and from a “blind enthusiasm” perspective, consider the consequences that we should all be seeing around us by now, of our having pursued large carbon footprint technologies such as coal fired electrical power plants for so long, and even as their down-side consequences have become more and more overtly pressing. And to take this out of the “blind fear” side too, consider the ultimately stultifying and future denying impact that simply following a pure Luddite approach would bring, where new technologies are attacked, blocked and turned away from, and simply because they are new and in ways that might compel societal change – any real societal change.

      I try approaching new and the disruptively new in particular, with open eyes and from an awareness of the perils of both of those more genuinely dystopian possibilities: each blind in its own ways. And I add, in the alternative third way context that I propose here, that sometimes the only real way to move beyond the perils of actually pursuing one of those blind approaches, is to attempt new and with as much of an awareness of both its positive and negative potential sides as possible.

      To take that out of the abstract, and to continue my electrical power generation (and distribution) example from above in this reply: traditional electrical power plants, and the grids that they connect into, both tend to be tremendously inefficient, for a variety of reasons. From an individual power plant perspective this definitely includes generation scheduling, where output rarely meshes with actual user demand patterns. So power plants all too often still waste energy, and produce what should be avoidable carbon footprints, by over-producing for low demand periods while playing catch-up for high and peak demand periods. Just consider, in that regard, the risks of brown-outs and voltage reductions that we still face, even in more technologically advanced nations, during severe heat waves when seemingly everyone has suddenly turned their air conditioners on at the same time and kept them on.

      And from a larger, networked power grid perspective, it is tremendously challenging technically to load balance power availability and need across geographically larger regions. That leads to both increased systems-wide inefficiency and increased systems-wide risk, where one power plant failure could bring down another that is suddenly overwhelmed by the unplanned-for peak demand thrust upon it by the first plant’s failure. And that type of breakdown can cascade and bring down even larger regional systems – at least as a risk that has to be considered, and certainly if explicit action is not taken to disconnect failing plants from the larger grid right away.
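      The cascade risk just described can be sketched in a few lines. This toy model (the numbers and the naive equal-sharing rule are illustrative assumptions only, not how real grids dispatch load) redistributes a fixed total demand across surviving plants and checks whether the shifted load pushes any of them past capacity:

```python
def cascade(capacities, demand):
    """Return the set of plant indices that fail after load redistribution.

    capacities: per-plant maximum output; demand: total load the fleet
    must carry, shared equally among the plants still running.
    """
    failed = set()
    while True:
        live = [i for i in range(len(capacities)) if i not in failed]
        if not live:
            return failed  # every plant is down: a total blackout
        share = demand / len(live)  # naive equal load balancing
        newly = [i for i in live if share > capacities[i]]
        if not newly:
            return failed  # the survivors can carry the shifted load
        failed.update(newly)  # overloaded plants trip offline; repeat
```

      With capacities of [100, 100, 60], a 150-unit demand is carried with no failures, but a 240-unit demand trips the weakest plant first, its load shifts to the survivors, and the whole toy system goes down – the cascade pattern described above.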

      In principle, and I fully expect in practice too, smart technology there – for managing both individual power plant operating levels and overall power distribution load balancing – could improve overall efficiencies, reducing the pollution generated on a per kilowatt and per megawatt basis, while increasing overall systems safety and reliability at all organizational levels. Does even this just create new risks? Potentially yes, and in that regard I suggest reviewing some of my cyber-security postings. But managing this can be possible too. All that takes is an active, well considered pursuit of the type of third path middle ground that I write of here.

      So do I see and acknowledge potential risks in all of this if it is not done right? Certainly. But I also see real positive benefits, and realizing them means finding and developing toward that thought-through middle ground. And as for The Matrix: it is an entertaining but also cartoonishly scripted movie, and the same could be said of its sequels too. Yes, they are all fun; no, they should not set policy or drive political agendas.

      That said, say hello to Neo and Morpheus and the gang for me (and their cyber-villain opponent too as he is really, really funny!), Tim
