Platt Perspective on Business and Technology

Some thought concerning a rapidly emerging internet of things 8: fully automated systems and the implications of effective removal of direct human oversight 1

Posted in business and convergent technologies, social networking and business by Timothy Platt on July 3, 2013

This is my eighth posting to a series on a rapidly emerging new level of online involvement and connectedness: the internet of things (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 211 and loosely following for Parts 1-7). And I begin by explicitly noting a discrepancy between the note that I added at the end of Part 7, where I stated that I would write this posting on:

• Intelligent networks, and how artificial intelligence and systems complexity are already blurring the line where a human user can tell when they are connected to another person or to a device,

and the title to this series installment where I state that I am writing this posting about:

• Fully automated systems and the implications of effective removal of direct human oversight.

One of my goals for this posting is to at least begin to argue the case that these approaches simply address different but inextricably connected sides of the same emerging networked capability. And in that,

• Processes and emergent properties that arise from the large scale organized assembly of these networks can clearly be expected to create outcomes and possibilities that we cannot now, as of this writing, anticipate in any meaningful detail,
• But it is already clear that the implications of this new technologically driven advancement are becoming profound.

I begin this discussion with the closely connected issues of artificial intelligence and systems complexity. And I begin addressing them here from a starting point that I have touched upon a number of times in the course of writing this blog: the Turing test.

• When Alan Turing first proposed his artificial intelligence test, he posited that a human of normal intelligence or above would communicate with a second party through a single test-specific channel. That second party is otherwise out of sight and unavailable as a source of any other, collateral information as to their nature. The evaluator’s goal is to determine whether they are communicating with a machine such as a computer, or with another human being. If they cannot tell, or decide they must be communicating with another person, when the second party is in fact a computer or other artifact, then that artifact can be said to be displaying intelligence.
• Think of this as the “I don’t quite know what it is but I recognize it when I see it” test. And barring the development of an unequivocal defining test as to what intelligence is, and where it is being displayed, this is still the gold standard for identifying information processing activity as displaying intelligence. So this test sets the minimal threshold standards needed for claiming that artificial or electronic information processing displays true artificial intelligence.
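The test setup described above can be sketched in code. This is a minimal illustrative simulation, not anything from Turing’s own formulation: the responder classes, the canned replies, and the deliberately naive evaluator are all hypothetical stand-ins, chosen only to make the pass/fail logic concrete.

```python
class HumanResponder:
    """Stands in for a person on the far side of the channel."""
    def reply(self, message: str) -> str:
        return f"Hmm, let me think about '{message}'..."

class MachineResponder:
    """Stands in for a computer trying to pass as human.
    Here it happens to produce the same surface behavior."""
    def reply(self, message: str) -> str:
        return f"Hmm, let me think about '{message}'..."

def turing_trial(evaluator_guess, responder) -> bool:
    """The evaluator sees only the message transcript, never the responder.
    Returns True when the responder 'passes': it is in fact a machine,
    but the evaluator either cannot tell or judges it to be human."""
    transcript = [responder.reply(q) for q in ["Hello?", "What is 2+2?"]]
    guess = evaluator_guess(transcript)  # "human", "machine", or "cannot tell"
    is_machine = isinstance(responder, MachineResponder)
    return is_machine and guess in ("human", "cannot tell")

# An evaluator that can judge only from the transcript it sees:
naive = lambda transcript: "human"
print(turing_trial(naive, MachineResponder()))  # prints True: the machine was judged human
```

The point the sketch makes is structural: nothing in `turing_trial` inspects the responder except through its replies, which is exactly the black-box constraint the test imposes.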

In practice, the word “intelligence” is used much more loosely when “artificial intelligence” is proposed or suggested. In that wider and more open context, essentially any algorithmic data processing scheme or process that shows significant complexity or subtlety seems to qualify, and certainly where that means software-driven systems that:

• Significantly exceed the functional data processing range and flexibility of up-to-now current systems,
• That manage and control, or at the very least report on, processes or conditions of human significance that people generally think of as calling for human oversight,
• Or both.

And this all stays very close to that “I don’t quite know what it is but I recognize it when I see it” test criterion, but here the bar that has to be surpassed to pass the test keeps rising along with human expectations as to what computerized information processing systems can do. And this brings me to networks, and to what I see as the network of things, or distributed systems, form of the classic Turing test (see my series: Some Thought Concerning a Rapidly Emerging Internet of Things at Ubiquitous Computing and Communications – everywhere all the time 2, as postings 211 and loosely following).

According to the black box design of Turing’s test, the human evaluator can only detect and respond to the specific stream of data and communications signals they are to evaluate. They do not in fact know what they are communicating with or where it is, or whether it is physically localized to one site or is in fact a networked array of functional units: a network of things that are, for the purposes of this test at least, working together in an organized manner and as if a single functioning entity.
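That hidden distribution can be made concrete with a short sketch. Everything here is a hypothetical illustration: the `Node` routing scheme and topic names are invented for the example, and the only claim being modeled is that many cooperating units can sit behind the same single-channel interface as one device.

```python
from typing import Optional

class Node:
    """One functional unit in a hypothetical network of things."""
    def __init__(self, topic: str):
        self.topic = topic

    def process(self, message: str) -> Optional[str]:
        # This node only answers messages within its own specialty.
        if self.topic in message:
            return f"[{self.topic}] handled: {message}"
        return None

class NetworkedResponder:
    """Many nodes acting, from the evaluator's side of the channel,
    as if a single functioning entity."""
    def __init__(self, nodes):
        self.nodes = nodes

    def reply(self, message: str) -> str:
        # Internal routing across nodes; the evaluator never sees this.
        for node in self.nodes:
            answer = node.process(message)
            if answer:
                return answer
        return "I'm not sure."

net = NetworkedResponder([Node("weather"), Node("traffic")])
print(net.reply("weather today?"))  # one channel, distribution hidden behind it
```

From the outside, `reply` looks exactly like the single-responder interface in the earlier Turing test framing; whether one box or many nodes produced the answer is invisible by construction.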

And with that, I posit a network of things counterpart to the artificially intelligent computer system that Turing envisioned. And in anticipation of that, I note a formal definition of a class of networks of things that I offered in my posting Some Thought Concerning a Rapidly Emerging Internet of Things 1: starting a new series, and that I then more formally expanded upon in Parts 3, 4 and 5 of that same series: active networks.

As initially noted in Part 1 of that series, I divide the emerging internet of things into two basic and fundamentally distinct spheres of activity:

• The Internet 1.0 of Things, where more and more items and objects are tagged in ways that let them be connected into the internet and tracked through it. These objects – these nodes in this system – are passively connected in, so this can also be thought of as the passive internet of things.
• The Internet 2.0 of Things, where more and more nodes and types of node are added that communicatively, bidirectionally interact with the internet and with other nodes, and more actively and even proactively than would be possible with simple ID tagging or other 1.0 activity. This can be thought of as the active internet of things.
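The passive/active distinction drawn above can be sketched as two node types. This is a minimal sketch under hypothetical names: the tag IDs and the `receive` behavior are invented for illustration, and the only point modeled is that a 1.0 node can merely be read while a 2.0 node also takes part in two-way exchange.

```python
class PassiveNode:
    """Internet 1.0 of Things: a tagged object that can only be read and tracked."""
    def __init__(self, tag_id: str):
        self.tag_id = tag_id

    def read_tag(self) -> str:
        # One-way: the network queries the tag; the object does nothing more.
        return self.tag_id

class ActiveNode(PassiveNode):
    """Internet 2.0 of Things: a node that also interacts bidirectionally."""
    def __init__(self, tag_id: str):
        super().__init__(tag_id)
        self.inbox = []

    def receive(self, message: str) -> str:
        # Two-way exchange: the node accepts input and responds to it.
        self.inbox.append(message)
        return f"{self.tag_id} acting on: {message}"

sensor = ActiveNode("thermostat-17")
print(sensor.read_tag())                 # still trackable, like a 1.0 node
print(sensor.receive("set target 20C"))  # but also interactive
```

Making `ActiveNode` a subclass of `PassiveNode` mirrors the text’s framing: an active node keeps everything a passive, trackable node offers and adds two-way interaction on top of it.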

As intelligence fairly clearly involves, and even requires, feedback and the multidirectional flow of information needed to process, evaluate and refine what begins as raw input data into meaningful knowledge, any Turing test-intelligent network of things would be expected to be, or at least to significantly include, active network of things sub-networks. For what follows in this discussion I will simply assume that as a given.
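The refinement-through-feedback idea invoked here can be illustrated with one deliberately simple loop. This is a hypothetical sketch, not a claim about any real system: the readings, the 5.0-unit tolerance, and the discard-and-re-estimate rule are all invented for the example, which shows raw input being fed back against a running estimate until it settles into something more meaningful.

```python
def refine(raw_readings, rounds=3):
    """Toy feedback loop: each pass compares the raw readings to the current
    estimate and feeds the result back, discarding readings that disagree
    too strongly, then re-estimating from what remains."""
    estimate = sum(raw_readings) / len(raw_readings)
    for _ in range(rounds):
        # Feedback step: keep only readings within 5.0 units of the estimate.
        kept = [r for r in raw_readings if abs(r - estimate) <= 5.0]
        if kept:
            estimate = sum(kept) / len(kept)
    return estimate

# Three consistent readings and one outlier: the loop converges toward 20.0.
print(refine([20.1, 19.9, 20.0, 35.0]))
```

Even this toy version needs information to flow in both directions, from readings to estimate and back again, which is the property the text argues pushes any Turing-intelligent network toward active, 2.0-style sub-networks.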

And this brings me to the issues of:

• Functional distribution of information collection, processing and storage across the nodes of a network,
• Workload distribution and the functional simplicity and complexity of individual nodes in these systems,
• Top-down versus bottom-up command and control of these systems, and
• Of course, the role of overall systems complexity in the emergence of Turing-defined intelligence.

I will turn to those issues in my next series installment. Meanwhile, you can find this and related postings at Ubiquitous Computing and Communications – everywhere all the time and its continuation page, and at Social Networking and Business.
