Platt Perspective on Business and Technology

Some thoughts concerning a rapidly emerging internet of things 10: fully automated systems and the implications of effective removal of direct human oversight 3

Posted in business and convergent technologies, social networking and business by Timothy Platt on July 14, 2013

This is my tenth posting to a series on a rapidly emerging new level of online involvement and connectedness: the internet of things (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 211 and loosely following for Parts 1-9).

I stated at the end of Part 9 that I would discuss in this posting,

• Exception handling and capacity to learn, and
• On an operational level at least, some of the issues of command and control oversight that functionally enable that. And in that regard, I note that this is where the issues of top-down versus bottom-up command and control really enter this narrative.

And I also noted that these issues form bridging connections between this general topic and series, and a line of discussion that I have been pursuing in a second concurrent series: Commoditizing the Standardized, Commoditizing the Individually Customized (see Business Strategy and Operations – 2, postings 363 and loosely following, and particularly its Part 12: neural nets and self-assembling systems).

I will at least begin addressing these issues here, with a simplified self-organizing systems example:

• A network that is designed to dynamically allocate and reallocate a fixed maximum available overall bandwidth,
• Determining which subsystems within it will have what share of that capacity available to them.

I will start out with the simplest working example of this and then add a few complications and contextual details, as I develop this into a self-learning and self-organizing model.

• As a first step consider a model system with one central command and control computer that is connected to five sub-network assemblies, each containing one sensor. The overall function of this system is to monitor and report pressure fluctuations in a set of control points in pipes that are used to transfer a volatile raw material in a factory.
• When a pressure fluctuation is observed and reported to the central monitor that coordinates the activity of this system – its command and control computer – that controller requests a confirming sensor report, both to determine whether this out-of-normal-range signal represents a fluke fluctuation in the piping system it monitors, and to track ongoing activity if it represents an emerging problem.
• This central controller has limited overall bandwidth for monitoring its array of sensors. So, following a rules-based set of procedures, it devotes equal time to all of its sensors when all incoming signals from them fall within normal pressure ranges, and less than equal time to the sensor subassemblies that continue to show normal flow and pressure when one of its sensors shows a deviation in pressure that surpasses a threshold that would label it as significant (a sketch of this proportional allocation follows this list).
• If this pressure irregularity resolves itself and readings return to normal, then central monitoring rates return to baseline, with equal bandwidth once again devoted to monitoring each of the five sensor subsystems.
• If a pressure irregularity persists for more than some set period of time, or at any one time exceeds some critical threshold level that would call for reporting outside of this overall monitoring system, this network’s command and control unit reports that to a higher level command and control computer: one that manages a larger, more inclusive array of what would, under normative conditions, be divided into relatively autonomous separate network systems.
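To make that allocation rule concrete, here is a minimal sketch in Python of proportional reallocation from a fixed polling budget. The budget size, the function name and the use of whole polls per cycle are all illustrative assumptions, not features of any particular real-world controller:

```python
# Hypothetical fixed overall monitoring bandwidth, expressed as a
# budget of sensor polls per monitoring cycle.
TOTAL_POLLS_PER_CYCLE = 100

def allocate_bandwidth(attention_scores, total=TOTAL_POLLS_PER_CYCLE):
    """Split a fixed polling budget across sensors in proportion to
    each sensor's current attention score."""
    total_score = sum(attention_scores.values())
    return {sensor_id: max(1, round(total * score / total_score))
            for sensor_id, score in attention_scores.items()}

# With equal scores, each of the five sensors gets an equal share;
# if sensor 2's score then triples, its share grows at the others' expense.
scores = {sensor_id: 1 for sensor_id in range(5)}
print(allocate_bandwidth(scores))   # {0: 20, 1: 20, 2: 20, 3: 20, 4: 20}
scores[2] = 3
print(allocate_bandwidth(scores))   # sensor 2 now receives roughly 43 of 100
```

Note that nothing about a sensor itself changes when its attention score rises; only the controller’s division of its own fixed capacity does.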

This sounds like a completely artificial example, but it is in fact at least loosely modeled on real, if much more complex and multi-leveled, systems. Put a pebble in one of your shoes and walk around, and chances are you will not have taken many steps before a great deal of your attention is being devoted to the precise point on your foot that this pebble is pressing against. We do not usually think of artificial command and control computers as having such tightly limited overall bandwidth capabilities. But if this manufactured computer controller were for whatever reason simultaneously monitoring and tracking input from millions of such sensors, with a need to dynamically shift what proportions of its processing capabilities it would devote where among all of them, depending on the real-time input it was receiving, even this limiting factor might come into play.

Now let’s look into the types of rules-based systems and data management capabilities that would go into making this type of system work, here considering just the simplified five-sensor-plus-control-unit network of my example.

• As a normative baseline the central command and control unit in this system devotes essentially identical proportions of its overall clock cycles to monitoring each of its sensors. And it compares values received from them to a set of normal value ranges that are stored in it for each of them.
• Incoming data from each sensor is matched to its normal value range and logged, and an attention requirement score is assigned to that sensor depending on the match or discrepancy of its data signals to its designated normal. If a value measured and received falls within the normal range for that sensor, it is assigned a minimal/normal attention score and continues to receive a standard/minimal proportion of functional attention from the command and control unit.
• If it falls at a boundary value between normal and abnormal, it is given an attention score one point higher and is monitored proportionately that much more closely. And if it receives a value fully outside of the normal range, the central unit’s programming assigns that sensor a still higher attention score, with a value set according to a pre-set formula, and the overall proportion of monitoring activity assigned to tracking that sensor increases still further accordingly. The control unit records any new values assigned and uses them as its new baseline values for determining what proportion of its information processing and sensor monitoring activity it will devote to each sensor, adjusting these values dynamically on the basis of input received (one possible implementation of these rules is sketched after this list).
• If a score is recorded for a sensor that exceeds a critical reporting-out threshold value, that fact, the sensor ID and its pressure score history would be reported to a higher level command and control computer for further action.
• If an out-of-normal-range value for a sensor is recorded and a next value is closer to normal or back within the normal range, its attention score would be reduced in one-point steps until back to normal.
• This way a sensor that trends back to normal readings, or that simply returns to them in one step, would be brought back to normative-level monitoring, while one that continued to show abnormal pressure readings would continue to be monitored more closely.
• Regardless of precise value, any sensor that displays out-of-normal pressure readings for more than a set threshold number of monitoring cycles would be reported to that higher level command and control computer. And with more frequent monitoring for greater deviations from normal, a more divergent value would be reported more quickly.
• And any sensor that was reported to that higher level would be assigned a new and higher minimum baseline value for its attention score that would be maintained unless and until that higher command and control unit reset it back to this system’s normative baseline value.
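One possible implementation of these scoring rules, as a hedged Python sketch: the baseline score, the boundary-band width, the persistence limit, the critical value and the escalation formula are all hypothetical stand-ins for the pre-set formulas and thresholds described above.

```python
BASELINE_SCORE = 1        # normal/minimal attention score
BOUNDARY_MARGIN = 2.0     # assumed width of the boundary band around normal
PERSISTENCE_LIMIT = 5     # consecutive abnormal readings before reporting out
CRITICAL_VALUE = 150.0    # any single reading past this reports out at once

class SensorTracker:
    """Per-sensor state kept by the central controller: normal range,
    current attention score, current minimum baseline and reading history."""

    def __init__(self, sensor_id, low, high):
        self.sensor_id = sensor_id
        self.low, self.high = low, high
        self.score = BASELINE_SCORE
        self.baseline = BASELINE_SCORE
        self.abnormal_streak = 0
        self.history = []

    def update(self, reading):
        self.history.append(reading)
        if self.low <= reading <= self.high:
            # Normal: decay back toward baseline in one-point steps.
            self.score = max(self.baseline, self.score - 1)
            self.abnormal_streak = 0
        elif (self.low - BOUNDARY_MARGIN <= reading
                <= self.high + BOUNDARY_MARGIN):
            # Boundary value: one point higher, monitored more closely.
            self.score += 1
            self.abnormal_streak += 1
        else:
            # Fully outside normal: escalate by a pre-set formula, here
            # made proportional to how far outside the range the reading is.
            excess = max(self.low - reading, reading - self.high)
            self.score += 1 + int(excess // BOUNDARY_MARGIN)
            self.abnormal_streak += 1
        if reading > CRITICAL_VALUE or self.abnormal_streak >= PERSISTENCE_LIMIT:
            self.report_upward()

    def report_upward(self):
        # Escalate to the higher level controller, and raise this sensor's
        # minimum baseline until that controller resets it.
        print(f"ESCALATE sensor {self.sensor_id}: recent {self.history[-5:]}")
        self.baseline = self.score
```

The attention scores this produces are exactly the weights that a function like the `allocate_bandwidth` sketch above would consume on each monitoring cycle.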

The whole idea here is that learning is context specific, that it occurs in a rules-based context, and that it requires memory of both new and previous readings as relative points of comparison, from which responding action can be taken. This model system includes an exception handling system for tracking and managing abnormal sensor readings, and for reporting them on, outside of this specific system itself, to a higher level command and control capability when appropriate. And this particular example happens to be built around a top-down, centralized command and control capability.

• A crowdsourced, distributed systems example might instead have distributed command and control capabilities that communicate with one another to homeostatically maintain overall systems stability,
• With rules-based systems for achieving overall consensus, and with rules-based score comparison and reconciliation systems that would resolve any control unit to control unit conflicts (see the sketch following these bullet points).
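As one illustration of what such reconciliation might look like, a minimal sketch: peer control units exchange their independently computed attention scores for the same sensor and all adopt the value produced by one shared, deterministic rule. The median rule here is an assumption; any rule that every peer applies identically would serve.

```python
from statistics import median

def reconcile(peer_scores):
    """Given each peer control unit's attention score for the same
    sensor, return the consensus value that every peer will adopt."""
    return median(peer_scores)

# Three control units disagree about one sensor's attention score;
# after exchanging values, all three adopt the median.
print(reconcile([2, 5, 3]))   # -> 3
```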

The core point to all of this discussion up to here is that:

• While learning and related functionalities can be viewed as open-ended, they can also be described on a fixed-systems basis, and even a simple-systems basis, as the emergent consequence of applying specific rules-based systems to systematically gathered and stored data – and even simple, fixed-format data.

And this brings me to the issues of synthetic neural networks and self-assembling systems as discussed from the manufacturing perspective in Commoditizing the Standardized, Commoditizing the Individually Customized 12: neural nets and self-assembling systems.

In my example above, I discussed a functionally set network that could still dynamically change its levels of activity allocation across its component subsystems. But one of the defining features of that model system is that its array of network nodes is set and static. In the real world, sensors are swapped out and replaced, and new ones are added as needed, even in new locations. The number five in that example was arbitrary, and real world systems can and do scale up and down – and a fully autonomous system can self-assemble to dynamically manage that, enlisting and bringing in new nodes or pruning them out as needed. That capability becomes a real possibility when connectivity is wireless, and where the limitations of hardwiring do not constrain the complexity or pattern of connectivity arrived at.
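A minimal sketch of what that self-assembly looks like at the node-management level: the controller’s sensor roster becomes a dynamic registry rather than a fixed array, with explicit registration and pruning operations. All of the names and values here are hypothetical.

```python
class DynamicController:
    """A controller whose set of monitored sensors can grow and shrink
    at run time, rather than being fixed at design time."""

    def __init__(self):
        self.sensors = {}   # sensor_id -> per-sensor monitoring state

    def register(self, sensor_id, low, high):
        # A newly discovered (e.g., wireless) sensor announces itself
        # and is folded into the monitoring rotation at baseline attention.
        self.sensors[sensor_id] = {"range": (low, high), "score": 1}

    def prune(self, sensor_id):
        # A decommissioned or unresponsive sensor is dropped; its share
        # of the fixed bandwidth is implicitly redistributed to the rest.
        self.sensors.pop(sensor_id, None)

controller = DynamicController()
controller.register("pipe-7-valve-2", low=90.0, high=110.0)
controller.register("pipe-9-valve-1", low=60.0, high=80.0)
controller.prune("pipe-7-valve-2")
print(list(controller.sensors))   # ['pipe-9-valve-1']
```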

I offer this as a thought piece and as a start to a larger narrative that I am certain to come back to in future postings and series. But I am going to switch directions in my next series installment, to address a set of issues that have been implicit in this series from its beginning – evident, if nowhere else, from where I have posted this in my topic area directories. I have been writing about networks of things in this series, but I have been posting it to my Social Networking and Business directory. Why? As a foretaste of that discussion to come, I note a conceptual tool that I have invoked repeatedly up to here in this series: the Turing test.

Meanwhile, you can find this and related postings at Ubiquitous Computing and Communications – everywhere all the time and its continuation page, and at Social Networking and Business.
