Platt Perspective on Business and Technology

Online social networking and community when machines think – top-down or bottom-up artificial intelligence?

Posted in business and convergent technologies by Timothy Platt on May 3, 2010

I started this series (see the Part 1 posting) with a focus on the Turing test, and this posting continues from there with a second concept added in: indeterminacy. Both enter this discussion through consideration of two fundamentally distinct approaches to understanding artificial intelligence, and in fact intelligence in general – top-down and bottom-up models for the emergence of intelligence as a higher-level functional capability than simple algorithmic processing per se.

I want to start by considering the best single working example of intelligence that we have as a potential model to build towards: the human mind. We know from studies of brain and behavior, both interventive and non-interventive, that our brains are divided into a multitude of special-function circuits, and our increasingly detailed understanding of how function maps onto structure continues to parse function more and more finely into progressively more specialized neural circuits. We know this from brain mapping studies in both humans and animal models, carried out with increasingly precise tools for observing behavior in synchrony with brain activity, from single-neuron recording to much less invasive approaches such as functional MRI (fMRI) imaging.

At the same time it has become increasingly clear that we also carry within our brains some very high-level organizing neural circuits, and that synthesis and organization of outputs from these finer-grained, more detailed processes takes place across a hierarchy of organizational levels. Executive-level neural circuits toward the top of these hierarchies have begun to be identified with increasing structural specificity. For example, it has been known for almost ten years now that lesions in the dorsolateral prefrontal cortex correlate closely with, and are probably causative of, the loss of a set of specific executive organizational brain functions (Tekin S. 2002. Frontal–subcortical neuronal circuits and clinical neuropsychiatry – an update. Journal of Psychosomatic Research, Volume 53, Issue 2, Pages 647–654).

This means our brains, and our minds as enacted through them, have both lower-level, more functionally specialized components and higher-level, more functionally generalized and organizational ones. The core question for top-down versus bottom-up is which of these levels intelligence more specifically emerges from.

Top-down has proven to be a very elusive goal in developing artificial intelligence, though work is still done from that direction. Bottom-up has proven elusive too, as systems built with progressively more complex special rules and networks of special-case processes do not readily begin to show the types of emergent properties that we would consider general intelligence on any real level. And that brings me back to the Turing test and the way it is conceptually carried out – in specific testing contexts.

A fully automated system lacking any human executive input or intervention need not be or act intelligent on everything to pass the Turing test. It only has to pass it in the specific test context and within the specific test design parameters that it is presented with, and that it was built to work within.

And I step back from this to look at social networks and all of their interactive processes, and at the underlying hardware and software infrastructure that supports them and makes them possible in an online context. When you look at a specific node operating in isolation, you can in principle readily determine whether any intelligent processing is of automated or human origin. All you have to do is pull aside the curtain to see if you are dealing with a human or a machine. When processes are carried out over a network and through distributed systems, it may not be possible even in principle to tell where executive decision making is a result of human intervention and where it is automated, coming from the hardware and software per se. Networks create a vast grey area of organizational indeterminacy as to which decision making is a result of human intervention and which of artificial intelligence.

This becomes potentially very important in knowing and aligning goals and priorities, both in overall results and in process-based intermediate steps. And for anything like homeostatic processes (e.g. managing ongoing systems such as electrical power grids), virtually every goal and priority is going to be an intermediate step, adjusted on the fly to meet immediate and immediately anticipatable needs.
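The idea that in a homeostatic system every goal is an intermediate step, recomputed on the fly, can be illustrated with a minimal sketch. All of the names and numbers here are illustrative assumptions on my part, not drawn from any real grid or SCADA implementation: a controller simply nudges supply toward a demand signal that keeps moving, so there is never a final target, only the current one.

```python
def homeostatic_step(supply, demand, gain=0.5):
    """One control cycle: nudge supply part of the way toward current demand.

    The 'goal' (matching demand) is recomputed every cycle, so it is
    always an intermediate step rather than a fixed endpoint.
    """
    error = demand - supply
    return supply + gain * error

# Simulate a demand curve that shifts over time; the controller never
# reaches a final state, it just tracks the moving target.
supply = 100.0
for demand in [100.0, 110.0, 120.0, 115.0, 105.0]:
    supply = homeostatic_step(supply, demand)
```

Note that nothing in this loop cares whether the new demand figure came from an automated forecast or a human operator's override – which is exactly the indeterminacy at issue above.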

Is the emergence of true intelligence a top-down or a bottom-up phenomenon? That is probably not even the right question, and when it is viewed in the context of distributed systems operating in homeostatic balance it is almost certainly the wrong one. I will add that the human brain (and mind) can best be understood using a distributed system model of this type as well, so the same basic set of issues discussed here applies to us too.

I am going to turn to SCADA (Supervisory Control And Data Acquisition) systems in part three of this series and will focus on homeostasis across wider geographic regions as an organizing goal and priority, and with its mix of automated and human managed processes and with the social networking on both levels needed to make these systems work.

