Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 2

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on February 22, 2018

This is the second posting in a series that I am developing here, with the goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Part 1 of this. And also see two benchmark postings that I initially wrote just over six years apart, but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

Think of those two background postings as offering a starting perspective on a conceptual problem that I will address in this brief series. And with their evolving perspective on that overall problem noted, I began looking at some of its more specific implementation-level details in Part 1 of this series, where I began discussing artificially intelligent agents, and to be more specific here, ones that are capable of learning and self-improvement: ones that are capable of self-directed, automated evolution in what they are and in how, and how well, they perform their tasks.

There are a number of ways to parse artificially intelligent systems and the intelligent agents that comprise them. But one that is particularly pertinent here arises from distinguishing between artificial specialized, or single-task intelligence, and artificial general intelligence: a capability that, if realized, would mean an artificial agent capable of general abstract reasoning, coupled with a wide-ranging capability for carrying out a diversity of different types of tasks, including new ones that it has never encountered.

As of now and for the foreseeable future, artificial general intelligence is a distant goal, and one that seems to recede by one more year with every passing year. But the development of progressively more capable single-task artificial intelligence has proceeded at a very rapid pace. And some of the more advanced of those systems have even, at times, come to be seen as showing at least glimmers of more general intelligence, even if that is more mirage than anything else – yet.

I find myself thinking back, as I write that last sentence, to how IBM’s Deep Blue computer defeated the then-reigning world chess champion, Garry Kasparov, in a six-game match in 1997 (see also Deep Blue versus Garry Kasparov). This marked the first time that an artificial agent: a computer running an algorithm, was able to beat the top-rated human chess player of the time in match play, and as such it was seen as a watershed moment in the development of artificial intelligence per se. Until then, highest-level chess play was widely considered a human-only capability, and by many.

And looking back at those games and how this automated system played them, Kasparov, an international grandmaster, saw what he understood to be at least glimmers of real intelligence and creativity in his machine opponent. But I set that perception aside and assume here that, at least as of now, the only artificial intelligence-based agents that need to be considered are essentially entirely one-task-only in capability and range. For purposes of this series, or at least this part of it, the possibility of artificial general intelligence does not matter. The implications and the emerging consequences of even just highly limited and specialized single-task agents have already proven to be profound. And we have only seen the beginning of what that will come to mean, and of its impact on all of us and on all that we do. So I at least begin this series with a focus on that arena of development and action.

I began Part 1 of this series by picking up on a very simple example of a self-learning, evolving algorithm-based agent that becomes more and more adept at carrying out its task as it performs it. The task itself is very simple, at least from a human performance perspective: placing each of a set of objects into the right containers, regardless of the presence of alternative containers that it could put them into, and regardless of how all of these containers are positioned relative to each other. Think of this as a simple, proof-of-principle example of how a self-adjusting, self-learning algorithm can in fact evolve under its own action toward greater and greater efficiency and accuracy in carrying out its task.
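The learning loop behind such an agent can be sketched in a few lines. This is a minimal, hypothetical illustration only: the class name, reward values, and object and container labels are all invented here, and a simple epsilon-greedy update stands in for whatever learning method an actual system would use.

```python
import random

# Hypothetical sketch of a self-improving placement agent: it learns, by
# trial and error, which container each object type belongs in.
class ObjectPlacer:
    def __init__(self, object_types, containers, epsilon=0.1):
        self.epsilon = epsilon        # how often the agent explores at random
        self.containers = containers
        # estimated success score for each (object type, container) pairing
        self.scores = {(o, c): 0.0 for o in object_types for c in containers}
        self.counts = {(o, c): 0 for o in object_types for c in containers}

    def choose(self, obj):
        # mostly exploit the best-known container, occasionally explore
        if random.random() < self.epsilon:
            return random.choice(self.containers)
        return max(self.containers, key=lambda c: self.scores[(obj, c)])

    def learn(self, obj, container, reward):
        # incremental running average: behavior improves as the agent works
        key = (obj, container)
        self.counts[key] += 1
        self.scores[key] += (reward - self.scores[key]) / self.counts[key]

# toy environment: the "right" container for each object type
correct = {"bolt": "bin_a", "washer": "bin_b", "nut": "bin_c"}
agent = ObjectPlacer(list(correct), list(correct.values()))
for _ in range(2000):
    obj = random.choice(list(correct))
    chosen = agent.choose(obj)
    agent.learn(obj, chosen, 1.0 if chosen == correct[obj] else 0.0)
```

After enough trials, the agent's greedy choices should converge on the correct object-to-container mapping, which is the proof-of-principle behavior described above.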

Then I began discussing, as a perhaps “modest proposal,” a next-step, self-learning enabled artificial intelligence-based project: a much more ambitious task goal that in fact addresses a holy grail problem for the pharmaceutical industry and for new drug development, and that I only partly tongue in cheek referred to as a more complex and demanding variation on the same basic target-container identification and object placement task of the above, already-achieved example:

• Identifying possible drugs that would selectively bind to the correct types of chemical reaction sites on the right target molecules: the sites specifically associated with particular diseases or other pathological states, which would have to be acted upon to achieve therapeutic benefit.
• Note that this can mean blocking biologically significant activity there, activating or enhancing it, or otherwise modulating or modifying it in ways that would likely lead to the achievement of therapeutic value.

The goal of this task is simply to put the right objects (the right prospective drug molecules) into the right containers (the right chemical binding sites on the right target molecules), to express this in terms of the above-cited proof-of-principle agent and its example.
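To make that object-and-container framing concrete, here is a deliberately naive sketch. Everything in it is an invented placeholder: real binding-affinity prediction involves vastly richer molecular representations, but the shape of the problem, scoring candidate “objects” against a target “container” and picking the best fit, is the same.

```python
# A toy restatement of the drug search as object/container matching: each
# candidate molecule and the binding site are reduced to crude feature
# values, and a compatibility score picks the best pairing. The features,
# molecules, and numbers are invented placeholders, not real chemistry.

def compatibility(molecule, site):
    # reward complementary charge and a snug size fit; penalize mismatch
    charge_fit = -molecule["charge"] * site["charge"]        # + attracts -
    size_fit = -abs(molecule["size"] - site["pocket_size"])  # closer is better
    return charge_fit + size_fit

candidates = {
    "cmpd_1": {"charge": +1, "size": 4},
    "cmpd_2": {"charge": -1, "size": 7},
    "cmpd_3": {"charge": +1, "size": 7},
}
target_site = {"charge": -1, "pocket_size": 7}  # hypothetical binding pocket

# the "right object for the right container" step, in miniature
best = max(candidates, key=lambda m: compatibility(candidates[m], target_site))
```

A self-learning system would, in effect, be discovering and refining a far more sophisticated version of that scoring function on its own, rather than having it handed down as two fixed rules.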

I offered this challenge in Part 1 in what is referred to as a rational drug design context: one based upon a detailed and effectively usable understanding of the disease processes in question, and of their normal biological counterparts, at a deep molecular level. And I freely admit that while I did note how this type of task would be addressed through an algorithmic self-learning process, starting with simpler systems, I sketched out the basic problem itself in what can only be considered a dauntingly intimidating manner. I did that intentionally and for a reason: my goal there was to posit a problem to be solved from a square-one perspective, when you know basically what you seek to achieve overall, but do not yet know any of the intermediate steps that you will face, because you have yet to really start addressing any of this.

My goal here is to at least briefly discuss a development process for managing and resolving such complex tasks, using this one as a working example. And I begin doing that by drawing a distinction that parallels and complements the more standard and common one of parsing artificial intelligence into specific-task-only and general forms. That distinction applies to, and at least categorically describes, what does the task work: what categorical type of agent is involved. The distinction I would offer here categorically parses the overall pool of tasks to be carried out into:

• Simple tasks that can, essentially by definition, be encompassed in a single, progressively more efficient and compact algorithm, and
• General tasks that are more open ended, both in the range of possible actions from which an effective task performance would have to be selected, and in that they cannot simply be encompassed in a single algorithm that could be optimized over time in the manner of a simple task, as defined here.

The object placement task of Part 1 of this series, and of my second background posting as noted above, is clearly a simple task as that term is offered here. And it can demonstrably be carried out by one single-task agent, acting on its own. The prospective drug development task, as touched upon in Part 1 and here in this posting, is a general task. But that does not mean it can only be carried out to effective completion by a general intelligence: human or artificial. What it means is that this is a task that can in principle be carried out by specialist-only, single-task intelligent agents, and more specifically by a coordinated team of such agents, if the overall problem to be resolved can be broken down into a series of specific-task, algorithm-organized components: simple tasks. Note that carrying out this now-suite of subtasks requires partitioning an overall complex task into a set of simpler tasks that individual single-task agents could perform. And it requires the development and implementation of command-and-control organizing agents: single-task agents whose single task would be to coordinate the functioning of other (lower level) agents, feeding output from some lower level agents to others as their input, with feedback and collectively enabled learning built into this system to help evolve and improve all agent-level elements of it.

• That means assembling artificial intelligence agents into at least two organizational levels, all of which would in fact consist of simpler single-task agents, but which collectively can accomplish overall tasks that no such simple agent could perform on its own; this is where emergent properties and emergent capabilities enter this narrative.
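The two-level organization described in that bullet point can be sketched as a simple pipeline. All of the stage names here are hypothetical placeholders; the point is only that each worker function is a single-task agent, and the coordinator's own single task is routing one agent's output to the next agent's input.

```python
# Sketch of the two-level organization described above: each worker is a
# single-task function, and the coordinator's sole task is wiring outputs
# to inputs. The pipeline stages are invented placeholders.

def generate_candidates(spec):
    # level-1 agent: propose raw candidates for a task specification
    return [f"{spec}-cand{i}" for i in range(5)]

def score(candidate):
    # level-1 agent: score one candidate (placeholder heuristic)
    return len(candidate)

def select(scored, k=2):
    # level-1 agent: keep only the top-k scoring candidates
    return [c for c, _ in sorted(scored, key=lambda p: -p[1])[:k]]

def coordinator(spec):
    # level-2 agent: a single-task agent whose single task is coordination,
    # feeding the output of each lower-level agent to the next as input
    candidates = generate_candidates(spec)
    scored = [(c, score(c)) for c in candidates]
    return select(scored)
```

No one stage can accomplish the overall task alone; the end-to-end capability only appears at the level of the assembled system, which is the emergent-capability point made above.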

I will discuss the issues of artificial general intelligence in light of this conceptual model in a later posting to this series, but I set that aside for now, simply noting it as a possibility worth keeping in mind.

With that offered, let’s reconsider the poster child example of a successful rational drug development initiative that I made brief note of in Part 1: the development of a new drug by researchers at Novartis International AG for treating chronic myelogenous leukemia (CML). And I begin addressing that by pointing out a crucial detail from Part 1 that I have only noted in passing, and have not yet discussed for its crucial relevance to this series. Rational drug development, as a new drug discovery and development approach, is based upon a tremendously detailed understanding of the underlying biology of the disease processes under consideration. And in this case, that means these researchers started out knowing that CML, along with a group of other, otherwise seemingly unrelated types of cancer, has a very specific molecular-level Achilles’ heel. CML, to focus on the specific cancer type that they did their initial research on, consists of pathologically transformed cells that come to require an active supply of functional tyrosine kinase molecules: enzymes that attach phosphate groups to specific types of protein that these cells specifically produce and need, in order to activate those proteins and give them functionality. The normal cells that these transformed cells arise from do not require this type of tyrosine kinase activity in the same way.

So these researchers knew a precise chemical target to try to bind with their test drug candidates, for stopping these cancer cells. And they had access to an extensive literature, and to research findings, on kinases and on tyrosine kinases in particular, which told them precisely what parts of those molecules their test drugs would have to bind to, in this case to block their functioning. CML cancer cells require actively functioning tyrosine kinase simply to survive. A drug that effectively removed that tyrosine kinase activity would be expected to stop those cancer cells cold. And that is precisely what their test drug imatinib (Gleevec) did and does.

I wrote the goal specifications of this type of drug discovery approach in Part 1 as an activity that might be carried out by artificial intelligence-based agents, assuming that the researchers who set those agents loose on a problem do so starting out with a much more modest foundational understanding of the underlying disease itself, certainly at a molecular level. Here, in the case of CML, the researchers started out with a clear road map and with a great deal of relevant information that could go into shaping the initial starting form of any search algorithm that might be pursued. This tremendously limited the search range that would have to be considered in finding possible drugs for testing.
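The way that prior molecular knowledge narrows the search can be illustrated in miniature. The motif and molecule data below are entirely invented; the point is only that when the binding-site profile is known up front, as it was here, the system can discard most of its candidate library before any expensive evaluation begins.

```python
# Sketch: prior knowledge shrinking the search range. In a setting like the
# CML case, the target's binding requirements are known in advance, so only
# candidates matching that profile need be considered. All data invented.

KNOWN_BINDING_MOTIF = {"requires_ring", "h_bond_donor"}  # hypothetical profile

library = {
    "mol_a": {"requires_ring", "h_bond_donor", "halogen"},
    "mol_b": {"h_bond_donor"},
    "mol_c": {"requires_ring", "h_bond_donor"},
}

# keep only molecules whose features include the full known motif
shortlist = [m for m, feats in library.items()
             if KNOWN_BINDING_MOTIF <= feats]
```

Without that a priori knowledge, there is no such filter, and the search must range over essentially the whole candidate space, which is exactly the distinction drawn next.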

Think of this as a Type 1 drug discovery and development problem. Now consider a disease that is known and understood for its symptoms and severity, and for its mortality and morbidity demographics. And assume that along with knowing its overt overall symptoms, physicians also know a significant amount about the laboratory test findings that would be associated with it, such as blood chemistry abnormalities that might consequentially arise from it. But little if anything is known of its underlying biology, certainly at a chemical and molecular level, beyond what might be inferred from higher organizational-level tests and observations: its impact on specific organs and organ systems and on the complete person. This is where much or even all of the research and discovery requirements listed in Part 1 for addressing this challenge in its most general terms enter this narrative, as a matter of necessity. I refer to these drug discovery problems as Type 2 challenges.

I am going to continue this narrative in the next series installment, where I will delve into at least some of the conceptual and organizational details of addressing Type 1 and Type 2 problems. In anticipation of that, I note that I will take up the issues of a priori knowledge-based expert systems, versus more entirely self-learning and self-discovery based ones. And as briefly touched upon in Part 1, I will also discuss the stepwise development of these agents, from exposure to and mastery of simple problems, on to exposure to and mastery of more complex and perhaps real-world ones. I will do so in terms of simple and general tasks as identified above, and in terms of how groups of individually single-task agents can be coordinately applied to address larger and more complex tasks. And yes, after developing a conceptual system and approach from these examples, in order to build a tool set for further use, I will turn to consider information systems and their architectures per se, using that tool set in the process.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
