Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 1

I initially wrote about the pace of innovative change, and about the assertion that singularities might arise from it, in one of my earliest postings to this blog, as a response to sentiments appearing in the literature of the time (see Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground.) And much more recently I posted a follow-up to that piece, in which I reconsidered both innovation development singularities and open-endedly ongoing linear change as possibilities.

The reason why I revisited the issues of my earlier 2010 posting in late 2017 is that ongoing disruptive technological advancement has now made possible the development of at least very simple self-learning artificial intelligence (AI) systems that can design next-step improvements of themselves. These test case systems are all limited to specific tasks; no one has developed anything like artificial general intelligence yet. But even if that were to prove impossible, which I seriously doubt, self-learning and self-improving specialized-only AI expert systems will fundamentally change the world, and particularly when and as they can evolve themselves toward greater and greater efficiency and capability.

I would offer two book recommendations for more general discussions of where artificial intelligence-based systems might be headed as they become more and more capable, and as they acquire and advance an ability to self-evolve as part of their basic functionality:

• Tegmark, Max. (2017) Life 3.0. Alfred A. Knopf, and
• Bostrom, Nick. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

And I will continue to more widely discuss the complex of issues that AI has already so significantly begun to raise for all of us, in this blog in future postings and series. But for now, I would focus on more of a baseline point of discussion for that, as in retrospect I did when offering my 2010 posting as cited above.

I focused in my December, 2017 update posting on AI as a phenomenon, without at least explicit consideration as to how it would or would not be implemented and integrated into larger systems. My goal here is to at least begin a discussion of implementation, as framed in terms of:

• Our here-and-now AI capabilities, and the likely next steps that might be achieved through simpler evolutionary advancement of them, without need of more wild-card disruptive change,
• And with correspondingly less speculation than the above two book references pursue in their narratives when considering these issues.

And I do so by focusing on a specific case in point example that I would argue is only a relatively small technology evolution step beyond what can be done now:

I focused in my 2017 posting on a specific task-oriented proof of principle example that has been developed at Google, in which a robot controlled by an AI system teaches itself to more and more effectively and accurately put objects into the right types of containers (e.g. containers of the right color or precise shape), regardless of their relative positions within a mix of other containers that it could put those objects into instead. Think of that as representing the self-learning capabilities needed for addressing a very simple type of expert system problem: here, the kind of problem that would arise in an entirely automated warehousing system that has to stock inventory into the right bins for subsequent removal and use, and that must therefore be able to find and target the right types of storage containers or bins for each of the item types it has to manage.
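
To make that self-learning idea a little more concrete, here is a minimal Python sketch of that general type of trial-and-error learner. I stress that the object and bin labels, the reward scheme and the learning rule here are illustrative assumptions on my part, and not a description of Google's actual system:

```python
import random
from collections import defaultdict

# Hypothetical illustration only: a tiny tabular learner that discovers, by
# trial and error, which bin each object type belongs in. It scores its own
# placements (the reward) and uses those scores to improve its next attempts.

OBJECTS = ["red_ball", "blue_cube", "green_disk"]
BINS = ["red_bin", "blue_bin", "green_bin"]
CORRECT = {"red_ball": "red_bin", "blue_cube": "blue_bin", "green_disk": "green_bin"}

q = defaultdict(float)      # learned value of placing a given object in a given bin
alpha, epsilon = 0.5, 0.2   # learning rate and exploration rate (arbitrary choices)

def choose_bin(obj):
    # Explore occasionally; otherwise exploit the best-known placement so far.
    if random.random() < epsilon:
        return random.choice(BINS)
    return max(BINS, key=lambda b: q[(obj, b)])

for episode in range(2000):
    obj = random.choice(OBJECTS)
    chosen = choose_bin(obj)
    reward = 1.0 if CORRECT[obj] == chosen else 0.0      # the system grades itself
    q[(obj, chosen)] += alpha * (reward - q[(obj, chosen)])

# After training, each object should map to its correct bin.
for obj in OBJECTS:
    print(obj, "->", max(BINS, key=lambda b: q[(obj, b)]))
```

The point of this cartoon is simply that the system scores its own placements and uses those scores to improve its next attempts, which is the essential loop that the larger examples below build upon.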

The example that I would touch upon here is both more complex and more economically significant, and certainly for the pharmaceutical industry: the rational drug development challenge of predictively designing possible drugs for meeting specific pre-specified medical needs, based on detailed molecular understanding. Let’s consider this problem itself for a moment; in many key respects it is actually just a somewhat more complicated next-step-up example of the proof of principle self-learning AI problem of my 2017 posting, with its goal of putting objects in the right places.

Drugs work by binding chemically to specific sites on specific target molecules. This is true whether that means an antibiotic binding to a particular receptor or other binding site on the coat of some specific disease-causing bacterium or class of them, or a specific chemotherapeutic toxin binding to a target molecule on the surface of rapidly dividing cancer cells, or an analgesic blocking a pain signal pathway by binding to a specific type of target molecule on pain receptors or on specific sensory neurons leading from them. I could of course go on and on with specific examples there; the basic principle applies for drugs in general, and one of the overarching goals of pharmaceutical research has essentially always been to find better chemicals for binding more effectively, and with greater and greater specificity, to the right target molecules and to the right parts of them, in order to address specific medical challenges.

This, I add, is a molecular level description of what pharmaceutical research at least seeks to do, and that goal goes back to well before disease and illness, or drugs, were understood at a chemical and molecular level. Traditionally, researchers tried possible drugs on specific medical disorders because they had familiarity with similar drugs. Or they took a “throw everything at it and see what sticks” approach. Antibiotic research and the search for new classes and types of antibiotics fits that pattern as a famous source of examples, with researchers looking for microbes in soil and other samples taken from anywhere and everywhere, in search of one that happens to be producing something novel and new to them that works against at least some class of pathogens.

Then when a candidate drug is found that works on cell cultures and in similar test contexts, an effort is made to test its safety and efficacy when used on organisms, people included. Rational drug design seeks to shorten and streamline this process by taking an emerging cellular and molecular level understanding of diseases and disease processes, and identifying possible molecular targets associated with them that, if blocked or modulated in some way, would effect a cure. Then the goal is to identify and test specific molecules: specific possible drugs that would be expected to bind to those targets in ways that would accomplish that, and hopefully without also showing toxic side effects. The goal here is to take as much of the randomness of a traditional new drug discovery process as possible out of next generation new drug discovery and development.
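
As a cartoon of that screening step, consider the following sketch, in which a library of candidate molecules is ranked against a chosen target by some predicted binding score and only the most promising few are kept for real-world testing. The Candidate structure and the scoring function here are placeholders of my own; a real pipeline would use docking calculations or trained affinity models:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the screening step: rank a candidate library by a
# predicted binding score against a chosen target and keep only the most
# promising few for laboratory testing.

@dataclass
class Candidate:
    name: str
    features: dict  # descriptors a real model would compute from the molecule's structure

def screen(candidates: List[Candidate],
           predict_affinity: Callable[[Candidate], float],
           keep: int = 10) -> List[Candidate]:
    """Return the `keep` candidates with the highest predicted binding affinity."""
    return sorted(candidates, key=predict_affinity, reverse=True)[:keep]

if __name__ == "__main__":
    # Toy library and toy scoring rule, purely for illustration.
    library = [Candidate(f"mol_{i}", {"polarity": i % 7, "size": i % 5}) for i in range(1000)]
    shortlist = screen(library, lambda c: c.features["polarity"] - 0.3 * c.features["size"], keep=5)
    print([c.name for c in shortlist])
```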

The first real breakthrough success story in this type of research and development can be found in a drug called Imatinib (Gleevec): a tyrosine kinase inhibitor, to cite its specific molecular target, that through that action all but completely suppresses chronic myelogenous leukemia (CML) and a variety of other cancers. Basic biomedical research had identified that molecular target as one worth pursuing, to see if blocking it would block these cancers, with an initial focus there on CML. And this drug development process was carried out on the basis of a deep understanding of these diseases themselves and of tyrosine kinase and its chemistry. Imatinib was tested on the basis of its known and expected chemical binding properties, and not only did it block tyrosine kinase as predicted: it stopped the cancer cells that relied on it in their tracks.

The problem is that we do not know enough to be able to quickly and directly identify possible target molecules for treatment, for diseases and disorders in general. And we do not know enough about the three dimensional folding or the chemically available structures of complex molecules in general, or about their possible reaction kinetics when they do chemically interact, to automatically be able to trim down the list of possible targets and drugs a priori to any real testing. For many of our most pressing diseases, molecular level testing would literally involve doing millions of tests, at least through computer simulation, to separate out a small enough subset of them to make more detailed, real world testing possible.

One approach to that “volume of initial screening” challenge, which I have at least briefly noted in this blog, is to crowd source it, with as many as millions of participants testing possible drug molecules against possible target sites using simple test algorithms that they allow to run in the background on their home computers: using what are called their computer’s free cycles. I stress here that this is actually a more complex problem than just matching one molecule against another. It is about identifying which parts of those molecules would be available for chemical interaction, and which would be folded inside a molecular interior or otherwise blocked from being functionally available for that (e.g. by steric hindrance.)
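
A loose sketch of that volunteer-computing pattern might look like the following, where each participating machine pulls small batches of candidate molecule and target site pairs, scores them in the background, and reports the results back. The fetch, scoring and reporting steps here are all placeholders of my own; a real system would need a central server to distribute work units and to validate and aggregate what comes back:

```python
import random
import time

# Placeholder sketch of a volunteer-computing worker: pull a batch of
# (candidate molecule, target site) pairs, score them locally, report back.
# A real client would download real work units, run at idle priority and
# upload results to a central server for validation.

def fetch_batch(n=10):
    # Stand-in for downloading work units from a project server.
    return [(f"mol_{random.randint(0, 10**6)}", f"site_{random.randint(0, 99)}")
            for _ in range(n)]

def score_pair(molecule, site):
    # Stand-in for a docking or binding-compatibility calculation.
    return random.random()

def report(results):
    # Stand-in for uploading scored results back to the server.
    best = max(score for _, _, score in results)
    print(f"reporting {len(results)} results, best score {best:.3f}")

def worker_loop(batches=3, pause_seconds=1.0):
    for _ in range(batches):
        batch = fetch_batch()
        results = [(mol, site, score_pair(mol, site)) for mol, site in batch]
        report(results)
        time.sleep(pause_seconds)  # yield the machine between batches

if __name__ == "__main__":
    worker_loop()
```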

Let’s, as I suggested above, consider this as a more complex variation on the ball in the right bowl problem of my earlier posting, as repeated above. And let’s consider the development of a first step, inefficient and ineffective algorithm that would both seek to solve this type of problem and be able to test and score its own effectiveness, and that had a self-learning capability for tweaking itself into a next generation form that would repeat this cycle of testing, refinement and (hopefully) improvement, until an updated version of this process began to be able to effectively, and then cost-effectively, find real answers to drug development challenges.
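
A deliberately simple cartoon of that self-improvement cycle might look like this, where the “algorithm” is just a weighted scoring rule that perturbs its own weights each generation, re-scores a benchmark of known binders and non-binders, and keeps a change only when its own measured accuracy improves. The features, benchmark and perturbation scheme are all assumptions on my part, offered only to show the test, tweak and keep-if-better loop:

```python
import random

# Cartoon of the self-improvement cycle: the "algorithm" is a weighted scoring
# rule; each generation it perturbs its own weights, re-scores a benchmark of
# known binders vs. non-binders, and keeps the change only if its own measured
# accuracy improves. All of the data and parameters here are made up.

random.seed(0)

def make_benchmark(n=400):
    # Toy benchmark: feature vectors with a hidden rule deciding binder / non-binder.
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(4)]
        label = 1 if (0.8 * x[0] - 0.5 * x[2] + 0.3 * x[3]) > 0 else 0
        data.append((x, label))
    return data

def accuracy(weights, data):
    # Self-evaluation step: how often does the current rule classify correctly?
    correct = 0
    for x, label in data:
        score = sum(w * xi for w, xi in zip(weights, x))
        correct += int((score > 0) == bool(label))
    return correct / len(data)

benchmark = make_benchmark()
weights = [random.uniform(-1, 1) for _ in range(4)]  # generation zero: crude and arbitrary
best = accuracy(weights, benchmark)

for generation in range(200):
    trial = [w + random.gauss(0, 0.1) for w in weights]  # "tweak itself"
    trial_score = accuracy(trial, benchmark)
    if trial_score > best:                               # keep only real improvements
        weights, best = trial, trial_score

print(f"self-improved accuracy on the toy benchmark: {best:.2%}")
```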

This type of cyclical development effort would probably start with much simpler chemicals and chemical systems contexts, and it would then be shifted into facing and addressing more complex systems, until it had developed a sufficient level of capability for identifying and matching for chemical binding to be tested in this larger pharmaceutical challenge context, with data derived from the study of specific diseases.
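
That staged progression could be wrapped around the self-improvement loop above as a simple curriculum, along the following lines; the stages, thresholds and toy train and evaluate functions are placeholders for whatever screening system is actually being grown this way:

```python
# Thin sketch of the staged "simpler chemistry first" idea: advance through
# progressively harder benchmark stages, training on each until the current
# model clears an accuracy threshold there. `train` and `evaluate` are
# placeholders for whatever self-improving screening system is being grown.

def curriculum(stages, train, evaluate, threshold=0.9, max_rounds=50):
    """Advance through `stages` (easiest first), training until each is passed."""
    model = None
    for stage in stages:
        for _ in range(max_rounds):
            model = train(model, stage)           # one round of self-improvement on this stage
            if evaluate(model, stage) >= threshold:
                break                             # good enough here; move on to harder material
        else:
            print(f"stalled on stage {stage!r}; more refinement needed before scaling up")
            return model
    return model

if __name__ == "__main__":
    # Toy demonstration: "training" just nudges a score toward each stage's difficulty.
    stages = [0.2, 0.5, 0.8]                      # stand-ins for increasingly hard problem sets
    train = lambda model, stage: (model or 0.0) + 0.05
    evaluate = lambda model, stage: min(model / stage, 1.0)
    print("final toy model value:", curriculum(stages, train, evaluate))
```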

I readily admit that I have conflated several separate challenges into this example, all of which are, currently at least, computationally daunting. One of the subordinate tasks that would have to enter into resolving this is the problem of calculating out the precise folding pattern and resulting geometry of complex proteins and other large molecules. That type of information is needed in order to know which portions of those molecules are physically accessible for chemical reaction, and not simply hidden away in a folded molecular interior. I assume that any necessary related problem of this type would at least be significantly worked on, for the classes of target proteins that would be reasonable candidates for study in this type of pharmaceutical development scenario, and for whatever specific disease was being pursued when “growing out” a pharmaceutical development AI as envisioned here. And I also assume here that better, more nuanced algorithms for these tasks might be more complex to outline than simpler and less effective ones would be, but that they would reduce the overall number of computations needed to arrive at working solutions. So the individual computations carried out would, on average, become more informative as the algorithms they were arrived at through evolved and improved.
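
For the accessibility question specifically, here is a crude and purely illustrative proxy: treat an atom as buried if many other atoms sit within a small cutoff radius of it, and as exposed, and so potentially available for binding, otherwise. Real work would compute solvent accessible surface area from predicted or experimentally determined structures; the toy coordinates and thresholds here are arbitrary assumptions:

```python
import math
import random

# Crude, purely illustrative proxy for surface accessibility: an atom counts
# as buried if many other atoms sit within a small cutoff radius of it, and
# as exposed (and so potentially available for binding) otherwise.

random.seed(1)

# Toy "molecule": random points inside a unit sphere standing in for atom coordinates.
atoms = []
while len(atoms) < 200:
    p = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
    if sum(c * c for c in p) <= 1.0:
        atoms.append(p)

def neighbor_count(i, cutoff=0.35):
    # Number of other atoms packed closely around atom i: a rough burial measure.
    return sum(1 for j, other in enumerate(atoms)
               if j != i and math.dist(atoms[i], other) < cutoff)

# Atoms with few close neighbors are (crudely) treated as surface-exposed.
exposed = [i for i in range(len(atoms)) if neighbor_count(i) <= 5]
print(f"{len(exposed)} of {len(atoms)} toy atoms pass the crude exposure filter")
```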

I have at least briefly, and admittedly cartoonishly, outlined a specific self-learning specialized task AI problem here that we are not ready to tackle yet, but that I expect our emerging and growing AI capabilities to be able to handle in the coming years, and with growing efficiency once a threshold of performance has been reached. That certainly holds true as supercomputer and networked cloud-based versions of those capabilities continue to advance, while becoming more and more cost-effective to use. I am going to continue this discussion in a next installment, where I will at least briefly begin to discuss some of the options as to how this type of AI driven system might actually be deployed and used as it approaches and surpasses a threshold of development where it can provide effective answers, cost-effectively.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
