Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 5

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on July 13, 2018

This is the 5th posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2, postings 374 and loosely following for Parts 1-4. And also see two benchmark postings that I wrote just over six years apart, but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

This is a series, as just stated, that is intent on addressing how artificial intelligence will inform, and in time come to fundamentally reshape, information management systems: both more localized computer-based hardware and software systems, and larger networked contexts as they interconnect, the global internet included. And up to here I have been focusing on the core term that enters into that goal: what artificial intelligence is, with a particular focus on the possibilities and potential of the holy grail of AI research and development: what can arguably be deemed true artificial general intelligence.

• You cannot even begin to address how artificial intelligence, and artificial general intelligence agents in particular, would impact our already all but omnipresent networked information and communications systems until you at least offer a broad brushstroke understanding of what you mean by, and include within, the AI rubric, and outline at least minimally what such actively participating capabilities and agencies would involve, particularly as we move beyond simple, single function-limited artificial intelligence agents.

So I continue here with my admittedly preliminary, first step discussion as to what artificial general intelligence is, moving past the simply conceptual framing of a traditionally stated Turing test to include at least a starting point discussion of how this might be operationally defined too.

I have been developing my line of argument, analysis and discussion here around the possibilities inherent in more complex, hierarchically structured systems, as constructed out of what are individually just single task, specialized limited-intelligence artificial agent building blocks. And I acknowledge here a point that is probably already obvious to many if not most readers: I am modeling my approach to artificial general intelligence, at least to the level of analogy, on how the human brain is designed for its basic architecture, and on how a mature brain develops ontogenetically out of simpler, more specialized components and sub-systems of specialized and single function units, that both individually and collectively show developmental plasticity that is experience based.

Think bird’s wings versus insect wings there, where different-in-detail building block elements come together to achieve essentially the same overall functional results, while doing so by means of distinctly different functional elements. But while those differently evolved flight solutions differ in structure and form details, they still bear points of notable similarity, if for no other reason than that they seek to address the same functional need in the context of the same basic physical constraints. Or if you prefer, consider bird wings and the wings found on airplanes, where bird wings combine both airfoil capability for developing lift, and forward propulsive capability, in the same mechanism, while fixed wing aircraft separate airfoil-based lift from engine-driven propulsion.

And this brings me to the note that I offered at the end of Part 4, in anticipation of this posting and its discussion. I offered a slightly reformulated version of the Turing test in that installment, in which I referred to an artificial intelligence-based agent under review as a black box entity, where a human observer and tester of it can only know what input they provide and what output the entity they are in conversation with is offering in response. But what does this mean when looking past the simply conceptual, and when peering into the black box of such an AI system? I stated at the end of Part 4 that I would at least begin to address that question here, by discussing two fundamentally distinct sets of issues that I would argue enter into any valid answer to it:

• The concept of emergent properties as they might be more operationally defined, and
• The concept of awareness as a process of information processing (e.g. with simulation and conceptual model building that would be carried out prior to, or in place of, any actualized physical testing or empirical validation of the approaches and solutions considered.)

And as part of that, I added that I will address the issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might be in effect hardwired to address, as for example in a Turing test context. What would more properly be developed as hardware, as software, or, I add here, as firmware? I will also discuss error correction as a dual problem, with half of it at least conceptually arising and carried out within the simpler agents that operate within an overall intelligent, hierarchically structured system, and half of it carried out at a higher level within such a system, as a function of properties and capabilities that are emergent to that larger system.

I begin addressing the first of those topic points, and with it the rest of the above-stated to-address issues, by specifically delving into the one two-word phrase offered there that all else here hinges upon: “emergent properties.”

• What is an emergent property? Property, as that word is used here, refers to functionalities: mechanisms that directly cause, prevent, or modify an outcome step in a process or flow of them. So I use this term here in a strictly functional, operational manner.
• What makes a specific property, as so defined and considered here, emergent?
• Before you can convincingly cite the action of a putative emergent property as a significant factor in a complex system, it has to be able to meet two specific criteria:
• You have to be able to represent it as the direct and specific causal consequence of an empirically testable and verifiable process, and
• This process should not arise at simpler organizational levels within the overall system under consideration, than the level at which you claim it to hold significant or defining impact.

Let me explain that with an admittedly gedanken experiment level “working” example (see the brief sketch that follows the list below.) Consider a three-tiered, hierarchically structured system of what could best be individually considered simple, artificial specialized intelligence, single task agents. And suppose you observe a type of measurable, functionally significant intermediate-step outcome arising within that system as you track its processing, both overall and for how this system functions internally (within its black box.) Is this outcome the product of an emergent property in this system? The answer would likely be yes if the following criteria can be demonstrably met, here assuming for purposes of this example that this outcome appears in the second, middle level of the network architecture hierarchy:

• The outcome in question does not and cannot arise from any of the lowest, first level agent components in place in this system.
• The outcome only arises at the second level of this hierarchy if one or more of the agents operating at that level receive input from a correct combination of agents that function at the lowest, first level, and with appropriate input from one or more agents at the second level too, as they communicate together as part of their feedback control capabilities.
• And this outcome, while replicable and reliably so given the right inputs, is not simply a function of one of the second level agents. It is not a non-emergent property of the second level and its agents.
• And here, this emergent property would most likely become functionally important at the third, highest level of this hierarchical stack, where its functional presence would serve as meaningfully distinct input to agents operating there.
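To make that hierarchy a bit more concrete, here is a minimal, purely illustrative Python sketch of the arrangement just described. The agent names, threshold and signal values are hypothetical inventions for this example only; the point is simply to show where an outcome can appear that no single agent produces on its own.

```python
# A minimal, purely illustrative sketch of the three-tier arrangement described
# above. All agent names, thresholds and signal values are hypothetical.

# Level 1: single-task agents, each mapping one raw input to one output signal.
def level1_threshold(x):
    """Fires only when its input exceeds a fixed (hypothetical) threshold."""
    return 1 if x > 10 else 0

def level1_parity(x):
    """Reports whether its input is even (0) or odd (1)."""
    return x % 2

# Level 2: an agent that combines level-1 outputs with feedback from a peer at
# its own level. The "combined" outcome appears only for the right combination
# of inputs -- it is not a property of any one agent taken in isolation.
def level2_peer(parity_signal):
    """A second level-2 agent whose output serves as feedback to its peer."""
    return parity_signal == 0

def level2_combiner(threshold_signal, parity_signal, peer_feedback):
    if threshold_signal == 1 and parity_signal == 0 and peer_feedback:
        return "combined"          # the candidate emergent outcome
    return "pass-through"

# Level 3: consumes the level-2 outcome as meaningfully distinct input.
def level3_top(level2_outcome):
    return f"acting on: {level2_outcome}"

if __name__ == "__main__":
    raw_input = 12
    a = level1_threshold(raw_input)
    b = level1_parity(raw_input)
    outcome = level2_combiner(a, b, level2_peer(b))
    print(level3_top(outcome))     # -> acting on: combined
```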

Think of emergent properties here as capabilities that add to the universe of possible functional output states, and the conditions that would lead to them, beyond what might be predictably expected when reviewing the overall functional range of this system by looking at its element agents and their expectable outcomes as if they were independently functioning elements of a simple aggregation.

To express that at least semi-mathematically, consider an agent A as being functionally defined in terms of what it does and can do: its possible output states, coupled with the set of input states that it can act upon in achieving those outputs. As such, it can be represented as a mapping function:

• F_A: {the set of all possible input values that A can recognize} → {the set of all possible output values that A can produce}

where F_A maps each specific input value that this agent can identify and respond to, as a one-to-one isomorphism in the simplest case, to a specific counterpart output value (outcome) that it can produce. This formulation addresses the functioning of agents that can and would consistently carry out one precisely characterizable response to any given, separately identifiable input that they can respond to. And when the complete ensemble of inputs and responses, as found across the complete set of elemental simple function agents in a hierarchical array system of this type, is considered at this simplest organizational level, where no separate agents in the system duplicate the actions of any others (as backups for example), their collective signal recognition and reaction system: their collective function space (or universe, if considered in set theory terms) of inputs recognized and input-specific actions taken, would also fit a one-to-one isomorphism, input-to-output mapping pattern.
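As a minimal sketch of that formulation, and assuming hypothetical agents and input/output labels throughout, each F_A can be represented as a simple lookup table, and the collective function space of a set of non-overlapping elemental agents checked for the one-to-one pattern just described:

```python
# A minimal sketch of the F_A mapping formulation, with hypothetical agents and
# hypothetical input/output labels. Each agent's mapping is a plain dict from
# the inputs it recognizes to the outputs it produces.

agent_mappings = {
    "A": {"in1": "out1", "in2": "out2"},
    "B": {"in3": "out3"},
    "C": {"in4": "out4", "in5": "out5"},
}

# Build the collective function space of the elemental agents, treated as a
# simple aggregation; in this simplest case no agent duplicates another's inputs.
collective = {}
for name, mapping in agent_mappings.items():
    for input_value, output_value in mapping.items():
        assert input_value not in collective, f"duplicate input handled by {name}"
        collective[input_value] = output_value

def is_one_to_one(mapping):
    """True if no two distinct inputs map to the same output."""
    return len(set(mapping.values())) == len(mapping)

print(is_one_to_one(collective))   # -> True for this simplest-case aggregation
```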

That simplest case assumes a complete absence of emergent process and response activity. And one measure of the level of emergent process activity in such a system would be found in the degree of deviation from that basic one-to-one, input signal to output response pattern, as observed when studying this system as a whole.

The more extra activity is observed that cannot be accounted for by this type of systems analysis, the more emergent property activity must be taking place, and by inference, the more emergent properties there must be there – or the more centrally important those that are there must be to the system.
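A rough sketch of that deviation measure, again with entirely hypothetical inputs and outputs: compare what the aggregation of elemental mappings predicts with what the assembled system is observed to do, and count the input-to-output pairs left unexplained.

```python
# A sketch of the deviation measure suggested above; all data is hypothetical.

# What the simple aggregation of elemental agent mappings predicts.
predicted = {"in1": "out1", "in2": "out2", "in3": "out3"}

# What the assembled system is observed to do, viewed as a black box.
observed = {
    "in1": "out1",
    "in2": "out2",
    "in3": "out9",              # a response the elemental mappings do not predict
    ("in1", "in3"): "outX",     # a response to a combination of inputs
}

# Everything the elemental-agent view fails to account for.
unexplained = {i: o for i, o in observed.items() if predicted.get(i) != o}

# The larger this set, the more emergent-property activity the system shows,
# or the more central to its behavior the emergent properties present must be.
print(unexplained)   # -> {'in3': 'out9', ('in1', 'in3'): 'outX'}
```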

Note that I also exclude outcome convergence in this simplest case model, at least for the single simple agents included in these larger systems: cases where several or even many inputs processed by at least one of the simple agents in such a system would lead to the same output from it. That, however, is largely a matter of how specific inputs are specified. Consider, for example, an agent that is set up algorithmically to group any input signal X that falls within some value range (e.g. greater than or equal to 5) as functionally identical for processing and output considerations. If a value for X of 5, or 6.5, or 10 is registered, all else remaining the same as far as pertinent input is concerned, the same output would be expected – and for purposes of this discussion, such input value variation in X would be evaluated as moot and the above type of input-to-output isomorphism would still be deemed valid.
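To put that input-grouping point in sketch form: the cutoff of 5 comes from the example just given, and everything else below is hypothetical. The agent’s mapping remains one-to-one over the equivalence classes it actually distinguishes, even though many raw values of X land in the same class.

```python
# A sketch of the input-grouping point above. The cutoff of 5 comes from the
# example in the text; the class names and outputs are hypothetical.

def bin_input(x, cutoff=5):
    """Collapse raw values of X into the equivalence classes the agent acts on."""
    return "high" if x >= cutoff else "low"

def agent(x):
    # One-to-one over the classes ("low", "high"), even though many raw values
    # of X are treated as functionally identical.
    outputs = {"low": "out_low", "high": "out_high"}
    return outputs[bin_input(x)]

print(agent(5), agent(6.5), agent(10))   # -> out_high out_high out_high
```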

I am going to continue this narrative in a next series installment, where I will conclude my discussion of emergent properties per se, at least for purposes of this phase of this series. And then I will move on to consider the second major to-address bullet point as offered above:

• The concept of awareness as a process of information processing (e.g. with simulation and conceptual model building that would be carried out prior to, or in place of, any actualized physical testing or empirical validation of the approaches and solutions considered.)

And I will also, of course, discuss the issues that I listed above, immediately after first offering this bullet point:

• The issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might be in effect hardwired to address, as for example in a Turing test context. What would more properly be developed as hardware, as software, or, I add here, as firmware? I will also discuss error correction as a dual problem, with half of it at least conceptually arising and carried out within the simpler agents that operate within an overall intelligent, hierarchically structured system, and half of it carried out at a higher level within such a system, as a function of properties and capabilities that are emergent to that larger system.

And my overall goal in all of this, is to use this developing narrative as a foundation point for addressing how such systems will enter into and fundamentally reshape our computer and network based information management and communications systems.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.


Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 2

This is my second posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Part 1.)

My primary goal in Part 1 was to at least briefly lay out the organizing details of a technology development dynamic that is both ubiquitously present, certainly in rapidly changing technologies, and all but ubiquitously overlooked, even by technologists.

• One half of this dynamic represents what can be seen as the basic underlying vision of futurologists, and certainly when they essentially axiomatically assume a clear and even inevitable path of technological advancement. I add that this same basic assumption can be found in the thoughts and writings of many of the more cautionary and even pessimistic of that breed too. The basic point that both of those schools of thought tend to build from is that whether the emergence and establishment of ongoing New is for the good or the bad or for some combination in between, the only real limits to what can be achieved through that progression are going to be set by the ultimate physical limits imposed upon us by nature, as steered towards by some combination of chance and more intentional planning. And chance and planning in that primarily just shape the details of what is achieved, and do not in and of themselves impose limits on the technological progression itself.
• The second half of this dynamic, as briefly outlined in Part 1, can be represented at least in part by the phenomenon of technology development lock-in, where the cumulative impact of mostly small, early stage development decisions in how new technologies are first implemented can become locked in and standardized as default norms.

I cited a simple but irksome example of this second point and its issues in Part 1, one that happens to be a bane of existence for professional musicians such as Jaron Lanier, who chafe at how its limitations challenge any real attempt to digitally record and represent live music with all of its nuances. I refer here to the Musical Instrument Digital Interface (MIDI) coding protocol for digitally representing musical notes, which was first developed and introduced as an easy-for-then way to deal with and resolve what was at the time a more peripheral and minor-seeming challenge: the detail-level digital encoding of single notes of music as a data type, where the technologists working on this problem were more concerned with developing the overall software packages that MIDI would be used in. The problem was that this encoding scheme did not allow for all that much flexibility or nuance on the part of the performer in how they shaped those musical notes, leaving the resulting music recordings more crudely, stereotypically formed than the original had been, and lacking in the true character of the music to be recorded.
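To make that constraint concrete: in the original MIDI 1.0 protocol a played note boils down to a status byte and a pair of 7-bit data values. The minimal Python sketch below is illustrative only, not a full MIDI implementation, but it shows how little room the encoding leaves for performance nuance.

```python
# A minimal, illustrative sketch of a MIDI 1.0 Note On message: a status byte
# (message type plus channel) followed by two 7-bit data bytes. Not a full
# MIDI library; just enough to show the encoding's granularity.

def note_on(channel, note, velocity):
    """Pack a Note On message: 0x90 | channel, then note and velocity (0-127)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("channel is a 4-bit value; note and velocity are 7-bit values")
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60), however subtly the performer actually shaped it, is
# reduced to one of only 128 velocity steps -- and that is the lock-in.
print(note_on(0, 60, 100).hex())   # -> 903c64
```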

One of the hallmarks of technological lock-ins is that they arise when no one is looking, and usually as quick, or at least easier, solutions to what at the time seem to be peripheral problems. But with time they become so entrenched, and in so many ways, as the once-early technology they were first devised for grows, that they become de facto standards, and in ways that make them very hard to move beyond or even just significantly change. And such entrenched “solutions,” to cite a second defining detail of this technology development constraint, are never very scalable. The chafing constraints that they create make them lock-ins because of this, and certainly as the technologies that they are embedded in, in effect, outgrow them. The way that they become entrenched in the more developed technologies that form around them leaves them rigidly inflexible at their cores.

Human technology is new in the universe, so I decided while writing Part 1 to turn to biological evolution, and at least one example drawn from it, to illustrate how lock-in can develop and be sustained over long periods of time: here on the order of over one billion years, and in a manner that has impacted essentially all multi-cellular organisms that have arisen and lived on this planet, as well as all single cell organisms that follow the eukaryotic cell pattern found in all multi-cellular organisms. My point in raising and at least briefly discussing this example is to illustrate the universality of the basic principle that I discuss here.

The example that I cited at the end of Part 1 that I turn to here, is a basic building block piece of the standard core metabolic pathway system that is found in life on Earth: the pentose shunt, or pentose phosphate pathway as it is also called.

• What is the pentose shunt? It is a short metabolic pathway that plays a role in producing pentoses, or 5-carbon sugars. Pentose sugars are in turn used in the synthesis of nucleotides: basic building blocks of the DNA and RNA that carry and convey our genetic information. So this pathway, while short and simple in organizational structure, is centrally important. And mutations that disable this pathway’s core function are lethal, and essentially immediately so.
• Think of this as one of biochemistry’s MIDIs. If a change were made in the MIDI protocol that prevented a complete set of digitized notes from being expressed, any software incorporating that mutation would fail to work, and would constitute a source of fatal errors as far as any users are concerned. Any change: any mutation in the pentose shunt that limited its ability to produce the necessary range of metabolic products that it is tasked with producing, would be fatal too.

Does this description of the pentose shunt suggest that it is the best of all possible tools for producing those necessary building block ingredients of life? No, it does not, any more than the current centrality of need for MIDI and its particular standard in music software, as that software has developed, indicates that MIDI must be the best of all possible solutions for the technology challenge that it addresses. All you can say, and in both cases, is that life for one, and music software for the other, have evolved and adapted around these early, early designs and their capabilities and limitations, as they have become locked in and standardized for use.

Turning back to biology as a source of inspiration in this, and to the anatomy of the human body with its design trade-offs and compromises, and with its MIDI-like design details, I cite a book that many would find of interest: and particularly if they have conditions such as arthritis or allergies, or know anyone who does, or if they are simply curious as to why we are built the way we are:

• Lents, N.H. (2018) Human Errors: a panorama of our glitches, from pointless bones to broken genes. Houghton Mifflin Harcourt Publishing Co.

This book discusses a relatively lengthy though still far from complete listing of what can be considered poor design-based ailments and weaknesses: poor design features that arose early, and that have become locked in for all of us. So it discusses how poor our wrist and knee designs are from a biomechanical perspective, how humans catch colds some 200 times more often than our dogs do, given how our upper respiratory tract is designed and given the immune system limitations that we have built into us, and more. Why, for example, do people still have a vermiform appendix as an evolutionary hold-over, when the risk and consequences of acute appendicitis so outweigh any purported benefits from our still having one? Remember that surgery is a very, very recent innovation, so until very recently acute appendicitis and a resulting ruptured appendix was all but certain to lead to fatal consequences. And this book, for all of its narrative detail, just touches upon a few primarily anthropocentric examples of a much larger list that could be raised there, all of which serve as biological evolutionary examples of the design lock-in discussed here.

Looking at this same basic phenomenon more broadly, why do cetaceans (whales, dolphins, etc.), for example, all develop olfactory lobes in their brains during embryonic development, just to reabsorb them before birth? None of these animals has, or needs, a sense of smell from birth on, but they all evolved from land animal ancestors who had that sense and needed it. See for example: Early Development of the Olfactory and Terminalis Systems in Baleen Whales for a reference to that point of observation.

I am going to continue this narrative in a next series installment, where I will introduce and briefly discuss a point of biological evolutionary understanding that I would argue is crucially important in understanding the development of technology in general, and of more specific capabilities such as artificial intelligence in particular: the concepts of fitness landscapes as a way to visualize systems of natural selection, and of adaptive peaks as they arise in these landscapes. In anticipation of that line of discussion to come, I add that I began to at least briefly make note of the relationships between steady evolutionary change and disruptively new-direction change, and the occurrence and stability of lock-ins, in Part 1. I will return to that set of issues and discuss it more fully in light of adaptive peaks and the contexts that they arise in. Then, after developing this foundational narrative for purposes of this series, I will turn very specifically to consider artificial intelligence and its development – which I admit here to be a goal that I have been building towards in this progression of postings.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Innovation, disruptive innovation and market volatility 42: innovative business development and the tools that drive it 12

Posted in business and convergent technologies, macroeconomics by Timothy Platt on June 28, 2018

This is my 42nd posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business, posting 173 and loosely following for Parts 1-5 and Macroeconomics and Business 2, posting 203 and loosely following for Parts 6-41.)

I began a more detailed discussion of the bookkeeping and accounting, operational-details side of innovation in a business in Part 40 and Part 41. And I focused there on the more budget organizing and management level of analysis and decision making, as they determine how a larger, longer-term research project or program would be funded out of a business’ overall operating budget. Then I added outside-sourced pressures to that narrative with the inclusion of marketplace and business competitor-sourced factors and complications, and how they shape product and service prioritization decisions as possible marketable offerings are prioritized for funding and for more general support for their possible development, production and sale.

My goal for this next installment in that progression is to tie the above-noted lines of discussion, as offered up to here, back to the accounting and bookkeeping systems and the decision making processes that are in place in this would-be innovative business, bringing together all of the perspectives on this complex of issues that I have been addressing here. That of necessity means my breaking open the essentially black box, monolithic representation of research and development per se that I have been citing here, to consider it as a dynamic and mutable process and system of them too, and with all of the strategic and operational trade-offs and compromises that that can entail: cash flow and money management ones included. My goal here is to prepare for that discussion by offering an organizing framework for it.

There are a number of starting points from which I could develop this line of discussion, but my goal here is to develop this narrative in as hands-on and practical a manner as possible. So I will begin with the basic choice options that arise and that would have to be managed, both for their possible planning and execution and for their funding, if pursued. This means thinking through and understanding what research and development, and what more immediately market-facing specific product or service design and development options, might be pursued. And it means thinking these possibilities through for their costs and benefits in light of a detailed understanding of what else at that business is clamoring for such support. Basically, think of this as addressing the dual questions of what resources and funding levels these efforts would require, and what of this can be made available and with what impact on the business as a whole as those decisions are made and carried out.

• Peeling back the layers of the onion there, what has to be done on an ongoing and/or more remediative or business development basis that any research and development effort would have to compete with for available funds and support?
• What is the priority level for supporting at least the single most promising and potentially beneficial product or service development initiative under consideration here? And as a coordinated bookend question to match that with, what other, non-product and service development needs have as high or higher a priority level on a due diligence and risk management basis as that? Here, think special case and emerging for those business needs, and think of them in terms of the funding capacity that this business has after accounting for more standard necessary ongoing business functions.
• The goal of these questions is to lay out the basic business needs and expenditures context that any new and next product or service development would have to compete with,
• And then see where those potential funding requirements would fit into that overall picture, as far as relative needs and benefits are concerned. Then it becomes possible to filter out the lower priority, but perhaps nice to pursue, business needs and opportunities from the more pressing ones actually faced, and certainly when focusing on the most pressing new and next development goals for this business for what they could bring to market going forward.

The above, of course, requires that realistic possible product and service development options be reviewed and considered, so as to effectively and realistically identify and focus on the most pressingly important of them in that prioritization exercise. And as more explicitly noted in the above four bullet points, the same would be done for business processes already in place, with a focus on anything that would not simply fit into that business’ routine ongoing operations without offering specific defining value. What is routine and standard, and pursued in a way that creates positive net value for the organization that carries it out? What is simply done because it has always been done, even when continuing it, at least in-house, might cost more than whatever positive value it still creates? And what non-routine, internally facing possible business development expenses might be in place or under consideration? And what is the potential overall funding pool level left over:

• Net of basic essential ongoing business processes and particularly for processes and systems of them that should be maintained,
• And when also setting aside any reserve funds for risk management purposes that might need to be drawn from the incoming revenue stream,
• And of course net of the revenue-sourced funds set aside as business profits, which would leave the business and pass into the hands of shareholders or other owners?

This is all about putting research and development, from more open-ended endeavors on through more specific, one-off product-focused ones, into an explicit context: both for the overall levels of funding that might be available for this type of expense, and for the overall priority that this type of expenditure would hold for the business as a whole, as the rough numerical sketch below illustrates.
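As a purely hypothetical back-of-the-envelope illustration of that funding pool logic, with every figure below invented solely for the sake of the example:

```python
# A purely hypothetical back-of-the-envelope sketch of the funding-pool logic
# outlined above. All figures are invented for illustration only.

projected_revenue = 10_000_000
essential_operations = 7_200_000   # ongoing business processes that must be maintained
risk_reserve = 600_000             # set-asides for due diligence and risk management
owner_distributions = 900_000      # profits leaving the business to shareholders/owners

# What remains is the ceiling for research and development of any kind, before
# competing one-off business development needs are even considered.
innovation_funding_pool = (
    projected_revenue
    - essential_operations
    - risk_reserve
    - owner_distributions
)
print(f"available for innovation: {innovation_funding_pool:,}")   # -> 1,300,000
```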

Up to here I have primarily focused on a context where the business under consideration would have one clearly identifiable highest priority research and development project, that might potentially be taking place and that would have to be budgeted for and supported, at any one time. Any references to greater complexity of need in this up to this point have been detail-free and of minimal analytical value.

Realistically, any innovative business is going to have to develop and maintain an innovation development pipeline, with a stream of New in it, and with various innovation possibilities at different stages of development pursued in it. So the above discussion as presented in this posting is more of an innovate-or-not one, than it is a representation of an analysis as to how to prioritize and fund innovation per se. As such the above type of analysis, which I have been offering here can be thought of as a simplified presentation of a more complex possibility where this business would have to take several or even many innovation development options and opportunities into account, and through the same types of prioritization comparisons as briefly outlined above. Then, and with an ongoing innovation development pipeline assumed for the moment, and with explicit consideration of the costs (and risks) versus benefits analyses assumed, and the prioritization decisions that would come from them presumed to be in place,

• What would have to be taken into account, and in detail, as far as expenditures already made for efforts already underway are concerned, and as new and next expenditure requirements are considered, if a decision to proceed with a given innovation effort is to be continued?

This posting’s narrative, as noted towards its start, has been offered as foundational background to a line of discussion and analysis that I initially offered to develop at the end of Part 41 and that I repeated here, above: “breaking open the essentially black box, monolithic representation of research and development per se that I have been citing here, to consider it as a dynamic and mutable process and system of them too, and with all of the strategic and operational trade-offs and compromises that that can entail: cash flow and money management ones included.”

The devil as they say is in the details, and any realistic analytically based decision making process of the type that I have been outlining here, has to explicitly consider those details, if the right decisions are to be made. I will explicitly turn to that next phase of this overall discussion in my next installment of this series, where I will at least initially focus on the single highest priority innovation development project under consideration and just that, as I did here – but with consideration of what goes into making that innovative effort work. Then after addressing that simple organizational model example, as I did here, I will at least begin to add in the nuances of simultaneously (and more realistically) seeking to develop and maintain an ongoing innovative presence in your industry and when addressing the ongoing needs of your market and your customer base. I will expand this discussion to explicitly consider true innovation pipelines.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 2: adding in a second case study example

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on June 19, 2018

This is my second installment to a series on infrastructure, as work on it, actual and potential, is variously prioritized and carried through upon, or set aside for future consideration (see Part 1.)

I began this series in Part 1 with a negative example of how this type of need and response system can play out, as drawn from recent events on the island of Puerto Rico. That example centered on how that island was devastated by Hurricane Maria in 2017, and how this disaster and its impact on the island’s critical infrastructure was addressed. More specifically, I wrote of the failure of the United States government to bring its Federal Emergency Management Agency (FEMA) and its emergency response capabilities to bear on this problem at anything like a significant level of action. And I wrote of the US governmental decision, as made by President Trump, to in effect abandon Puerto Rico and its people in the face of this disaster. Puerto Ricans are American citizens, and they have been facing a fundamental need for what amounts to comprehensive critical infrastructure rebuilding in the face of what they have gone through from this storm: the worst historically to have ever hit their island, and with records going back centuries for that, to the time of the early Spanish explorers. And Puerto Rico and its people have been officially and formally left to their fate by this failure to follow through, in the face of both longstanding American tradition and FEMA’s basic charter as a government agency.

I stress here that that charter mandates that this agency spearhead national government-led responses to disasters, and that it has in fact stepped forth to do so on numerous other occasions and for other American communities. But this time, President Trump visited the site of devastation to proclaim that any help from his government would be limited and of very short duration, and that nothing else was to be expected. That “nothing else” has come to include a lack of any federal coordination or support in any longer term recovery effort too. And what effort has been mounted there has been plagued by corruption in how recovery and reconstruction contracts have been doled out, by inefficiency, and by what can only be called large-scale theft of funds that were to go towards this effort, where funds have been allocated for it at all.

• This disaster still continues as I write this, with many on the island still without electrical power to their homes or businesses, over half a year after the hurricane first hit.

This is a series about infrastructure and its priorities, and politics. I began it with an overtly toxic example of how need and even pressing need, and political ideology and personal political ambition, do not always align, in either a functional or a moral or ethical sense. I picked a very real example to start this series with, one that is still painfully playing out as I write this second installment. And it is one that I am sorry to say will likely still be playing out: dragging on, and for as long as I write to this series and beyond. And this still-in-the-news story is one that highlights how need and justification for action and commitment, and an idealized presumption of how infrastructure development and maintenance should be carried out, do not necessarily hold true in the real, politically charged and politically governed world that we live in.

I finished my discussion of that case study as far as I went with it in Part 1, by offering a somewhat cryptic comment as to one of the consequences of all of this, that I said I would explain and clarify here:

• “This failure to lead or to act (n.b. in addressing Puerto Rico’s problems) has had significant repercussions in the continental United States too. In anticipation of that, I note here that ongoing and unresolved damage to Puerto Rico and its infrastructure and its businesses has had repercussions that reach into virtually every hospital in the United States.”

That assertion calls for clarification. And I begin offering that clarification with some ideologically grounded background points that can be found in President Trump’s tweets and in his lengthier public statements and actions. Trump likes to proclaim that Mexicans are “thieves and rapists” (though “some of them might be nice people”), to cite a parallel example of his disdain for Hispanics of all sorts. And he likes to say, in justification of his decisions and actions in this disaster’s context, that Puerto Ricans are all indolent and lazy and that they just live off of handouts and welfare from the US government. But Puerto Rico has come to play a significant, and even crucial role in the overall US economy too, and in some very specific areas of its production and manufacturing systems. Pertinently to the above repeated consequences bullet point, and as an example of the island’s critical role in American manufacturing, a very large share of the intravenous hydration fluids used at hospitals and clinics in the continental United States were produced by businesses located in Puerto Rico. These are vitally important healthcare resources for treating a very wide range of hospital and clinic patients, for a very wide range of conditions and in meeting a great many types of patient needs. And those businesses were heavily damaged by Hurricane Maria, and were left without electrical power after that. They still have not recovered, and a real, full recovery for them might take years if it is to happen at all. Meanwhile, US hospitals have found themselves rationing the IV fluids that they can acquire from alternative sources, and prioritizing what necessary medical care they can afford to offer that calls for this type of resource, with the more limited supplies they still have. This affects people who have to be able to receive medications that must be delivered intravenously with supporting IV hydration, and hospitalized patients who cannot take fluids orally, among others, and it impacts the healthcare of many in need.

• When critical infrastructure systems are maintained and even improved to accommodate advancing need, this positively affects individuals and communities and even entire nations. And the ripple effects that spread out from that type of development effort through indirect benefits accrued, can be just as profound as the direct impact of this work being done where it is.
• And a failure to so act, has consequences that are at least as impactful and that can be just as far reaching – just in a different direction.

I said in Part 1 that I would turn to consider a second case study example, after concluding my at least initial take here on what has been happening and not happening in Puerto Rico. And I begin that with the New York City Metropolitan Transportation Authority (MTA) and its convoluted City versus State politics – and the impact that the tug of war resulting from that ownership conflict has had on this crucial transportation infrastructure system and on its reliability and safety.

I began addressing this complex of issues at the end of Part 1 in my brief anticipatory note as to what I would discuss here. But I begin fleshing out that brief opening note here with some relevant background material that I offer in order to more clearly indicate what is involved in this example. The most recent ridership numbers that I have access to from the MTA itself indicate that as of the end of 2016:

• An average of 5,655,755 rides were taken on this subway system every weekday,
• An average of 5,758,201 rides were taken on it every two-day weekend, and
• A total of 1,756,814,800 rides were taken on this subway system for that year as a whole.

This makes the New York City subway the seventh most heavily used subway system in the world, by ridership numbers (see this MTA facts sheet.) And to share another scale metric, this subway system as of this writing now includes more than 665 mainline miles of track that run along 22 interconnected route lines, along with several permanent shuttle lines added in to more effectively interconnect this system, each with their miles of track too. This system currently includes 472 stations in operation (or a total of 424 if stations connected by transfer walk-throughs are counted as single stations), located throughout the boroughs of Manhattan, Brooklyn, Queens, and the Bronx, with the MTA’s separately operated Staten Island Railway running along that island’s eastern shore as well. The NYC subway is the largest metropolitan subway system in the world, and significantly so, when scale is determined on the basis of the number of subway stations included. And New York City would basically grind to a halt if this system were to significantly go down for any significant period of time. It is very genuinely an example of a crucially important, vital critical infrastructure system.

I stress here that it is the New York City subway system that I write of here, managed and run by a government agency that has Metropolitan in its title. And this was a City-controlled system, with its management and maintenance under City government control and oversight – until, that is, the State government moved in to take what effectively amounts to control over it through the MTA. Why and how did this happen?

I have already at least started to address that dual question at the end of Part 1, in anticipation of this installment, and for continuity of narrative I repeat what I said of this there. Then-governor Nelson Rockefeller imposed a layer of state control over the city’s subway system and its decision making authority in 1968, in order to garner more votes for his own reelection bid of that year. He saw his polling numbers to be weak in the New York City metropolitan area and decided that if he stepped in and blocked a planned, and I have to add needed, five cent fare increase in the cost of a ride on the subway, he would garner more votes from appreciative New Yorkers (see Why Does New York State Control the Subway? That’s the 20-Cent Question.) So Rockefeller stepped in with the help and support of the state legislature in Albany, which would suddenly gain veto-level control over the New York City subway and its budget and its planning, and comprehensively so: not just control over the fares charged in that system.

• And yes, Rockefeller did win his reelection bid, so this political gambit did at least seem to work for him. But this change of controlling authority was of open-ended duration, and it still holds as a defining fact for the MTA and for New York City as a whole, for its ongoing consequences: 50 years later and with no end to that in sight.

I am going to continue this narrative in a next series installment, where I will at least briefly and selectively discuss Albany politics, and New York City subway system priorities as set by Upstate politicians who never themselves ride this subway system, and how their priorities and their resulting decisions are skewed, to put that politely, when compared to actual need. So, for example, even when a subway station is being rebuilt as a major redevelopment initiative that is approved by Albany, it is rare that any effort is made to comply with the US Federal government’s Americans with Disabilities Act (ADA) in that work. As a result, most of the subway stations in this system are still not handicapped accessible. And that federal law was passed in 1990, fast approaching 30 years ago now!

I will discuss systems maintenance and how the MTA’s track and signaling systems are in disrepair, and how as a matter of irony the fare for a ride on the subway system has kept going up and up in spite of these and other failures to actually effectively prioritize or maintain this system. And I will at least briefly make note of New York State’s current governor and his political ambitions, bookending the starting point to this example as that began with then Governor Rockefeller and his political ambitions.

After concluding that case study example, I will turn from the negative and cautionary-note view of infrastructure development and maintenance, to consider the positive side of this. I will discuss the post-World War II European Recovery Program: commonly known as the Marshall Plan. And then I will step back to address the topics and issues of this series in more general terms. As part of that, I will explore and discuss the questions and issues of what gets supported and worked upon and why, and what is set aside, in building and maintaining infrastructure systems. I will discuss China’s infrastructure building outreach as a part of that, as that nation seeks to extend and strengthen its position globally. And I will at least touch upon and make note of a variety of other infrastructure development and maintenance examples too.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there. I begin this series with an American example, but it addresses globally impactful issues and events.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 10

Posted in business and convergent technologies, social networking and business by Timothy Platt on June 4, 2018

This is my 10th installment to a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 354 and loosely following for Parts 1-9.)

I sought to at least briefly lay out a fundamental challenge that we all face, globally, in Part 9. And I continue developing that narrative here by briefly repeating a four point starting assumptions list that I began that posting with, and a core point of conclusion that I arrived at from it, and from recent in the news events as well. I begin with the four basic points, somewhat edited for use in this next discussion step context:

• The underlying assumptions that a potential cyber-weapon developer (and user) holds, shape their motivating rationale for developing (and perhaps actively deploying and using) these capabilities. (Yes, I phrase that in terms of developer and user representing the same active agent, as a developer who knowingly turns a cyber-weapon over to another, or others, is in effect using that capability through them, with those “outside” users serving as their agents in fact.)
• The motivating rationales that are developed and promulgated out of that, both determine and prioritize how and where any new such weapons capabilities would be test used, and both in-house if you will, and in outwardly facing but operationally limited live fire tests.
• And any such outwardly facing and outwardly directed tests that do take place, can be used to map out and analyze both adversarial capability for the (here nation state) players who hold these resources, and map out the types of scenarios that they would be most likely to use them in if they were to more widely deploy them in a more open-ended and large scale conflict.
• And crucially importantly here, given the nature of cyber-weapons it is possible to launch a cyber-attack and even with a great deal of impact on those under attack, in ways that can largely mask the source of this action – or at least raise questions of plausible deniability for them and even for extended periods of time. That, at least is a presumption that many holders of these weapons have come to assume, given the history of their use.

And with that restated, I offer my point of conclusion and concern as it arises out of the above:

• Think of this as a matter of cyber-weapon capability, by its very nature, setting up what can amount to the opposite of the long-presumed threat-reducing result of nuclear deterrence. The more damaging the potential, and even certain, outcome of anyone launching nuclear weapons against an enemy, the more likely it becomes that all would be annihilated by them. This is the by-now widely and all but axiomatically assumed Mutually Assured Destruction or MAD hypothesis: a hypothesis that few if any are willing to even seriously consider testing experimentally. And the more advanced and capable the nuclear weapons that are developed, the greater the perceived and shared fear that they generate for all, and the greater the impetus that this creates to prevent their use. Here, in contrast, the more advanced and sophisticated cyber-weapons become, the greater the risk that they will be used, certainly in “limited and controllable” live fire tests, which become increasingly likely to get out of control, and with all of the escalation of conflict that that could lead to.

I stress the last sentence of that bullet pointed statement here, explicitly noting that while nuclear weapons and their development and even-just limited proliferation, led directly to a recognition and acceptance of the MAD doctrine: the MAD hypothesis, cyber-weapons and their much wider proliferation have led to what amounts to an anti-MAD presumption: a presumption of anonymity-based safety for any who would deploy and use such weapons. And that leads me to the to-address point that I added at the end of Part 9 as my intended area of discussion here:

• The issues of how better to respond to all of this, and reactively where that is necessary and proactively where that can be possible, have become an absolute imperative and for the safety of all. And my goal in addressing this is ambitious as my intent here is to at least touch upon all involved levels of conflict and its potential, and from that of the individual to that of the nation state and of national alliances. And in the course of discussing issues that arise from all of that, I will of necessity reconsider a point of issue that has informed most all that I have written in this blog regarding cyber-security and the challenges that it faces: the impact of change and of disruptive change in all of this, where any solutions and approaches arrived at, of necessity have to be dynamically updatable and as part of their basic definitions.

I begin addressing that Gordian knot of a challenge by at least raising a perhaps simple-sounding, wave-of-the-hand solution to all of this, which I will address in the course of what follows: a need to develop a convincing cyber-weapons counterpart to the old nuclear weapons context MAD doctrine. And a key to that would of necessity require making the core of my above repeated fourth assumptions bullet point obsolete:

• Effectively ending any possible realistic presumption of anonymity as a protective cloak around any cyber attacker by making the consequences of relying upon it too costly and the chances of being found out and identified as the attacker too high.

This would require a coordinated, probably treaty-based response that would most likely have to be organized with United Nations support if not direct United Nations organizing oversight:

• Possible cyber-attack victims, at all organizational levels from nation states on down to local businesses and organizations, have to be willing to publicly acknowledge when they have been breached or compromised by malware (cyber-weapons.)
• And organizations at all levels in this from those smaller local organizations on up to national organizations and treaty groups of them have to develop and use mechanisms for coordinating the collection and analysis of this data, and both to more fully understand the scope and nature of an attack and any pattern that it might fall into, and to help identify its source.
• And a MAD approach can only work if this type of analysis and discovery would in effect automatically lead to action, with widely supported coordinated sanctions imposed on any offenders so identified and verified, and with opportunity built into this to safeguard third parties who an actual attacker might set up as appearing to be involved in an attack event when they were not. (I made note of this type of misdirection as to attack source in Part 9 and raise that very real possibility here again too.)

There is an old saying to the effect that the devil is in the details. The above “solution” approach to this challenge might sound positive and nice when simply presented in the above type of broad brush stroke manner and without regard to, or even acknowledgment of the very real world complexities that any such resolution would require. I am going to at least briefly begin to chip away at the edges of the perhaps naive if well intentioned simplicity of what I have proposed here, in my next series installment where I will begin to delve into some of the details that any valid resolution to this challenge, would have to accommodate and deal with.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 13

Posted in business and convergent technologies, strategy and planning by Timothy Platt on May 19, 2018

This is my 13th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 342 and loosely following for Parts 1-12.)

I have been delving into the issues of business process innovation in recent installments to this series, and when it would make more sense to retain these sources of value in-house as proprietary resources, and when it might make more sense to market them, either within closed supply chain business-to-business collaboration contexts, or as more openly available marketable and sellable products, or even as recurringly updated commodities (see Part 11 and Part 12 in particular for that.)

But I have taken a relatively static, timeframe-independent approach to this set of issues up to here, and one that has at most just acknowledged a simplified approach to the larger contextual change that this type of innovation would arise and play out in. More specifically, I have assumed and allowed for specific individual innovation-based change events and have considered how they would be evaluated, for whether they would best be retained in-house or shared outside of the creating business’ walls. And I have treated these innovation instances as separate, distinct events, with little real regard for the longer term contexts that they would arise in, hold value in, and with time become routine and then obsolete within. At the end of Part 12 I stated that I would continue that posting’s narrative here, by discussing:

• “Contextual pace of change issues, and innovation shelf lives as sources of consideration that would impact upon strategy and its decision making processes here. And I will also discuss all of this in the dynamic and at times less than clear-cut context of global flattening as it is taking place in this 21st century, as accompanied by the reactive (if nothing else) global wrinkling and push back that accompanies that. I will at least briefly consider how those types of factors would impact upon business process improvement and innovation, and its retention or transfer that I have been addressing here too.”

I begin addressing that complex of issues from the perspective of the preamble paragraph that immediately precedes that bullet point topics list, and with the innovation life cycle. I do so because, as I stated in recent installments to this series, timing can be everything in both setting and executing strategy and planning where innovation is concerned.

Let’s begin with the fundamentals and with the initial development of a new innovation or change. And at least initially, let’s set aside the issues of evolutionary versus revolutionary change and simply consider the basic life cycle steps that any new development faces, whether it is offered strictly in-house and on a more trade secret basis, or to an outside market. A new business process resource, or a new improvement on an existing one, arises and is locally prototype tested in one area of a business or otherwise vetted. It is then updated and refined based on this real world, end user facing beta testing process. Then it is rolled out in-house, and if appropriate a more formal process begins that would determine whether this would be retained in-house or offered in some manner to other businesses. I presume here that this is not a business that develops such resources to sell or license as its basic business model. And independently of that, a longer-term product evolution process of refinement and improvement is probably going to start too, and certainly if the initial innovative change in question is more than just a simple cosmetic one – in which case this will have already been taking place.
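
To restate those life cycle steps in a more schematic form, here is a brief Python sketch of that progression as a simple state machine. The stage names and allowed transitions are only my shorthand for the stages named in the preceding paragraph, not a claim about how any particular business would formalize them.

# Schematic restatement of the life cycle steps just described; labels are illustrative.
from enum import Enum, auto

class InnovationStage(Enum):
    PROTOTYPE = auto()          # locally tested or otherwise vetted in one area of the business
    BETA_REFINEMENT = auto()    # updated and refined based on end-user-facing beta testing
    INHOUSE_ROLLOUT = auto()    # deployed internally
    RETAIN_OR_OFFER = auto()    # formal decision: keep in-house or offer outside
    ONGOING_EVOLUTION = auto()  # continued refinement, for more-than-cosmetic change
    OBSOLETE = auto()           # mainstreamed away or supplanted

ALLOWED_NEXT = {
    InnovationStage.PROTOTYPE:         {InnovationStage.BETA_REFINEMENT},
    InnovationStage.BETA_REFINEMENT:   {InnovationStage.INHOUSE_ROLLOUT},
    InnovationStage.INHOUSE_ROLLOUT:   {InnovationStage.RETAIN_OR_OFFER},
    InnovationStage.RETAIN_OR_OFFER:   {InnovationStage.ONGOING_EVOLUTION},
    InnovationStage.ONGOING_EVOLUTION: {InnovationStage.ONGOING_EVOLUTION, InnovationStage.OBSOLETE},
    InnovationStage.OBSOLETE:          set(),
}

def advance(current: InnovationStage, nxt: InnovationStage) -> InnovationStage:
    """Move an innovation to its next stage, rejecting out-of-order jumps."""
    if nxt not in ALLOWED_NEXT[current]:
        raise ValueError(f"cannot move from {current.name} to {nxt.name}")
    return nxt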

Now let’s set aside one of the key starting assumptions of the above paragraph and start parsing out some of the additional factors that distinguish an at least relatively simple cosmetic change from a disruptively novel and even game-changing innovation, as this distinction would likely impact value longevity and innovation cycles:

A. Cosmetic changes, as a simplest case in point example, can hold as ephemeral a defining value as an impulse-buy oriented fad, and can disappear into the business productivity counterpart of a discount aisle at markdown prices, seemingly overnight, as a next cosmetic update arrives, and a next after that. This means that cosmetic changes, as an extreme case, can and do hold only short term value, and very little marketable value even then. And if this applies to consumer markets and store settings, it does so just as strongly for minor and cosmetic changes that might be brought to business productivity tool user interfaces, as a business process-supportive example, and particularly when their users see such a change as only offering cosmetic value, without making those tools easier to use or more productively effective. (Note: most software changes, office productivity software included, are minor and more cosmetic in nature than they are fundamental, and certainly if you set aside behind-the-scenes security patch updates from consideration here and only consider user-visible changes.)
B. A genuine disruptively novel innovation that leads to, for example, a new type of business productivity tool that would hold real value for those who use it, on the other hand, is going to hold both larger and longer lasting value. And that will hold true both for those who use it and for those who own it and license or sell it. Focusing on the latter of those two stakeholder categories for the moment, that holds for those who would retain this innovation in-house in the developing company for exclusive use there, and it would hold true for any outside customer/users who would come to depend on this new capability too. And this defining source of value will in all likelihood continue to emerge and unfold for its end users, and for its developer/owners, as it is evolved and improved upon: until it has become effectively mainstreamed into general use, with look-alike alternatives on the market from competing businesses taking away any initial first mover benefit that the original developer might have started out with, or until it is supplanted by a fundamentally new next-step alternative, or both.

I offer these two examples as representing what amounts to extreme end-case alternatives that would fit upon an innovation novelty and longevity spectrum, with most innovative change fitting somewhere between them.

• Businesses in general would see little if any incentive to retain a more Type A innovation in-house (to use the above-cited labeling), and certainly if it could serve as a source of at least short-term revenue generating value if marketed and sold. And return value there could mean either a bump in brand name recognition, or a probably short-term boost in cash profitability, or both.
• But those same businesses would see both possible risk and possible value from either of the scenarios of retaining in-house or offering more publicly, for anything more like a Type B innovation, to further cite the above designating labels. And the more B-like an innovation is, when considered for how it fits on the innovation novelty and longevity spectrum, the more important it becomes to carry out effective cost-benefit analyses that would help evaluate what type of usage and deployment strategy would work best for the developing business.

My primary goal for the next installment to this series will be to at least begin to more fully explore the options and possibilities of the second of these two bullet points with its Type B innovation focus. And I will at least begin to do so by posing a set of organizing due diligence questions that a business owner or executive facing this type of decision would want to be able to address:

• How much value would this innovation actually create, for its implementation and use?
• And how much value would it create, net the costs of developing and implementing it (see the simple first-pass calculation sketched after these questions)?
• And how would this innovation best be evaluated for the competitive value it would likely create, from how it would reshape and, hopefully, improve business efficiency for the enterprises that bring it into their systems and use it?
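
The following is a deliberately simplified, first-pass numeric framing, in Python, of the first two of those questions. Every name and number in it is hypothetical, and a real analysis would discount future returns, model risk and treat the Type B spectrum far more carefully; it is offered only to show the shape of the calculation, not its content.

# First-pass framing of "value created, net of costs", under two hypothetical scenarios.
def net_value(annual_benefit: float, years_of_advantage: float,
              development_cost: float, implementation_cost: float) -> float:
    """Gross value over the innovation's useful competitive life, net of its costs."""
    return annual_benefit * years_of_advantage - (development_cost + implementation_cost)

# Retain in-house: benefit accrues as internal efficiency gains only.
retain = net_value(annual_benefit=400_000, years_of_advantage=4,
                   development_cost=600_000, implementation_cost=150_000)

# Offer outward: add licensing revenue, but assume a shorter advantage window as
# look-alike competitors erode any first mover benefit.
offer = net_value(annual_benefit=400_000 + 250_000, years_of_advantage=2.5,
                  development_cost=600_000, implementation_cost=150_000 + 100_000)

print(f"retain in-house: {retain:,.0f}   offer outward: {offer:,.0f}")

Even at this crude level the comparison makes the central trade-off visible: offering an innovation outward can add revenue while also shortening the window of competitive advantage that retaining it in-house might have preserved.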

Then after addressing this set of issues, I will proceed with my to-address list of topic points as repeated for orienting purposes towards the top of this posting.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 4, and also at Page 1, Page 2 and Page 3 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 4

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on May 15, 2018

This is the 4th posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2, postings 374 and loosely following for Parts 1-3. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I began discussing what I have referred to as a possible “modest proposal” task for artificial intelligence-based systems in Part 1 and Part 2 of this series, as drawn from the pharmaceutical industry and representing what can be considered one of their holy grail goals: developing new possible drugs that would start out having a significant likelihood of therapeutic effectiveness based on their chemistry and a chemistry-level knowledge of the biological processes to be affected. And I have been using that challenge as a performance benchmark case-in-point example of how real world tasks might arise as problems that would be resolved by artificial intelligence, that go beyond the boundaries of what can be carried out using single, specialized artificial intelligence agents, at least in their current here-and-now state of development.

Then I further developed and explored that real world test-case challenge and what would go into effectively addressing it in Part 3. There I began delving, in at least preliminary-step detail, into what might be achieved by developing and deploying multiple hierarchical-level, complex problem solving arrays of what are individually only distinctly separate, specialized, single task artificial agents, where some of them would carry out single, specialized types of command and control functions: coordinating the activities of lower level agents that report to them and that more directly work on solving specific aspects of the initial problem at hand.
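
A minimal sketch can make that layered arrangement concrete: one still narrowly specialized command-and-control agent that does nothing but decompose a problem, route the pieces to the single-task agents reporting to it, and merge their results. The agent names, the pharmaceutical-flavored tasks and the naive decomposition rule below are illustrative placeholders only, not a description of any actual system.

# Illustrative layered arrangement: a coordinator delegating to single-task agents.
from typing import Callable, Dict

class SpecializedAgent:
    """A single-task agent: one narrow function, no broader awareness."""
    def __init__(self, name: str, task_fn: Callable[[dict], dict]):
        self.name = name
        self.task_fn = task_fn

    def run(self, subproblem: dict) -> dict:
        return self.task_fn(subproblem)

class CoordinatorAgent:
    """A 'higher level' but still single-function agent: it only decomposes a
    problem, routes the pieces to the agents that report to it, and merges results."""
    def __init__(self, workers: Dict[str, SpecializedAgent]):
        self.workers = workers

    def solve(self, problem: dict) -> dict:
        results = {}
        for aspect, subproblem in problem.items():       # naive decomposition by key
            agent = self.workers.get(aspect)
            if agent is not None:
                results[aspect] = agent.run(subproblem)  # delegate to the specialist
        return results                                   # trivial merge step

# Hypothetical, pharmaceutical-flavored example echoing the "modest proposal" task:
screen = SpecializedAgent("binding-screen", lambda p: {"candidates": p["library"][:3]})
toxicity = SpecializedAgent("toxicity-filter", lambda p: {"flagged": []})
coordinator = CoordinatorAgent({"binding-screen": screen, "toxicity-filter": toxicity})
print(coordinator.solve({"binding-screen": {"library": ["m1", "m2", "m3", "m4"]},
                         "toxicity-filter": {"candidates": ["m1", "m2"]}}))

Nothing in this arrangement is generally intelligent in itself, which is precisely the question taken up next.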

Up to here, I have simply sidestepped the issue of whether such a system of individually simple, single algorithm agents, with its collective feedback and self-directing control systems, could in some sense automatically qualify as having achieved some form of general intelligence: a level and type of reasoning capacity that would at least categorically match what is assumed by the term “human intelligence,” even if differing from that in detail. I categorically state here that that type of presumptive conclusion need not in any way be valid or justified. And I cite, by way of well known example for justifying that claim, the feedback and automated control functionalities that have been built into steam powered systems going back to the 19th century, for regulating furnace temperatures, system pressures and the like, even in tremendously complex installations. No one would deny that well designed systems of that sort do in fact manage and control their basic operational parameters, keeping them functioning both effectively and safely. But it would be difficult to convincingly argue that a steam power plant and associated equipment in an old steam powered ship, for example, were in some sense intelligent, and generally so. There, specialized and limited, even in aggregated form, is still specialized and limited, even if it is managing larger and more complex problems than any of its single component-level parts could address.
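
To make that steam-era comparison concrete, the kind of self-regulation involved can be captured in a few lines of proportional feedback control, of the sort a 19th century plant implemented mechanically with governors and relief valves. The constants and the toy plant model below are arbitrary; the point is how little is going on here: one parameter held near one setpoint, and stacking many such regulators together still yields something specialized and limited.

# A schematic proportional feedback regulator; constants and plant response are arbitrary.
def regulate_pressure(setpoint: float, reading: float, valve_position: float,
                      gain: float = 0.05) -> float:
    """Nudge the valve in proportion to the error between setpoint and reading."""
    error = setpoint - reading
    new_position = valve_position + gain * error
    return min(max(new_position, 0.0), 1.0)   # valve can only be 0..100% open

# Toy simulation: pressure drifts toward a level set by how far the valve is open.
pressure, valve = 80.0, 0.5
for _ in range(50):
    valve = regulate_pressure(setpoint=100.0, reading=pressure, valve_position=valve)
    pressure += 0.2 * (150.0 * valve - pressure)   # crude stand-in for the plant's response
print(round(pressure, 1), round(valve, 2))         # settles near the setpoint, and does nothing else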

With that noted, I add here that I concluded Part 3 of this series by stating that I have been attempting to offer a set of building block elements that would in all likelihood have to go into creating what would become an overall artificial intelligence system in general, and an arguably genuinely intelligent information management and communications network and system of such networks as a whole, in particular. Think of my discussion in this series up to here as focusing on “necessary but not sufficient” issues and systems resources.

I am going to turn to at least briefly discuss the questions and issues of infrastructure architecture in this, and how that would arise and how it would be managed and controlled, in what follows. (Think self-learning and self-evolving systems there, where a probably complex, structured hierarchical system of agents would over time optimize itself and effectively rewire itself in that process.)

And my goal there will be to at least offer some thoughts as to what might go into the “sufficient” side of intelligent and generally intelligent systems. And as part of that, I will more fully consider at least some basic-outline requirements and parameters for functionally defining an artificial general intelligence system per se, and in what would qualify as more operational terms (and not just in more vaguely stated conceptual terms as are more usually considered for this.)

Let’s start addressing all of that with those “higher” level but still simple, single-algorithm agents that function higher up in a networked hierarchy of them, and that act upon and manage the activities of “lower” level agents: how they in fact connect with them and manage them, and the question of what that means. In that, let’s at least start with the type of multi-agent system that I have been discussing in the context of my above noted modest proposal example, and build from there. And I begin by raising what might be considered a swear word in this type of narrative, for how expansively and misleadingly it can be used: “emergent properties.” And let me begin addressing that by in effect reframing Turing’s hypothesis in at least somewhat operationalized terms:

• A system of simple, pre-intelligent components does not become intelligent as a whole because some subset of its component building block elements does. It becomes intelligent because it reaches a point in its overall development where examination of that system as a whole, and as a black box system, indicates that it is no longer possible to distinguish between it and its informational performance output, and that of a benchmark presumed-general intelligence and its output that it would be compared to (e.g. in Turing’s terms, a live and conscious person who has been deemed to be of at least average intelligence, when tested against a machine.)
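
One way to begin operationalizing that black box comparison, offered strictly as a schematic sketch, is as a blinded trial in which a judge repeatedly tries to tell the two output streams apart, with indistinguishability meaning the judge does no better than chance. In the Python sketch below, candidate_reply, human_reply and judge_guess are stand-in stubs that I have named for illustration; in any real trial they would be an actual system under test, an actual person, and an actual human judge.

# Schematic blinded-comparison protocol; all three participants below are stubs.
import random

def candidate_reply(prompt: str) -> str:      # stub: the artificial system under test
    return f"candidate answer to: {prompt}"

def human_reply(prompt: str) -> str:          # stub: the presumed-general-intelligence benchmark
    return f"human answer to: {prompt}"

def judge_guess(first_reply: str, second_reply: str) -> int:
    """Stub judge: returns 0 or 1, guessing which of the two replies came from the human."""
    return random.randint(0, 1)

def run_trials(prompts, trials_per_prompt: int = 50) -> float:
    """Fraction of blinded trials in which the judge correctly picks out the human.
    A score that stays near 0.5 is what indistinguishable-as-a-black-box would mean
    here; a score well above 0.5 means the candidate system still gives itself away."""
    correct = total = 0
    for prompt in prompts:
        for _ in range(trials_per_prompt):
            pair = [("candidate", candidate_reply(prompt)), ("human", human_reply(prompt))]
            random.shuffle(pair)                          # blind the judge to position
            guess = judge_guess(pair[0][1], pair[1][1])
            correct += (pair[guess][0] == "human")
            total += 1
    return correct / total

print(run_trials(["describe a bird's wing", "outline a drug screening plan"]))

A stub judge and stub reply functions decide nothing, of course; the sketch only shows where such a measurement would sit, ahead of the more substantive questions of emergent properties and awareness taken up below.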

And with a simple conceptual wave of the hands, an artificial construct becomes at least arguably intelligent. But what does this mean, when looking past the simply conceptual, and when peering into the black box of such a system? I am going to at least begin to address that question starting in a next series installment, by discussing two fundamentally distinct sets of issues that I would argue enter into any valid answer to it:

• The concept of emergent properties as they might be more operationally defined, and
• The concept of awareness, as a process of information processing level (e.g. pre- or non- directly empirical) simulation and model building.

And as part of that, I will address the issues of specific knowledge based expert systems, and of granularity in the scope and complexity of what a system might be in-effect hardwired to address, as for example in a Turing test context.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 1

Usually I know, when setting out to write a posting, precisely where I would put it in this blog, and that includes both a prior decision as to what directories it might go into, and a decision as to what series, if any, I might write it to. And if I am about to write a stand-alone posting that would not explicitly go into a series, new or already established, I generally know that too, once again including where I would put it at a directories level. I also generally know whether a given posting will go into an organized series, even as a first installment there, or be offered as a single stand-alone entry.

This posting is to a significant degree, an exception to all of that. More specifically, I have been thinking about the issues that I would raise here, and have been considering placing this in a specific ongoing series: Reconsidering Information Systems Infrastructure as can be found at Ubiquitous Computing and Communications – everywhere all the time 2 (as its postings 374 and loosely following.) But at the same time I have felt real ambivalence as to whether I should do that or offer this as a separate line of discussion in its own right. And I began writing this while still considering whether to write this as a single posting or as the start to a short series.

I decided to start this posting with this more behind the scenes, editorial decision making commentary, as this topic and its presentation serve to highlight something of what goes on as I organize and develop this larger overall effort. And I end that orienting note to turn to the topic that I would write of here, with one final thought. While I do develop a number of the more central areas of consideration for this blog as longer series of postings, I have also offered some of my more significantly important organizing, foundational ideas and approaches in single postings or as very brief series. As a case in point example that I have referred back to many, many times in longer series, I cite my two-posting series: Management and Strategy by Prototype (as can be found at Business Strategy and Operations as postings 124 and 126.) I fully expect this line of discussion to take on a similar role in what follows in this blog.

I begin this posting itself by pointing out an essential dynamic, and to be more specific here, an essential contradiction that is implicit in its title. Moore’s Law, as initially posited in 1965 by Gordon Moore (then at Fairchild Semiconductor, and later a co-founder of Intel), proposed that there was a developmental curve-defining trend in place, according to which the number of transistors in integrated circuits was doubling approximately every year and a fraction – but without corresponding price increases. And Moore went out on a limb by his reckoning and predicted that this pattern would persist for another ten years or so – roughly speaking, up to around 1975. Now it is 2018 and what began as a short-term prediction has become enshrined in the thinking of many as an all-but law of nature, from how it still persists in holding true. And those who work at developing next generation integrated circuits are still saying the same things about its demise: that Moore’s law will run its course and end as ultimate physical limitations are finally reached … in another few technology generations and perhaps in ten years or so. This “law” is in fact eventually going to run its course, ending what has been a multi-decade long golden age of chip development. But even acknowledging that hazy end date limitation, it also represents an open ended vision and yes, an open ended expectation of essentially unencumbered, disruptively new growth and development that is unburdened by any limitations of the past, or present for that matter.
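
The doubling arithmetic behind that trend, and how sensitive long-range projections are to the assumed doubling period, can be shown in a few lines. The 1971 Intel 4004 baseline of roughly 2,300 transistors is a commonly cited reference point; the doubling periods used below are assumptions for illustration, not measurements.

def projected_transistors(baseline: float, baseline_year: int,
                          target_year: int, doubling_period_years: float) -> float:
    """Project a transistor count forward by compounding doublings."""
    doublings = (target_year - baseline_year) / doubling_period_years
    return baseline * (2.0 ** doublings)

# Roughly 2,300 transistors on the 1971 Intel 4004 is the commonly cited starting point.
for period in (1.5, 2.0):
    estimate = projected_transistors(2_300, 1971, 2018, period)
    print(f"doubling every {period} years -> ~{estimate:.2e} transistors per chip in 2018")

With a two-year doubling period the projection lands in the tens of billions, the right order of magnitude for the largest chips of the late 2010s; assuming eighteen months instead overshoots by orders of magnitude, a reminder that this is a trend line whose slope has itself been revised over the decades, rather than a law of nature.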

Technology lock-in does not deny the existence or the impact of Moore’s Law, but it does force a reconsideration as to what this still ongoing phenomenon means. And I begin addressing this half of the dynamic that I write of here by at least briefly stating what lock-in is.

As technologies take shape, decisions are made as to precisely how they would be developed and implemented, and many of these choices are in fact small and at least seemingly inconsequential in nature – at least when they are first arrived at. But these at-the-time seemingly insignificant design and implementation decisions can and often do become enshrined in those technologies as they develop and take off, and as such take on lives of their own. That certainly holds true when they, usually by unconsidered default, become all but ubiquitous for their application and for the range of contexts that they are applied to, as the technologies that they are embedded in mature and spread. Think of this as the development and elaboration of what effectively amount to unconsidered standards for further development, arrived at as here-and-now decisions and without consideration of scalability or other longer-term possibilities.

To cite a specific example of this, Jaron Lanier is a professional musician as well as a technologist and a pioneer of virtual reality technology. So the Musical Instrument Digital Interface (MIDI) coding protocol for digitally representing musical notes, with all of its limitations in representing music as actually performed live, is his personal bête noire, or at least one of them. See his book:

• Lanier, J. (2011) You Are Not a Gadget: a manifesto. Vintage Books,

for one of his ongoing discussion threads regarding that particular set-in-stone decision and its challenges.
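
To make the MIDI example concrete: a MIDI 1.0 “note on” message packs a musical event into three bytes: a status byte carrying the channel, a note number (0 to 127, in fixed semitone steps, with 60 as middle C) and a velocity value (0 to 127). The message layout in the sketch below follows that published specification; the example frequencies, and the quantization helper I use to show what gets discarded, are my own illustrative choices.

# Encoding a MIDI 1.0 "note on" event, and the quantization it imposes.
import math

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Three-byte note-on message: 0x90 plus a 4-bit channel, then note and velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def nearest_midi_note(freq_hz: float) -> int:
    """Quantize a pitch to the nearest equal-tempered semitone (69 = A440)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def note_frequency(note: int) -> float:
    return 440.0 * 2.0 ** ((note - 69) / 12)

# A singer sliding from 440 Hz up to 451 Hz never leaves MIDI note 69: that pitch
# inflection simply vanishes in this representation (pitch-bend messages exist, but
# only as a coarse, channel-wide adjustment, not per-note nuance).
for freq in (440.0, 445.0, 451.0, 466.2):
    n = nearest_midi_note(freq)
    print(freq, "->", n, note_on(0, n, 96).hex(), f"(stored as {note_frequency(n):.1f} Hz)")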

My point here is that while open and seemingly open ended growth patterns such as Moore’s Law play out, and while software counterparts to them, such as the explosive development of new database technology and the internet, arise and become ubiquitous, they are all burdened with their own versions of “let’s just go with MIDI because we already have it and that would be easy” decisions, and their sometimes entirely unexpected long-term consequences. And there are thousands of these locked-in decisions in every widespread technology (and not just in information technology systems per se).

The dynamic that I write of here arises as change and disruptive change take place, with so many defining and even limiting constraints put in place in their implementations and from their beginnings: quick and seemingly easy, simple decisions that these new overall technologies would then be built, elaborated and scaled up around. And to be explicitly clear here, I refer in this to what become functionally defining and even limiting constraints that were more backed into than proactively thought through.

I just cited a more cautionary-note reference to this complex of issues, and one side to how we might think about and understand it, with Lanier’s above-cited book. Let me balance that with a second book reference that sets aside the possibilities or the limitations of lock-in, to presume an evergreen, always newly forming future that is not burdened by that form of challenge:

• Kaku, M. (2018) The Future of Humanity. Doubleday.

Michio Kaku writes of a gloriously open-ended human future in which new technologies arise and develop without any such man-made limitations: only with the fundamental limitations of the laws of nature to set any functionally defining constraints. Where do I stand in all of this? I am neither an avowed optimist nor a pessimist there, and to clarify that I point out that:

• Yes, lock-in happens, and it will continue to happen. But one of the defining qualities of truly disruptive innovation is that it can in fact start fresh, sweeping away the old lock-ins of the technologies that it would replace – to develop its own that will in turn disappear at least in part as they are eventually supplanted too.
• In this, think of evolutionary change in technology as an ongoing effort to achieve greater effectiveness and efficiency with all of its current, basic constraints remaining intact.
• And think of disruptive new technology as break away development that can shed at least a significant measure of the constraints and assumptions that have proven to no longer be effectively scalable. But even there, at least some of the old lock-ins are still likely to persist. And this next revolutionary step will most likely bring its own suite of new lock-ins with it too.

Humanity’s technology is still new and young, so I am going to continue this narrative in a next posting to what will be this brief series, with an ancient example as drawn from biology, and from the history of life at its most basic, biochemically speaking: the pentose shunt, or pentose phosphate pathway as it is also called. I will then proceed from there to consider the basic dynamic that I raise and make note of in this series, and this source of at least potential innovative development conflict, as it plays out in a software and an artificial intelligence development context, as that is currently taking shape and as decisions (backed into or not) that would constitute tomorrow’s lock-ins are made.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Innovation, disruptive innovation and market volatility 41: innovative business development and the tools that drive it 11

Posted in business and convergent technologies, macroeconomics by Timothy Platt on May 5, 2018

This is my 41st posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business, posting 173 and loosely following for Parts 1-5 and Macroeconomics and Business 2, posting 203 and loosely following for Parts 6-40.)

I began a more detailed discussion of the bookkeeping and accounting, operational-details side of innovation in a business in Part 40, where I at least briefly touched on the questions of where funding would come from for a specific research project, and through what lines on an overall budget. And as a part of that I at least briefly discussed the complications that can arise from resorting to multiple funding sources, and perhaps multiple budget lines for managing them. The types and levels of financial management complexity that I made note of there can add real opacity into any effort to track and coordinate overall research efforts, for their funding requirements and for their cumulative costs accrued.

I add here that that type of complexity and opacity can even mask what presumably earmarked funds are being expended on, where what would be expected to be non-research directed funds might be diverted into supporting more explicitly research-oriented efforts, and where expected and earmarked research funds might be used for more routine funding purposes instead. And the boundaries there, from both of these perspectives, can be very blurry and uncertain at times, so none of this need involve intent, at any level, to misdirect funds.

• The types of information coordination and sharing needs that of necessity arise in such complex budgeting systems can create challenges when managing overall task and goal prioritization, and when tracking and measuring overall levels of performance and cost-effectiveness achieved.

So I also raised the possibility of running all such expenses, and their accounting for research endeavors, through single-channel systems set up for that purpose (a minimal data-structure sketch of such a single-channel ledger follows the bullet points below). More specifically, and expanding on what I offered on this in Part 40, I note that:

• I have already raised the possibility of setting up a dedicated research center within a business in a variety of contexts in this blog, with its own leadership and management, and its own organizational structure and its own budget lines. See, for example: Keeping Innovation Fresh (as can be found at Business Strategy and Operations – 2 for its Parts 1-16.) I raise this business organization and management approach again here too, as offering a possible mechanism for managing research financing and for better addressing the above-cited, dispersed system challenges.
• Alternatively, businesses can and do also set up special needs and related funding streams in their overall fiscal planning, and in their bookkeeping and accounting of that, both for maintaining specific risk management reserve funds, and for enabling them to capture and develop research and development opportunities too.
• And I offer these possibilities as two of a larger possible set of them that can be pursued, to keep this line of discussion more practically oriented.
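
And as promised above, here is a minimal data-structure sketch of that single-channel idea: every research-related expense, whatever budget line it is actually paid from, lands in one ledger tagged by project and by funding stream, so that cumulative costs and earmark mismatches stay visible. The field names, categories and the mismatch rule are hypothetical, and are offered only to make the bookkeeping idea concrete.

# Hypothetical single-channel research-expense ledger with a simple roll-up step.
from dataclasses import dataclass
from collections import defaultdict
from typing import List

@dataclass
class Expense:
    project: str          # research effort the spending actually supports
    funding_stream: str   # budget line or reserve fund the money came from
    amount: float
    earmarked_for: str    # what that funding stream was designated for, e.g. "research"

def roll_up(ledger: List[Expense]):
    totals = defaultdict(float)   # cumulative cost per research project, across all streams
    mismatches = []               # spending drawn from funds earmarked for something else
    for e in ledger:
        totals[e.project] += e.amount
        if e.earmarked_for != "research":
            mismatches.append(e)
    return dict(totals), mismatches

ledger = [
    Expense("assay-automation", "R&D budget line", 120_000, "research"),
    Expense("assay-automation", "operations reserve", 20_000, "operations"),
    Expense("new-materials", "R&D budget line", 75_000, "research"),
]
totals, mismatches = roll_up(ledger)
print(totals)
print([(m.project, m.funding_stream, m.amount) for m in mismatches])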

Up to here, this all addresses these issues from strictly within the business, and in terms of its more internal organizational issues. My goal here is to face outward and put that line of discussion into its proper, wider context, and certainly where the innovation in question is market-facing and arising in a product or service development context. To be more specific, my goal in this posting is to “add in consideration of market and marketplace stability and consistency, and uncertainty and volatility.” And I begin addressing that by offering a general, conceptually organizing observation:

• One of the primary consequences of adding consumer interest and its evolution, marketplace stability and consistency, and the uncertainty and volatility that they bring with them, into this type of discussion, is that doing so adds timing and timeline pressures into it, as strategically and operationally defining considerations.
• These considerations become essential task and goal-prioritization factors for determining the When and How of this innovation, and when setting its overall and step-by-step scale of effort and commitment too.

Let’s consider the above and at least some of its consequences and collateral issues, starting from the fundamentals. And with that perspective in mind, I continue on from here in this narrative by acknowledging and laying out some specific assumptions that enter into this, and certainly as I analyze and discuss this set of issues. I presume in this line of discussion that:

• The business in question here functions and competes in an industry and business sector that is highly competitive, with that competitive pressure coming in large part from a demanding marketplace that always expects and wants new and exciting: next and different.

Technology and product generation cycles of necessity have to be very short for any business facing these types of pressure, both for same-technology generation updates that would serve to keep more current offerings fresh, and for the development and marketable offering of the new and disruptively new – or at least the new that can be successfully marketed as such.

As products and product types mature, next generation development steps can and generally do become smaller, for their overall level of change created and for their market impact from that – unless and until one of the businesses in such a sector, or some disruptively new entrant moving into that arena, comes up with a game-changing innovation that basically reinvents the product category as a whole, for what is possible and for what its market would now come to demand, and resets the clock for this evolutionary process back to zero again.

And the pressures that I write of here tend to lead to marketing hype, and certainly when businesses face pressures to tout less significant innovative change as if it were more, in order to capture or retain market share. That only complicates and confounds the marketplace as far as valid reviews, product representations and consumer awareness are concerned, adding market-side friction to this entire system.

What I just did was to discuss, at least briefly, a basic dual perspective approach to innovation and its realization, and to funding it, and certainly from a more practical hands-on, operational perspective where accounting and bookkeeping are carried out. Then I stepped back to discuss a set of business and marketplace factors that would enter into any strategic, hence any operational approaches for understanding this business context and for attempting to operationally address it. My goal for the next installment to this series is to tie the above line of discussion back to the accounting and bookkeeping systems decision making processes that are in place, in order to bring together all of the perspectives on this complex of issues that I have been addressing here.

In anticipation of that line of discussion to come, this will of necessity mean my breaking open the essentially black box, monolithic representation of research and development per se that I have been citing here, to consider that as a dynamic and mutable process and system of them too, and with all of the strategic and operational trade-offs and compromises that that can entail: cash flow and money management ones included.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 1: Hurricane Maria and Puerto Rico

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on April 27, 2018

I have been posting to this blog since late 2009, so patterns have probably become evident in what I write here, and in how I schedule and prioritize that. I am at least loosely following a very specific plan with long-term goals as to what to cover and how in all of this. And I organize what I write and offer here in terms of largely-organized series and topics directories, with the occasional lone, one-off posting added into that mix.

I do in fact at least occasionally address current in-the-news issues and have even developed some of my series around them, at least for the purposes of establishing a level of news-based third party validation of the approaches that I take to addressing those topics. And linked references of that type can and do offer current supportive detail that I can use in developing more fully fleshed out lines of reasoning and analysis there too. But all of that noted, I have to add that there are also some long-term areas of interest and concern for me that I have in effect mulled over for extended periods before getting to them here at all. I have by now touched upon and at least begun to address some of them; some are still pending for inclusion here. And I begin this posting by stating that my goal here is to at least begin an analysis and discussion of one of those topic areas that I have considered, but set aside.

I have held off on posting on some of those issues because I saw a need for developing specific foundations for discussing them, and in ways that would fit into this blog as an overall organized effort. And a few of them have simply percolated in my mind as I have waited for the right moment to address them. In this case, I could cite some very specific reasons for that wait. I have in fact written about politics, and even very directly and explicitly, certainly over the course of the last roughly two years now. But I am still reluctant to do so, and the topic at hand here is as much about political ideology and political ambition as it is about anything.

My goal here is to discuss publicly significant infrastructure and what does and does not get funded and supported, and by whom. Let me begin by putting this into perspective:

• I have written of infrastructure systems and their development and maintenance per se, on multiple occasions in this blog, and in that regard cite series that I have offered throughout my United Nations Global Alliance for ICT and Development (UN-GAID) directory. In fact essentially everything offered there fits into the area of infrastructure development and in ways that directly fit into this series, and as foundational material that it would connect to. The issues that I would address here in this posting and in its series to come, do in fact fit into and take on deeper meaning when considered in terms of a groundwork foundation that I have laid for them, there and elsewhere in this blog.
• I have very real, and I add pressingly important, current events issues in mind as I set out to write about this now: another topic arena that I have written about here, and in many topical contexts.
• And the core issues that I would write of here, or at least begin to address in this posting, have been on my mind for quite a while now, certainly going back as far as my experience leading up to my Haiti postings that appear at the start of my above-cited UN-GAID directory. This interest and concern on my part go back to very early in my work life and life experience too.

I begin this series, and this posting in it, with a still too currently topical ongoing news story that for its severity and for the inertia that it faces, has not been resolved, or even proven resolvable, certainly not up to now. Hurricane Maria made landfall in Puerto Rico on September 20, 2017 with sustained winds of some 155 miles an hour, and before that storm was done there it had laid waste to the island. Homes and businesses were lost, and lives too, and the island’s critical infrastructure systems: its electrical grid and telecommunications systems, its fresh water and sewer systems, and roadways and bridges and more were in shambles. And there are still large parts of the island that lack electrical power, just to cite one of many possible recovery metrics that I could make note of here, as I write this on April 5, 2018, six and a half months later. The most recent figures I have seen for that rebuilding and recovery failure indicate that 11% of the entire island is still without electrical power. And with all of that still unresolved from last year, Puerto Ricans are now facing the start of the 2018 hurricane season too, with its risks of new damage.

Why? How can this be allowed to happen, and particularly when Puerto Rico is an American territory and its citizens are United States citizens, and the United States is supposedly one of the wealthiest and most capable nations on this planet, and one that is based on both democratic principles and a history and tradition of helping one’s neighbors in time of need? Other, much poorer and resource-limited nations in the Caribbean region have more fully recovered from the devastation that they suffered from that hurricane and from others of the 2017 season. How can this have happened, and how and why does it persist, with so little hope left for a real and comprehensive resolution, certainly insofar as anything like a nationally led recovery effort might be concerned?

I keep going back in my mind as I raise those questions to the televised sight of President Trump making a brief appearance in San Juan, Puerto Rico after the storm had passed, to among other things proclaim that the people there should not expect any help of any duration or extent from the US federal government and that they would be on their own to recover as best they could. And there he was tossing rolls of paper towels from the back of a truck at the people in a surrounding crowd, as a photo opportunity for his own use and benefit – paper towels that others had sent as part of a private sector relief effort and not on his initiative or with US Federal Emergency Management Agency (FEMA) support.

One possible approach to answering those questions, and certainly for this troubling and still ongoing event, would be to cite Donald Trump’s well established anti-Hispanic biases, with his overtly bigoted hostility towards Mexicans, Puerto Ricans and others. But simply focusing on that type of response would only take this event out of context and treat it as if it were an isolated incident that did not necessarily fit into a larger and more nuanced pattern. My goal here is to look beyond this one troubling incident and its still unfolding aftermath, to at least consider why some infrastructure challenges remain unaddressed, while others garner more sustained effort for resolving them, and even regardless of what might more objectively be viewed as their relative levels of need and consequence as priorities for action are set.

I began this narrative with this particular example to highlight its importance for real people and their lives, and for real communities too. And I will have more to add to that narrative in my next installment to this series, where I will discuss the impact that this failure to lead or to act has had in the continental United States. In anticipation of that, I note here that the ongoing and unresolved damage to Puerto Rico, its infrastructure and its businesses has had repercussions that reach into virtually every hospital in the United States, and in ways that President Trump would not have imagined as he threw those rolls of paper towels at people who had just lost seemingly everything. That, as I will discuss in some detail, is critically important; failure to address what might seem to be more localized infrastructure challenges can and does bring much more widespread consequences, and in directions that might not be readily anticipated.

I will also at least begin a discussion of New York City’s Metropolitan Transportation Authority (MTA) and how its development and maintenance have been a political football, and a political hostage caught between New York City’s City Hall and Albany. That has been a significant if often overlooked issue in New York, for both the city and the state, at least since then-governor Nelson Rockefeller imposed a layer of state control over the city’s transit system and its decision-making authority in 1968, in order to garner more votes for his own reelection bid of that year (see Why Does New York State Control the Subway? That’s the 20-Cent Question.) I will raise and discuss a number of other case study examples in further postings to this series too, as I further explore and discuss the questions and issues of what gets supported and worked upon and why, and what is set aside in building and maintaining our critical infrastructure systems.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 4, and see Page 1, Page 2 and Page 3 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 2, and see its Page 1. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too, for its relevance there. I begin this series with an American example, but it addresses globally impactful issues and events.
