Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 11

Posted in business and convergent technologies by Timothy Platt on August 4, 2020

This is my 11th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-10.)

Up to here in this series, I have been successively discussing the first six of a set of eight paradigmatic steppingstone advances in software development. The first five of them can be thought of as representing mature and established technologies:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language-defining and a programming style-defining paradigm,
5. Object-oriented programming,

And the sixth on that list can be thought of as a transitional example, insofar as it is grounded in established coding approaches and their implementations, but is only likely to come into its own, to the extent that it does, as disruptively new developmental capabilities are built into it:

6. Language-oriented programming,

And this leads me to the last two entries in this list:

7. Artificial Intelligence programming, and
8. Quantum computing.

Artificial intelligence and its programming represent a second transitional example here, and a much more important one, as its implemented reach and its emerging capabilities have already developed to a point where it is likely to become a defining feature for understanding this 21st century as a whole, and certainly for how it is being shaped and advanced technologically.

Quantum computing represents the far side of transitional and beyond, and points toward what will come next. And when I look to it and to where it is in its current, still-embryonic form, with all of the unknowns that that involves, I understand something of how a one-time technology visionary and president of a major technology-oriented corporation that was renowned for producing computational devices: IBM’s Thomas J. Watson, could be credited with the (perhaps apocryphal) speculation that “I think there is a world market for maybe five computers.” Who could have even begun to imagine what was possible there, either for the technology that could and would be produced, or for the range of uses that it would be turned to, literally ubiquitously and by all? We see evidence of a few test-case quantum computers and cannot even begin to imagine what will actually arise from them, and certainly as this comes into real fruition. And given the challenges of developing and maintaining technologies at liquid helium and even lower temperatures, even that “maybe five computers,” at least as a rough order of magnitude guess, does not sound entirely crazy.

• Watson was thinking in terms of physically huge, electrical power devouring devices that required teams of electrical engineers to maintain, let alone run. How many businesses, to consider one early adopter group, would be expected to maintain that, with all of the expenses it would bring them?
• And how many businesses, and certainly ones that are not advanced technology focused, can be expected to take on the ongoing and open-ended expenses of today’s large scale, cryogenic technology driven quantum computers?

That will change, and in disruptively unexpected ways and directions, certainly as and when quantum computing really begins to prove its worth. But I will address that veritable cloud of the as-yet unknowable, or at least something of its early and more visible edges, next. My point of focus for this posting is the significantly more known and established example of artificial intelligence and its programming. And I begin doing so by pointing out that the dream of true artificial intelligence goes back to the dawn of computer development and of efforts to achieve it.

• Charles Babbage, in the 19th century, dreamed of developing it through the mechanical engineering technology of gears and shafts and escapements, and related components of his day.
• Alan Turing developed what became known as the Turing Test for establishing whether a computer (an electronic one, under the then-early design paradigm) has achieved true intelligence, arriving at that understanding in the early days of a still vacuum tube-driven computer era.
• One of the very first programming languages developed anywhere that has had any staying power was an artificial intelligence oriented language called LISP (for LISt Processor). This was in fact the second higher level programming language devised of any historical note, with its first formal specification release coming out in 1958. The only higher level programming language that could claim to be older was FORTRAN, beating that first formal specification date by one year.

The dream of artificial intelligence in fact goes back centuries further than that would suggest. But for purposes of this narrative, I simply note that artificial intelligence as a computer technology goal is at least as old as computer technology itself. And efforts to achieve it have been pursued just as long too. And when artificial intelligence programming based on LISP and newer computer languages is considered, with significant levels of that work being carried out in languages such as Python, R, Prolog and Java, as well as (still) LISP, this is a transitional stage example with a firmly established current base. But that said, it is also one that is currently developing at an explosive rate and in disruptively new and novel directions and ways.

What do we face when looking towards the more still-to-come side of that transition? I would argue that any meaningful answer to that question, that we can turn to in our here-and-now, can be found in more fully considering what we seek to accomplish that we at least currently see as requiring artificial intelligence per se. And that list is already vast and still very actively growing, and that growth of need is certain to continue, and in unexpectedly novel ways.

• What would qualify for inclusion on that needs list, at least categorically and as its entries would be more generally considered?
• Any task that would call for a flexibility of response and action that would not cost-effectively fit into a fully specified, a priori algorithm-based coding construct, even if such a more standard approach could, at least in principle, be used.
• And any task that might at least nominally be amenable to other approaches according to that criterion, but that could still be significantly improved upon for speed and quality of results if an artificial intelligence-based, within-system flexibility could be built into whatever code would resolve it (a distinction illustrated in the brief sketch offered below).

Let’s consider a few, relatively randomly selected candidate examples there, to put that set of generalities into a more meaningful and perhaps convincing perspective. And setting aside the types of working or at least sought after examples that I have already been considering in this blog (e.g. self-driving cars, computer-based natural speech capability and the like), I offer:

• Rationalized new drug design where that is computationally driven and on a massively open-ended scale,
• And product design and development systems in general, where the scale of complexity there becomes sufficiently high.
• Managing high volume information flow through complex and mutable networks with a goal of real-time homeostatically optimizing resource use while achieving all information flow needs on an ongoing basis. This is a normative function example.
• Managing complex system processes where decisions have to be made in seconds or less and even in milliseconds or less, and where apparent randomness of a type characterized by chaos theory arises. And as a case in point example there, I would cite the identification of and response to cyber-attacks on critical needs networks, where that can include protecting industrial and infrastructure SCADA systems as just one source of more specific working examples.
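To make that fixed-versus-flexible distinction a bit more concrete, here is a minimal, purely illustrative Python sketch, built around a hypothetical sensor-reading stream rather than any real SCADA system or product. It contrasts a fully specified a priori rule (a hard-coded alarm limit) with a simple adaptive detector that learns what “normal” looks like from the data itself, and so can flag an attack-like spike even as the underlying baseline drifts. Real anomaly detection and genuinely intelligent response are of course far more involved; this only gestures at why within-system flexibility matters.

```python
import random

def fixed_rule_alert(reading, limit=100.0):
    """A fully specified a priori rule: flag anything above a hard-coded limit."""
    return reading > limit

class AdaptiveDetector:
    """Learns what counts as normal from the stream itself, via exponential
    moving estimates of mean and variance, and flags statistical outliers."""
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # learning rate for the running estimates
        self.threshold = threshold  # how many standard deviations count as anomalous
        self.mean = None
        self.var = 4.0              # a rough starting guess at normal variability

    def update_and_check(self, reading):
        if self.mean is None:       # the first observation just seeds the model
            self.mean = reading
            return False
        deviation = reading - self.mean
        anomalous = abs(deviation) > self.threshold * (self.var ** 0.5)
        # Adapt to drift in what "normal" means, rather than relying on a fixed limit.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * self.var + self.alpha * deviation ** 2
        return anomalous

if __name__ == "__main__":
    random.seed(1)
    detector = AdaptiveDetector()
    for t in range(500):
        baseline = 50 + 30 * (t / 500)        # a slow drift that a fixed rule cannot follow
        reading = random.gauss(baseline, 2.0)
        if t == 400:
            reading += 25                     # an injected, attack-like spike
        if detector.update_and_check(reading):
            print(f"t={t}: anomalous reading {reading:.1f}")
        # fixed_rule_alert(reading) would stay silent until the drift alone crossed 100
```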

The world is full of possible examples there, and with artificial intelligence capabilities offering a potential for transforming lives everywhere, where such change might or might not be for the good, and certainly depending on whom you ask. My point here, is that this impact: good, bad or mixed, is already profound. And its reach will continue to expand out and eclipse all that has come of it so far.

How artificial intelligence evolves and develops moving forward will be shaped by what it is being used to do. But that only addresses the perhaps more predictable side to this. As a second change driver and shaper, we have to consider the essentially inevitable role of the disruptively unexpected too, and both for how that can create unexpected barriers to specific lines of development, and for how it can open up entirely new developmental opportunities.

My goal for this posting has been to lay a needs-and-goals based foundation for a more focused discussion to come, that will address this software paradigm in the terms of this series as a whole. I will at least begin to address those issues in a next installment.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 25

Posted in business and convergent technologies, strategy and planning by Timothy Platt on August 1, 2020

This is my 25th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-24.)

I have been discussing two basic paradigmatic models of how individuals and communities respond to change and innovation and to New in general here, since Part 16:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adopters on to include eventual late and last adopters, and
• Patterns of global flattening and its global wrinkling, pushback alternative.

And as part of that, I have raised five specific case in point contexts in which the issues of acceptance or rejection of change become important, both individually and societally. I raised three of them in Part 23 and began discussing them there:

1. The development of drought and disease resistant crops that can be grown with little if any fertilizer and without the use of insecticides or other pesticides,
2. Russia’s Novichok (Новичо́к or newcomer) nerve agents, and
3. Disposable single use plastic bags and other petrochemical plastics-based wrapping materials.

And I cited two more in Part 24 and at least briefly and selectively discussed the first of them there:

4. Antibiotics and their widespread use, and
5. Vaccinations and certainly as they have become vilified and particularly in online social media.

My primary goal for this posting is to discuss that fifth and final case study example, and how it sheds light more widely than its specific issues might suggest, on the acceptance and rejection of change per se. In this context that means citing this example as a poster child for how science per se and its findings are coming under ideologically framed and supported challenge, and even direct attack.

I will circle back to at least briefly reconsider the first three of these case study examples and will then move on from them to address the issues of impact and of who is affected by what acceptance or rejection-driven action, where those affected might be very different people from the ones who seek to shape the messages that drive this. But before turning to those issues, I begin here with example Point 5 and vaccinations, and with a particular focus on childhood vaccinations against diseases such as measles, mumps, rubella, tetanus and polio in mind.

I wrote in Part 24 that antibiotics “at the very least qualify as a strong candidate for being considered the most significant healthcare advancement of the 20th century.” There are two other candidates that come to mind for me as qualifying for at least top four or five status among the greatest healthcare advancements, and even for all time up to now:

• Improved public sanitation with that including widespread access to safe potable water and safe and effective removal of sewage and other potentially disease carrying waste,
• And the development and widespread use of safe and effective vaccinations against diseases that were once highly contagious, deadly scourges.

I wrote in Part 24, in an Example 4, antibiotics context, of epistemic bubbles, where people only listen to others who start out sharing their views, and with those connected communities only seeing, hearing, considering or believing facts, “facts”, rumors or opinions that support their already pre-established conclusions on whatever issues are under consideration.

I write here, in this context, of the anti-vaccination movement. This began in its earliest iteration when Edward Jenner first developed his cowpox-based vaccination against smallpox in the late 1790s, with people claiming, among other things, that administering an animal-sourced material into the body through a wound in the skin violated their religious beliefs. See this History of Anti-vaccination Movements.

The modern version of this that I would primarily focus upon here stems in large part from concerns over the use of a particular preservative agent: thimerosal – a mercury-containing organic compound that was first used in the 1930s, in both medications and vaccines.

To be clear here, thimerosal is a toxic compound at higher dose exposures. But when exposure is very small it is safely broken down and disposed of by the body, with most of it eliminated through the intestines in fecal matter, and in a matter of days. Only trace amounts of it are used in vaccination preparations when this compound is used there at all. But a since-discredited study, based on falsified data, was published claiming that childhood vaccination could lead to autism, and public fears quickly attached to even the most minute exposures to this compound. See Thiomersal and Vaccines. And with that study, the modern anti-vaccination movement was born.

• This study has been disproven, with carefully conducted clinical research to back that. But perhaps more to the point, backlash from a concerned public led the pharmaceutical companies that produce vaccination materials to remove thimerosal from their preparations. That should have made this a non-issue.
• But the same people who said that they would never vaccinate their children because of possible thimerosal-based risk now generalized their fears, and their responses to them, through social media and related activism, to attack any childhood vaccination at all.
• I have been framing this narrative in terms of two basic paradigmatic models of acceptance and rejection, and this story fits far more strongly into the second of them, even if global flattening per se is not always in play here; this is pushback driven. And as I intimated above, this fits into an ideologically driven pushback against science per se, with that including global warming denial and, in our current COVID-19 context, denial of the relevance or the positive value of disease containment efforts such as social distancing and the use of personal protective equipment.

Bringing this back to the issues of vaccinations and of childhood vaccinations per se, communities that have come to exhibit higher levels of anti-vaccination sentiment have also shown a reemergence of the childhood disease scourges that those vaccinations had seemed to end: diseases that had at least seemingly been made into nightmare stories of a not-to-be-repeated historical past.

• I have to ask this question in this context. How could a parent possibly explain to a child of theirs why that child became paralyzed by polio, when a safe and effective vaccination that the parent refused would have prevented that from happening? Unfortunately, such refusals have led to new cases of that disease too, so this is not just a theoretical question.
• My point here is that the issues that I raise here are consequential and in the lives of real people and real communities. And this is true for a much wider range of possible accept and embrace, or reject and refuse contexts than just the few that I could raise here.

I am going to return to the first three examples again in the next installment to this series where I will consider, among other factors, the risks and costs, versus positive benefits balances that they raise – and how those balances can be variously understood and evaluated as even just being meaningful possibilities. And as noted above, I will also consider the dichotomies and even disparities of who pushes accept or reject messages, and who more directly faces and has to deal with the consequences of that where they might in fact be very different people. I will at least briefly reconsider all five of my above-listed examples in light of that.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 16

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on July 23, 2020

This is the 16th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-15. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

Since Part 13, I have been discussing the first of a set of four topic points that I have been offering as a tool kit, or rather as a set of indicators as to where a set of tools might be found. And these tools would be used for carrying out at least part of a development process for enabling what would ideally become true artificial general intelligence.

Those topic points are:

• The promotion of ontological development that is both driven by and shaped by self-learning behavior (as cited in Part 13 in terms of antagonistically positioned subsystems, and similar/alternative paradigmatic approaches),
• Scope and range of new data input that might come from the environment in general but that might also come from other intelligent agents (which might mean simple tool agents that carry out single fully specified tasks, gray area agents that carry out partly specified tasks, or actual general intelligence agents: artificial or human, or some combination of all of these source options.)
• How people or other input-providing agents who would work with and make use of these systems, by simplifying or adding complexity to the contexts that those acting agents would have to perform in, shift the tasks and goals actually required of them either more towards the simple and fully specified or more towards the complex and open-ended.
• And I add here, the issues of how an open ended task to be worked upon and goal to be achieved for it, would be initially outlined and presented. Think in terms of the rules of the road antagonist in my two subsystem self-driving example of Part 12 here, where a great deal of any success that might be achieved in addressing any overtly open-ended systems goal will almost certainly depend on where a self-learning agent would begin addressing it from.

Note: my focus here, insofar as I will bring that to bear on general intelligence embryogenesis, is on starting points for that per se. The above-repeated fourth tool set point begins, for what it addresses, by assuming that this is possible and perhaps even inevitable, and addresses the issues of optimization so as to enable greater long-term developmental potential and its realization, from such a starting point.

I focused on the first of those points in Part 15, and primarily on the bootstrap problem of initially developing what could be considered an at least embryonic artificial general intelligence there, and with a supporting discussion of what ontological development might encompass, at least in general terms, in this overall context. And that led me to a set of Point 1-related issues that I raised there but that I held off on delving into:

• The issues of understanding, and deeply enough, the precise nature and requirements of open-ended tasks (n.b. tasks that would call for general intelligence to resolve) per se,
• The roles of “random walk” and of “random mutational”, and of “directed change” and how they might be applied and used in combinations, and regardless of what starting points are actually started from, ontologically, in developing such an intelligence capacity.

I went on to say at the end of Part 15 that I would in fact address these issues here, and I added, in anticipation of that discussion to come, that doing so will lead me directly into a more detailed consideration of the second tool packet as repeated above. And I will in fact at least begin to cover that ground here. But before doing so and in preparation for those lines of discussion to come, I want to step back and reconsider, and in somewhat further detail than I have offered here before, exactly what artificial or any other form of general intelligence might entail: what general intelligence as a general concept might mean and regardless of what form contains it.

I am going to address this from an artificial construct perspective here, as any attempt to analyze and discuss general intelligence from an anthropocentric perspective, and from the perspective of “natural” intelligence as humans display it, would be essentially guaranteed to bring with it a vast clutter of extraneous assumptions, presumptions and beliefs that revolve around our all too human expectation of our somehow being a central pivot point of universal creation. Plato’s and Aristotle’s scala naturae lives on, with humanity presumed to be at the very top of it, and with that held as an unassailable truth by many.

• As a here-pertinent aside, the only way that we may ever come to understand what general intelligence really is: what it really means, is if we can somehow find opportunity to see it in a non-human form where we can in fact view and come to terms with it without that baggage.
• And this ultimately, may prove to be the only way that we can ever really understand ourselves as humans too.

But setting aside that more philosophical speculation to address the issues at hand here, I turn to consider general intelligence from the perspective of an artificial general intelligence to be, and from the perspective of a perhaps embryonic beginning to one as cited in passing in Part 15. And my goal here is one of at least hopefully shedding light on what might be the source of that first bootstrap lifting spark into intelligent being.

I am going to pursue that goal from two perspectives: the first of which is a broadly stated and considered general framework of understanding, that I will offer as a set of interconnected speculations. Then after offering that, I will delve at least selectively and in broad brushstroke detail into some of the possible specifics there. And I begin the first of those two lines of discussion by making note of an at least ostensibly very different type of series that I have been offering here, that has nevertheless prompted my thought on this area of discourse too: Some Thoughts Concerning a General Theory of Business as can be found at the directory Reexamining the Fundamentals for its Section VI and at its Page 2 continuation for its Section IX. And I specifically cite the opening progression of postings as offered in that Section IX here for their discussion of closed axiomatic systems that are entirely self-contained, and open axiomatic systems that are developed and elaborated upon, on the basis of outside-sourced information too.

I would argue, with the lines of reasoning offered there as impetus for this, that:

• The simple essentially tool-only specialized artificial intelligence agents that I write of here are limited to that level and type of development because they are limited to simple deductive reasoning that is tightly constrained by the rigid axiomatic limitations of the pre-specified algorithms that they execute upon.
• At least potential artificial general intelligences will only be able to advance to reach that goal to the extent that they can break away from such limitations. This of necessity means that they must at least attempt to successfully execute upon open-ended tasks that by their very nature would fit an open axiomatic model, as discussed in my business theory series, where the new and even the disruptively new and different might arise and have to be accommodated too. And this means allowing for, and even requiring, inductive reasoning as well as its more constrained deductive counterpart.
• Gray area tasks, or rather the gray area agents that would carry them out, might simply remain in this in-between state over time, or they might come with time to align more with a simple specialized artificial intelligence agent paradigm, or more and more with a genuinely artificial general intelligence one. Ultimately, that will likely depend on what they do and on whether they canalize into a strictly deductive reasoning form, or expand out into becoming a widely inductive reasoning-capable one (see the brief sketch after this list).
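As a deliberately naive, purely illustrative Python sketch of that deductive-versus-inductive distinction (and not anything drawn from this series itself): the “closed” agent below can only apply the rules it was constructed with, while the “open” one can crudely induce and add new rules from labeled experience. All of the class names, rules and examples here are hypothetical.

```python
from typing import Callable, Dict, List, Tuple

class ClosedAgent:
    """Deduction only: classifies strictly by its fixed, pre-specified rule set."""
    def __init__(self, rules: Dict[str, Callable[[dict], bool]]):
        self.rules = rules  # fixed at construction; nothing outside them is considered

    def classify(self, item: dict) -> str:
        for label, rule in self.rules.items():
            if rule(item):
                return label
        return "unknown"  # anything the closed axioms do not cover stays unreachable

class OpenAgent(ClosedAgent):
    """Adds a crude inductive step: it can grow new rules from labeled experience."""
    def learn(self, examples: List[Tuple[dict, str]]) -> None:
        for item, label in examples:
            if self.classify(item) != label:
                # Generalize (naively) from the example: any item sharing this
                # feature value gets the observed label from now on.
                key, value = next(iter(item.items()))
                self.rules[label] = (lambda i, k=key, v=value: i.get(k) == v)

if __name__ == "__main__":
    base_rules = {"square": lambda i: i.get("sides") == 4}
    closed = ClosedAgent(dict(base_rules))
    open_agent = OpenAgent(dict(base_rules))

    triangle = {"sides": 3}
    print(closed.classify(triangle))       # -> "unknown": outside its closed axioms
    open_agent.learn([(triangle, "triangle")])
    print(open_agent.classify(triangle))   # -> "triangle": a rule induced from experience
```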

Even if this is necessary as a developmental and enablement requirement for the formation of a true artificial general intelligence, I expect that it can in no way be considered sufficient. So I continue from that presumption to add a second one, and it involves how deeply interconnected the information processing is, and can be, in an entity.

• Tightly compartmentalized, context- and reach-limited information processing that separates how tasks and subtasks are performed, with little if any cross-talk between those process flows beyond the sharing of output and the providing of input, can make for easier, more efficient coding, as object-oriented programming proves.
• But when this means that cross-process and cross-systems learning is limited at best, or even stringently prevented, with rigid gatekeeper barriers established on an a priori basis limiting the range of access to the information held, that would almost by definition limit or prevent anything like deep learning, or the widely inclusive ontological self-development that might be possible from accessible use of the full range of experience that is held within such a system (see the brief sketch after this list).
• This means a trade-off between simpler and perhaps faster and more efficient code as a short-term and here-and-now imperative, and flexible capacity to develop and improve longer-term.
• And when I cite an adaptive peak model representation of developmental potential in this type of context, as I have recurringly done in a concurrently running series: Moore’s Law, Software Design Lock-In, and the Constraints Faced When Evolving Artificial Intelligence (as can be found at Reexamining the Fundamentals 2 as its Section VIII), I would argue that constraints of this type may very well be among the most significant in limiting the maximum potential that an ontological development can reach in achieving effective general intelligence, or just effective functionality per se for that matter.
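To make that compartmentalization trade-off a little more tangible, here is a minimal, assumption-laden Python sketch (hypothetical pipeline, hypothetical stages, not any system discussed in this series): the first design passes only outputs across module boundaries, while the second also records each stage’s intermediate experience to a shared store that later, cross-stage learning could draw on.

```python
class CompartmentalizedPipeline:
    """Each stage sees only its own input and emits only its output; any
    intermediate experience is discarded at the module boundary."""
    def run(self, text: str) -> int:
        tokens = self._tokenize(text)
        return self._count(tokens)

    def _tokenize(self, text):
        return text.split()           # nothing about *how* it split survives

    def _count(self, tokens):
        return len(tokens)

class SharedExperiencePipeline(CompartmentalizedPipeline):
    """Identical stages, but each also records what it saw to a shared store
    that any other stage (or a later learning pass) can read from."""
    def __init__(self):
        self.experience = []          # a crude shared "memory" across stages

    def _tokenize(self, text):
        tokens = text.split()
        self.experience.append(("tokenize", text, tokens))
        return tokens

    def _count(self, tokens):
        self.experience.append(("count", tokens, len(tokens)))
        return len(tokens)

if __name__ == "__main__":
    pipeline = SharedExperiencePipeline()
    pipeline.run("deep learning needs shared experience")
    # The second design pays a bookkeeping cost, but cross-stage learning now
    # has raw material to work from; the first design retains none at all.
    for record in pipeline.experience:
        print(record)
```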

I am going to continue this narrative in a next series installment where I will add a third such generally stated puzzle piece to this set. And then, as promised above, I will proceed to address the second basic perspective that I made note of above here, and at least a few more-specific points of consideration. Beyond that, I will more explicitly address two points that, in Part 15, I said I would delve into in this posting, but that I have only approached dealing with up to this point in this overall narrative:

• The issues of understanding, and deeply enough, the precise nature and requirements of open-ended tasks (n.b. tasks that would call for general intelligence to resolve) per se,
• The roles of “random walk” and of “random mutational”, and of “directed change” and how they might be applied and used in combinations, and regardless of what starting points are actually started from, ontologically, in developing such an intelligence capacity.

And with that, I will finish my discussion of the second of the four main topic points that I repeated at the top of this posting. And then I will move on to address the above-repeated Points 3 and 4 from the tools list that I have been discussing here:

• How people or other input-providing agents who would work with and make use of these systems, by simplifying or adding complexity to the contexts that those acting agents would have to perform in, shift the tasks and goals actually required of them either more towards the simple and fully specified or more towards the complex and open-ended.
• And the issues of how an open ended task to be worked upon and goal to be achieved for it, would be initially outlined and presented, and how the capability of an agent to develop will depend on where it begins that from.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 13

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on July 17, 2020

This is my 13th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-11.) And this is also my tenth posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Parts 4-12.)

I have, for the most part, focused here in this series on electronic computers and on the underlying technologies and their implementations that make them possible. And that has meant my at least briefly addressing the historical sweep of what has been possible and of what has been accomplished in that, from the earliest vacuum tube computers on to today’s more advanced, next generation integrated circuit-driven devices. And I have addressed issues of design and development lock-in there, in both a software and a hardware context, and Moore’s law as an integrated circuit oriented, seemingly self-fulfilling prognosticator of advancement. Then I shifted directions in Part 12 and began at least laying a foundation for discussing quantum computing as a source of next-step advancement in this still actively advancing history.

I ended Part 12 by offering this to-address list of issues and topics that will arise in this still embryonically forming new computer systems context, adding that I will address them from both a technological and a business perspective:

• A reconsideration of risk and of benefits, and both as they would be viewed from a short-term and a longer-term perspective,
• A reconsideration of how those issues would be addressed in a design and manufacturing context,
• The question of novelty and the challenges it creates when seeking to discern best possible technological development paths forward, where they can be clear when simple cosmetic changes are under consideration, but opaque and uncertain when more profound changes are involved,
• And lock-in and certainly as that becomes both more likely to arise, and more likely to become more limiting as novelty and the unexpected arise.

I begin at least preparing for a discussion of those issues here by offering a more fundamental science and technology perspective on what is involved in this new and emerging next technology advance context. And to put that in a need and demand based perspective, I begin addressing those issues by sharing links to two relevant news-oriented pieces that relate to the current state of development of quantum computing, and a link to a more detailed, clarifying background article that relates to them:

Google Claims a Quantum Breakthrough That Could Change Computing,
Quantum Supremacy Using a Programmable Superconducting Processor and this piece on
Quantum Supremacy per se, where that represents an at least test-case proof of the ability of a quantum computing device to solve a problem that classical computers could not solve as a practical matter, for how long its resolution would take with them (e.g. seconds to minutes with even just our current early stage quantum computer capabilities, as opposed to thousands to tens of thousands of years with the fastest current supercomputers that we have now.)

Think of the above news story (the first reference there), the research piece that directly relates to that same event (the second reference there), and the detail clarifying online encyclopedia article (the third reference offered there) as representing proof that quantum computing has become a viable path forward for the development of dramatically more powerful new computer systems than have ever been possible.

That noted, quantum computing really is still just in its early embryonic stage of development so we have only seen a faint preliminary glimmer of what will become possible from this advance. And given the disruptive novelty of both this technology and its underlying science, and certainly as that would be applied in anything like this type of context, we cannot even begin to imagine yet, how these devices will develop.

• Who, looking at an early generation vacuum tube computer could have imagined anything like a modern cutting edge, as of this writing, solid state physics based supercomputer, of the type so outpaced now in the above cited benchmark test?

And with that noted for orientation purposes if for no other reason, I turn to consider quantum computing per se, starting with some here-relevant quantum physics. And that means starting with at least a few of the issues that arise in the context of quantum indeterminacy as that plays a central role in all quantum computer operations.

Let’s consider the well known but perhaps less understood metaphorical example of Schrödinger’s (50% of the time more than just) maligned cat.

• A cat is put in a box with a radioisotope sample that has a 50% chance of producing one radioactive decay event in a given period of time. And if that happens, it will trigger a mechanism that will kill the cat. Then, after exactly that 50%-chance interval of time has passed, the box is opened and the cat is removed from it, with a 50% chance of it still being alive and a 50% chance of it now being dead.
• According to the principles of quantum indeterminacy, the condition of that cat is directly, empirically known as a fixed matter going into this “experiment.” The probability of that condition pertaining then and there as a valid empirical truth is 100%. The cat starts out alive. And when the box is opened at the end of this wait, its condition is once again directly, empirically known too, whatever it is. And once again, the probability of that condition, whether alive or dead can be empirically set at 100%, as a known, valid and validated truth. But the condition of that cat is both unknowable and undetermined while it is locked inside that box. And that last detail: the indeterminacy of that cat’s condition while in the box, is the crucially important detail here, and both for how this narrative applies to quantum physics per se and for how it applies in this specific context. And that most-crucial detail is also where this is least understandable.

Is the cat close to 100% alive and just a fraction over 0% dead immediately after the box is closed, with those condition specifying percentages “equilibrating” to 50% and 50% just as the box is opened? Mathematically at least, that is what the basic rules of quantum indeterminacy would specify. But that description just serves as an attempt to force fit classical physics and related scientific expectations and terminology into a context in which they do not actually apply. Those terms: dead and alive apply in a pre- and a post-experiment context where the condition probabilities are empirically resolved at 100%. They do not in fact hold meaning while the cat is in a here-assumed quantum indeterminate state.

Now let’s take this same pattern of reasoning and apply it to a circuit element, or rather its equivalent, that would manipulate a qubit, or quantum bit of data in carrying out a calculation in a quantum computer. Going into that calculation, that piece of data might have a known or at least knowable precise 0 or 1 value. And the same applies when this calculation step is completed. Think of those states as representing the classically observable states that that cat is in, immediately before and after its “experiment.” But the value of that qubit can best be thought of as a probabilistically shaped and determined smear of possible values that range from 0 to 1 while that calculation step is actually taking place.
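As a minimal illustration of that “smear” of possible values, here is a sketch in plain numpy, with arbitrarily chosen amplitudes, and not tied to any particular quantum hardware or toolkit: a qubit’s mid-calculation state is a pair of complex amplitudes over 0 and 1, and only a measurement collapses it back to an ordinary bit.

```python
import numpy as np

# A qubit's "value" mid-calculation is a pair of complex amplitudes over 0 and 1,
# not a single number: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a, b = np.sqrt(0.3), np.sqrt(0.7) * 1j   # an arbitrary but valid pair of amplitudes
psi = np.array([a, b], dtype=complex)

probabilities = np.abs(psi) ** 2         # the Born rule: what a readout would yield
print(probabilities)                     # [0.3 0.7]
print(probabilities.sum())               # ~1.0 -- the "smear" always sums to certainty
```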

What is the information carrying capacity of a qubit? The basic answer is one bit, though it is possible to convey two classical bits with a single transmitted qubit, with the help of a pre-shared entangled pair, using a process called superdense coding. But that, crucially importantly, only represents the information capacity inherent to a qubit outside of that in-between indeterminacy period when a calculation step is actually being carried out. Then, during that period of time, the actual functional information carrying capacity of a qubit becomes vastly larger, from the vastly larger range of possible values that it in effect simultaneously holds. And that is where and when all quantum computer calculations take place, and that is where those devices gain their expansively increased computational power – where that is a function of the speed at which a calculation takes place and a function of the actual volume of information processed during those involved time intervals.
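And as a hedged, textbook-style sketch of superdense coding itself (again in plain numpy, simulating state vectors directly rather than using any quantum computing library): Alice and Bob share an entangled Bell pair, Alice applies one of four single-qubit operations to her half to encode two classical bits, and Bob’s disentangling measurement recovers both bits from the one qubit she sends him.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)

# Single-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)               # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)              # phase flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Two-qubit CNOT, with the first (Alice's) qubit as control
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def bell_pair():
    """Prepare the entangled pair (|00> + |11>)/sqrt(2) shared by Alice and Bob."""
    state = np.kron(ket0, ket0)
    return CNOT @ (np.kron(H, I) @ state)

def encode(bits, state):
    """Alice encodes two classical bits by acting on her (first) qubit only."""
    ops = {"00": I, "01": X, "10": Z, "11": Z @ X}
    return np.kron(ops[bits], I) @ state

def decode(state):
    """Bob disentangles (CNOT, then H on the first qubit) and reads out both bits."""
    state = np.kron(H, I) @ (CNOT @ state)
    probabilities = np.abs(state) ** 2
    return format(int(np.argmax(probabilities)), "02b")

for bits in ("00", "01", "10", "11"):
    received = decode(encode(bits, bell_pair()))
    print(bits, "->", received)   # each two-bit message survives the one-qubit channel
```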

I have to admit to having been in a mental state that is somewhat equivalent to a quantum indeterminacy here, as I have been thinking through how to proceed with this posting. Part of me has wanted to in effect dive down an equivalent of Lewis Carroll’s rabbit hole and into some of the details buried under this admittedly brief, metaphorical explanation. Part of me has wanted to keep this simple and direct, even if that means leaving out areas of discourse that I at least find endlessly fascinating, with discussions of Dirac notation and linear algebra, and of Hilbert space and vectors in it – sorry, but I mention this for a reason here, even though I have chosen to follow the second of those two paths forward from here.

• Quantum computing and the basic theory behind it, and the emerging practice of it too, involve very long and very steep learning curves, as they contain within them a veritable flood of the unfamiliar to most, and of the disruptively New to all.
• And that unavoidable, 100% validatable truth will of necessity shape how the four to-address bullet points that I started this posting with and that I will discuss in detail, will be generally understood, let alone acted upon.

I repeat those topic points here, noting that my goal for this posting was to at least begin to address the technological side of quantum computing, in order to at least start to set a framework for thinking about these issues as they would arise in real world contexts:

• Where I will offer a reconsideration of risk and of benefits, and both as they would be viewed from a short-term and a longer-term perspective in this new and emerging context,
• A reconsideration of how those considerations would be addressed in a design and manufacturing context,
• The question of novelty and the challenges it creates when seeking to discern best possible technological development paths forward, where they can be clear when simple cosmetic changes are under consideration, but opaque and uncertain when more profound changes are involved,
• And lock-in and certainly as that becomes both more likely to arise, and more likely to become more limiting as novelty and the unexpected arise.

I will at least begin to more explicitly address these points and their issues starting in a next series installment. Meanwhile, here are two excellent reference works for anyone who would like to delve into the details, or at least a lot more of them than I have offered here (where I only skirted the edge of that rabbit hole in this posting):

• Bernhardt, C. (2019) Quantum Computing for Everyone. MIT Press.
• Hidary, J.D. (2019) Quantum Computing: An Applied Approach. Springer.

The author of the first of those book references claims that it should be understandable to anyone who is willing to put in some work on this and who is comfortable with high school mathematics. It makes use of linear algebra and a few other topic areas that are not usually included there, but it does not assume any prior knowledge of them. The second of those books uses more advanced mathematics and it presumes prior experience with it on the part of its readers. But it goes into correspondingly greater depth of coverage of this complex and fascinating topic too.

• Both books delve into issues such as quantum entanglement that are crucially important to quantum computing and to making it possible, so I do offer these references for a reason. And crucially importantly to this discussion, and as a perhaps teaser to prompt further reading, it is quantum entanglement that in effect connects a quantum computer together, so that the calculations carried out qubit-by-qubit in it can be connected together in carrying out larger and more complex calculations than any single information processing element (as touched upon above) could manage or encompass on its own, and no matter how it is recurringly used.
• And both books at least begin to address the issues of quantum computational algorithms and that is where the there-undefined mathematical terms that I cited in passing above come into this.
• All quantum computational algorithms function by shaping the probability distributions of possible values that can arise between that 0 and that 1 in an indeterminate state. And that is described in terms of mathematical machinery such as linear algebra and Hilbert space theory. And as all of the data that would be processed in or created by the execution of those algorithms, tends to be expressed in terms of Dirac notation, or bra–ket notation as it is also called, that actually becomes quite important here too.
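As a small, hedged illustration of that last point (again plain numpy, state vectors only, and no more than a toy): bra–ket notation is just linear algebra over a Hilbert space, and a quantum algorithm “shapes” outcome probabilities by making amplitudes interfere.

```python
import numpy as np

# Kets are column vectors in a (here two-dimensional) Hilbert space; a bra is the
# conjugate transpose of a ket, and <phi|psi> is an ordinary inner product.
ket0 = np.array([[1], [0]], dtype=complex)                    # |0>
bra0 = ket0.conj().T                                          # <0|
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

one_H = H @ ket0     # an even "smear" over 0 and 1
two_H = H @ one_H    # a second gate makes the amplitudes interfere; the 1 branch cancels

print(abs((bra0 @ one_H)[0, 0]) ** 2)   # ~0.5 : |<0|H|0>|^2
print(abs((bra0 @ two_H)[0, 0]) ** 2)   # ~1.0 : probability steered back onto |0>
```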

Setting aside this set of references for the moment, and my at least attempted argument in favor of looking deeper into the technology side of quantum computing through them, or through similar resources, I add that you can find this posting and this series and related material at Ubiquitous Computing and Communications – everywhere all the time 3. And also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as its topics Section VIII. And you can also find further related material in its Page 1 directory listing too.

Innovation, disruptive innovation and market volatility 53: innovative business development and the tools that drive it 23

Posted in business and convergent technologies, macroeconomics by Timothy Platt on July 11, 2020

This is my 53rd posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-52.)

I have been successively discussing a set of three basic topic points and their issues here since Part 43, that I repeat now as I continue that narrative:

1. The complexities and impact of scale, for both the innovation selling or licensing business (the innovation provider) and the innovation acquiring business.
2. And the issues of direct and indirect competition, and how carefully planned and executed technology transfer transactions there, can in fact create or enhance business-to-business collaboration opportunities too (as well as creating or enhancing business versus business competitive positions),
3. Where that may or may not create monopoly law risks in the process.

I have completed an at least first-take response to the above-repeated Point 1, which I began addressing in Part 43, and have been discussing the above Point 2 since Part 51. And my goal here is to continue that line of discussion as raised there and in Part 52 as well. And to round out this orienting starting note, after completing that I will turn to and address the above-repeated Point 3 too. But I begin here with Point 2 and its issues, and by citing a concept that I have mentioned in passing in this narrative but that calls for further and more systematic discussion in this context: lifecycles, and cycles in general.

More specifically, I am going to discuss three concepts here, for how their issues interact and help to shape the contexts that innovation-related business decisions would be made in, and both within single organizations and in competitive contexts:

• Innovation lifecycles and the business process cycles that they arise and play out in,
• Uncertainty (as shaped by both innovative novelty itself and by business systems friction as well), and
• Their financial implications for the businesses that might become involved in all of this.

I begin addressing that set of issues with the business process cycles of the first of those bullet points. Or to be more precise, I begin here with cyclical and noncyclical processes and patterns of them. Lifecycles, to use that term in this context and in a business operations sense, are cyclical processes and process flows that follow a consistent enough recurring pattern so that they can be step-by-step described, and in ways that would apply to essentially any recurrence of them.

That noted, I begin addressing those two more general (largely) cause and effect patterns: cyclical and noncyclical, by pointing out a detail that should be obvious. Noncyclical events do not recur, at least as a set, determinable pattern, and certainly not with proactively predictable details that would be of particular planning value for improving business efficiency or business competitiveness. If they did, those event types would be deemed cyclical, even if just loosely so.

But cyclical means repeatedly following some determinable process flow that returns to, and then repeats from, some (perhaps arbitrarily) determined set starting point. How can that assertion as so stated, and as grounded in the seeming absolutes of the above two paragraphs, be reconciled with the frequent vagaries of actual day-to-day business practice? I offer this point of clarification in order to address that question, and I add an assumption that I did in fact include in the above text.

• Businesses are, by their very nature, systems of stable and recurring processes, at least when viewed as structured organizations. And this applies as much to limited duration businesses such as seasonal Christmas tree outlets as it does to long-term and open-ended duration business ventures. The basic processes carried out are in fact almost always describable in cyclical terms, with standardization and consistency in them serving, as a general rule, as a source of risk limiting and value creating importance.
• But real world business process flows do not always proceed smoothly or entirely predictably.

So I set aside one-off situations and their resolutions here and focus on the recurring, and on issues and events and circumstances where it would offer value to a business to develop and carry through on more lessons-learned based, standardized cyclical approaches. And I address the uncertainties inherent in the above Point 2, and as expressly noted in my second set of topics bullet points here, from that perspective.

In an ideal world, all business activity could arise and be carried out on the basis of perfect and complete information that is communicated where it is needed, when it is needed and in fully actionable form. But this type of completely frictionless circumstance is never going to be possible in the real world. So I turn here to at least begin to add something of the complexities of the second of the three bullet points that I said above, I would address here: uncertainty and its causes and issues.

I mentioned in passing, a few paragraphs earlier in this posting, cyclical processes that are “loosely” so. And I at least begin addressing the questions and issues of uncertainty here, by more explicitly considering that. And I do so by positing a perhaps cartoon understanding of cyclical processes.

• Imagine, if you will, a chalk outline pattern drawn on a sidewalk. It consists of a roughly ring shaped pattern with a set of dots firmly drawn, and with connecting lines added in between them. And to complete this as a simple systems model that does not include feedback or other related reverse flow options or features, every dot: every node in this image, connects with two and only two other nodes through those connecting lines. And V-shaped, arrowhead-like wedges are drawn into each of those connecting lines, all pointing in the same, here let’s say clockwise, direction.
• If I were to label this as representing a business process flow, I would focus on those firmly drawn organizing loci: those dots. And to take that out of the abstract with a retail sales business example, one of those dots might be a customer request for an item in inventory that they would like to purchase, a next might be a step requesting that that item be brought out of inventory so it can be made available for immediate sale, another might involve finding and shipping it to where it would actually be processed and sold and shipped from, to that customer, and so on. And this becomes cyclical as this sale leads to both an update as to how many copies of that SKU item remain in inventory, replenishment of that there when more of that item has to be brought into the business (from wholesale sources or from the original manufacturer) to meet anticipated ongoing sales demands, and with updates to inventory availability information being provided to salespeople who would need this for a next customer – and repeat. (Feedback and what amount to reverse flow steps would include quality assurance reviews to keep this running smoothly and efficiently, feedback and follow through that address any problems that arose on at least an individual cycle instance, and of course feedback that would go into determining how much of that item the business needs to keep in stock as based on issues such as rate and volume of sales of it, and perceived public demand for it.)
• And with that example in mind, I explicitly make note of why I outlined that basic graphical representation the way I did with connecting lines and dots – precisely located single mathematical points. What would happen at each of those categorically identified benchmarked steps, would follow a single consistent recurring pattern. The lines between them might vary as for example where the truck moving item A from the warehouse it is being stored in, might be delayed in bringing it to a fulfillment center where it would be shipped from; they might have to make extra intermediate stops as they make other, similar deliveries. But the dots themselves – the basic steps in this cycle would be known and in this case with essentially complete information (e.g. about any in-transit delays as just cited.) Think of this as a frictionless systems example.
• Now add in information-based friction: business systems friction. Now, you begin to see changes made in the What and How of those ideally-mathematical point dots and with them more accurately represented as circles or patches, and even as patches with fuzzy and sometimes uncertain boundary edges. Here, the greater the diameter of a now-patch, the more variability there is likely to be as the people carrying out those steps have to adjust and even improvise, to complete them and as quickly as possible.
• Fuzziness there, correlates with incomplete information and both as it arrives where those patch identified tasks are being carried out, and as information would be shared from those points of transaction to others – in this cycle and overseeing it.
• And if and when a patch spreads out and diffuses out into a sufficient tipping point of fuzzy uncertainty, this whole cycle can break, becoming a succession of ad hoc limited noncyclical transactional processes.
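As a loose, purely illustrative Python sketch of that tipping point idea (with entirely hypothetical step names, durations and thresholds, standing in for no real business): treat the ring of dots as an ordered list of steps, give each step a “fuzziness” value standing in for its patch diameter, and watch how rising fuzziness first widens cycle-to-cycle variability and then begins to break cycles outright.

```python
import random

# Each step in the ring is (name, nominal duration, fuzziness); fuzziness stands in
# for the "patch diameter" above: how much a step varies when its information is poor.
SALES_CYCLE = [
    ("customer request",    1.0, 0.1),
    ("inventory pull",      2.0, 0.1),
    ("ship to fulfillment", 3.0, 0.1),
    ("sale and delivery",   2.0, 0.1),
    ("restock and update",  4.0, 0.1),
]

def run_cycle(steps, break_threshold=3.0):
    """Run one lap around the ring; a step that overruns badly 'breaks' the cycle."""
    total = 0.0
    for name, nominal, fuzziness in steps:
        actual = max(0.0, random.gauss(nominal, fuzziness * nominal))
        if actual > break_threshold * nominal:
            return None, name          # the cycle degenerates into ad hoc handling
        total += actual
    return total, None

if __name__ == "__main__":
    random.seed(7)
    for fuzz in (0.1, 0.5, 1.0):
        steps = [(name, duration, fuzz) for name, duration, _ in SALES_CYCLE]
        results = [run_cycle(steps) for _ in range(1000)]
        broken = sum(1 for total, _ in results if total is None)
        times = [total for total, _ in results if total is not None]
        spread = (max(times) - min(times)) if times else 0.0
        print(f"fuzziness={fuzz}: {broken}/1000 cycles broke, "
              f"completion-time spread {spread:.1f}")
```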

I have been writing this in terms of a strictly within-business context, and in terms of business systems friction as a breaker of clear, efficient, predictable and plannable cyclical process operations. But the types of change and uncertainty that are inherent in the pursuit and development of the innovative New per se, and of the disruptively novel innovative New in particular, can contribute to that loss of clarity and efficiency too. Let’s reconsider my cartoonishly outlined sales cycle example, as offered above, with this in mind. In an ideal world and absent the types of friction challenges that I write of here, that business and its hands-on employees and managers would have immediate and direct access to, and be able to plan and work from, a valid and detailed understanding of their marketplace and of who, at least demographically, would want to buy the above cited SKU item A. But the more disruptively new and novel that product is, the more uncertainty there will be as to who will in fact even want to buy and use it, and certainly as pioneer and earliest adopters of it.

And to bring financial considerations into this discussion, that is precisely where cash flow and working capital, and here-and-now reportable profitability enter this narrative, as do a range of other timing considerations.

I have been using this posting to build a conceptual framework that I will use when addressing the issues of direct and indirect competition in this context. I am going to complete that framework in a next series installment and at least begin to make use of it in discussing that complex of issues. And in the process, I will delve into how product lifecycles can and of necessity do connect to business cycles as addressed here, and with the types and degrees of innovative change and novelty involved, playing crucial defining roles there. And then, as noted above, after completing my discussion of the above-repeated general topics Point 2, I will turn to consider its accompanying Point 3 and its issues, as the possibilities of monopoly or restraint of trade concerns might arise, and definitely in a business to collaborative partner business, innovation transfer context.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 and Page 3 continuations. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 14: considering the perils of “technically sweet solutions”

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on June 29, 2020

This is my 15th installment to a series on infrastructure as work on it, and as possible work on it are variously prioritized and carried through upon, or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-13, plus its supplemental posting Part 4.5.)

I began this series by briefly and selectively outlining and discussing a succession of five specific case study examples of large scale infrastructure development and redevelopment projects, with that including both primarily positive role model, and primarily cautionary note negative examples. And in keeping with real world complexities as they arise in any contexts as large and complex as these, I selected examples there that combined both positive and negative features within them. Then I broke away from that expository pattern in Parts 10-12 to offer at least a first draft take on a set of general principles that I would argue should enter into any overall better practices guide for selecting, designing and carrying out such initiatives.

My goal here is to turn back to the specifics by reconsidering another historical example that I have previously looked into in this blog in an earlier series: an attempted greening of the Sahel region of Sub-Saharan Africa through development of a system of deep drilled artesian wells (see Planning Infrastructure to Meet Specific Goals and Needs, and not in Terms of Specific Technology Solutions 1). I was initially planning on addressing this in Part 13 of this series, but I broke with my initial organizational outline there to add a first-step, anticipatory note as to what might arise from the COVID-19 global pandemic that we face now, as efforts are made to reopen and to rebuild, and with at least significantly scaled national infrastructure efforts all but certain to be included there. I will turn to and discuss my Sahelian example here and now, and will do so with the issues and challenges noted in that inserted Part 13 in mind. And I begin by offering a general note regarding infrastructure programs and projects in general, as more fully addressed in Parts 10 through 12:

• I did not focus on specific technologies in those three postings for a reason. Technology changes and advances as new innovations emerge and advance, and as older tools and materials and approaches for using them become legacy and then disappear from any current or anticipated use. And even when a technology is still of use, it might or might not apply to the specific infrastructure initiatives and their needs that might be under consideration – or under immediate here-and-now development.
• But human involvement and human impact can be taken as an ongoing, automatic given.
• I have been discussing the now six infrastructure contexts that I have raised and selectively considered here, from a human and an interpersonal perspective because those are the true constants that can be found in any and all such endeavors. And ultimately, success and failure: immediate and long-term can only be measured in such terms.

To bring this lead-in note up to date here, that is most certainly the perspective that I espoused in Part 13. And I will continue pursuing it here too, using this example to further expand upon what it has to include if it is to offer any real value.

Read my earlier posting about this unfortunately negative, cautionary note example, and how it came to fail in ways that led to significant loss of life and to expanding environmental degradation. Read it with these at least seemingly simple questions in mind, if nothing else, as we face our current COVID-19 context, both intranationally and internationally:

• When we reopen and recover and rebuild from this crisis, what long-term challenges that it has brought to light will we address and how?
• And what longer-term and wider reaching challenges will we exacerbate, if not create outright from that, if we only take a short-term, here-and-now perspective as we plan and prioritize and carry through on whatever infrastructure and related efforts we do enter into?

I offered some briefly stated points of conclusion in my above cited 2014 “greening of the desert” posting that I repeat here for their renewed relevance:

• If you are to develop and institute an effective infrastructure change societally, you need to do so in ways that will gain widespread support and that will meet real needs. So you have to be prepared to make this work on a short timeframe and with a clear vision for moving forward.
• But at the same time you have to plan and develop and monitor and fine tune with an acute awareness of longer term considerations too, and with an awareness of what I would call, based on the above-cited narrative, potential water challenges. And these are always problems that are readily perceived and understood – in hindsight, but that can prove much more difficult to see or anticipate in foresight. Even the most devastating such problems can seem to arise from unexpected directions.

Let’s consider what that second point and its cautionary note mean, in the context of that Africa-based example. Providing water to a parched land, with a goal of enriching the lives of the many peoples of a large multinational region, is a noble undertaking.

• The people who set out to do this, sought to do good and they made every effort to actually do so. But they treated this as a strictly technological problem: one of digging wells and of controlling water flow so as to enable the effective use of it in the communities that would arise around those wells.
• But the technological was in fact only one aspect of this that was crucially important to consider and address there. It was at least as important to consider the historical and the sociological context of this, and of the people of this large and widespread region.
• Would this project have been carried out as it was, if cultural anthropologists who really knew the peoples of this region and who spoke their languages, had been brought into its planning and from the beginning?
• It was well known, as long established fact, that this region: the Sahel region of Sub-Saharan Africa, was repeatedly and consistently faced with recurring periods of rain, even if just in more limited quantities, interspersed with periods of parching, withering drought.
• All the puzzle pieces that went into this failing were known – including the fact that this vast pool of underground water was not being replenished from the outside in any way, so it was a fixed and limited resource that would in time be exhausted.

This was an infrastructure project that was developed and instituted as a good and even noble effort. It was grounded in the best of intentions. But the blinkered limitations of its planning, and the failure to understand or even see the pitfalls that were built into it as a result, made this a tragedy, and even an inevitable one. And circumstance had it that those wondrous deep drilled artesian wells began to run dry precisely as one of the region’s more severe if predictable droughts set in.

In retrospect, that confluence was at least partly predictable too. When the land became parched and the people of those artesian well-centered communities needed extra water for their now much larger herds of cattle and for their own larger numbers too, they opened the spigots as wide as they could, accelerating their race to deplete the reservoirs that they were tapping into for this. Drought in a way set the schedule for this failure.

• What types and sources of expertise should have been included in any such conversations, as this project was being planned?
• Looking to the first of those above-repeated bullet points from my 2014 posting (as edited here for this discussion), how can you best meet the needs that such a project would seek to fulfill, and gain buy-in and support for it, in a way that would not conflict with the basic understandings and the basic traditions of the people there, who would be most directly affected by it?

The largely nomadic herding communities of the Sahel were intelligent, and wise in the ways of their ancestors. But as vitally important as it was to gain their buy-in for whatever would be done, they were still largely illiterate and they did not have the educational background: the basic facts in their community knowledge bases, needed to understand all of the crucially important issues there. This does not mean they could not have been informed, and certainly if the information that they needed had been shared with them in their languages and in ways that they would see as making sense.

• So if you are to develop and institute an effective infrastructure change societally, you need to do so in ways that will gain widespread support and that will meet real needs, with a shared and agreed-to determination of what that even means, arrived at as a matter of genuinely informed consent.

Am I arguing here against deep artesian wells and against this type of project ever being carried out? No! I am arguing against simply carrying out this type of project blindly, as was arguably the case for what was done there. Am I arguing that the people who were involved in this, whether as technologist developers or as individuals and communities living in that region, were odiously wrong in what they did? No! People of good will and good intention on all sides of this made what were arguably the best decisions that they could have made there – given the information and the levels and types of insight and perspective that they could turn to as they reached their conclusions and made their decisions.

• What I am arguing is that all actually involved parties made what turned out to be fateful decisions, on the basis of faulty and incomplete information and on the basis of fundamentally limited perspective.

And if you look more widely at artesian well systems and the water resources that they can access, this project is not unique, either for its being carried out on the basis of incomplete information or for its potential for “law of unintended (and unexpected) consequences” failure.

I cite in that context a 2001 report from the United States based Natural Resources Defense Council, concerning deep water sourced contamination in the United States, in California. The challenges addressed there, arising from those artesian well systems, came to include significant surface water contamination, with that creating what became all but intractable problems for agricultural irrigation in at least some areas of the state, along with problems with the quality of essential potable water too (see their report: California’s Contaminated Groundwater.)

As a surface water and agricultural, and a wildlife and environmental challenge example of the impact of contamination from subsurface, deep well-sourced water, I would cite selenium contamination, where trace levels of that element serve as an essential micronutrient for animals, humans included, but where excessive levels are teratogenic poisons too. When deep sourced well water is contaminated with a nonvolatile contaminant like that and is brought to the surface, the contaminant remains in place as the water that conveyed it to the surface evaporates away. And more such water brings up more of it, and it becomes more and more concentrated there.

I could just as easily cite examples where farmland has been poisoned by salinization, as salts accumulate in the same way. Israel has faced real problems of that type, even when using drip irrigation and similar techniques both to reduce the amounts of water needed and to help limit the impact of this problem.

In the next installment to this series I am going to turn back to a discussion outline that I first proposed in Part 6, and discuss infrastructure development as envisioned and carried out by the Communist Party of China and the government of the People’s Republic of China. And as part of that continuation I will also discuss Soviet Union era Russian infrastructure and its drivers, as they took place within that nation. And I will, of course, also touch on the issues of post-Soviet Russia and of Vladimir Putin and his ambitions and actions there. That, for both China and Russia, is where infrastructure development meets authoritarianism, and in a form and to a degree that has never been possible until now, so both of those sources of case study material are important for better understanding our 21st century context.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 21

Posted in business and convergent technologies, social networking and business by Timothy Platt on June 14, 2020

This is my 21st installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-20.)

I said at the end of Part 20 that I would at least begin to discuss the cyber policies and doctrines that at least appear to be in place in the United States and in Russia, as I write this. I will, of necessity, also at least comparatively discuss the People’s Republic of China (China) and the Democratic People’s Republic of Korea (North Korea) in the process. But before entering into that narrative, and with immediately preceding installments to this series in mind as I prepare to do so, I am going to offer an at least briefly stated framework for that, by considering defense policies, and more specifically cyber defense policies in general. I will then address those specific examples for how they fit into or diverge from that more general understanding. And with that noted as a point of orientation for what is to follow here, I begin with the absolute basics and with a consideration of what offensive and defensive mean, in a practical, implementable context.

There is a saying to the effect that the best defense is a strong offense. That is in fact, simply a proffered justification for leading with an offense and for proactively attacking. But there still is at least a gleam of truth buried in that short-term considered, self-serving sentiment. A best defense can in fact be found in visibly having a strong enough capacity for reaching out (or striking back) militarily, so as to deter any proactive offensive action that might be taken by others. A defensive policy as a system of organized capabilities, can be built around that as an underlying doctrine. But that can only work if:

• Potential aggressors actually know that they face both defensive and offensive strength,
• And if they calculate correlation of forces and force symmetry and asymmetry in their decision making, in at least comparable ways to those of the would-be defender.

Miscommunications and misunderstandings in this, lead to what might otherwise be avoidable open, active conflicts. Would the Japanese have even seriously considered attacking Pearl Harbor in 1941 if they had known in advance what they would unleash against themselves, from what began as their “surprise attack” on that sunny Sunday morning? They saw the American public as hewing to a largely isolationist approach when facing the rise of Nazi Germany, and they saw Americans as weak and indecisive. They saw those ships in harbor as being vulnerable to attack and certainly when they had developed their special new shallow water capable aircraft-launched anti-ship torpedoes. And they were confident that if they timed everything correctly they could both attack and destroy America’s aircraft carriers at harbor and block the harbor itself with the wreckage of sunken ships – forcing the United States to a negotiating table that they would dominate as they made their demands on behalf of Japan and their Greater East Asia Co-Prosperity Sphere.

• As history shows, none of that worked out as planned.

Von Clausewitz argued a case for seeing war as an alternative form of diplomacy. But even he would have agreed that visible overwhelming strength in a potential adversary would make that a weak and risky negotiating tactic. And that brings me to the issues that I have raised in the last few installments to this series, where the tools used to calculate potentially deployable strength of force, as conceived in a pre-cyber context, no longer offer real value, and certainly where cyber weapons are likely to be deployed. We live in a world where Japan’s 1941 miscalculation, or at least its modern equivalents, have to be considered likely in any potential conflict scenario. But at the same time, we live in a world where a seeming fig leaf cover of anonymity as to the actual source of a cyber-attack can still make such action seem overwhelmingly appealing, and as “an alternative form of diplomacy” if nothing else.

• Defensive policy and doctrine have to be solidly grounded in empirical fact. And such fact has to be widely enough sourced so that civilian and military planners and strategists, working together, can understand the world as seen through the eyes of their potential adversaries, when arriving at them.
• And the tools that would be used to assess risks faced need to work and reliably so for the contexts in which they would be deployed.

Japan felt itself to be under direct ongoing economic attack from the trade restrictions and embargoes that it and its industrial base faced, coming out of the West and out of the United States and its governmental decisions in particular. The American decision to cut off Japan’s supply of imported oil and other strategic materials was in fact a deciding factor leading to Japan’s decision to directly attack American interests in the Pacific Ocean (at Pearl Harbor, the Philippines and elsewhere) that day.

The United States was taking actions such as the imposition of that oil embargo in response to Japan’s militarism in its attacks on China and elsewhere, and as a reaction to the realization that Japan’s expansionism could only serve to increase instability throughout Asia and heighten overall global risk.

Neither side stopped to look at the types of consequences that their decisions and actions could, and perhaps inevitably would, lead to. So in a fundamental sense, both nations backed into the conflict that exploded into being on December 7, 1941, because neither could arrive at a defensive policy or doctrine that was solidly grounded in empirical fact, and in a sufficiently wide understanding of what types and details of evidentiary fact were even needed for that.

To return to von Clausewitz and his case for seeing war as an alternative form of diplomacy: it would perhaps make more sense to view the risk of conflict as a diplomacy framing tool, one that can be used to bring less directly confrontational negotiations into a higher prioritized focus.

• All of this up to here is predicated on an assumption that the best defense precludes and even effectively eliminates the chance of having to face and deal with an actual offense: an actual attack.
• And the development of weapons of mass destruction, and of what would amount to absolute mutual annihilation from their use, has only served to bring that point of understanding into focus as an imperative.

But even there, and with at least the possibilities of nuclear weapons hanging over any possible conflict, both defense and offense will happen. There have been few if any minutes: few if any individual seconds since the end of World War II, when there have been no open conflicts anywhere on Earth.

• While the immediate challenge of an attack can force a defensive response, or a defense followed by or even coordinately accompanied by offensive action, preparatory defensive action should be oriented towards making that type of reactive response unneeded by precluding any offensive attacks in the first place.
• Such defensive planning and preparation needs to be grounded in a clear-cut understanding of calculable risks faced and for both sides of any potential conflict,
• And an equally clear-cut understanding of what motivates potential adversaries and informs their decision making.
• And both sides to that have to include making sure that any potential adversary understands you and in corresponding ways too.
• That level and type of higher level understanding and on both counts, is essential to creating a conflict-free diplomatic framework. But at least selective uncertainty can offer value there too and particularly where that means uncertainty at a specific detail level, and on both sides, as to precisely how strong a potential adversary is and precisely where – with that increasing the presumable risk of any possible offensive action that might be taken.
• That is important. Uncertainty increases risk faced from action taken. It degrades any apparent value or benefit that might be expected from taking what might prove to have been ill considered action.
• So far these points apply with a measure of clarity and certainty in a pre-cyber arena. But when you add an at least likely presumption of realistically possible “plausible deniability” on the part of possible aggressors into that equation, as has to be done in any cyber-conflict oriented defensive planning, it quickly becomes apparent that
• Cyber-conflict capabilities increase uncertainty, but without that corresponding increase in understood risk, and certainly as an automatic given. And the destabilizing consequences of this are not always all that well understood.

I have used the terms reactive and proactive on an ongoing basis in this series, and in this posting itself up to here. And I have written of effective understanding and of blind spots in that, when assessing and planning for possible conflict risks, so as to limit if not eliminate them. But to round out this phase at least, of this more general discussion, I would at least clarify what I seek to encompass in those terms. And I begin doing so by setting as a for-here axiomatic assumption, a presumption that a cyber-attack’s consequences, for their adverse effects and costs, might outweigh the costs of any realistic effort to prevent that attack. So any potential victim of an attack of this sort would see it as both desirable and even prudently necessary to forestall and prevent that from happening – if they can proactively do so.

• Proactive in this case would mean accepting and paying the cost of preventing an attack, and with that lessening the chance of such action being taken, and certainly where knowledge of that might forestall action by a potential adversary. So this, from an actuarial, insurance-rate calculation model’s perspective, would mean accepting the cost of any defensive preparations and actions taken. But such action might still be taken and a realized attack faced. So proactive as considered here still has to include an added-in, actuarially based, likelihood-of-occurrence-scaled percentage of the likely cost of such an attack if it takes place anyway. (Think of this as a necessary costs-plus-reserves-in-case-of calculation, where a conservative estimation would add extra remediation and recovery reserves just in case.)
• Reactive in this case can mean attempting to be proactive in developing aggression-preventing defensive capabilities, but failing in that due, for example, to the disruptively unexpected nature of a cyber attack as actually launched and faced. But more generic defensive and related preparation can still in many cases limit the impact and cost faced, even from a more specifically unexpected type of attack. (Consider developing and deploying firewall-separated parallel systems for critically important aspects of an overall information architecture as a case in point example there: capabilities that could be utilized even if one of those duplicative systems was compromised by a novel and unexpected-for-detail type of cyber-attack.) So this might still mean limiting the impact and the overall cost of an attack, and this and the (now failed) above proactive-attempted scenario would both still be cost-limiting.
• But reactive can also mean entirely reactive, with no real defensive preparation or action entered into that would directly address cyber risk as discussed here. Then the full costs of any conventional-only defensive expenses would be incurred, as would the full costs of any attacks actually faced – where a potential adversary might be more likely to attack to the degree that they assume a failure to defend from an attack. (To take this out of the abstract, consider how Russia hacked the 2016 US presidential and congressional elections, and how the Trump administration has actively sought both to block any effective investigation into that and to limit any proactive defense against its being repeated!) A simple numeric sketch comparing these proactive and reactive cost scenarios appears after the list below.

Reactive and proactive carry costs, and can risk more and larger costs depending on how they are carried out. And an awareness of how risks and costs are understood and addressed by a nation (on the part of a potential adversary facing them), can only serve to affect both the range of the risks that are faced and the likelihood of those costs having to be paid. This becomes important when considering which specific defensive approaches and suites of them to take, contingent on their cost-effectiveness, and how concurrent measures might reinforce each other, and when considering:

• Risks faced if an adverse event happens anyway,
• The likelihood of those events happening,
• And the costs that would accrue if they did as balanced against the costs of efforts that might be taken to prevent or limit them.
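As a minimal sketch of the actuarial framing just outlined: the figures below are entirely invented for illustration (none of them come from this posting or from any real assessment), and the comparison itself is deliberately simplified to a single expected-cost calculation per scenario.

```python
# All figures are hypothetical, in arbitrary cost units, for illustration only.
prevention_cost        = 2.0    # cost of proactive cyber-defensive preparation
attack_cost            = 50.0   # full cost of a successful attack, absent preparation
p_attack_if_prepared   = 0.05   # assumed likelihood of a successful attack despite preparation
p_attack_if_unprepared = 0.40   # assumed likelihood with no cyber-specific preparation
residual_fraction      = 0.30   # fraction of the full attack cost still incurred when prepared

# Proactive: pay for preparation, plus a likelihood-scaled reserve in case an
# (attenuated) attack succeeds anyway.
proactive_expected = prevention_cost + p_attack_if_prepared * residual_fraction * attack_cost

# Entirely reactive: no cyber-specific preparation, so the full expected attack cost is carried.
reactive_expected = p_attack_if_unprepared * attack_cost

print(f"proactive expected cost: {proactive_expected:.2f}")          # 2.75
print(f"entirely reactive expected cost: {reactive_expected:.2f}")   # 20.00
```

The numbers themselves are arbitrary; the point is only that the comparison turns on the likelihoods and residual fractions assumed, which is exactly where uncertainty, and a potential adversary’s reading of it, enters in.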

Note: strictly monetary costs are only part of the actual total costs faced there. Put bluntly, what was the cost of losing those ships and the supportive docking and related facilities at Pearl Harbor on December 7, 1941? Few would argue against a conclusion that those losses only represented a small part of the actual overall costs faced that day, and of the loss that America and Americans actually faced from that event, even just there in the harbor itself.

And this brings me to a final area of consideration that I would include here in this more general orienting discussion of defensive and cyber defense policies and doctrines. That is the issue of systematic versus ad hoc in how all of this would be conceived and carried out. And that will become crucially important as I turn to consider Russian and American policies and practices in the next installment to this series.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 3 and also see that directory’s Page 1 and Page 2.

Rethinking the dynamics of software development and its economics in businesses 10

Posted in business and convergent technologies by Timothy Platt on May 30, 2020

This is my 10th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-9.)

I have been successively discussing each of a series of historically grounded paradigm defining steps in the development and evolution of computer software in this series, starting with direct machine language coding and moving forward from that. See any of Parts 5-9 for the complete list of them that I will have addressed by the end of this series, as I started with an initially shorter list that I have since added to.

The first five of those developmental stepping stone advancements can arguably be considered to represent mature and even settled technology implementation stages. And the sixth: language-oriented programming can be considered a transition stage advancement. I began addressing it in already-developed and mature technology-grounded terms with a Point 6 briefly stated definition of what it is. Then I added in self-learning and ontological self-development as a possibility there and reframed it in more still-to-come terms.

I begin this posting by repeating the updated version of that orienting Point 6 statement as I continue addressing its issues here:

6.0 Prime: Language-oriented programming seeks to provide customized computer languages with specialized grammars and syntaxes that would make them more optimally efficient in representing and solving complex problems that current, more generically structured computer languages could not resolve as efficiently. In principle, a new problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving it, or some class of such problems, might start out as a “factory standard” offering (n.b. as in Point 6 as originally offered in this series, where in that case, “factory standard” is what it would remain, barring human programming updates.) But there is no reason why a capability for self-learning and a within-implementation capacity for further ontological self-development and change, could not be built into that.
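To make that Point 6.0 Prime wording a little more concrete, here is a deliberately tiny, hypothetical sketch of the basic idea: a problem-specific “language” with its own narrow grammar, interpreted by a host program. The commands, the routing task and every name in it are invented for illustration, and no self-learning is included here.

```python
# A toy, problem-specific language: each line is "<command> <arguments>", with a
# grammar chosen for one narrow task (routing items between named locations)
# rather than for general-purpose programming. Hypothetical commands and names.
PROGRAM = """
route A warehouse fulfillment_center
route A fulfillment_center customer
report A
"""

def run(program):
    routes = {}
    for line in program.strip().splitlines():
        command, *args = line.split()
        if command == "route":           # route <item> <from> <to>
            item, origin, destination = args
            routes.setdefault(item, []).append((origin, destination))
        elif command == "report":        # report <item>
            (item,) = args
            print(item, "->", routes.get(item, []))
        else:
            raise ValueError(f"unknown command: {command}")

run(PROGRAM)
```

A self-learning variant, of the kind Point 6.0 Prime allows for, would be one in which that grammar and its interpreter are themselves revised over time from the data and usage patterns the system encounters – which is exactly where the divergence issues discussed below come in.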

And I continue this posting by acknowledging a point of detail that I expect anyone who works in software development would already know. Language-oriented programming may be interesting as a topic of discussion but it is, and it will remain, a developmental backwater – a specialized side story when considered in the light of software development and evolution as a whole, and certainly when looking for areas of greatest long-term impact in all of that.

I freely agree; that is a valid assessment of this still genuinely paradigmatic development step. But I offer it here and delve into its issues and its possible issues moving forward, for a reason. I see it as a bellwether example, and one that is perhaps particularly important to cite and discuss here precisely because it is a side-issue development for most. So very few if any software developers would approach it with strongly held preferences, or even professional biases towards one implementation of it or another, let alone blinder-creating preconceptions as to its basic underlying axiomatic assumptions or theory. Language-oriented programming as such presents itself as a clutter-free example that I can work with here. I can cite and use that developmental step as a largely preconception-free platform for discussing issues such as perhaps-unexpected side effects of self-learning systems, as they will become more widely important in this 21st century. And it is with that noted that I continue my discussion of this paradigmatic step in software development, where I left off at the end of Part 9: with the issues of adaptive peak models, and with how that approach to understanding evolutionary development can describe and explain how different instantiations of what start out as line-for-line identical programming code can, and undoubtedly will, self-develop ontologically into very different programming capabilities – capabilities that reach very different levels of performance effectiveness and that create very different risk and benefit profiles for the businesses that own and use them.
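As a minimal sketch of that adaptive peak image, and only as a metaphor: below, two line-for-line identical hill-climbing routines are started from different positions on an invented two-peaked “fitness landscape”, standing in for two businesses whose differing data and contexts give their otherwise identical self-learning code different starting points. Everything here, the landscape, the starting points and the step sizes, is an assumption made for illustration.

```python
# Illustrative only: a two-peaked "fitness landscape" and a simple hill climber.
def fitness(x):
    # A modest adaptive peak near x = 1 and a higher one near x = 4.
    return 1.0 / (1.0 + (x - 1.0) ** 2) + 2.0 / (1.0 + (x - 4.0) ** 2)

def climb(x, step=0.01, iterations=2000):
    """Identical code for both 'companies': move only while small steps improve fitness."""
    for _ in range(iterations):
        if fitness(x + step) > fitness(x):
            x += step
        elif fitness(x - step) > fitness(x):
            x -= step
    return x

starting_points = {"Company A": 0.0, "Company B": 5.0}   # hypothetical differing data contexts
for name, start in starting_points.items():
    peak = climb(start)
    print(f"{name}: settles near x = {peak:.2f} with fitness {fitness(peak):.2f}")
```

The same code, run to completion, ends up on different peaks with very different performance levels – which is the sense in which initially identical instantiations can come to carry very different benefit and risk profiles.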

I began addressing that complex of issues in my Part 7, Part 8 and Part 9 discussion of this developmental step when for example, I wrote of businesses as being information driven:

• And more specifically when I wrote of businesses (the primary customers for any possible language-oriented programming products by far) facing conflicts of need and of risk and benefits perception, as they decide what of their confidential and proprietary data and processed knowledge to share with an outside software developer for that.

Consider the following generic scenario (as already made use of earlier here in related contexts):

• There are always going to be problems and challenges that arise that are very individual-business specific, and particularly where they directly impact upon features or functionalities of that business that are at the core of what makes it unique – that set it apart. Uniqueness, or at least its functional equivalent in a business, can give it a distinct competitive edge. But even when such a point of distinction offers that, it still might need to be adjusted in practical implementation if it is to accomplish that as fully as possible,
• It might have to be updated to accommodate marketplace and other change taking place around that business,
• Or it might need to be adjusted or even more fundamentally rebuilt to accommodate scaling requirements, where its success in and of itself might come to reveal or even create problems as that business grows.
• Those are only a few possibilities for how a need for change can arise and even in this more special-case type of context.
• But many if not most of the really significant problems that arise that businesses have to be able to address, arise for entire business sectors, entire industries or even for wide swaths of businesses as drawn from seemingly all such natural divisions (e.g. for all businesses that need to be able to scale up beyond what turn out to be tipping point scales, for how they manage commonly shared online business requirements.)
• Let’s assume a problem, or rather a class of such problems that are widely faced, that arise in massively and even seemingly open-endedly data requiring contexts, and that essentially every business that is large enough to have a significantly scaled in-house Information Technology department is going to want to at least significantly keep in-house for its resolution and certainly where that means their maintaining positive control over their crucially sensitive business intelligence.
• And with that, I have re-set up the scenario of a software development company, operating in a business to (big) business market, developing and offering tools that they would sell access to, and to all willing to enter into sales and licensing agreements with them.
• And that brings me back to all of those initially identical instantiations of a software package that those business clients would purchase access to, and use in developing their own in-house solutions to the specific instances of that general problem class that they face as they particularly face it with their own business model and in the context of their own proprietary knowledge and their own data stores.

I have set this up in terms that directly connect with and even build upon the line of reasoning offered in Parts 8 and 9. But I continue from here by adding a second set of issues to that mix that would also contribute to a software development tool purchasing Company A ending up with a very different product than their competitor, Company B does, and when as above, they purchased and initially received the exact same computer code to work from.

The key word that I will enter in here is “disruptive.” And I begin addressing that and its issues as they arise in this context, by proposing a perhaps simplest case scenario. Let’s assume that those two companies: A and B freely offer direct and significantly full representations of their relevant businesses’ operational and strategic knowledge bases, to a software developer that would provide them both with some problem-class-specific language-oriented programming tool. Let’s assume that they openly and even transparently make visible, in that specific and presumably controlled context, all of the information that they have that they would have to address for meaningful context when they make use of the specific software tools that they would develop, using the language-oriented programming tools that they would buy access to from that software provider. So nothing is being intentionally hidden there and from either of them.

And to add an air of realism to that, let’s specifically assume that they can enter into legal agreements that even their most conservative legal counsels would find acceptable for doing this, with the software developer who would create that tool building tool for them. And let’s assume that that software developer can and does make prudently considered, realistic use of that data and processed knowledge in building this language developing tool, and that they place equal weight on what A and B have contributed there.

Disruptive equals uncertain in all of this and certainly when considered from any realistic risk management perspective.

• It is essentially certain that any problem that a Company A or B faces, that would call for this type of specialized coding language if they are going to be able to efficiently manage it, is going to be disruptively novel, and probably so for a wide range of such enterprises. Routine issues and challenges would never call for this type of effort or expense, and they would never justify that type of information exposure risk either. And the more disruptively novel that challenge is, the more unknowns it is likely to contain and the more uncertainty such a business will face in them,
• And the more uncertainty they will likely also be facing in even knowing precisely what data to bring to bear there, and with what assurance of completeness and reliability for at least some of those perhaps-key data variables,
• And the more uncertainty as to precisely what tools (here language-oriented programming tools) would offer the greatest value for that and certainly when they are considered in detail.
• What would be included and how in such tool making tools, and what would be tested to validate them as created and offered to market, and how?

Disruptive novelty creates uncertainty; that can be taken as a given. But perhaps more importantly, it creates multiple layers of uncertainty that, at best, might only add together in something like a simple linear manner to create an overall impact. What amount to multiplicative effects there have to be considered too (a brief numeric sketch of that additive versus multiplicative distinction follows the two questions below). So when does the balance between certainty and uncertainty tip towards a realistic presumption of functional utility, and when would it best be considered a call to keep on working and improving? And this, in a Point 6.0 Prime context, translates into two simple questions:

• Is this language development product (that we want to begin shipping out and selling or licensing now) ready for release as a stable software product (in the more usual context of a standard alpha and beta testing protocol)?
• And when the product that we ship is going to just begin to develop and evolve from that starting point, is that type of validation approach even sufficient in principle?
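Here is the brief numeric sketch promised above, with invented figures: three independent layers of uncertainty, each treated as a possible ±20% swing in outcome, compared under a simple additive reading and under a compounding, multiplicative one.

```python
# Illustrative figures only: three independent uncertainty layers of +/-20% each.
layers = [0.20, 0.20, 0.20]

additive_worst_case = sum(layers)          # 0.60: a 60% swing if effects simply add

multiplicative = 1.0
for u in layers:
    multiplicative *= (1.0 + u)            # layers compound instead of adding
multiplicative_worst_case = multiplicative - 1.0   # 1.2**3 - 1 = 0.728: roughly a 73% swing

print(f"additive worst case:       {additive_worst_case:.0%}")
print(f"multiplicative worst case: {multiplicative_worst_case:.1%}")
```

The gap between those two figures grows quickly as more layers are added, which is why treating layered uncertainty as merely additive tends to understate the overall impact.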

I am going to continue this overall discussion in a next installment to this series where I will turn to the next of eight basic development paradigms that I would analytically consider in this series: artificial intelligence programming. Then after delving at least selectively into that, insofar as that massive area of possible discourse might belong in this series, I will similarly discuss quantum computing. And then, to round out this anticipatory note, I will step back and consider this flow of history and its possible lessons as a source of what might be considered general principles.

In anticipation of that and to put this posting into perspective for all that will follow here, the questions that I just raised here from a language-oriented programming software developer perspective, have their counterparts for the next two paradigmatic steps to address here too.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 24

Posted in business and convergent technologies, strategy and planning by Timothy Platt on May 27, 2020

This is my 24th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-23.)

I have been discussing two basic paradigmatic models of how individuals and communities respond to change and innovation and to New in general here, since Part 16:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adaptors on to include eventual late and last adaptors (a minimal numeric sketch of such a curve follows this list), and
• Patterns of global flattening and its global wrinkling, pushback alternative.
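For readers who have not worked with such curves directly, here is a minimal numeric sketch of the standard diffusion idea. It uses a plain logistic S-curve as a stand-in for cumulative adoption, with conventional, approximate category cut-offs; the time scale, steepness and midpoint are all invented for illustration and describe no particular innovation.

```python
import math

# Illustrative only: a logistic S-curve as a stand-in for cumulative adoption over time.
def cumulative_adoption(t, midpoint=10.0, steepness=0.6):
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Approximate, conventional cut-offs between adopter categories (fractions of the market).
cutoffs = [
    (0.025, "pioneer adaptors on board"),
    (0.16,  "early adaptors on board"),
    (0.50,  "early majority on board"),
    (0.84,  "late majority on board"),
    (0.975, "only last adaptors remain"),
]

for cutoff, label in cutoffs:
    t = 0
    while t < 100 and cumulative_adoption(t) < cutoff:   # coarse search for the crossing time
        t += 1
    print(f"{label:<28} by roughly t = {t}")
```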

And in the course of that I have raised as an important point of consideration, how social media has become a crucially important organizing factor in shaping how both of those approaches to innovation and to change in general would be perceived and carried through upon, and certainly on an individual innovation by individual innovation basis.

More traditional news and opinion perspectives as shared through central broadcasting channels still retain importance there, and so do government-sourced and other larger scale organized channels. Consider business sponsored lobbyists and lobbying organizations, and their opinion and judgment shaping influence, as just one possible example of “other” there. But online social media, with its reviews and evaluations, positive and negative, has come to significantly influence all of this now.

• All of these sources of influence play important roles there.
• And perhaps just as importantly the once perhaps traditional boundaries between all of those channels of information and opinion sharing have blurred.
• And social media has played a key role in that.

And with that noted, I arrive here at the issues that I said that I would at least begin to address in this posting, as repeated from the end of Part 23 in its anticipatory note for what would follow here:

• I will reconsider individual and social group, and governmental and other organizational-level influence in both creating and shaping conversations about change and about innovation.

I added that I would pursue that narrative to come, building from the working examples that I have already been discussing in Part 23. And I will at least briefly address them again in what follows. But before doing so and to set a stage for more fully addressing information and opinion sharing channels in general here, I will begin by raising and discussing two new examples:

• Antibiotics and their widespread use, and
• Vaccinations, and certainly as they have become vilified, particularly in online social media.

I begin here with the first of them. Antibiotics and their development would at the very least qualify as a strong candidate for being considered the most significant healthcare advancement of the 20th century. They turned what were once death sentence scourges into readily curable ailments, and they have saved tens of millions and even hundreds of millions of lives in the process. And they are now increasingly failing us, as progressively more widely antibiotic-resistant strains of once readily curable disease-causing bacteria arise and proliferate, and for wider and wider ranges of disease types. How? Why?

I cite two developments here in response to those questions, developments that I would argue synergistically contribute to answering them: two still ongoing developments that significantly help to both create and drive that disturbing trend. One of them is the vast overuse and misuse of antibiotics in agribusiness, and the other is the widespread overuse of these medications by members of the general public, both as prescribed drugs and as over the counter, self-medicating treatments.

• Those assertions of problems faced are not in any way uniquely mine alone; I am simply acknowledging what have already become widely established truths when offering the immediately preceding paragraph here – truths that are widely established in principle and in the abstract and as they would apply to others, and even if they are not always as commonly followed for how they would more individually apply to ourselves and in day-to-day practice.

How does this example fit into this narrative, and as a matter of who is communicating what to whom and through which channels and how? I would begin addressing that question by considering food, and particularly meat and poultry production, and the organic and minimally treated food movements that have arisen and developed, at least for relevant detail, as a response to this specific challenge. And I begin so by at least acknowledging other, more wide-ranging consumer-challenging agribusiness practices that antibiotics overuse and misuse take place in the context of (e.g. the use of pesticides and herbicides, and of genetically modified plant crops that, for example, allow growers to use massively increased levels of those chemicals to control weeds and pests without killing those crops in the process.) Roundup Ready crop varietals for corn, soybeans and other food staples come to immediate mind as poster child examples there.

Agribusiness wants to use antibiotics in animal feed, and in massive quantities that outstrip any medical use of these drugs, because animals that are being raised for slaughter on them grow to size faster, and because more of them survive in a healthy enough condition for them to be profitably brought to market. So adding in those antibiotics carries a direct financial cost – but the direct financial return on that investment is many times higher.

But an ever-increasing, and ever more vocal, segment of the marketplace and of the pool of consumers that those businesses have to be able to sell to has come to see this practice as problematical. And they have organized, both socially and politically, on this issue, and they have been successfully advancing this cause for years now.

The state of California, as a crucially important example here for the overall percentage of food consumed in the entire United States that is produced there, passed its first such measure, the Organic Foods Production Act of 1990, thirty years ago. Other state level and national legislation has been drafted and passed in follow-up to that, and with California legislation serving as a role model for essentially all of it. And that effort is still ongoing, with California still playing what can be seen as a trend setting role for all of this.

More recently that has meant passage of the California Organic Products Act of 2003 and the California Organic Food and Farming Act of 2016. And while the use and overuse of antibiotics has only been acted upon as one of many crucial areas of consideration in all of this, it has been addressed there, and with intense lobbying on both (all) sides of that complex of issues.

This is an area of public discourse where conflicts of interest have pitted public perception and public interests against big agribusiness perception and interests. And a great deal of agribusiness as an industrial sector has sought both to maintain its old practices and to market itself as meeting the perceived needs and preferences of its target markets, by adding organic lines to its offerings if nothing else, and with “antibiotic free” included there as a prominent part of its new labeling. This is all shared-message driven, and from both the consumer and the producer side and from other interested parties as well. And the overall consequence of this, and of the lobbying that enters into the shaping of what has now become a long-standing flood of legislation in many locales and in many nations, is that many large agribusiness producers still overuse antibiotics, but with an acute awareness of marketplace response and of the acceptance or resistance curves that they follow.

What of over-prescription and the completely inappropriate prescription of antibiotics, as for example occurs when physicians prescribe powerful antibiotics to patients who they know have viral infections that those drugs cannot affect? What of consumers overusing over the counter antibiotics such as antibiotic creams, for essentially any and every scrape or bruise and regardless of actual need? Those issues are not being addressed, and certainly as of this writing, anywhere near as actively as the challenge of food quality has been in this. And where they are, as abstract statements of principle, it is much rarer that any of that filters down to influence, let alone change, individual behavior.

So as a matter of achieving any real impact, the focus that has been placed on antibiotics misuse and overuse has mostly been on food quality, and much less on the development of antibiotic resistant pathogens from how those drugs are used in a more normative medical care context – even though both sides of that are critically important there.

What would I cite at this point in this narrative as more generally stated issues that arise from that type of example? Several come immediately to mind, including:

• The simple fact that neither of the basic paradigms of acceptance, and of resistance to it, that I have been discussing here arises, and certainly not as a matter of general principle, as a result of the spread and acceptance of single influencing voices (with any such voice presumably holding greater or lesser positive influence depending on who their messages are reaching and where.)
• And few if any such acceptance or resistance patterns of the types that I raise here by way of this example, develop as the consequence of conflict between just two competing visions and understandings.
• They arise for their levels and degrees of impact, as a consequence of the sometimes conflict, sometimes agreement that arises among multiple competing voices, each representing separate if at times overlapping agendas.
• And that is complicated by (simplified by?) the fact that few if any of the people who ultimately find their places on those innovation diffusion curves actually look all that far beyond their own more usual sources of news and opinion – with that in effect serving to conflate, or at least highly correlate, larger patterns of acceptance or rejection as sets of issues are concurrently reported and commented upon there.
• That means that few if any of the people who would enter into these accept or reject decisions, or accept now or later decisions as the case may be, would be likely to significantly shift their positions on those curves in general. To clarify this point and its presumption, earlier adaptors who focus on news and opinions that are supportive of change and of accepting and making use of the New, are likely to remain earlier adaptors. And late and last adaptors are likely to remain as such too, given their information and opinion filters in place. (I will come back to this presumption a bit later in this series, and to some axiomatic assumptions built into these bullet points, and particularly when considering the types of possible risks and benefits faced. And that, among other things, will mean my raising questions as to the relevancy of this line of reasoning to the context of this type of example, where an innovation would not be directly used or not by the members of evaluating communities; it is just the results of pursuing and developing and using them that would be – and with the positives and negatives of that more often unequally distributed.)

I end this line of discussion, at least for here and now, on one final perhaps-ironic note. I have been writing here of agribusiness, and of industrial scale farming, as a source of problems and challenges – which it can be. But at the same time, the Green Revolution, or the third agricultural revolution in overall human history as it is also sometimes called, has made it possible to feed billions more people, and quite literally so, than earlier agricultural approaches could have made possible. Are there still people who suffer from hunger and malnutrition in the world? Yes, and well considered estimates of how many fit into that category worldwide still range as high as 700 million or so. But the Green Revolution and the agribusiness scaled production of food that has made it a reality have ended famines on what would otherwise have been all but apocalyptic scales. See this piece on the Green Revolution in India as a case in point example there. So I have to add one more area of general issues that arises from this example, that will at the very least have to be considered for other widely impactful innovations as well:

• How scale of impact and scale of perspective can and do have both quantitative and qualitative impact, both on what an innovation is and on how it might be perceived and evaluated.

I am going to continue this discussion in a next series installment with a discussion of the second new example as offered above: vaccinations and the acceptance and the pushback that they face as both individually protective healthcare measures and as a matter of overall public health concern. And I will also at least briefly reconsider my earlier examples as offered in Part 22 and Part 23.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 15

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on May 21, 2020

This is the 15th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-14. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I initially offered a set of four tool kit elements in Part 13 that might arguably hold value in both thinking about and developing more open artificial intelligence agents: agents that hold capability for carrying out open ended tasks, as I have used that term in this series. And I began to elaborate upon those at-least conceptual tools in Part 14. In the process of doing so I raised and began discussing something of the distinctions that can be drawn between deterministic and stochastic processes and systems as they would arise in this context, and with a focus on deterministic and stochastic understandings of the problems that they would seek to address, and either resolve outright or manage on an ongoing basis. (Note: the open ended task that I have been repeatedly referring back to here as a source of working examples: completely open and free ranging natural speech, can in most cases and circumstances be seen as an ongoing task, with at least some subsequent conversation usually a possibility. We humans generally identify ourselves as Homo sapiens – the thinking hominid, and occasionally as Homo faber – the tool making hominid. But for purposes of this narrative, Homo loquens – the talkative hominid – might be a better fit.)

That perhaps exculpatory note offered and now set aside, I will continue discussing open ended tasks, using natural speech as the working example. And I will discuss agents that can successfully carry out that task as having proven themselves to have general intelligence.

The four “tool packets” under consideration here are:

• Ontological development that is both driven by and shaped by self-learning behavior (as cited in Part 13 in terms of antagonistically positioned subsystems, and similar/alternative paradigmatic approaches),
• Scope and range of new data input that might come from the environment in general but that might also come from other intelligent agents (which might mean simple tool agents that carry out single fully specified tasks, gray area agents that carry out partly specified tasks, or actual general intelligence agents: artificial or human, or some combination of all of these source options.)
• How people or other input-providing agents who would work with and make use of these systems can simplify or add complexity to the contexts that those acting agents would have to perform in, shifting the tasks and goals actually required of them either more towards the simple and fully specified, or more towards the complex and open-ended.
• And I add here, the issues of how an open ended task to be worked upon, and the goal to be achieved for it, would be initially outlined and presented. Think in terms of the rules-of-the-road antagonist in my two-subsystem self-driving example of Part 12, where a great deal of any success that might be achieved in addressing an overtly open-ended systems goal will almost certainly depend on where a self-learning agent would begin addressing it from.

I addressed a few of the basic issues inherent to deterministic and stochastic systems in general in Part 14. My goal here is to at least begin to offer a framework for thinking more specifically about the above tools (or approaches to developing tools?) in more focused deterministic and stochastic terms, taking those issues at least somewhat out of the abstract in the process. So for example, I wrote of deterministic systems, and of the problems that they can best manage, as being more limited in scope; they can in fact often and realistically be thought of as simple systems challenges. And I wrote of the increased likelihood of combinatorial explosions in the range and diversity of issue and circumstance combinations that would have to be addressed, and in real time, as the problems faced become more stochastic in nature.

• Quantitative scale increases there, both for what has to be resolved and for what it would take to achieve that, can quickly become very overtly qualitative changes and change requirements too in this type of context (the toy calculation immediately below illustrates how quickly that scale grows).
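To make that scale concern a bit more concrete, here is a minimal, purely illustrative Python calculation. The factor counts used are hypothetical and not drawn from any real system; the point is only how quickly independent situational factors multiply into a space of cases that no fully pre-specified, deterministic design could enumerate in advance.

```python
# Illustrative only: hypothetical counts of independent situational factors
# that an agent might have to handle, and how their combinations grow.
from math import prod

def combination_count(options_per_factor):
    """Number of distinct situations an agent could face, assuming
    the factors vary independently of one another."""
    return prod(options_per_factor)

# A narrowly scoped, deterministic-friendly task: few factors, few options each.
simple_task = [3, 2, 4]            # e.g. 3 input formats, 2 modes, 4 outcomes
# A more open-ended, stochastic task: more loosely bounded factors and options.
open_ended_task = [10] * 8         # 8 factors with roughly 10 states each

print(combination_count(simple_task))      # 24 cases - enumerable in advance
print(combination_count(open_ended_task))  # 100,000,000 cases - not enumerable
```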

And with that noted, let's consider the first of those four tool packets and its issues. I begin by repeating the key wording of the anticipatory sentence that I added to the end of Part 14, pointing toward this posting:

• I am going to at least begin it (n.b. that line of discussion) with a focus on what might be considered artificial intelligence’s bootstrap problem: an at least general conceptual recapitulation of a basic problem that had its origins in the first software-programmable electronic computers.

The first bullet-pointed element to discuss here, as repeated above, suggests if not outright specifies a tool, and it is all about ontological development, with distinct positive advancement presumed as an outcome from it. This of necessity presumes a whole range of specific features and capabilities (sketched in code after the list that follows):

• First of all, you cannot have ontological development, whether or not it actually creates success and improvement, without a starting point.
• And it has to be a starting point that has a capacity for self-testing and review built into it.
• It has to have specific change capabilities built into it, so that an agent can modify itself (at least through development of a test-case parallel systems capability),
• Test and evaluate the results of using that new test-case functionality in comparison to its starting-stage, current-use version, for what it is to do and for how well it does that,
• And then further modify, use as-is, or delete that test possibility as a result of the outcomes obtained from running it – as benchmarked against the best current standards in place and in use.
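As a way of grounding that framework, here is a minimal sketch in Python of one such test-and-adopt cycle. Everything in it is a stand-in: the names (improve_once, propose_variant, benchmark) and the toy numeric "agent" are hypothetical placeholders rather than any actual artificial intelligence architecture. The sketch only shows the cycle of building a parallel test-case variant, benchmarking it against the current version, and then keeping or discarding it.

```python
import copy
import random

def improve_once(current_agent, propose_variant, benchmark):
    """One cycle of the ontological development loop sketched above:
    build a parallel test-case variant, evaluate it against the current
    version on the same benchmark, and keep whichever scores better.
    All three arguments here are hypothetical placeholders."""
    candidate = propose_variant(copy.deepcopy(current_agent))  # parallel test system
    current_score = benchmark(current_agent)                   # self-testing / review
    candidate_score = benchmark(candidate)
    if candidate_score > current_score:
        return candidate, candidate_score    # adopt the modification
    return current_agent, current_score      # discard it and keep the status quo

# Toy usage: the "agent" is just a single numeric parameter, its variant a
# random tweak, and the benchmark rewards being close to some target behavior.
target = 0.7
agent = 0.0
for _ in range(50):
    agent, score = improve_once(
        agent,
        propose_variant=lambda a: a + random.uniform(-0.1, 0.1),
        benchmark=lambda a: -abs(a - target),
    )
print(round(agent, 2))  # typically drifts toward 0.7 over repeated cycles
```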

Two points of detail come immediately to mind as I offer this briefly stated ontological development framework. First, the wider the range of features that have to be available for this type of test-case development and validation, the more elaborate and sophisticated the machinery (information processing code definitely included there) that would be needed to carry this out. And the more complex that machinery has to be able to be, the more of an imperative it becomes that the ontological development capability itself would have to be ontologically improvable too, and expandable in its range of such activity as much as anything else.

And that brings me directly to that bootstrapping problem. Where do you start from, and I have to add, how do you start, for all of this? The classic bootstrapping problem for software-programmable electronic computers: the question of how you might be able to “lift yourself up by your bootstraps” in starting an early generation computer of that type, was simple by comparison. When you first turned one of them on it had no programming in it. And that, among other things, meant it did not even have a capability for accepting programming code as a starting point that it could operationally build from. It had no already-established input or output (I/O) capability for accepting programming code, or anything else in the way of information, or for providing feedback to a programmer so they would know the status of their boot-up effort.

So it was necessary to, in effect, hand-feed at least some basic starter code, in machine language form, into it. That of necessity included basic input and output functionality, so that it could read and incorporate the code that would then be entered by more automated electronic means (such as punch tape, or later via punch cards or magnetic tape storage media.) But first of all, and before you could do any of that, you had to teach it how to accept at least some input data at all, and from at least one of those types of sources.
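A toy simulation can capture the spirit of that hand-feeding step, if none of the detail of any actual early machine. In this sketch, assume a bare “machine” that cannot accept input from any device until an operator has hand-keyed a minimal loader into its otherwise empty memory; the class and method names are illustrative inventions, not a model of real hardware or of any historical instruction set.

```python
# Toy illustration only: a bare "machine" that cannot accept input
# until a hand-keyed loader has been placed in memory first.
class BareMachine:
    def __init__(self):
        self.memory = []          # powered on: nothing resident, not even I/O

    def can_read_input(self):
        # Without some resident loader code, there is no way to accept input.
        return len(self.memory) > 0

    def hand_key(self, loader_words):
        """The operator enters a minimal loader directly, word by word."""
        self.memory.extend(loader_words)

    def read_from_device(self, device_words):
        if not self.can_read_input():
            raise RuntimeError("no loader resident: the machine cannot accept input yet")
        self.memory.extend(device_words)   # now larger programs can stream in

machine = BareMachine()
loader = ["READ", "STORE", "JUMP"]                       # hypothetical minimal I/O routine
machine.hand_key(loader)                                 # the bootstrap step
machine.read_from_device(["LOAD A", "ADD B", "PRINT"])   # e.g. from punch tape
print(machine.memory)
```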

That bootstrap process was simple: basic early computer I/O code was straightforward and clear-cut, both for what had to be included in it and for how that would be coded and then incorporated in. It is at the heart of the challenges posed by open-ended tasks that we do not start out knowing even what types of coding capabilities we would need to begin with, and certainly not with any specificity, in setting an effective software-level starting point either for addressing those tasks or for building an “artificial general intelligence embryo.”

• To clarify that, we have a basic, general, high level categorical understanding of the building block types that such a system would have to start with.
• But we do not know enough to really effectively operationalize that understanding in working detail and certainly for anything like an actual general artificial intelligence agent – yet.
• We know how to do this type of thing, and even routinely now, for single task specialized artificial intelligence agents. They are tools, if more complex ones, and the tasks they would carry out are subject to at least relatively straightforward algorithmic specification. Gray area agents and their tasks become more and more problematical, and certainly so as their tasks shift further away from the more fully determined end of that spectrum. And actually addressing open ended tasks as discussed here, of the sort that would call for general intelligence to resolve, will call for disruptively new and novel, game changing developments, probably at both the software and hardware levels.

The more complex and the more stochastic the challenges that those agents would have to face become (an at least indirectly causally connected consequence of their becoming more and more open ended), and the more uncertainty we face as to what resolving their tasks would entail, the more difficult it becomes to know where to start: the more uncertain any postulated “I/O code” starting point for that would have to be.

And this brings me to two deeply interconnected issues:

• The issues of understanding, and deeply enough, the precise nature and requirements of open-ended tasks per se, and
• The roles of “random walk,” “random mutation,” and “directed change” approaches, and how they might be applied and used in combination, regardless of which starting points are actually started from, ontologically (a toy contrast of those three follows below).
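To give those three terms a concrete, if deliberately toy, rendering: the following sketch contrasts a pure random walk, random mutation with a keep-if-better selection rule, and a simple directed (hill climbing style) step, all tuning a single number toward a hypothetical target. None of this is an actual ontological development system; it only illustrates how differently those change strategies behave from the same starting point.

```python
import random

# Toy objective: how well a single tunable value matches a hypothetical target.
objective = lambda x: -abs(x - 0.7)

def random_walk_step(x, step=0.1):
    """Pure random walk: change with no reference to the objective at all."""
    return x + random.uniform(-step, step)

def random_mutation_step(x, step=0.1):
    """Random mutation plus selection: propose blindly, keep only improvements."""
    proposal = x + random.uniform(-step, step)
    return proposal if objective(proposal) > objective(x) else x

def directed_step(x, step=0.1):
    """Directed change: probe both directions and move toward the better one."""
    return max([x - step, x, x + step], key=objective)

values = {"random walk": 0.0, "random mutation": 0.0, "directed": 0.0}
steppers = {"random walk": random_walk_step,
            "random mutation": random_mutation_step,
            "directed": directed_step}
for _ in range(100):
    for name in values:
        values[name] = steppers[name](values[name])

# Directed change locks onto 0.7; the undirected variants may or may not get there.
print({name: round(x, 2) for name, x in values.items()})
```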

I am going to at least begin to address those issues in the next installment of this series. That discussion to come will lead me directly into a more detailed consideration of the second tool packet as repeated above. My goal is to complete an at least initial discussion of the full set of four tool packets as offered above. And after completing that I will finally explicitly turn to consider the avowed overall goal of this series, as repeated in the first paragraph of all of its installments up to here: “analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use.”

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
