Platt Perspective on Business and Technology

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 21

Posted in business and convergent technologies, strategy and planning by Timothy Platt on November 11, 2019

This is my 21st installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-20.)

I have been discussing the issues of innovation: what it is, and how it is perceived, identified and responded to, since Part 16 of this series. And as a core element of that narrative, I have been discussing two fundamentally distinctive patterns of acceptance and pushback that can in fact arise and play out either independently and separately from each other, or concurrently and among members of the same communities and the same at least potential markets:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adopters on to include eventual late and last adopters, and
• Patterns of global flattening and its global wrinkling, pushback alternative.

And I concluded Part 20 of this narrative by stating that I would:

• Continue this line of discussion in a next series installment, where I will more directly and fully discuss stakeholders as just touched upon there: as change accepting and even change embracing advocates or, nota bene, as change challenging and change resisting advocates. And I will discuss the roles of reputation and history in all of that.

Note: I offered and discussed a social networking taxonomy in Part 20 that categorically identified and discussed specific types of networking strategy-defined “stakeholders,” as more collectively cited in the above to-address bullet point, and recommend reviewing that posting to put what follows into a clearer perspective.

I will in fact address the points raised in the above topics bullet point here, but begin doing so by noting and challenging a basic assumption that I made in Part 20 and that I have in fact repeated again here in my above-offered series installment-connecting comments. I have been following a more usual pattern here of presuming a fairly clear-cut distinction between standard innovation acceptance diffusion curve acceptance or rejection of New and change, and global flattening and its global wrinkling alternative. But one of the consequences of being always online and always connected, as a largely interactive, social media driven phenomenon, is that the boundaries between the two have become blurred. And I bring that point of observation into focus by citing a statement that I made in Part 20, now challenging it:

• “Unless it is imposed from above, as for example through government embargo and nationalism-based threat, global wrinkling is a product of social networking.”

Individuals use social media both to connect and to influence. Businesses do too, and so do other organizations of all sorts. And governments are increasingly using these tools with the same basic goals, but on a vastly larger and more impactful scale. For governments, social media has become a suite of tools for shaping societally-reaching and societally-impactful decisions, and for controlling their citizenry as a whole. Social media has in fact become both an inwardly facing, within-country toolset and a means of outwardly facing statecraft, and for an increasing number of nations.

For an earlier generation precursor example of this, and certainly as communications and information sharing can be used as tools of statecraft and diplomacy, the United States and its Western allies used radio as an internationally impactful route into Eastern Europe during the old Cold War, when the nations of that larger region were still under Soviet Russian control as Warsaw Pact nations. And the Soviet Union in turn actively sought to sway and even shape public opinion, and certainly about that nation itself, essentially wherever it sought to gain a foothold of influence if not control. So think of current online social media and its use by national governments as a version 2.0 update to that.

How does this emerging version 2.0 reality blur the lines between the standard innovation acceptance curve, and global flattening versus wrinkling dynamics? When a nation state such as the People’s Republic of China or Russia actively controls the news as a governmental resource, and when it actively uses social media to both shape and control the message available through it, blocking any and all competing voices there as soon as they are identified as such, this controls how individuals would make their own decisions by limiting the options available to them. And this information access control makes all such decisions community shaped and from the top down – provided that you accept that top-down, government determined decision making represents the voice of the community.

So I effectively begin this posting by expanding my Part 20 social networking taxonomy, with its corresponding categorical type by categorical type social media strategies, by adding state players as a new and increasingly important category there. And with that, I turn to my above (edited and …) repeated to-address topics point and its issues. And I begin doing so by noting and discussing a fundamental shift that is taking place in how influence arises and spreads, or is damped down, in publicly open online conversations and in online social media in general. And I begin this by citing a principle that I have invoked many times in this blog, and in just as wide a range of contexts for its overall significance: the Pareto principle.

According to that empirically grounded principle, a large majority of all effects (often stated as some 80% of them) arise from a much smaller percentage of the overall pool of possible causes (with that often found to be some 20%.) So this is often called the 80/20 rule. Online contexts tend to shift those numbers to a more extreme position, with 90/10 and even more skewed splits prevailing, and both for online marketing and sales, and for social media reach and impact, and more. So for example, a small percentage of all online businesses can effectively capture a vast majority of all online sales activity, leaving just a tiny percentage of all of those sales transactions entered into, to all the rest. (Consider the impact and reach of the largest online retail businesses as a case in point example.) And a single opinion-creating voice can come to be an essential online forum for an entire politically motivated demographic, and even a huge one numerically. (Consider the outsized role that Breitbart holds in shaping the conservative, and particularly the “ultraconservative” voice in the United States, and certainly as its role there is marketed by the leadership of the Republican Party, and certainly as of this writing.)
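The skew that the Pareto principle describes can be made concrete with a small simulation. The following is a minimal Python sketch, offered as an illustration only: the sample size and the Pareto shape parameter used here are my own assumed values, not empirical market figures.

```python
import random

def share_of_top(values, top_fraction):
    """Fraction of the overall total captured by the top `top_fraction` of entries."""
    ordered = sorted(values, reverse=True)
    k = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

random.seed(42)
# Simulated per-business sales drawn from a heavy-tailed (Pareto) distribution.
# A shape parameter of about 1.16 corresponds roughly to the classic 80/20 split.
alpha = 1.16
sales = [random.paretovariate(alpha) for _ in range(100_000)]

print(f"Top 20% of businesses capture: {share_of_top(sales, 0.20):.0%} of sales")
print(f"Top 10% of businesses capture: {share_of_top(sales, 0.10):.0%} of sales")
```

Lowering `alpha` skews the split still further, toward the 90/10 and even more extreme patterns described above.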

Governments, and I have to add other larger organizational voices, can skew those numbers in their favor for increasing their reach and impact, simply because of their scale and impact as projected across what can become essentially all public information sharing channels – and even without the proactive efforts to manage and control the conversational flow that are so actively pursued by countries such as China and Russia. And this brings me to the issues of reputation and history in all of this, with its many complications, and with that including both individual evaluations and publicly shared reviews. I am going to continue this discussion in a next series installment where I will at least selectively discuss those issues. And in the course of doing so, I will also at least begin to discuss the social networking and social media issues that I have raised here, from a business and marketplace perspective and with regard to how they play out in those arenas.

This is where weaponized online social media, through the deployment of troll and artificial agent participants, enters this narrative: as a source of cutting edge, bleeding edge influence where otherwise separate phenomena such as the standard innovation acceptance diffusion curve, and global flattening versus wrinkling, can become blurred. So I will discuss that complex set of phenomena moving forward here too.

And in anticipation of all of that, I have been writing here of governments, but the basic principles that I have brought up in that context, apply at least as fully in a business context and certainly when considering multinationals that have valuations approaching or even exceeding a trillion dollars, as of this writing: businesses that have come to hold a reach and power normally associated with governments and not with the private sector. Though I will discuss smaller and more conventionally “larger” businesses too and how they fit into the online business and commerce world that those mega-corporations have done so much to create.

And with that, I outline what will become a progression of next postings to this series to come, and with a goal of starting to address its issues in the next such installment. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 12

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on November 5, 2019

This is the 12th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-11. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have conceptually divided artificial intelligence tasks and goals into three loosely defined categories in this series, and have been discussing artificial intelligence agents and their systems requirements in a goals and requirements-oriented manner that is consistent with that, since Part 9 with those categorical types partitioned out from each other as follows:

• Fully specified systems goals and their tasks (e.g. chess with its fully specified rules defining a win and a loss, etc. for it),
• Open-ended systems goals and their tasks (e.g. natural conversational ability with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals and their tasks (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

And I focused on the third of those artificial intelligence agent systems goals in Part 11, citing them as a source of transition step goals that if successfully met, would help lead to the development of artificial general intelligence agents, capable of mastering and carrying out open-ended tasks (such as natural open-ended conversation as cited above.)

I then said that I would turn here to at least begin to more directly discuss open-ended systems goals and their tasks, and artificial agents that would be capable of them. And I will in fact do so. But as a starting point, I am going to further address that gray area category of partly specified systems goals and their tasks again, and my self-driving car example in particular, to put what is to follow into a clearer and perhaps less-abstract perspective. And to be more specific here, I am going to at least briefly outline one possible approach for more effectively dealing with and resolving the uncertainties posed by real-world drivers who do not always follow the rules of the road and the law, as that would inform and in fact fundamentally shape a self-driving car’s basic decision and action algorithm: its basic legally constrained driving behavior algorithm that, if sufficient, would lead self-driving to qualify as a fully specified systems goal and task.

Complex single algorithms, and the neural networks or other physical hardware systems that contain and execute them, can at least in principle stand alone and offer, at least for those artificial agents, complete solutions to whatever problem or task is at hand for them. But these resources can also be layered and set up to interact, and with the results of those interactions determining the finalized step-by-step decisions actually made and carried out by those agents as a whole. One way to look at those systems is to think of them as overall composite neural networks (to pursue that hardware approach here), where the constituent sub-networks involved each have their own information processing and decision making algorithm that would have to be reconciled as specific actions are actually taken, and with results-feedback going to those individual sub-networks as input for their ongoing self-learning improvement and optimization efforts. But to simplify this discussion, while hopefully still addressing at least something of the complexities there, let’s address this here as a matter of individual artificial intelligence agents having what amount to competing and even adversarial neural networks within them – with those arguably separate but closely interacting networks and their algorithms tasked, long-term, with a goal of improving each other’s capabilities and performance through the ongoing challenge that they provide each other.

This approach is most certainly not original on my part. It traces back to work first published in 2014 by Ian Goodfellow and colleagues, then at the Université de Montréal (see this piece on Generative Adversarial Networks). And it is an emerging approach for developing self-learning autonomous systems. I see these types of interacting neural networks as holding at least as much value in execution and ongoing functionality too, and particularly for partly specified systems and their task-defined goals. And I see this approach as holding real value for possible open-ended systems and their task goals: true artificial general intelligence agents definitely included there.

Consider in that regard, two adversarially positioned neural networks, with one seeking, through its algorithms and the background information contained in its expert knowledge database, to solve a complex problem or challenge. And the other, with its algorithms and expert knowledge data, is set up to challenge the first for the quality, accuracy and/or efficiency of what it arrives at as its for-the-moment best solution to its task at immediate hand. What types of tasks would be most amenable to this approach? Look for differences in what has to be included functionally, in an effective algorithm for each of these two sides, and look for how their mutually correcting feedback flows could lead to productive synergies. In this, both of these networks and their algorithms recurringly challenge their adversary – each other – with insights arrived at from this added to both of their respective expert knowledge databases.
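One way to make that solver-and-challenger feedback pattern concrete is with a deliberately minimal sketch. Everything below is a hypothetical illustration of the interaction pattern just described, not an actual neural network implementation: a "challenger" repeatedly probes for the "solver's" worst current error, and each corrected case is folded back into the solver's expert knowledge store.

```python
import random

def ground_truth(x):
    """Toy stand-in for the complex task that the first network is to solve."""
    return x * x

class Solver:
    """Proposes answers from an expert knowledge store of corrected examples."""
    def __init__(self):
        self.knowledge = {}  # input value -> known correct answer

    def answer(self, x):
        if not self.knowledge:
            return 0.0  # no experience yet
        # Answer using the nearest example already learned.
        nearest = min(self.knowledge, key=lambda k: abs(k - x))
        return self.knowledge[nearest]

    def learn(self, x, y):
        self.knowledge[x] = y

class Challenger:
    """Probes the solver for its worst current error and reports it back."""
    def probe(self, solver, trials=200):
        worst_x, worst_err = 0.0, -1.0
        for _ in range(trials):
            x = random.uniform(-10, 10)
            err = abs(solver.answer(x) - ground_truth(x))
            if err > worst_err:
                worst_x, worst_err = x, err
        return worst_x, worst_err

random.seed(0)
solver, challenger = Solver(), Challenger()
for _ in range(50):
    x, _err = challenger.probe(solver)
    solver.learn(x, ground_truth(x))  # feedback enriches the knowledge base

_x, final_err = challenger.probe(solver)
print(f"worst remaining error after 50 rounds: {final_err:.2f}")
```

The point of the sketch is the loop itself: each side's output becomes the other side's input, and the knowledge store grows from the exchange, round after round.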

Let’s apply this to the self-driving car challenge that I have been selectively offering here, with the overall task of safely and efficiently autonomously self-driving, divided into two here-adversarial halves as follows:

• One of these included neural networks would focus on the rules of the road and on all of the algorithmic structure and expert knowledge that would go into self-driving in a pure context, where all vehicles were driven according to legally defined, standard rules of the road, and all were driven with attentive care. That is at least how this neural network would start out, so call it the rules-based network here.
• And the other would start out with essentially the same starter algorithm, but with expert knowledge that is centered on how real people drive, and with statistical likelihood of occurrence and other relevant starter data provided, based on insurance company actuarial findings as to the causes of real-world auto accidents. Call this the human factor network.
• Every time a human “copilot,” sitting behind the wheel finds it necessary to take manual control of such a self-driving vehicle, and every time they do not, but sensor input from the vehicle matches a risk pattern that reaches at least some set minimum level of risk significance, the human factor network would notify the rules-based one, and that network would seek out a best “within the rules” solution for resolving that road condition scenario. It would put its best arrived at solution as well as its rules-based description of this event into its expert systems database and confirm that it has done so to its human factor counterpart.
• And this inter-network training would flow both ways with the rules-based network updating its human factor counterpart with its new expert knowledge that came from this event, updating the starting point driving assumptions that it would begin from too. That network would have, of course, updated its actual human driving behavior expert knowledge data too, to include this now-wider and more extensive experience base that it would base its decisions upon.
• And this flow of interactions would drive the ongoing ontological development of both of those neural networks and their embedded algorithms, as well as adding structured, analyzed data to their expert systems databases, advancing both of them and the system that they enter into as a whole.
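The notification and confirmation flow in those bullet points can be sketched in skeletal form. This is a hypothetical illustration only; the class names, the risk threshold value and the placeholder "resolution" string all stand in for what would be genuinely complex network behavior.

```python
from dataclasses import dataclass, field

@dataclass
class RoadEvent:
    """A risk episode flagged by sensor input or a human copilot takeover."""
    description: str
    risk_score: float        # higher means riskier; the scale is assumed
    human_took_control: bool

@dataclass
class NetworkNode:
    """Stand-in for one of the two networks, with its expert knowledge store."""
    name: str
    knowledge: list = field(default_factory=list)

RISK_THRESHOLD = 0.6  # assumed minimum level of risk significance

def handle_event(event, rules_net, human_net):
    """Mirror the bullet point flow: notify, resolve within the rules,
    record on both sides, and confirm back to the notifying network."""
    if not (event.human_took_control or event.risk_score >= RISK_THRESHOLD):
        return False  # below the significance floor: no exchange triggered
    # The human factor network notifies the rules-based one, which works out
    # a within-the-rules resolution (a placeholder string in this sketch).
    resolution = f"rules-compliant maneuver for: {event.description}"
    rules_net.knowledge.append((event.description, resolution))
    human_net.knowledge.append(event.description)  # widened experience base
    return True  # confirmation back to the human factor network

rules_net = NetworkNode("rules-based")
human_net = NetworkNode("human factor")
confirmed = handle_event(
    RoadEvent("vehicle ahead braking without signaling", 0.8, False),
    rules_net, human_net)
print(confirmed, len(rules_net.knowledge), len(human_net.knowledge))  # True 1 1
```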

None of this is particularly groundbreaking, and certainly as a matter of underlying principles, at least in the context of the already rapidly developing artificial intelligence knowledge and experience already in place and in use. But I sketch it out here, as one of many possible self-learning approaches that is based on the synergies that can arise in adversarial self-learning systems.

I am going to continue this discussion in a next series installment, where I will turn to consider how this basic approach might apply in a more general intelligence context, and certainly when addressing more open-ended systems goals and their tasks.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 9

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on October 27, 2019

This is my 9th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-8.) And this is also my sixth posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Parts 4-8.)

My goal for this installment is to at least begin to tie together a series of issues and discussion threads that I have been touching upon up to now, in an explicitly natural brain versus artificial intelligence agent context. And I begin doing so here, by noting that all of the lines of discussion that I have pursued up to here, relevant to that set of issues, have dealt with fundamental sources of constraints and of what can be physically possible, and how best to develop and build within those limitations:

• Thinking through and understanding basic constraining parameters that would offer outer limits as to what can be carried out as a maximum possible performance benchmark by these systems, and
• Optimizing the hardware and software systems, or their biological equivalents in place in them: artifactual or natural, so as to more fully push the boundaries of those constraints.

So I have, for example, discussed the impact of finite maximum signal speeds in these systems, as that would set maximum outer limits on the physical size of these information processing systems, and certainly as maximum time allowances are set for how quickly any given benchmark performance test under consideration might have to be completed in them (see Part 7.) And on the performance optimization side of this, see my discussion of single processor versus parallel processor array systems, and of information processing problems that would be more amenable to one or the other of those two basic design approaches (see Part 8.)

I stated at the end of Part 8 that I would raise and at least briefly discuss one more type of constraining parameter here, as it would limit how large, complex and fast brains or their artificial intelligence counterparts can become, where size scale and speed can and do, at least in their extremes here, come to directly compete against each other. And I also said that I would discuss the issues raised in this posting and its immediately preceding installments, in terms of Moore’s law and its equivalents, and in terms of lock-in and its limiting restrictions. I will, in fact, begin addressing all of that by citing and discussing two new constraints-defining issues here, that I would argue are inseparably connected to each other:

• The first of them is a constraining consideration that I in fact identified at the end of Part 8, and both for what it is and for how I would address it here: the minimum size constraint for how small and compact a functional circuit element can be made, or in a biological system, the smallest and most compact form it can be evolved and grown into. This obviously directly addresses the issues of Moore’s law, where it is an essential requirement that circuit elements be made progressively smaller and smaller, if the number of them that can be placed in a same-size integrated circuit chip is to continue to increase, and at the rate that Moore initially predicted with its ongoing same-time period doublings.
• And the second of these constraints, that I add in here in this narrative, involves the energy requirements of these systems as the number of functional elements increases in accordance with Moore’s law. And that of necessity includes both energy required to power and run these systems, and the need to deal with ever-increasing amounts of waste energy byproduct: heat generation as all of this energy is concentrated into a small volume. Think of the two sides of this power dynamic as representing the overall energy flow cost of Moore’s law and its implementation.
• And think of these two constraints as representing the two sides of a larger, overarching dynamic that together serve to fundamentally shape what is and is not possible here.

Let’s at least begin to address the first of those two individual sources of constraint, as raised in the first of those bullet points, by considering the number of functional elements and connections between them that can be found in an at least currently high-end central processing unit chip, and in an average normal adult human brain:

• The standard estimate that is usually cited for the total number of neurons in an average, normal adult human brain is generally stated as 100 billion, give or take a billion or so. And the average number of synaptic connections in such a brain is often cited as being on the order of 100 trillion (with an average of some 1,000 synapses per neuron, and with some neuron types having as many as 10,000 and more per cell, depending on neuronal type.)
• The number of transistors in a single integrated circuit chip, and in central processing unit chips as a special case in point of particular relevance here, keeps growing as per Moore’s law, as it is still holding true (see this piece on transistor counts.) And according to that Wikipedia entry, at least as of this writing, the largest transistor count in a commercially available single-chip microprocessor (as first made commercially available in 2017) is 19.2 billion. But this count is dwarfed by the number of transistors in the largest and most advanced memory chips. As per that same online encyclopedia entry, the current record holder there (as of 2019) is the Samsung eUFS (1 TB) 3D-stacked V-NAND flash memory chip (consisting of 16 stacked V-NAND dies), with 2 trillion transistors (capable of storing 4 bits, or one half of a byte, per transistor).
• Individual neurons are probably about as small and compact as they can be, and particularly given the way they interconnect to work, through long dendritic (input) and axonal (output) processes. And that normative adult human brain holds approximately one support cell (glial cells in particular) for every neuron. And at least some types of neuron as found in a normal brain are, and most probably have to be, particularly large in order to function, and particularly given the complexity of their interconnections with other cells (e.g. mirror neurons.)
• Continued pursuit of Moore’s law-specified chip development goals has brought integrated circuit, functional elements and the connectors between them down in size to a scale where quantum mechanical effects have come to predominate in describing much of their behavior. And the behavior and the positioning of individual atoms in them have become of fundamental importance and both for their design and fabrication and for their performance and their stability.
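For a rough order-of-magnitude comparison, using just the figures cited in the bullet points above:

```python
neurons = 100e9            # ~100 billion neurons, the standard estimate cited
synapses = 100e12          # ~100 trillion synaptic connections cited
cpu_transistors = 19.2e9   # largest single-chip microprocessor count cited
memory_transistors = 2e12  # Samsung eUFS 1 TB chip transistor count cited

print(f"neurons vs. largest CPU: {neurons / cpu_transistors:.1f}x more")
print(f"synapses vs. largest memory chip: {synapses / memory_transistors:.0f}x more")
```

Even the densest memory chip cited here sits well short of the raw connection count of a single human brain, and by a factor of roughly fifty.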

And this brings me directly to the second constraining factor that I would address here: energy flows, both as electrical power would be used to run a given integrated circuit chip and as an unavoidable fraction of it would be dissipated as heat after use. And I begin addressing that, in an artificial intelligence agent context, by explicitly pointing out a detail just offered in one of my above-noted examples here: the Samsung eUFS flash memory chip, in its 1 TB version, is constructed as a 3-D chip with functionally complete sets of layers built upon each other in stacked arrays. This, in fact, is a rarity in the history of integrated circuit chip design, where the energy required for powering these chips translates into heat generation. One of the ongoing goals in the design and construction of integrated circuit-based systems, from the beginning of the industry, has been heat dissipation. And 3-dimensional layering, as pursued to the degree that it is in this chip, increases the circuit element density possible in it, keeping everything in closer physical proximity, but at the expense of concentrating the heat generated from running it.

This proximity is important as a possible mechanism for increasing overall chip speed and computational efficiency, given the finite speed of light that such a chip has to function within the limits of, and the maximum possible speed that electrons can move at in those circuits – always less than that speed. To explain that with a quick numerical example, let’s assume that the speed of light in a vacuum is precisely 300 million meters per second (where one meter equals approximately 39 inches.) And let’s assume that a central processing chip, to switch example types, can carry out 1 billion floating point mathematical operations per second. At that clock speed, light would travel just 0.3 meters, or 11.7 inches, in a straight line per operation. But in the real world, operations are carried out on input, and on input that has been moved into at least short-term volatile memory cache, and accessing that takes time, as does sending newly computed output to memory – which might be stored in a buffer on the same chip but which might be stored on a separate chip too. And if data is called for that has to be retrieved from longer-term memory storage, that really slows all of this down.
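That arithmetic, spelled out with the same rounded figures used in the text:

```python
SPEED_OF_LIGHT = 300_000_000    # meters per second, rounded as in the text
OPS_PER_SECOND = 1_000_000_000  # 1 billion operations per second
METERS_TO_INCHES = 39           # the text's rounded conversion factor

distance_m = SPEED_OF_LIGHT / OPS_PER_SECOND  # distance light covers per operation
print(f"{distance_m:.1f} meters per operation")                     # 0.3 meters
print(f"{distance_m * METERS_TO_INCHES:.1f} inches per operation")  # 11.7 inches
```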

In real chips and in real computer systems built around them, that maximum possible distance traveled, and by photons, not slower electrons, can and does evaporate. So stacking as per a Samsung eUFS chip makes sense. The only problem is that if this is done more than just nominally and under very controlled contexts, it becomes too likely that the chips involved will become so hot that they would cease to function; they could even literally melt.

Biological systems at least potentially face comparable, if less dramatically extreme challenges too. A normal adult brain consumes some 20% of all of the calories consumed by a human body as a whole, as a percentage of a standard resting metabolic rate. And that same brain consumes some 20% of all oxygen used too. And this is with a brain that has a total weight that accounts for only some 2% of the overall body weight. An expanded brain that now accounted for 10% of the overall body mass would, if this were to scale linearly, require 100% of the nutrients and oxygen that is now normally taken in and used by the entire body, requiring expanded capabilities for consuming, distributing and using both and probably throughout that body.
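The linear scaling arithmetic behind that last sentence, using the approximate percentages cited:

```python
brain_mass_pct = 2      # brain at roughly 2% of total body mass
brain_energy_pct = 20   # roughly 20% of resting calories (and of oxygen use)

expanded_mass_pct = 10  # the hypothetical brain at 10% of body mass
scale_factor = expanded_mass_pct / brain_mass_pct      # a 5x larger brain
expanded_energy_pct = brain_energy_pct * scale_factor  # assuming linear scaling

print(f"{expanded_energy_pct:.0f}% of current whole-body intake")  # 100%
```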

The basic constraints that I write of here, have profound impact, and on both biological and artifactual information processing systems. And together, the fuller sets of these constraints actually faced, have a tremendous impact on what is even possible, and on how even just approaching that set of limitations might be made possible. And this is where I explicitly turn back to reconsider Moore’s law as a performance improvement driver, and technology development lock-in as a brake on that, and how the two in fact shape each other in this context. And I will at least begin that phase of this overall discussion in the next installment to this series.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Innovation, disruptive innovation and market volatility 49: innovative business development and the tools that drive it 19

Posted in business and convergent technologies, macroeconomics by Timothy Platt on October 21, 2019

This is my 49th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-48.)

I have been discussing the questions and issues of technology transfer in this series, since Part 43, initially doing so in terms of a specific business model, to business model interaction, with:

• University research labs and the university-as-business systems that they function in, as original sources of new innovation,
• And invention acquiring, for-profit businesses that would buy access to these new and emerging opportunities for development and sale.

Then I began to generalize this line of discussion in Part 48, with a goal of more fully including business participants of all types that might find incentive, or even outright need, to enter into technology transfer agreements and transactions. And I focused there on four basic due diligence questions that together constitute a basic analytical planning tool for this, which I will continue addressing here too:

• What value would the initially owning business gain, or retain if it were simply to maintain tightly controlled, access limited hold over the innovation or innovations in question here?
• What value could it develop and secure from divesting at least contractually specified control over the use or ownership of this innovative potential?
• And from the acquiring business’ perspective, what costs, or loss of positive value creating potential would it face, if it did not gain access to this innovation and on terms that would meet its needs?
• And what positive value would it gain if it did gain access to this, and with sufficient ownership control and exclusivity of use so as to meet its needs?
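The four due diligence questions above can be read as a simple comparison of estimated values. The sketch below is a hedged illustration of that reading: all of the class, field and method names, and the numbers, are hypothetical choices of mine, not terms from this posting.

```python
# An illustrative pairing-off of the four due diligence questions above.
# Each field corresponds to one question; the names and numbers are
# hypothetical, and real estimates would come from extensive analysis.

from dataclasses import dataclass

@dataclass
class TransferEstimate:
    retain_value: float          # Q1: value of keeping tightly controlled access
    divest_value: float          # Q2: value from divesting contractually specified control
    cost_if_not_acquired: float  # Q3: acquirer's cost of going without access
    value_if_acquired: float     # Q4: acquirer's gain with sufficient control and exclusivity

    def seller_favors_transfer(self) -> bool:
        # The owning business compares Q2 against Q1.
        return self.divest_value > self.retain_value

    def buyer_upside(self) -> float:
        # The acquiring business combines value gained (Q4) with cost avoided (Q3).
        return self.value_if_acquired + self.cost_if_not_acquired

    def transfer_is_plausible(self) -> bool:
        # A transfer is worth pursuing only if both sides come out ahead.
        return self.seller_favors_transfer() and self.buyer_upside() > 0

est = TransferEstimate(retain_value=1.0, divest_value=2.5,
                       cost_if_not_acquired=0.5, value_if_acquired=1.5)
print(est.transfer_is_plausible())  # True
```

The point of the sketch is only to show how the four questions pair off into a seller-side comparison and a buyer-side comparison; the price a deal would actually clear at sits between those two.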

I began addressing these questions in Part 48, at the level of the individual innovation or invention, and in terms of its costs and risks, and its financial returns and other benefits as a possible technology transfer offering. Then at the end of that posting I changed direction and began addressing wider contexts here. More specifically, I said at the end of Part 48 that I would turn here to:

• Add in the complexities of scale, and for both the divesting, or licensing business and for the acquiring one.
• And I will also discuss the issues of direct and indirect competition, and how carefully planned and executed transfer transactions here, can in fact create or enhance business-to-business collaboration opportunities too,
• Where that may or may not create monopoly law risks in the process.

My goal here is to at least begin to address those issues more systematically, and to explain them where it is easy to read limiting assumptions into them. That last possibility holds as true for business managers and owners who seek to run businesses as it does for people who simply seek to better understand them. And I begin all of this with the first of those three points, and with what can be an assumptions-laden word: scale.

• Scale of what? There are at least two possible answers to that question, both of which significantly apply here. The first is the scale of overall technology transfer activity that businesses might enter into, on both the offering and acquiring sides of these agreements. And the second is the overall scale, and I have to add the overall complexity and diversity, of the businesses involved.
• And in both types of contexts, scale here is all about significance and impact, and for cash flow and profitability potential, for competitive impact, and for market appeal and market share.

Let’s start addressing the first of the above-offered topic points by more fully considering scale from the first of those perspectives, and with innovation itself. And I begin addressing that by offering a point of observation that in fact does conflate the two understandings of that word, but that does indicate something as to how larger contexts can shape the significance of any given single innovation in this type of context:

• While there are possible exceptions to this presumption, it does make sense that a single, at least potentially profitable innovation and rights to developing it, is going to hold a greater individual significance to a smaller business that would buy or sell rights to it,
• Than a same potential value innovation would if it were bundled into a large assortment of such offerings for sale of rights to interested parties, and particularly for larger businesses.

According to this, a larger business that was buying or selling off what might in fact be a large number of to-them unneeded patents, is less likely to be particularly concerned about, or even interested in, any given single innovation in that collection than a smaller business would be that was buying or selling rights to a single innovation. To start with, if that smaller business was the innovator in this, with much more limited reserves and cash flows, it is likely that developing this innovation expended a much larger proportion of their then-available liquid resources than a large and diverse business would expend when developing one of many innovations on an ongoing basis.

There is of course a fundamental assumption in that which merits explicit consideration, and that I would enter into here by introducing three terms:

Linchpin innovations are innovations that offer value from how they enable other innovations and make them possible.

So for example, a new photolithography technology that would make it possible to produce very significantly, physically larger integrated circuit chips with the same or higher circuit element density and with the same or higher quality control scores, and at lower costs, would be a linchpin innovation when considered in terms of any and all collateral innovations that could be developed from it in realizing the maximum amount of achievable value from it.

Endpoint innovations are innovations that offer explicit value, but that are either more stand-alone in nature, fitting into otherwise more stable application contexts, or that are collateral innovations as noted above, without being absolutely required as foundations for further innovations in turn.

Here, true linchpin innovations that are readily identifiable as such from the perspective of an involved business and its direct competition, are innovations that are essentially by definition, disruptive in nature. They are innovations that open up new ranges of opportunities, including opportunities for developing new emergent and new next-step truly linchpin innovations. Collateral innovations might significantly stem from specific readily identifiable linchpin innovations or they might simply arise as next step developments in already established technological or other arenas. And they might be specifically supportive of other innovations, but they are not likely to be essential, obligatorily required foundational elements for developing or implementing them. And endpoint innovations are collateral but without specifically being required for any other innovations to work or to be utilized. If you organize these three categorical types of innovation in a tree pattern according to how they functionally relate to each other and according to how they do or do not depend upon each other, linchpin innovations form the base of the tree, endpoint innovations comprise the tips of the smaller branches and twigs, and collateral innovations fall in-between and merge into and can meaningfully include those tips too, and certainly as the branches involved grow and next step innovations elaborate on what were those endpoint tips.

This conceptual model addresses the issues of functional utility and value. And from a business development or acquisition perspective, this addresses the relative value that one of these innovation types would carry as a perhaps-marketable offering. Linchpin innovations hold fundamental value that can only increase as more and more elaborations arising from them are conceived and developed as collateral and then endpoint innovations. Even an overtly endpoint innovation might be developed into a source of new innovations, emergent to it. But statistically, such innovations are more likely to hold value for some period of time and then fade in significance as they become outdated and are replaced by newer innovative alternatives.

• As a general pattern, already identified linchpin innovations hold the greatest value,
• Simple evolutionary development-derived endpoint innovations hold the least, and are in most cases the most ephemeral, retaining what value they offer only until they age out of being innovations at all,
• And already-identifiable mid-range collateral innovations fall in-between those two extremes.
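The three-category model and its tree analogy above can be sketched as a small data structure. This is an illustrative representation under stated assumptions: the class and field names are mine, and the text asserts only the three categories, their tree arrangement, and their relative ordering of value.

```python
# A minimal sketch of the three-category innovation tree described above.
# Linchpin innovations form the base, collateral innovations the middle,
# and endpoint innovations the tips. The names here are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Innovation:
    name: str
    category: str                        # "linchpin", "collateral", or "endpoint"
    enables: List["Innovation"] = field(default_factory=list)

    def descendant_count(self) -> int:
        """Number of innovations that directly or indirectly build on this one."""
        return sum(1 + child.descendant_count() for child in self.enables)

# The photolithography example from the text, with hypothetical offshoots.
chip_litho = Innovation("new photolithography process", "linchpin", [
    Innovation("larger chip design", "collateral", [
        Innovation("cheaper consumer device", "endpoint"),
    ]),
    Innovation("higher-density memory", "collateral"),
])

print(chip_litho.descendant_count())  # 3
```

In this picture a linchpin innovation's marketable value grows with the number of collateral and endpoint innovations that come to depend on it, which is exactly the valuation ordering given in the bullet points above.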

Now let’s at least briefly consider business scale as an explicitly defining parameter as it fits into this pattern. I have already touched upon this factor here in this posting, but to more systematically include it here in this overall discussion thread, I note that:

• A smaller business is not likely to invest the time, effort and expense needed to develop an innovative idea into a more actively productive reality unless it fits into and supports at least one key aspect of their basic underlying business model and business plan.
• Larger and more diverse businesses can and do innovate more widely. And while they also tend to focus on innovative potential that would in some way support their business, they do not necessarily limit their involvement and participation there to innovations that would be central to what they do. Innovation and its development call for proportionately smaller fractions of their overall resource bases, so they are proportionately smaller investments for them. And they are proportionately less risky investments if an innovation development effort falls through and becomes a source of financial loss.
• On the acquisition side, smaller businesses correspondingly focus their attention and their activity on their core essentials and on optimizing their competitive strengths and their market reach and penetration. So if they acquire an innovation from another business through a technology transfer transaction, it is all but certain that their core due diligence decision making process there, will center on how this acquisition would support the fundamentals that define the business for what it is and that make it work. And they would compare that with analyses of the consequences they would face if they chose not to acquire this, and simply continue on as is.
• Larger, more diverse businesses can and do acquire innovation and controlling rights to it to competitively fill gaps in what they can and do offer, and how. Technology transfer acquisitions impact on parts of their organization and their market-facing productive puzzle.
• For smaller businesses, these acquisitions impact on their organization as a whole and on their market-facing productive puzzle as a whole too.

I am going to continue this discussion in my next installment to this series where I will turn to consider the second and third innovation context topic points:

• The issues of direct and indirect competition, and how carefully planned and executed technology transfer transactions here, can in fact create or enhance business-to-business collaboration opportunities too,
• Where that may or may not create monopoly law risks in the process.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 9: the New Deal and infrastructure development as recovery 3

This is my 10th installment to a series on infrastructure, as work on it is variously prioritized and carried through upon, or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-8, with its supplemental posting Part 4.5.)

I have, up to here, made note of and selectively analyzed a succession of large scale infrastructure development and redevelopment initiatives in this series, with a goal of distilling out of them, a set of guiding principles that might offer planning and execution value when moving forward on other such programs. And as a part of that and as a fifth such case study example, I have been discussing an historically defining progression of events and responses to them from American history:

• The Great Depression and US president Franklin Delano Roosevelt’s New Deal and related efforts that he envisioned, argued for and led, in order to help bring his country out of that seemingly existential crisis.

I began this line of discussion in Part 7 and Part 8, focusing there on what the Great Depression was, and with a focus on how it arose and took place in the United States. And my goal here is to at least begin to discuss what Roosevelt did and sought to do and how, in response to all of that turmoil and challenge. And I begin doing so by offering a background reference that I would argue holds significant relevance for better understanding the context and issues that I would focus upon here:

• Goodwin, D.K. (2018) Leadership in Turbulent Times. Simon & Schuster. (And see in particular, this book’s Chapter 11 for purposes of further clarifying the issues raised here.)

As already noted in the two preceding installments to this series, the Great Depression arguably began in late 1929, with its “official” starting date usually set as October 29 of that year: Black Tuesday, when the US stock market completed an initial crash that had started the previous Thursday. But realistically it became a true depression, and the Great Depression, in June 1930, when the Smoot–Hawley Tariff Act was signed into law. And Herbert Hoover was president of the United States as the nation as a whole, and much of the world around it, spiraled down into chaos.

There are those who revile Hoover for his failure to effectively deal with, or even fully understand and acknowledge, the challenges that the United States and American citizens and businesses faced during his administration, and certainly after his initial pre-depression honeymoon period in office. And there are those who exalt him, particularly on the more extreme political right as they speak out against the New Deal – and even for how its programs helped to pull the country back from its fall. All of that, while interesting and even important, is irrelevant here for purposes of this discussion. The important point of note coming out of it is that unemployment was rampant, a great many American citizens had individually lost all of whatever life savings they had been able to accumulate prior to this, and seemingly endless numbers of businesses, banks and other basic organizational structures that helped form American society were now unstable and at extreme risk of failure, or already gone. And the level of morale in the United States, and of public confidence in both public and private sector institutions, was for many one of all but despair. And that was the reality in the United States, and in fact in much of the world as a whole, that Franklin Delano Roosevelt faced as he took his first oath of office as the 32nd president of the United States on March 4, 1933.

Roosevelt knew that if he was to succeed in any real way in addressing and remediating any of these challenges, he had to begin acting immediately. And he began laying out his approach to doing that, and following through on it, in his first inaugural address, where he declared war on the depression and where he uttered one of his most oft-remembered statements: “the only thing we have to fear is fear itself.”

Roosevelt did not wait until March 5th to begin acting on the promise of action that he made to the nation in that inaugural address. He immediately began reaching out to key members of the US Congress and to members of both political parties there to begin a collaborative effort that became known as the 100 Days Congress, for the wide ranging legislation that was drafted, refined, voted upon, passed and signed into law during that brief span of time (see First 100 Days of Franklin D. Roosevelt’s Presidency.) This ongoing flow of activity came to include passage of 15 major pieces of legislation that collectively reshaped the country, setting it on a path that led to an ultimate recovery from this depression. And that body of legislation formed the core of Roosevelt’s New Deal as he was able to bring it into effect.

• What did Roosevelt push for and get passed in this way, starting during those first 100 days?
• I would reframe that question in terms of immediate societal needs. What were the key areas that Roosevelt had to address and at least begin to resolve through legislative action, if he and his new presidential administration were to begin to effectively meet the challenge of this depression and as quickly as possible?

Rephrased in those terms, his first 100 days and their legislative push sought to grab public attention and support by simultaneously addressing a complex of what had seemed to be intractable challenges that included:

• Reassuring the public that their needs and their fears were understood and that they were being addressed,
• And building safeguards into the economy and into the business sector that drives it, to ensure their long-term viability and stability.
• Put simply, Roosevelt sought to create a new sense of public confidence, and put people back to work and with real full time jobs at long-term viable businesses.

Those basic goals were, and I add still are, all fundamentally interconnected. And to highlight that in an explicitly Great Depression context, I turn back to a source of challenge that I raised and at least briefly discussed in Part 8 of this series: banks and the banking system, to focus on their role in all of this.

• The public at large had lost any trust that they had had in banks and in their reliability, and with good reason given the number of them that had gone under in the months and first years immediately following the start of the Great Depression. And when those banks failed, all of the people and their families and all of the businesses that had money tied up in accounts with them, lost everything of that.
• So regulatory law was passed to prevent banks and financial institutions in general, from following a wide range of what had proven to be high-risk business practices that made them vulnerable to failure.
• And the Federal Deposit Insurance Corporation (FDIC) was created to safeguard customer savings in the event that a bank were to fail anyway, among other consumer-facing and supporting measures passed.

The goal there was to both stabilize banks and make them sounder, safer and more reliable as financial institutions, while simultaneously reassuring the private sector and its participants: individuals and businesses alike, that it was now safe to put their money back into those banks again. And rebuilding the banking system as a viable and used resource would make monies available through them for loans again, and that would help to get the overall economy moving and recovering again.

• Banks and the banking system in general, can in a fundamental sense be seen as constituting the heart of an economy, and for any national economy that is grounded in the marketplace and its participants, and that is not simply mandated from above, politically and governmentally as a command economy. Bank loans and the liquidity reserves and cash flows that they create, drive growth and make all else possible, and for both businesses, large and small and for their employees and for consumers of all sorts.
• So banks and banking systems constitute a key facet of a nation’s overall critical infrastructure, and one that was badly broken by the Great Depression and that needed to be fixed for any real recovery from it.

This is a series about infrastructure, and the banking system of a nation is one of the most important and vital structural components of its overall infrastructure system, if for nothing else then for how banks collectively create vast pools of liquid funds from monies saved in them, that can be turned back to their communities for a wide range of personal and business uses. But the overall plan put forth and enacted into law in the 100 Days Congress (which adjourned on June 16, 1933) went way beyond simply reinforcing and rebuilding, as needed, banks and other behind-the-scenes elements of the overall American infrastructure. It went on to address rebuilding and expansion needs for more readily visible aspects of the overall infrastructure in place too, and for systems that essentially anyone would automatically see as national infrastructure, such as dams and highways. Roosevelt’s New Deal impacted upon and even fundamentally reshaped virtually every aspect of the basic large-scale infrastructure that had existed in the United States. And to highlight a more general principle here that I will return to in subsequent installments to this series, all of this effort had at least one key point of detail in common:

• It was all organized according to an overarching pattern rather than simply arising ad hoc, piece by piece as predominantly happened before the Great Depression.
• Ultimately any large scale infrastructure development or redevelopment effort has to be organized and realized as a coherent whole, even if that means developing it as an evolving effort, if coherent and gap-free results are to be realized and with a minimum of unexpected complications.

That noted, what did the New Deal, and the fruits of Roosevelt’s efforts and the 100 Days Congress actually achieve? I noted above that this included passage of 15 major pieces of legislation and add here that this included enactment of such programs as:

• The Civilian Conservation Corps as a jobs creating program that brought many back into the productive workforce in the United States,
• The Tennessee Valley Authority – a key regional development effort that made it possible to spread the overall national electric power grid into a large unserved part of the country while creating new jobs there in the process,
• The Emergency Banking Act, that sought to stop the ongoing cascade of bank failures that was plaguing the country,
• The Farm Credit Act that sought to provide relief to family farms and help restore American agriculture,
• The Agricultural Adjustment Act, that was developed coordinately with that, and that also helped to stabilize and revitalize American agriculture,
• The National Industrial Recovery Act,
• The Public Works Administration, which focused on creating jobs through construction of water systems, power plants and hospitals among other societally important resources,
• The Federal Deposit Insurance Corporation as cited above, and
• The Glass–Steagall Act – legislation designed to limit if not block high-risk, institutional failure-creating practices in banks and financial institutions in general.

Five of the New Deal agencies that were created in response to the Great Depression and that contributed to ending it still exist today: the Federal Deposit Insurance Corporation, the Securities and Exchange Commission, the National Labor Relations Board, the Social Security Administration and the Tennessee Valley Authority. And while subsequent partisan political efforts have eroded some of the key features of the Glass–Steagall Act, much of that is still in effect today too.

And with that noted, I conclude this posting by highlighting what might in fact be the most important two points that I could make here:

• I wrote above of the importance of having a single, more unified vision when mapping out and carrying out a large scale infrastructure program, and that is valid. But flexibility in the face of the unexpected, and in achieving the doable, is vital there too. And so is a willingness to experiment and simply try things out, certainly when faced with novel and unprecedented challenges that cannot be addressed by anything like tried-and-true methods, coupled with a willingness to step back from a possible solution and try something new if it does not work.
• And seeking out and achieving buy-in is essential if any of that is going to be possible. This meant reaching out to politicians and public officials, as Roosevelt did when he organized and led his New Deal efforts. But more importantly, this meant his reaching out very directly to the American public and right in their living rooms, through his radio broadcast fireside chats, with his first of them taking place soon after he was first sworn into office as president. (He was sworn into office on March 4 and he gave his first fireside chat of what would become an ongoing series of 30, eight days later on March 12.)
• Franklin Delano Roosevelt most definitely did not invent the radio. But he was the first politician and the first government leader who figured out how to effectively use that means of communication and connections building to promote and advance his policies and his goals. He was the first to use this new tool in ways that would lead to the type and level of overall public support that would compel even his political opponents to seek out ways to work and compromise with him on the issues that were important to him. So I add to my second bullet point here, the imperative of reaching out as widely and effectively as possible when developing that buy-in, through as wide and effective a span of communications channels and venues as possible.

I am going to step back in my next installment to this series, from the now five case-in-point examples that I have been exploring in it up to here. And I will offer an at least first draft of the more general principles that I would develop out of all of this, as a basis for making actionable proposals as to how future infrastructure development projects might be carried out. And in anticipation of what is to follow here, I write all of this with the future, and the near-future and already emerging challenges of global warming, in mind as a source of infrastructure development and redevelopment imperatives. Then after offering that first draft note, I am going to return to my initial plans for how I would further develop this series, as outlined in Part 6, and discuss infrastructure development as envisioned by and carried out by the Communist Party of China and the government of the People’s Republic of China. And as part of that I will also discuss Russian, and particularly Soviet Union era, Russian infrastructure and its drivers. And my intention for now, as I think forward about this, is that after completing those two case study example discussions, I will offer a second, draft-refined update to that first draft version, as a Part 10 here.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 17

Posted in business and convergent technologies, social networking and business by Timothy Platt on September 27, 2019

This is my 17th installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-16.)

The types of issues that I raise and discuss in this type of series, always have their roots in the histories of the peoples and nations involved in the issues and challenges discussed in them. So I have been approaching the issues and challenges of national security here, with specific relevant historic narratives and timelines in mind, as I consider and discuss specific case study examples. I take that approach in my consulting work and in how I think about these issues in general too, so I offer that as my basic approach here, to put this in wider perspective. And as part of that, I have been exploring Russia’s approach to national security, as a case in point source of working examples for this series, since Part 13.

More specifically, I have been developing and outlining a selectively considered timeline of the threats and challenges that Russia has faced over the centuries now, in Part 13, Part 14 and Part 16 of this, with that developing narrative leading up to and including a brief and selective biographical note concerning Vladimir Putin himself: Russia’s current leader, and with a goal of discussing that nation’s current and emerging national security strategies and how they implement them, as shaped by a combination of Russian national history and Putin’s own personal history and perspective.

I digressed from this largely chronologically organized narrative in Part 15, to offer a more generally stated perspective on how most if not all nations currently see, understand and plan for cyber-defense and offense in all of this. But my goal here is to complete, at least for purposes of this series, my Russian historical narrative, and then turn back to consider some still very open issues and questions that are at least implicitly raised in Part 15, so as to at least begin to offer some thoughts on how a better, more resiliently effective national security doctrine might be developed: one that would more effectively take the advancing cyber-dimensions of threat faced, into account.

I intend to raise and discuss several other case study examples in this series after completing my discussion of Russia as such, to further develop and expand upon that narrative line. But I begin all of this here with further consideration of Vladimir Putin and his Russia, and with the world context that he and his country face. And I begin this with an at least briefly selective discussion of what I have come to think of as the Putin Defense Policy and its underlying doctrine, as first cited here in Part 16. And I begin that by noting a point of detail that might or might not seem immediately obvious to a reader:

• When a policy or doctrine, or plan if it is called that (e.g. the Marshall Plan) is explicitly named after a single individual as its defined and defining source, that generally means that it is grounded in their own more individual understandings and their own preferred goals and priorities, and as much so as it is in the needs of whatever societal order that it would be developed for.

And yes – I have written in this blog of the Marshall Plan as an example of a massive, comprehensive infrastructure rebuilding program, but it was at least as much a massive mutual defense initiative too. And that face to it and its imperatives created the support that made its infrastructure redevelopment side possible too.

So what is the Putin Defense Policy and what is its underlying doctrine, both as Vladimir Putin and his government seek to lay out and prioritize a strategic and operational plan for safeguarding and advancing Russia and that nation’s interests, and as they would advance Putin’s own more individual needs and understandings too? I begin addressing that question by posing a second one, that might at first glance seem unrelated and even a non sequitur here. For all of their differences, what do Donald Trump, Xi Jinping and Vladimir Putin most significantly hold in common, even as shared defining traits?

• All three see themselves as the essential leader of their time, and as indispensable for that.
• All three see any posited distinction between their own personal goals and ambitions and the realization of their visions for their nations, as being arbitrary and false.
• And all three pursue their more unified, if sometimes blurred and out of focus plans there, through authoritarian means, pursuing that approach to leadership as a shortest and most direct path to what they see as their inevitable success, while allowing for a minimum of resistance or pushback to slow them down.

I have been addressing Xi and Trump in this regard in a concurrently running series (see Donald Trump, Xi Jinping, and the Contrasts of Leadership in the 21st Century as can be found at Social Networking and Business 2 and its Page 3 continuation, as postings 299 and loosely following.) And I at least briefly consider Putin in this same type of light here, as I raise and discuss his vision of Russia and of how best to safeguard it: his nation, while meeting his own more personal ambitions and needs too.

Ultimately, Vladimir Putin does not see as valid any distinction between his meeting his own needs and interests and his meeting Russia’s. (This same point, I add, could be said about Donald Trump and his vision of the United States, and Xi Jinping and his vision of China, and I explicitly note that here as a brief add-on note to the above cited series about them.) How does that play out as Putin shapes and implements his policies, both foreign and domestic? My goal for this posting is to at least briefly answer that question, and certainly as far as his foreign policy and his approach to national defense as included there is concerned.

I have, of course, already offered a key goals-oriented and goals-defining part of any real answer to that question, in the course of writing Part 16 to this, when I observed that what Putin:

• “Seeks to do is to reestablish the old protective buffer zone, or at least part of it, as that reached its greatest scope under the Warsaw Pact and certainly when considering Western threats. And as a continuation of old approaches of developing such protective buffer zones, the Putin Policy as it has emerged, also calls for the creation of what amount to cyber buffer zones too: areas of Russian dominating cyber influence and control.”

And the second half of that here-repeated point strikes to the heart of what Vladimir Putin has been operationally developing as his defense plan. He has come to see flexible hybrid systems of response and of proactive action that include use of both traditional military and cyber capabilities as fundamentally important, and has actively worked to both develop and use such combined, flexible capabilities as he evolves and advances his foreign policy as a whole.

I cited his government’s moves on the Crimean Peninsula and on Eastern Ukraine in Part 16, and add here that this involved:

• Direct military intervention with Russian soldiers and officers entering into the Ukraine in false flag garb as supposed Ukrainian citizens,
• Support of actual Ukrainian forces whose interests aligned with his own, organized as local “home grown” militias, and
• Provision of military equipment and supplies, military officer guidance and military intelligence findings to support all of this.

But that, in and of itself, only addresses Russia’s deployment and use of more conventional forces in this overall campaign. Putin’s Russia has orchestrated and led a campaign to reestablish a vassal state buffer zone there between his Russia and the West, one that has just as importantly included a very active cyber component, both to hinder efforts by the Ukraine’s Western-leaning government to effectively counter this action, and to sow disinformation, both within the Ukraine itself and in the West, as to who has been doing what there and why. That has prominently included enlisting and leading an army of third party social media trolls and related non-Russian, non-Ukrainian agents, along with deploying explicitly Russian assets.

Initially, those Russian cyber-assets were organized as smaller specialized units under the command of Russia’s Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU). But Putin’s Russian cyber-warfare capability (thought of there more broadly as an information-warfare capability) has now been reorganized under a single overall unified cyber command. And non-Russian, largely civilian cyber agents and explicitly Russian cyber-assets as drawn from that command and its operational units, have been and are used in parallel with each other and according to a basic doctrinal approach that closely mirrors how Putin and his planners and field commanders have made use of both local Ukrainian and Russian-sourced conventional military assets.

• For a brief but telling discussion of this emerging Russian cyber-capability, see Russia’s Approach to Cyber Warfare: a 2016 CNA Analysis Solutions paper, prepared in collaboration with the US Center for Naval Analyses, for the Office of the Chief of Naval Operations.

I began my discussion of Russian cyber warfare and of Russia’s weaponized cyber capabilities here, with a focus on events that have taken place in and near the Ukraine. But this was not Russia’s or even Vladimir Putin’s first use of cyber capabilities as a source of tools for carrying out his foreign policy. This campaign was not his counterpart to the Spanish Civil War, as that conflict was used by Nazi Germany before World War II to test new weapons and tactics and to gain real-world proven proficiency in their use. He did that earlier as a key due diligence step, when he had forces developed within his military and his own intelligence service: the FSB (formerly the KGB), carry out cyber-attacks against what he saw as unsupportive and therefore hostile governments in the Baltic States, and against their nations’ private sectors too. Estonia was a particular target there. And Russia’s attacks there have served as test runs for all that has followed elsewhere.

Putin and his government and his military have used a combination of local Russian-supportive citizens of those countries, and certainly in Estonia, working in concert with Russian cyber-warfare and other assets (e.g. Russian operatives on the ground), to disrupt government and private sector functions, as a very direct message that those nations should stay aligned with Russian interests, or else.

• When Putin made his test-case moves on Estonia and the Baltic States, and then when he launched his attacks on the Crimean Peninsula and on Eastern Ukraine, he had his planners and his senior officers in place deploy networked computer resources such as denial of service attack-directed botnets, including veritable armies of security-compromised personal computers from all over the globe. And a large proportion of this activity was operationally directed out of a former Warsaw Pact ally and vassal state: Romania.
• His campaign also made use of outsider-sourced cyber-attack assets: cyber-trolls and discontents from a more open-ended geographic range whom his people could convince to contribute to this effort, through the spread of online disinformation directed at them, if nothing else.

I have written of the flexible use of combined forces and asset types in all of this, and if the Putin Defense Policy has uniquely innovative aspects to it, it is in how he has developed and test-fire vetted this, in ways that few other nations can begin to claim to match. That assurance of reliable usability makes these capabilities all the more dangerous in his hands, as Putin can consider and use them without the pause for thought that attempted use of untested resources would always bring with it. But I would end this posting by raising another point of strategic and tactical consideration that is most certainly as much a part of Putin’s built-in way of thinking as any of the lessons learned that I made note of in Part 16: the concept of correlation of forces. See this now-declassified 1976 United States Defense Advanced Research Projects Agency (DARPA) SRI report on The Soviet Concept of the “Correlation of Forces” as that would have shaped at least Putin’s early understanding of this concept. This is the vision and understanding of this basic military planning tool that would have informed his own training.

• Introduction of cyber-weaponized systems in general, and of crowd sourced weaponized capabilities as a particular advancement there, necessitates a complete reevaluation of how an accurate risk and opportunity evaluation of the correlation of forces in play could even be determined, as part of a meaningful planning exercise.

And that point of observation at least begins to highlight the measure of significance of the disruptive novelty of what Putin’s Russia is both developing and live-fire testing, right now.

With that noted, see this 2017 US Military sourced white paper:

Demystifying the Correlation of Forces Calculator, for how it has baked into it so many of the basic force identity and capability assumptions that enter into the above cited Soviet era Russian document on this planning tool, and for how it fails to take new and emerging cyber-capabilities into account. Yes, its more detailed understandings of more conventional forces differ from what was offered there, but its essentially complete focus on them remains the same, and even in an age when cyber threat has to be assumed and accounted for too.
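To make the planning tool itself more concrete, here is a minimal, purely illustrative sketch of a correlation of forces calculation, with and without a cyber term included. All of the force categories, weights and numbers here are hypothetical, and real calculators such as the one cited above use far more detailed force-equivalence tables; the point is only to show how omitting a cyber term can bias the resulting ratio.

```python
# Illustrative sketch only: a toy correlation-of-forces ratio.
# All categories, weights, and values below are hypothetical.

def correlation_of_forces(side_a, side_b, weights):
    """Return the weighted force ratio of side A to side B."""
    score_a = sum(weights[k] * side_a.get(k, 0) for k in weights)
    score_b = sum(weights[k] * side_b.get(k, 0) for k in weights)
    return score_a / score_b

# Conventional-only weighting, as in Cold War era planning tools.
conventional = {"armor": 1.0, "artillery": 0.8, "infantry": 0.5}

# A cyber-inclusive weighting adds a term the older tools omit.
cyber_aware = dict(conventional, cyber=1.5)

side_a = {"armor": 300, "artillery": 250, "infantry": 900, "cyber": 40}
side_b = {"armor": 280, "artillery": 300, "infantry": 850, "cyber": 5}

ratio_conventional = correlation_of_forces(side_a, side_b, conventional)
ratio_cyber_aware = correlation_of_forces(side_a, side_b, cyber_aware)

# The two ratios diverge, which is the point of the critique above:
# omitting cyber capabilities biases the whole estimate.
print(round(ratio_conventional, 3), round(ratio_cyber_aware, 3))
```

In this toy example the conventional-only ratio reads as near parity while the cyber-aware one favors side A, precisely the kind of divergence the text argues current calculators cannot surface.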

I am going to continue this discussion in a next series installment where I will further discuss cyber weapons and cyber weaponization, reconsidering among other issues, what dual-use technologies actually are in this fast changing context. And I will also further discuss the challenge of understanding and calculating correlations of forces and how they have to be redefined for the 21st century, and in an explicitly cyber-inclusive and cyber-ubiquitous context. That among other things will require my discussing symmetrical and asymmetrical conflicts and how they are being fundamentally redefined here too. Then I will at least briefly touch upon Russian efforts to influence and even suborn foreign referendums and elections, and in places like the European Union and the United States. Then, and after offering more summarizing comments on the Putin Defense Plan as a whole, I will turn back to reconsider and expand upon the cyber doctrine issues that I first raised here in Part 15. And I will continue on from there by at least briefly discussing other case study examples of relevance here too.

And as one more anticipatory note as to what is to come here, I will at least briefly address a key detail in all of this that I have cited here without explanation but that does merit more detailed consideration too: the fact that Vladimir Putin does not and probably cannot see any fundamental distinctions between his meeting his nation’s needs and his own. What I write of here is national in scope and focus but it is deeply personal to him too.

All of that noted, I end this posting with one final thought:

• Any conflict or potential conflict, and any use or possible use of force that in any significant way or degree includes use of cyber capabilities, automatically renders the theater of operations involved, global. And it is never going to be possible to meaningfully calculate correlation of forces or force symmetries or asymmetries or any related measures for such contexts if this simple fact is not taken into account and fully so.

I write this posting thinking back to a face-to-face conversation that I had with a senior officer on the United States side of this, in that nation’s emerging cyber command: a conversation that leads me to question how thoroughly the points that I raise here are understood for their fuller implications.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 3 and also see that directory’s Page 1 and Page 2.

Rethinking the dynamics of software development and its economics in businesses 6

Posted in business and convergent technologies by Timothy Platt on September 9, 2019

This is my 6th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-5.)

I have been at least somewhat systematically discussing a series of historically grounded benchmark development steps in both the software that is deployed and used, and by extension the hardware that it is run on, since Part 2 of this series:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language defining and a programming style defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

And I have successively delved into and discussed all of the first six of those development steps since then, noting how each successive step in that progression has simultaneously sought to resolve challenges and issues that had arisen in prior steps of that list, while opening up new positive possibilities in its own right (and with that also including their creating new potential problems for a next development step beyond it, to similarly address too.)

My goal beyond that has included, and continues to include, an intention to similarly discuss Points 7 and 8 of the above-repeated list. But in anticipation of doing so, and in preparation for that too, I switched directions in Part 5 and began to at least lay a foundation for explicitly discussing the business model and economic issues that comprise the basic topical goal of this series as a whole. And I focused on the above Points 1-6 for that, as Points 1-5 are all solidly historically grounded for their development and implementation, and Point 6 is likely to continue to develop along more stable, non-disruptive evolutionary lines. That is a presumption that could not realistically be made when considering Points 7 and 8.

I focused in Part 5 of this series on issues of what might be called anticipatory consistency, where systems: hardware and software in nature, are determined and designed in detail before they are built and run, and as largely standardized, risk-consistent forms. And in that, I include tightly parameterized flexibility in what is offered and used, as would be found for example, in a hardware setting where purchasing customers and end users can select among pre-set component options (e.g. for which specific pre-designed and built graphics card they get or for the amount of RAM their computer comes with.)
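The pre-set component options just cited can be thought of as a small configuration space that is fully enumerated in advance. As a minimal, purely hypothetical sketch of that idea, with invented option names:

```python
# A hypothetical sketch of "anticipatory consistency": buyers
# choose only among pre-designed component options, so every
# configuration that can be ordered was fully specified in advance.

PRESET_OPTIONS = {
    "graphics_card": ["integrated", "midrange", "high_end"],
    "ram_gb": [8, 16, 32],
}

def validate_config(config):
    """Accept a configuration only if every choice is a preset one."""
    return (config.keys() == PRESET_OPTIONS.keys() and
            all(config[k] in PRESET_OPTIONS[k] for k in PRESET_OPTIONS))

print(validate_config({"graphics_card": "midrange", "ram_gb": 16}))
print(validate_config({"graphics_card": "custom_fpga", "ram_gb": 16}))
```

The design choice to reject anything outside the preset list is exactly what lets the manufacturer standardize risk, costs and support across every unit sold.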

This anticipatory consistency can only be expected to create and enforce basic assumptions for the businesses that would develop and offer these technologies, for both their hardware and software, and for how this would shape their business models. And this would be expected to create matching, all but axiomatic assumptions when considering their microeconomics too, both within specific manufacturing and selling businesses and across their business sectors in general.

I included Point 6: language-oriented programming there, as offering a transition step that would lead me from considering the more historically set issues of Points 1-5, to a discussion of the still very actively emerging Points 7 and 8. And I begin this posting’s main line of discussion here by noting a very important detail. I outlined something of language-oriented programming as it has more traditionally been conceived, when raising and first discussing it in Part 4 of this series. And I kept to that understanding of this software development step in Part 5, insofar as it came up there. But that is not the only way to view this technology, and developments to come in it are likely to diverge very significantly from what I offered there.

Traditionally, and at least as a matter of concept and possibility, language-oriented programming has been seen as an approach for developing computing and information management, problem-specific computer languages that would be developed and alpha tested and at least early beta tested, and otherwise vetted prior to their being offered publicly and prior to their being used on real-world, client-sourced problems as marketable-product tools. The nature of this approach, as a more dynamic methodology for resolving problems that do not readily fit the designs and the coding grammars of already-available computer languages, at least for the speed and efficiency that they would offer, is such that this vetting would have to be streamlined and fast if the language-oriented programming protocols involved are to offer competitive value and in fact become more widely used. But the basic paradigm, going back to 1994 as noted in Part 4, fits the same pattern outlined there when considering development steps 1-5.
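To make the basic idea of a problem-specific language more concrete, here is a minimal, purely hypothetical sketch of one: a tiny filter-rule language with its own one-line grammar, compiled into executable form. No real language-oriented programming toolchain is implied here; the grammar and names are invented for illustration.

```python
# A minimal, hypothetical sketch of language-oriented programming:
# a tiny problem-specific language for describing record filters,
# with its own grammar, rather than forcing the problem into a
# general-purpose language's idioms.
import re

def compile_filter(source):
    """Compile one-line rules like 'age > 30' or 'dept = sales'
    into a predicate over dict-shaped records."""
    ops = {">": lambda a, b: a > b,
           "<": lambda a, b: a < b,
           "=": lambda a, b: a == b}
    rules = []
    for line in source.strip().splitlines():
        field, op, value = re.match(r"(\w+)\s*([><=])\s*(\w+)", line).groups()
        value = int(value) if value.isdigit() else value
        rules.append((field, ops[op], value))
    return lambda rec: all(op(rec[f], v) for f, op, v in rules)

keep = compile_filter("""
age > 30
dept = sales
""")
records = [{"age": 45, "dept": "sales"}, {"age": 28, "dept": "sales"}]
print([r for r in records if keep(r)])
```

The vetting step discussed above corresponds to testing a language like this before any client problem is run through it; everything about its syntax and semantics is fixed in advance of use.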

And with that, I offer what could be called a 2.0 version of that technology and its basic paradigm:

6.0 Prime: In principle, a new, problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving that problem, or class of them, might start out as a “factory standard” offering. But there is no reason why a self-learning and a within-implementation capacity for further ontological self-development and change, could not be built into that.

Let’s reconsider some of the basic and even axiomatic assumptions that are in effect built into Points 1-5 as product offerings, as initially touched upon in Part 5 here, with this Point 6 Prime possibility in mind. And I will frame that reconsideration in terms of a basic biological systems evolutionary development model: the adaptive peaks, or fitness landscape model.

Let’s consider a computational challenge that would in fact likely arise in circumstances where more standard computer languages would not cleanly, efficiently code the problem at hand. A first take approach to developing a better language for coding it, with a more efficient grammar for that purpose, might in fact be well crafted and prove to be very efficient for that purpose. But if this is a genuinely novel problem, or one that current and existing computer languages are not well suited for, it is possible that this first, directly human-crafted version will not be anywhere near as efficient as a genuinely fully optimized draft language would be. It might, when considered in comparison to a large number of alternative possible new languages, fit onto the slope of a fitness (e.g. performance efficiency) peak that, at its best, most developed possibility, would still offer much less than would be possible overall, when considering that performance landscape as a whole. Or it might in fact best fit into a lower level position in a valley there, where self-directed ontological change in a given instantiation of this language could conceivably lead it towards any of several possible peaks, each leading to improved efficiency but each carrying its own maximum efficiency potential. And suppose instantiation A, as purchased by one customer for their use, self-learns and ontologically develops as a march up what is in fact a lower possible peak in that landscape, and its overall efficiency plateaus out as a still relatively inefficient tool. Instantiation B, on the other hand, finds and climbs a higher peak, and that leads to much better performance. And instantiation C manages to find and climb a veritable Mount Everest for that fitness landscape. And the company that bought that publishes this fact, publicly announcing how effectively its version of this computer language, as started in a same and standardized form, has evolved in their hands, and simply from its own performance and from its own self-directed changes.
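The adaptive peaks dynamic just described can be sketched in a few lines of code. Everything here is hypothetical and one-dimensional: a landscape with two lower peaks and one much higher one, and three instantiations that differ only in where their self-directed hill climbing happens to begin.

```python
# A toy fitness-landscape sketch (all numbers hypothetical): three
# instantiations of the same "factory standard" language start at
# different points and hill-climb by local self-modification, each
# plateauing at whatever local efficiency peak it happens to find.

def fitness(x):
    # A 1-D landscape with local peaks near x=2 and x=5 and a
    # much higher "Mount Everest" near x=9.
    peaks = [(2, 3.0), (5, 5.0), (9, 10.0)]
    return max(h - (x - c) ** 2 for c, h in peaks)

def hill_climb(x, step=0.1, iters=500):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break  # local peak reached: no neighboring move improves
        x = best
    return fitness(x)

# Instantiations A, B, C differ only in where their self-directed
# development happens to begin.
a, b, c = hill_climb(1.0), hill_climb(4.0), hill_climb(8.0)
print(round(a, 2), round(b, 2), round(c, 2))
```

Each run climbs until no neighboring move improves fitness, so A plateaus on the lowest peak, B on a higher one, and C on the highest, mirroring the three purchasers in the scenario above.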

• What will the people who run and own the client businesses that purchased instantiations A and B think when they learn of this, and particularly if they see their having acquired this new computer language as having represented a significant up-front cost expenditure for them?

I am going to leave that as an open question here, and will begin to address it in my next series installment. In anticipation of that discussion to come, I will discuss both the business model of the enterprise that develops and markets this tool, and how it would offer this product to market, selling or in some manner leasing its use. And that means I will of necessity discuss the possible role that an acquiring business’ own proprietary data, as processed through this new software, might have played in shaping its ontological development in their hands. Then after delving into those and related issues, I will begin to more formally discuss development step 7: artificial intelligence and the software that will come to contain it, and certainly as it is being fundamentally reshaped with the emergence of current and rapidly arriving artificial intelligence agents. Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 20

Posted in business and convergent technologies, strategy and planning by Timothy Platt on September 3, 2019

This is my 20th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-19.)

I have been discussing the issues of what innovation is, and how a change would be identified as being cosmetic or more significant in nature, since Part 16 of this series. And as a core element of that narrative I have been discussing both acceptance of new and of innovation, and pushback and resistance to it, and certainly as members of differing readily definable demographics would make their own determinations here and take their own actions. On the resistance side of this, that has meant discussing two fundamentally distinctive sources of pushback that can in fact arise and play out either independently and separately from each other or concurrently and for members of the same communities and the same at least potential markets:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adaptors on to include eventual late and last adaptors, and
• Patterns of global flattening and its global wrinkling, pushback alternative.

And I begin this continuation of that discussion thread by briefly repeating two points that I laid out and at least briefly began to develop in Part 19 that will prove of significance here too:

• The standard innovation acceptance diffusion curve and a given marketplace participant’s position on it tends to be more determined by how consumers and potential consumers see New and Different as impacting upon themselves and their immediate families, insofar as they look beyond themselves there when making their purchase and use decisions.
• But global flattening (here viewable as being analogous to pioneer and early acceptance in the above), and global wrinkling (that can be seen as a rough counterpart to late and last adaptors and their responses and actions), are more overall-community and societally based and certainly as buy-in or reject decisions are made.

And social networking can play a role in both, and both in framing how individuals would see and understand an innovation change that confronts them, and in how members of larger communities would see it where that perspective would hold defining sway.

I stated at the end of Part 19 that I would focus here, more on global flattening and wrinkling than on standard innovation diffusion curve dynamics and I will, though more fully addressing the first of those dynamics will of necessity require at least commenting on the second of them too. And with that in mind, and with a flattening versus wrinkling possibility in more specific focus here, I repeat one more organizing point that I will build from in this posting:

• Change and innovation per se can be disruptive, for both the perceived positives and negatives that that can bring with it. And when a sufficiently high percentage of an overall population primarily sees positive, or at worst neutral there, flattening is at least going to be more possible, and certainly as a “natural” path forward. But if a tipping point level of overall negative impact-perceived response arises, then the resistance pressures that arise will favor wrinkling, and that will become a societally significant force and a significant part of the overall voice for those peoples too.
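The tipping point dynamic in that bullet point can be illustrated with a simple threshold model of adoption, in the spirit of Granovetter's classic threshold models of collective behavior. All of the threshold values and community sizes here are hypothetical:

```python
# A toy threshold model of the tipping-point dynamic described
# above (thresholds here are hypothetical): each community member
# adopts the New once the adopting fraction around them exceeds
# their personal resistance threshold.

def diffuse(thresholds, seed_fraction):
    """Iterate until adoption stabilizes; return final adopting fraction."""
    n = len(thresholds)
    adopted = seed_fraction
    while True:
        new = sum(1 for t in thresholds if t <= adopted) / n
        if new == adopted:
            return adopted
        adopted = new

# A community with mostly low resistance: flattening cascades.
open_community = [i / 100 for i in range(100)]
# A community where most thresholds sit above any plausible seed:
# the cascade stalls, the rough analogue of wrinkling above.
resistant_community = [0.6 + 0.004 * i for i in range(100)]

print(diffuse(open_community, 0.05), diffuse(resistant_community, 0.05))
```

In the first community a small seed cascades to full adoption; in the second the very same seed stalls at zero, which is the tipping point asymmetry the bullet point describes.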

Unless it is imposed from above, as for example through government embargo and nationalism-based threat, global wrinkling is a product of social networking. And so is the perception of threat of New that it can engender, mainstream and even bring to de facto axiomatic stature across communities. And this brings me directly to the issues and questions of Who and of agendas, that I said at the end of Part 19, that I would at least begin to address here.

Let’s begin by considering the players, and possible players in this, starting with the networking strategies and networker types (as determined by what strategies they use there), that I initially offered in my posting: Social Network Taxonomy and Social Networking Strategy.

I began that posting by classifying basic responses to networking opportunities as fitting into four categories:

• Active networkers – people who are seeking to expand their connections reach and really connect with their contacts to exchange value.
• Passive networkers – people who may or may not be looking to expand their networks and who primarily wait for others to reach out to connect with them.
• Selective networkers – people who are resistant to networking online with anyone who they are not already actively connected with and networking with by other means.
• Inactive networkers – people who may very well lean towards selective networking as defined above or tend to be passive networkers when working on their networks but who are not doing so, at least now.

For purposes of this discussion, I set aside the fourth of those groups: inactive networkers as people who are not likely to be strongly influenced by community sourced pressures towards either global flattening and buying into New, or global wrinkling and messages favoring resistance to New – and certainly if those messages come from strangers. Selective networkers who do in fact connect with and more actively communicate with people who they already know and respect, might be significantly influenced one way or the other if their current contacts bring them into this. And the same holds for passive networkers. And with all of this noted, I would argue that it is the active networkers in a community who drive change, and both for its acceptance or rejection.

Looking to those active networkers, I divided the more actively engaged among them into a further set of groups depending on their particular networking strategies followed:

• Hub networkers – people who are well known and connected at the hub of a specific community with its demographics and its ongoing voice and activities.
• Boundary networkers or demographic connectors – people who may or may not be hub networkers but who are actively involved in two or more distinct communities and who can help people connect across the boundaries to join new communities.
• Boundaryless networkers (sometimes called promiscuous networkers) – people who network far and wide, and without regard to community boundaries. These are the people who can seemingly always help you find and connect with someone who has unusual or unique skills, knowledge, experience or perspective and even on the most obscure issues and in the most arcane areas. And for purposes of this line of discussion, these are the people who can and do bring otherwise outliers into larger community discussions and in ways that can spread emerging shared opinions too.

The important point here, is that people have to widely connect to influence, and if not through one directional message broadcasting, then through more two and multi-directional conversation. That does not mean that any and every hub, boundary, or boundaryless networker is widely influential and opinion shaping: only that those who are more widely influential are also usually widely connected in one way or other too. And this is where agendas enter this narrative. Who is so connected? And what if anything are their agendas that would lead them to seek to shape public opinion, and certainly on matters of community response to change and its possibilities?

• Pushback and resistance and the global wrinkling that it would promote, more generally come from those who seek to preserve a status quo, or to restore what to them is a lost but not forgotten, perhaps idealized order.
• Open acceptance and the global flattening that it would promote, come from those who see greater promise in moving past the current and former realities that they see evidence of around themselves. And this is driven by a desire to join in and to move beyond the perhaps marginalizing impact of separation and even parochial isolation that wrinkling can lead to, and a desire to not be left behind as others advance into New and into new opportunities around them.

I offer this as a cartoonishly oversimplified stereotypical representation, but ultimately all join-and-engage moves towards global flattening and all reactive pushback to that are driven by readily shared simplifications: all such movements become encapsulated in and bounded by slogans and related simplifying and even simplistic shorthand.

To add one more crucially important factor into this narrative here:

• Pushback and resistance to change and to New and certainly to foreign-sourced new, come for the most part from those who face pressures to change and adapt in the face of that New, where their judgments on this are driven by their fears of the foreign and of the different.
• But pressures towards global flattening can and generally do come from multiple directions, with the communities that face this New only serving as one source of that. Equally large and impactful pressures can come from the original sources of the New that is in play there, and that they might be very actively seeking new markets for. And the active networkers and the engaged broadcast format information sharing channels that they use in their promotion of open markets and global flattening can be very important here too.

I am going to continue this line of discussion in a next series installment, where I will more directly and fully discuss business stakeholders as just touched on there, as change accepting and even change embracing advocates. And I will discuss the roles of reputation and history in all of that. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 11

This is the 11th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-10. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I conceptually divide artificial intelligence tasks and goals into three loosely defined categories in this series. And I have been discussing artificial intelligence agents and their systems requirements in a goals and requirements-oriented manner that is consistent with that, since Part 9 with those categorical types partitioned out from each other as follows:

• Fully specified systems goals and their tasks (e.g. chess with its fully specified rules defining a win and a loss, etc. for it),
• Open-ended systems goals and their tasks (e.g. natural conversational ability with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals and their tasks (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

And I have focused up to here in this developing narrative on the first two of those task and goals categories, only noting the third of them as a transition category, where success in resolving tasks there would serve as a bridge from developing effective artificial specialized intelligence agents (that can carry out fully specified tasks, and that have become increasingly well understood both in principle and in practice) to the development of true artificial general intelligence agents (that can carry out open-ended tasks, and that are still only partly understood for how they would be developed.)

And to bring this orienting starting note for this posting up to date for what I have offered regarding that middle ground category, I add that I further partitioned that general category for its included degrees of task performance difficulty in Part 10, according to what I identify as a swimming pool model:

• With its simpler, shallow end tasks that might arguably in fact belong in the fully specified systems goals and tasks category, as difficult entries there, and
• Deep end tasks that might arguably belong in the above-repeated open-ended systems goals and tasks category.

I chose self-driving vehicles and their artificial intelligence agent drivers as an intermediate, partly specified systems goal example because that goal at least appears to belong in this category, and with a degree of difficulty that would position it at least closer to the shallow end than the deep end there, and probably much closer.

Current self-driving cars have performed successfully (reaching their intended destinations and without accidents), both in controlled settings and on the open road in the presence of actual real-world drivers and their driving. And their guiding algorithms do seek to at least partly control for and account for what might be erratic circumambient driving on the part of others on the road around them: by, for example, allowing extra spacing between their vehicles and others ahead of them on the road. But even there, an “aggressive” human driver might suddenly squeeze into that space without signaling their lane change, suddenly leaving the self-driving vehicle following too closely too. So this represents a task that might be encoded into a single if complex overarching algorithm, as supplemented by a priori sourced expert systems data and insight based on real-world human driving behavior. But it is one that would also require ongoing self-learning and improvement on the part of the artificial intelligence agent drivers involved, both within these specific vehicles and between them as well.
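That spacing trade-off can be captured in a cartoonishly simple sketch, in keeping with the toy-model spirit of this discussion. All of the numbers here (the two second base headway, the extra margin for erratic neighbors) are illustrative assumptions only, and not drawn from any actual self-driving system:

```python
def desired_gap_m(speed_mps: float, erratic_neighbor: bool) -> float:
    """Toy headway policy: the target following gap grows with speed,
    plus an extra buffer when nearby driving looks erratic.
    All parameters are illustrative assumptions, not real system values."""
    base_headway_s = 2.0      # the familiar "two second rule"
    erratic_margin_s = 1.5    # hypothetical extra margin for unpredictable neighbors
    headway_s = base_headway_s + (erratic_margin_s if erratic_neighbor else 0.0)
    return speed_mps * headway_s

# At roughly highway speed (27 m/s, about 60 mph):
normal = desired_gap_m(27.0, erratic_neighbor=False)    # 54.0 m
cautious = desired_gap_m(27.0, erratic_neighbor=True)   # 94.5 m
```

Note that the enlarged gap this policy produces is exactly what invites the cut-in maneuver described above: reducing one risk opens the door to another.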

• If all cars and trucks on the road were self-driving and all of them were actively sharing action and intention information with at least nearby vehicles in that system, all the time and real-time, self-driving would qualify as a fully specified systems task, and for all of the vehicles on the road. As soon as the wild card of human driving enters this narrative, that ceases to hold true. And the larger the percentage of human drivers actively on the road, the more statistically likely it becomes that one or more in the immediate vicinity of any given self-driving vehicle will drive erratically, making this a distinctly partly specified task challenge.

Let’s consider what that means in at least some detail. And I address that challenge by posing some risk management questions that this type of concurrent driving would raise, where the added risk that those drivers bring with them moves this out of a fully specified task category:

• What “non-standard” actions do real world drivers make?

This would include illegal lane changes, running red lights and stop signs, illegal turns, speeding and more. But more subtly perhaps, this would also include driving at, for example, a posted speed limit but under road conditions (e.g. in fog or during heavy rain) where that would not be safe.

• Are there circumstances where such behavior might arguably be more predictably likely to occur, and if so what are they and for what specific types of problematical driving?
• Are there times of the day, or other identifiable markers for when and where specific forms of problematical driving would be more likely?
• Are there markers that would identify problem drivers approaching, and from the front, the back or the side? Are there risk-predictive behaviors that can be identified before a possible accident, that a self-driving car and its artificial intelligence agent can look for and prepare for?
• What proactive accommodations could a self-driving car or truck make to lessen the risk of accident if, for example, its sensors detect a car that is speeding and weaving erratically from lane to lane in the traffic flow, and without creating new vulnerabilities from how it would respond to that?

Consider, in that “new vulnerabilities” category, the example that I have already offered in passing above, when noting that increasing the distance between a self-driving car and the vehicle directly ahead of it might in effect invite a third driver to squeeze in between them, even if that meant this third vehicle was now tailgating the leading vehicle, with the self-driving car now tailgating it in turn. A traffic light ahead suddenly changing to red, or any other driving circumstance that would force the lead car in all of this to suddenly hit its brakes, could cause a chain reaction accident.

What I am leading up to here in this discussion is a point that is simple to explain and justify in principle, even as it remains difficult to operationally resolve as a challenge in practice:

• With the difficulty in these less easily rules-defined challenges increasing as the tasks that they arise in fit into deeper and deeper areas of that swimming pool in my above-cited analogy.

Fully specified systems goals and their tasks might be largely or even entirely deterministic in nature and rules determinable, where condition A always calls for action and response B, or at least for a selection from among a specifiable set of such actions, chosen to meet the goals-oriented needs of the agent taking them. But partly specified systems goals and their tasks are of necessity significantly stochastic in nature, with probabilistic evaluations of changing task context becoming more and more important as the tasks involved fit more and more into the deep end of that pool. And they become more open-endedly flexible in their response and action requirements too, no longer fitting cleanly into any given set of a priori if-A-then-B rules.
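That deterministic versus stochastic distinction can be illustrated with a minimal sketch. The conditions, actions and risk weights here are hypothetical placeholders, chosen only to show the structural difference between rule lookup and probabilistic scoring:

```python
# Fully specified: a deterministic rule table, where condition A always
# yields action B. Conditions and actions here are hypothetical placeholders.
RULES = {"red_light": "stop", "green_light": "proceed", "stop_sign": "stop_then_go"}

def deterministic_act(condition: str) -> str:
    return RULES[condition]

# Partly specified: hard rules still bind, but otherwise the action is chosen
# by scoring expected risk against the estimated probability that a nearby
# driver is behaving erratically. The weights are illustrative only.
def stochastic_act(condition: str, p_neighbor_erratic: float) -> str:
    if RULES.get(condition) == "stop":
        return "stop"  # a legal rule stays deterministic
    risk = {
        "proceed": 1.00 * p_neighbor_erratic,                # full exposure to the erratic driver
        "slow_and_watch": 0.30 * p_neighbor_erratic + 0.02,  # reduced exposure, small delay cost
        "stop": 0.10,                                        # fixed cost: disruption, rear-end risk
    }
    return min(risk, key=risk.get)
```

With a low erratic-driver estimate this policy proceeds, at intermediate estimates it slows and watches, and at high estimates it stops: the same condition no longer maps onto any single fixed response.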

Airplanes have had autopilot systems for years and even for human generations now, with the first of them dating back as far as 1912: more than a hundred years ago. But these systems have essentially always had human pilot back-up if nothing else, and have for the most part been limited to carrying out specific tasks, and under circumstances where the planes involved were in open air and without other aircraft coming too close. Self-driving cars have to be able to function on crowded roads and without human back-up – and even when a person is sitting behind the wheel, where it has to be assumed that they are not always going to be attentive to what the car or truck is doing, taking its self-driving capabilities for granted.

And with that noted, I add here that this is a goal that many are actively working to perfect, at least to a level of safe efficiency that matches the driving capabilities of an average safe driver on the road today. See, for example:

• The DARPA autonomous vehicle Grand Challenge, and
• Burns, L.D. and C. Shulgan (2018) Autonomy: The Quest to Build the Driverless Car and How That Will Reshape the World. HarperCollins.

I am going to continue this discussion in a next series installment where I will turn back to reconsider open-ended goals and their agents again, and more from a perspective of general principles. Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 8

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on August 22, 2019

This is my 8th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-7.) And this is also my fifth posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Parts 4-7.)

I focused, for the most part in Part 7 of this series, on offering what amount to analogs to the simplified assumption Thévenin circuits of electronic circuit design. Thévenin’s theorem and the simplified and even detail-free circuits that it specifies serve to calculate and mimic the overall voltage and resistance parameters for what are construed to be entirely black-box electronic systems with their more complex circuitry, the detailed nature of which is not of importance in that type of analysis. There, the question is not one of what that circuitry specifically does or how, but rather of how it would or would not be able to function, and with what overall voltage and resistance requirements and specifications, in larger systems.

My simplified assumption representations of Part 7 treated both brain systems and artificial intelligence agent systems as black box entities and looked at general timing and scale parameters to determine their overall maximum possible size, and therefore their maximum overall complexity, given the fact that any and all functional elements within them would have larger than zero minimum volumes, as well as minimal time-to-task-completion requirements for what they would do. And I offered my Part 7 analyses there as first step evaluations of these issues, that of necessity would require refinement and added detail to offer anything like actionable value. Returning briefly to consider the Thévenin equivalents that I just cited above, by way of analogous comparison here, the details of the actual circuits that would be simplistically modeled there might not be important to or even germane to the end-result Thévenin circuits arrived at, but those simplest voltage and resistance matching equivalents would of necessity include within them the cumulative voltage and resistance parameters of all of that detail in those circuit black boxes, even if rolled into overall requirement summaries for those circuits as a whole.

My goal for this posting is to at least begin to identify and discuss some of the complexity that would be rolled into my simplified assumptions models, and in a way that matches how a Thévenin theorem calculation would account for internal complexity in its modeled circuits’ overall electrical activity and requirements calculations, but without specifying their precise details either. And I begin by considering the functional and structural nodes that I made note of in Part 7 and in both brain and artificial intelligence agent contexts, and the issues of single processor versus parallel processing systems and subsystems. And this, of necessity means considering the nature of the information processing problems to be addressed by these systems too.

Let’s start this by considering the basic single processor paradigm and Moore’s law, and how riding that steady pattern of increased circuit complexity in any given overall integrated circuit chip size has led to the capacity to perform more complex information processing tasks, and to do so with faster and faster clock speeds. I wrote in Part 7 of the maximum theoretical radius of a putative intelligent agent or system: biological and brain-based, or artificial and electronic in nature, there assuming that a task could be completed, as a simplest possibility, just by successfully sending a single signal at the maximum rate achievable in that system, in a straight line, and for a period of time that is nominally assumed necessary to complete a task there. Think of increased chip/node clock speed here as an equivalent of adding allowance for increased functional complexity into what would actually be sent in that test case signal, or in any more realistic functional test counterparts to it. The more that a processor added into this as an initial signal source can do in a shorter period of time, in creating meaningful and actionable signal content to be so transmitted, the more functionally capable the agent or system that includes it can be and still maintain a set maximum overall physical size.
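That size bound reduces to simple arithmetic. This is a toy restatement of the Part 7 argument with illustrative numbers only: faster local processing (a smaller t_process) leaves more of the task’s time budget for signal transit, and so allows a physically larger system:

```python
def max_radius_m(v_signal_mps: float, t_task_s: float, t_process_s: float) -> float:
    """Upper bound on system radius: whatever time the task budget leaves
    after local processing must cover a one-way, straight-line signal trip."""
    return v_signal_mps * max(t_task_s - t_process_s, 0.0)

# Illustrative numbers only: near light-speed electronic signalling versus
# fast myelinated axons (~100 m/s), with a 10 ms task budget and 2 ms of
# local processing at the signal source.
electronic = max_radius_m(3.0e8, 0.010, 0.002)   # about 2,400 km
biological = max_radius_m(100.0, 0.010, 0.002)   # about 0.8 m
```

The numbers are placeholders, but the shape of the trade-off is the point: every unit of processing time spent at a node is subtracted directly from the transit budget that sets the system’s maximum reach.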

Parallelism there can be seen as a performance multiplier: as an efficiency and speed multiplier, and particularly when it can be carried out within a set, here effectively standardized volume of space, so as to limit the impact of maximum signal speeds in that initial processing as a performance degrader. Note that I just modified my original simplest and presumably fastest and farthest physically reaching, maximum size allowing example from Part 7 by adding in a signal processor and generator at its starting point. And I also at least allow for a matching node at the distant signal receiving end of this too, where capacity to do more and to add in more signal processing capacity at one or both ends of this transmission, without increase in the timing requirements for that added processing overhead, would not reduce the effective maximum physical size of such a system in and of itself.

Parallel computing is a design approach that specifically, explicitly allows for such increased signal processing capacity, and at least in principle without necessarily adding in new timing delays and scale limitations – and certainly if it can be carried out within a single chip, that fits within the scale footprint of whatever single processor chip that it might be benchmark compared to.
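The limits of parallelism as a performance multiplier are conventionally quantified by Amdahl’s law, which I cite here as a standard benchmark rather than as anything specific to the systems under discussion: if a fraction p of a task can be spread across n processors, the serial remainder caps the overall speedup:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: best-case speedup when a fraction p of the work
    parallelizes perfectly across n processors and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1000 processors, a 5% serial fraction caps speedup near 20x:
capped = amdahl_speedup(0.95, 1000)   # roughly 19.6
```

This is why the question of how a problem partitions matters so much in what follows: added processors only multiply performance to the extent that the task itself can be divided.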

I just added some assumptions into this narrative that demand acknowledging. And I begin doing so here by considering two types of tasks that are routinely carried out by biological brains, and certainly by higher functioning ones as would be found in vertebrate species: vision and the central nervous system processing that enters into that, and the information processing that would enter into carrying out tasks that cannot simply be handled by reflex and that would as such, call for more novel and even at least in-part, one-off information processing and learning.

Vision, as a flow of central nervous system and brain functions, is an incredibly complex process flow that involves pattern recognition and identification and a great deal more. And it can be seen as a quintessential example of a made for parallel processing problem, where an entire visual field can be divided into what amounts to a grid pattern that maps input data arriving at an eye to various points on an observer’s retina, and where essentially the same at least initial processing steps would be called for, for each of those data reception areas and their input data.

I simplify this example by leaving specialized retinal areas such as the fovea out of consideration, with its more sharply focused, detail-rich visual data reception and the more focused brain-level processing that would connect to that. The more basic, standardized model of vision that I am offering here applies to the data reception and processing for essentially all of the rest of the visual field of a vertebrate eye and for its brain-level processing. (For a non-vision, comparable computer systems example of a parallel computing-ready problem, consider the analysis of seismic data as collected from arrays of ground-based vibration sensors, as would be used to map out anything from the deep geological features and structures associated with potential petrochemical deposits, to the details of underground fault lines that would hold importance in a potential earthquake context, to data that might be used to distinguish between a possible naturally occurring earthquake and a below-ground nuclear weapons test.)
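The grid-partitioned vision model just described maps directly onto a parallel map operation. As a minimal sketch, using a thread pool as a stand-in for what would realistically be many dedicated processing units, and a toy brightness average as a stand-in for the real first-pass processing:

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    """Identical first-pass processing applied to every grid cell; a toy
    brightness average stands in for the real per-region computation."""
    return sum(tile) / len(tile)

def parallel_first_pass(tiles):
    # Every retinal region / grid cell receives the same processing step,
    # so the work maps cleanly onto a pool of workers (threads here, as a
    # stand-in for truly parallel hardware).
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_tile, tiles))
```

The defining feature is that every tile gets the same small computation, independently of the others: exactly the structure that makes a problem parallel-ready.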

My more one-off experience example and its information processing might involve parallel processing too, and certainly when comparing the apparently new with what is already known of and held in memory, to cite one possible area for such involvement as a speed-enhancing approach. But the core of this type of information processing task and its resolution is likely to be driven by more specialized, non-parallel processors or their equivalents.

And this brings me specifically and directly to the question of problem types faced: of data processing and analysis types and how they can best be functionally partitioned algorithmically. I have in a way already said in what I just wrote here, what I will more explicitly make note of now when addressing that question. But I will risk repeating the points that I made on this by way of special case examples, as more general principles, for purposes of increased clarity and focus and even if that means my being overly repetitious:

• The perfect parallel processing-ready problem is one that can be partitioned into a large and even vastly large set of what are essentially identical, individually simpler processing problems, where an overall solution to the original problem as a whole calls for carrying out all of those smaller, standardized sub-problems and stitching their resolutions together into a single organized whole. This might at times mean fully resolving the sub-problems and then combining them into a single organized whole, but more commonly this means developing successive rounds of preliminary solutions for them and repeatedly bringing them together, where adjacent parallel processing cells in this serve as boundary value input for their neighbors in this type of system (see cellular automata for a more extreme example of how that need and its resolution can arise.)
• Single processor, and particularly computationally powerful single processor approaches become more effective, and even fundamentally essential, as soon as problems arise that need comprehensive information processing that cannot readily be divided up into arrays of similarly structured simpler sub-problems that individual smaller central processing units, or their biological equivalents, could separately address in parallel with each other, as could be done in my vision example or in the non-vision computer systems examples just given.
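The neighbor-boundary exchange pattern noted in the first bullet above, with cellular automata as its extreme case, can be sketched as a one-dimensional elementary cellular automaton, where each cell’s next state depends only on itself and its two immediate neighbors:

```python
def ca_step(cells, rule=110):
    """One synchronous update of a 1-D elementary cellular automaton (with
    wrap-around edges): each cell's next state is read out of the rule
    number, indexed by its own state and those of its two neighbors."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighborhood as a 3-bit index
        nxt.append((rule >> pattern) & 1)
    return nxt
```

Every cell runs the same trivial computation, with its neighbors’ states as its only outside input: the boundary-value exchange pattern in miniature, and an ideal fit for parallel execution.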

And this leads me to two open questions:

• What areas and aspects of artificial intelligence, or of intelligence per se, can be parsed into sub-problems that would make parallel processing both possible, and more efficient than single processor computing might allow?
• And how, algorithmically, can problems in general be defined and specified, so as to effectively or even more optimally make this type of determination, so that they can be passed on to the right types and combinations of central processor or equivalent circuitry for resolution? (Here, I am assuming generally capable computer architectures that can address more open-ended ranges of information processing problems: another topic area that will need further discussion in what follows.)

And I will, of course, discuss all of these issues from the perspective of Moore’s law and its equivalents and in terms of lock-in and its limiting restrictions, at least starting all of this in my next installment to this series.

The maximum possible physical size test of possible or putative intelligence-supporting systems, as already touched upon in this series, is only one way to parse such systems at a general outer-range parameter defining level. As part of the discussion to follow from here, I will at least briefly consider a second such approach, that is specifically grounded in the basic assumptions underlying Moore’s law itself: that increasing the number of computationally significant elements (e.g. the number of transistor elements in an integrated circuit chip), can and will increase the scale of a computational or other information processing problem that that physical system can resolve within any single set period of time. And that, among other things will mean discussing a brain’s counterparts to the transistors and other functional elements of an electronic circuit. And in anticipation of that discussion to come, this will mean discussing how logic gates and arrays of them can be assembled from simpler elements, and both statically and dynamically.
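As a preview of that gates-from-simpler-elements point, and as standard textbook material rather than anything specific to this series: NAND is functionally complete, so every other logic gate can be assembled from it alone:

```python
def nand(a: int, b: int) -> int:
    """The single primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Standard constructions: NOT, AND, OR and XOR assembled from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```

The same compositional logic, scaled up, is what lets transistor counts in the Moore’s law sense translate into computational capability, whether the assembly is fixed in silicon or arrived at dynamically.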

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.
