Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 8

Posted in business and convergent technologies by Timothy Platt on January 22, 2020

This is my 8th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-7.)

I have been at least somewhat systematically discussing a series of eight historically grounded benchmark development steps in both the software that is deployed and used, and by extension the hardware that it is run on, since Part 2 of this series:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language-defining and a programming style-defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

And in the course of that still unfolding narrative, I have addressed the first five entries on this list as representing what amount to fundamentally mature technology examples. And I have been discussing the above Step 6 development in how software is coded, since Part 6, citing that as an in-effect transition step. Language-oriented programming, in this regard, represents a developmental step in the evolution of software and computing that holds both more settled qualities, and more disruptively new and novel potential too, and certainly as of this writing.

As a consequence of that fact, I have offered both an initial, more legacy-facing bullet pointed description of Point 6’s language-oriented programming paradigm, and an updated Point 6 Prime version of it as well, that adds in the still emerging complexities of self-learning systems, which I repeat here as given in Part 7:

6.0 Prime: Language-oriented programming seeks to provide customized computer languages with specialized grammars and syntaxes that would make them more optimally efficient in representing and solving complex problems that current, more generically structured computer languages could not resolve as efficiently. In principle, a new, problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving it, or some class of such problems, might start out as a “factory standard” offering (n.b. as in Point 6 as originally offered in this series, where in that case, “factory standard” is what it would remain, barring human programming updates.) But there is no reason why a capability for self-learning and a within-implementation capacity for further ontological self-development and change could not be built into that.
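
To make that idea at least somewhat more concrete, here is a deliberately minimal sketch of the basic language-oriented programming notion: a tiny, problem-specific mini-language whose one-line grammar is tailored to a single narrow class of problem. The grammar, the discount-rule domain and every name in it are my own illustrative assumptions, not anything specified above; a Point 6 Prime style system would, in addition, be able to revise rules of this kind, or the grammar itself, on its own.

```python
# A tiny, hypothetical problem-specific mini-language (illustration only).
# Grammar of one rule:  if <field> <op> <number> then discount <percent>

def parse_rule(line):
    """Parse one rule written in the mini-language into a rule record."""
    _, field, op, value, _, _, percent = line.split()
    return {"field": field, "op": op, "value": float(value), "discount": float(percent)}

def best_discount(rules, order):
    """Evaluate every rule against an order and return the largest discount earned."""
    discount = 0.0
    for rule in rules:
        actual = order[rule["field"]]
        matched = actual >= rule["value"] if rule["op"] == ">=" else actual <= rule["value"]
        if matched:
            discount = max(discount, rule["discount"])
    return discount

rules = [parse_rule("if qty >= 10 then discount 5"),
         parse_rule("if total >= 500 then discount 8")]

print(best_discount(rules, {"qty": 12, "total": 600.0}))   # -> 8.0
```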

My point of focus in discussing that software development stage in Part 7 was on risk management issues of a type that I see as likely to arise for software development companies that develop, market and sell such products to customers on a non-exclusive basis, for more generically faced information processing problems, where different customers would likely end up with very different ontologically developed software language products: some of which might be much more functionally effective than others. That is in fact at least potentially important, and certainly where customer businesses that have paid essentially the same price for what would ostensibly be the same software product end up with markedly different performance, and in effect with very different products as a result. But that is only one possible liability issue that I could raise here as a consequence of self-learning software:

• The emergence of self-learning and ontologically developing and changing software can only lead to the functional death of one of software developers’ most used resources for managing emergent software bugs that might become visible in post-sales beta testing, or security issues that might come to light for it after sales and after customer installation and use: software patches.

When an initial software developer and provider can control the code that goes into its software and any given release or version of it, and stably so, they have a fixed starting point that any and all customers who have that software would consistently hold in their computers, for whatever identified version or build they have and are running. That fixed starting point is what single, fixed patches and updates can be developed against, with essentially single responses developed, tested and released that should apply identically across all copies of whatever software release they are intended to correct or update. The only exception to this uniform software release stability and consistency would be expected in the event that a copy of it on a customer’s computer were to become corrupted in some way; that would involve accidental change, arising for the most part in the customer’s information management systems and their use, and would fall outside of the responsibilities of the original software developer and provider. But as soon as that software begins to mutate on its own and by design: functionally and by paradigmatic intent, the essential stability needed for set and settled software patches evaporates.

Company A develops a great new piece of software and releases it – in this case a specialized new computer language that would address a class of widely faced business problems and needs, and that would not just be exclusively sold to any one of many potential customer businesses. Then a significant security vulnerability is found in it as originally coded, six months after they began selling licensing rights to it – and effectively that same full six months after that initially standardized software package began to individuate on the computers and in the networked systems of each and every one of those buying customers. That vulnerability might or might not still reside in all of those differently self-evolving copies, but if enough have been sold it is essentially certain that some will still show it. And given the wildcard nature of self-learning, and at a software code level, a settled patch that would close that vulnerability in the version initially shipped might now simply break things wherever it is applied.
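
As a minimal sketch of the stability assumption at issue in that scenario, and assuming a simple hash-based applicability check of the kind commonly used before a patch is applied: everything below – the names, the artifact contents and the check itself – is hypothetical and only stands in for Company A’s situation. The point is simply that a patch validated against one fixed baseline has nothing stable to validate against once each installed copy has rewritten itself.

```python
import hashlib

# Hypothetical names and contents throughout; a sketch, not any vendor's actual process.

def digest(artifact: bytes) -> str:
    """Fingerprint an installed software artifact."""
    return hashlib.sha256(artifact).hexdigest()

vendor_build = b"specialized language runtime, build 1.0"   # the baseline the patch targets
baseline_digest = digest(vendor_build)

def can_apply_patch(installed: bytes) -> bool:
    """A settled patch is only safe if the installed copy still matches the tested baseline."""
    return digest(installed) == baseline_digest

customer_a = vendor_build                                             # never self-modified
customer_b = vendor_build + b" + self-learned rewrite of module X"    # ontologically individuated

print(can_apply_patch(customer_a))   # True  - the vendor's single patch applies cleanly
print(can_apply_patch(customer_b))   # False - no single, settled patch fits this copy any longer
```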

• Self-learning can mean the same basic processing code with newly updating expert knowledge data support, but in this case, self-learning would also, at the very least, have to mean an emergence of processing code level change too, and particularly where that leads to improved efficiencies as determined according to whatever goals-directed criteria that software has built into it – which at least in principle might be subject to self-learning updates too.

Note: this point has an assumption built into it that I will turn to and address in the next installment to this series. But with that simply acknowledged here for now, I continue on from it by noting that a whole new range of potential risk-creating or at least risk-enhancing possibilities arise when a software development Point 6 with its settled and even legacy grounding is shifted to a more disruptively new and novel Point 6 Prime form. I cite self-learning and particularly in the above bullet point’s more extreme form as a working example there. And all of this has business financials and microeconomic implications.

I said at the end of Part 7 that I would turn here to address:

• The role of the data that would be run on this programming language and its code and both as it is developed by the offering business, and as it is used by specific purchasing businesses.

I briefly noted this complex of issues in passing in Part 6 and stated that I would delve into its issues and complications in some detail here in Part 8. And after rounding out that phase of this programming language-focused line of discussion, I added that I would step back to consider how at least some of the risk issues that I would discuss here might apply more widely to self-learning software as a sellable product too. I have at least somewhat reversed the order there, and will in fact turn to and focus on the data that would be applied to self-learning, ontologically self-developing software next, with this posting’s discussion held in mind while doing so.

And then, as already noted, I will continue on to discuss the above listed software development steps of Points 7 and 8 too: artificial intelligence programming and quantum computing. And looking beyond that, my goal in all of this is to (at least somewhat) step back from the specifics of these individual example development stages, to raise and discuss some of the general principles that they both individually and collectively suggest, as to the overall dynamics of software development and its economics.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 22

Posted in business and convergent technologies, strategy and planning by Timothy Platt on January 19, 2020

This is my 22nd installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-21.)

I have been discussing the complex of issues and challenges that arise for innovation acceptance and diffusion, and of resistance to innovation and to New in general here too, since Part 16, focusing through that developing narrative on two basic paradigmatic models:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adopters on to include eventual late and last adopters, and
• Patterns of global flattening and of its pushback alternative, global wrinkling.

And then in Part 21 of this, I began to at least briefly discuss how the boundaries between these two models can and do blur and overlap, and certainly as so much of the basic acceptance or rejection implicit in them is now driven by the voices and pressures of social media, and of online reviews and evaluations.

Details, I have to add, are not always important or even considered there by most online participants, and certainly where one-to-five-star rating scales, with their up-front visibility, can in effect render any more detailed reviews moot, with evaluation reduced to a search for confirmation rather than a source of new and perhaps conflicting insight.

This becomes particularly important when negatively reviewing trolls and equally artifactual positive reviews are considered, that in effect game the “community based voice” that social media reviews ostensibly represent.

From a communications theory perspective, think of that as representing background static – noise in these systems, and with all of the signal degradation that noise would be expected to bring with it.
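
As a toy numeric illustration of that noise, with every number invented purely for the sake of the arithmetic: a small block of coordinated one-star ratings can visibly drag down the headline score that most readers never look past.

```python
# Invented numbers, for the arithmetic only.

def mean(ratings):
    return sum(ratings) / len(ratings)

genuine_ratings = [4, 5, 4, 5, 3, 4, 5, 4, 4, 5]      # ten honest reviews
with_troll_noise = genuine_ratings + [1] * 5           # five coordinated one-star reviews

print(f"genuine signal:      {mean(genuine_ratings):.2f} stars")   # 4.30
print(f"with injected noise: {mean(with_troll_noise):.2f} stars")  # 3.20
```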

What I am writing of here is informed choice, as might be based on valid and reliable data and insight. And no one can realistically expect that, certainly not in any area of discourse that is controversial, and consequentially so, for any who might be inclined to skew the overall shared public message to their advantage.

Let’s consider this from a specifically business perspective and particularly where a business seeks to bring the innovatively new into production and to market, but in the face of headwind resistance. That resistance might be based at least in part on the underlying technology involved, on where and how those new products would be made, on where the raw materials that would go into them are sourced, or how, or on any of a range of other production and distribution cycle issues. Or it might be based on the products themselves and how they might be used, and both negatively and positively. The term “dual use” is often attached to products that can very specifically be used in a peaceful civilian context, but that can also be used and directly so for military purposes as well. But for purposes of this line of discussion, let’s generalize that designation. For purposes of this narrative, consider dual use as referring to both positive and negative usability potential as that would arise from the perspective of a beholder, where different people might see different boundaries there – if they see any such dual use potential at all, and where significantly impactful voices can sway others and even large numbers of them. See my Part 21 discussion of the Pareto principle in that regard and particularly where negative and positive, dual use capabilities can become fluid and malleable in meaning and with all of the potential for opinion shaping influence that that can lead to.

And this brings me back to the fundamental question of what innovation actually is, and certainly in a noisy channel, controversial context. And I begin addressing that by citing two examples, both of which, unfortunately, are quite real:

1. The development of drought and disease resistant crops that can be grown with little if any fertilizer and without the use of insecticides or other pesticides, and
2. Russia’s Novichok (Новичо́к or newcomer) nerve agents.

Both quite arguably represent genuine innovations and even disruptively new ones. But reasonable people would probably view them very differently, and address them very differently in any social media driven, or other public communications.

It is both possible and easy to presume, and essentially axiomatically so, that innovation per se is basically values-neutral, and certainly as a general categorical consideration. The overall thrust of innovative change is for the most part considered a positive if its overall neutrality is considered and challenged at all, at least by those who are at all open to novelty and change, and certainly insofar as most innovation is developed and pursued with a goal of at least attempting to address specific publicly realized challenges and the opportunities that effectively resolving them might bring, and for at least specific demographic groups. Innovation that is realized, and certainly as marketable products, and the innovative process that leads to it, tend to focus on what New does and on what it could and can do for meeting at least those perceived needs. And that perspective is in most cases valid; it is certainly understandable. But all of this just addresses innovation as a whole and even as an abstraction. Individual innovations are not, and probably cannot be, considered in that way, and certainly not automatically. Individual innovations have equally particular and even at least relatively unique consequences. And they arise in equally specific contexts.

To add a third example to this list, where longer-term cumulative effects become critically important, consider:

3. Disposable single use plastic bags and other petrochemical plastics-based wrapping materials.

I stated at the end of Part 21 that I would further discuss public voices and their influence, and I then turned in this installment to reconsider innovations per se. I am going to continue that line of discussion in a next series installment, at least starting with my three here-stated examples. I will then reconsider the two innovation acceptance versus resistance models that I have been considering here, but in less abstract terms than I have up to here. And then, and on the basis of that narrative, I will reconsider individual and social group, and governmental and other organizational influence in both creating and shaping conversations about change. (And as part of that narrative, I will explore some assumptions that I built into my above-offered presumptions paragraph, which appears between my first two innovation examples and example three.)

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 13

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on January 13, 2020

This is the 13th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligence agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-12. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have been discussing artificial intelligence tasks and goals as divided into three loosely defined categories in this series. And I have been discussing artificial intelligence agents and their systems requirements in a goals and requirements-oriented manner that is consistent with that, since Part 9 with those categorical types partitioned out from each other as follows:

• Fully specified systems goals and their tasks (e.g. chess with its fully specified rules defining a win and a loss, etc. for it),
• Open-ended systems goals and their tasks (e.g. natural conversational ability with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals and their tasks (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

And much if not most of that discussion has centered on the middle-ground category of partly specified goals and their tasks, and the agents that would carry them out. That gray area category, residing between tasks for tools and tasks that arguably call for people, serves as a source of transition testing and of development steps that would almost certainly have to be successfully met in order to develop systems that can in fact successfully carry out true open-ended tasks and achieve their goals.

And as part of that still unfolding narrative, and in a still-partly specified context, I began discussing antagonistic networks in Part 12, citing and considering them as possible ontological development resources within single agents that would promote both faster overall task-oriented systems improvement, and more effective learning and functioning there. Consider that as one possible approach that would go beyond simple random-change testing and any improvement that might be arrived at from it (as might for example arise in a biological evolutionary context where randomness enters into the determination of precisely which genetic mutations arise that would be selected upon for their survival value fitness.)
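
As a minimal and purely illustrative sketch of that antagonistic dynamic: the two “subsystems” below are a toy policy that holds a single braking threshold and a toy adversary that searches for scenarios the current policy mishandles, with each round’s worst failure driving the policy’s next self-adjustment. The scenario, the names and the update rule are all my own assumptions and do not represent any particular adversarial network architecture, here or elsewhere in this series.

```python
import random

# A toy policy/adversary pair (all names, numbers and the update rule are illustrative).

def policy_fails(braking_threshold, required_distance):
    """The policy fails any scenario that demands more stopping distance than it allows for."""
    return braking_threshold < required_distance

def adversary_find_hard_case(braking_threshold, trials=50):
    """Randomly search for the hardest scenario that the current policy mishandles."""
    worst = None
    for _ in range(trials):
        scenario = random.uniform(0, 100)              # hypothetical required stopping distance
        if policy_fails(braking_threshold, scenario) and (worst is None or scenario > worst):
            worst = scenario
    return worst

def co_train(rounds=20):
    braking_threshold = 10.0                           # a naive starting policy
    for r in range(rounds):
        hard_case = adversary_find_hard_case(braking_threshold)
        if hard_case is None:
            print(f"round {r}: adversary found no failures; policy settled at {braking_threshold:.1f}")
            return
        braking_threshold = hard_case + 1.0            # the policy self-adjusts to cover it
        print(f"round {r}: adversary found {hard_case:.1f}, policy now {braking_threshold:.1f}")

co_train()
```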

I initially wrote Part 11 of this series with a goal of similarly considering open-ended tasks and goals in Part 12. Then I postponed that shift in focus, with a goal of starting that phase of this series here. I will do so, but before I turn this discussion in that direction, I want to at least briefly outline a second fundamentally distinct approach that would at least in principle help to reduce the uncertainties and at least apparent complexities of partly specified tasks and goals, just as effective use of antagonistic neural network subsystems would allow for ontological improvements in that direction. And this alternative is in fact the possibility that has been at least wistfully considered more than any other in a self-driving vehicle context.

I saw a science fiction movie recently in which all cars and trucks, buses and other wheeled motorized vehicles were self-driving, and with all such vehicles at least presumably continuously communicating with and functioning in coordinated concert with all others – and particularly with other vehicles in immediate and close proximity, where driving decision mismatches might lead to immediate cause and effect problems. It was blithely stated in that movie that people could no longer drive because they could not drive well enough. But on further thought, the real problem there would not be in the limitations of any possible human driver atavists who might push their self-driving agent chauffeurs aside to take the wheel in their own hands. It is much more likely, as already touched upon in this series and in its self-driving example context, that human drivers would not be allowed on the road because the self-driving algorithms in use there were not good enough to be able to safely drive in the presence of the added uncertainty of drivers who were not part of and connected into their information sharing system, and who would not always follow their decision making processes.

• Artificial intelligence systems that only face less challenging circumstances and contexts in carrying out their tasks, do not need the nuanced data analytical capability and decision making complexity that they would need if they were required to function more in the wild.

For a very real-world, working example of this principle and how it is addressed in our already everyday lives, consider how we speak to our currently available generation of online verbally communicative assistants such as Alexa and Siri. When artificial intelligence systems and their agents do not already have context and performance needs simplifications built into them by default, we tend to add them in ourselves in order to help make them work, and at least effectively enough to meet our needs.

So I approach the possibility of more open-ended systems and their tasks and goals with four puzzle pieces to at least consider:

• Ontological development that is both driven by and shaped by the mutually self-teaching and learning behavior of antagonistically positioned subsystems, and similar/alternative paradigmatic approaches,
• Scope and range of new data input that might come from the environment in general but that might also come from other intelligent agents (which might mean simple tool agents that carry out single fully specified tasks, gray area agents that carry out partly specified tasks, or genuine general intelligence agents: artificial or human, or some combination of all of these source options.)
• How people who would work with and make use of these systems, simplify or add complexity to the contexts that those agents would have to perform in, shifting tasks and goals actually required of them either more towards the simple and fully specified, or more towards the complex and open-ended.
• And I add here, the issues of how an open ended task to be worked upon and goal to be achieved for it, would be initially outlined and presented. Think in terms of the rules of the road antagonist in my two subsystem self-driving example of Part 12 here, where a great deal of any success that might be achieved in addressing any overtly open-ended systems goal will almost certainly depend on where a self-learning agent would begin addressing it from.

To be clear in both how I am framing this discussion of open-ended tasks, and of the agents that would carry them out, my goal here is to begin a discussion of basic parametric issues that would constrain and shape them in general. So my goal here is to address general intelligence at a much more basic level than that of consideration of what specific types of resources should be brought to bear there – which would almost certainly prove to be inadequate as any specific artificial general intelligence agents are actually realized.

I have, as such, just cited antagonistic neural networks and agents constructed from them, that can self-evolve and ontologically develop from that type of start, as one possible approach. But I am not at least starting out with a focus on issues or questions such as:

• What specifically would be included and ontologically developed as components in a suite of adversarial neural networks, in an artificial general intelligence agent (… if that approach is even ultimately used there)?
• And what type of self-learning neural network would take overall command and control authority in reconciling and coordinating all of the activity arising from and within such a system (… here, assuming that neural networks as we currently understand them are to be used)?

I would argue that you can in fact successfully start at that solutions-detail level of conceptual planning when building artificial specialized intelligence agents that can only address single fully specified systems goals and their tasks – when you are designing and building tools per se. That approach is, in fact, required there. But it begins to break down, and quickly, when you start planning and developing for anything that would significantly fall into a gray area, partly specified category as a task or goal to be achieved, or for agents that could carry them out. And it is likely going to prove to be a hallmark identifier of genuinely open-ended systems goals and their tasks, and of their agents too, that starting with a “with-what” focus cannot work at all for them. (I will discuss pre-adaptation – also called exaptation – as a source of at least apparent exceptions to this later in this series, but for now let’s consider the points made here in terms of fully thought through, end-goals oriented pre-planning, and tightly task-focused pre-planning there in particular.)

I am going to continue this discussion in a next series installment where I will at least begin to more fully examine the four puzzle pieces that I made note of here. Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 10

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on January 4, 2020

This is my 10th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-9.) And this is also my seventh posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Parts 4-9.)

I have been focusing in this series on the hardware that would serve as a platform for an artificial intelligence agent, since Part 4 of this series, with a goal of outlining at least some of the key constraining parameters that any such software implementation would have to be able to perform within. And as a key organizing assumption there, I have predicated this entire ongoing line of discussion in terms of integrated circuit technology (leaving out the possibilities of or the issues of quantum computing in the process.) And after at least briefly discussing a succession of such constraints and for both their artificial and natural brain counterpart systems, with comparisons drawn between them, I said at the end of Part 9 that I would explicitly turn to consider lock-in and Moore’s law in that context and as they apply to the issues raised in this series. I will pursue that line of discussion here, still holding to my initial integrated circuit assumption with its bits and bytes-formatted information flow (as opposed to quantum bit, or qubit formatted data.) And I do so by addressing the issues and challenges of Moore’s law from what some might consider a somewhat unexpected direction.

• Moore’s law is usually thought of as representing an ongoing, regularly repeated doubling of the circuit density and corresponding hardware capability in virtually all of our electronic devices, and without any significant accompanying, matching cost increases (see the brief arithmetic sketch after this list). This ongoing doubling has led to an all but miraculous increase in the capabilities of the information processing systems that we have all seemingly come to use and to rely upon throughout our daily lives. And in that, Moore’s law represents an exponential growth in capability and in opportunity, and a societally enriching positive.
• But just as importantly, and certainly from the perspective of the manufacturers of those integrated circuit chips, Moore’s law has become an imperative to find ways to develop, manufacture and bring to market, next step chips with next step doubled capability and on schedule and without significant per-chip cost increases, and to keep doing so,
• Even as this has meant finding progressively more sophisticated and expensive-to-implement work-arounds, in order to squeeze as much increased circuit density out of what would otherwise most probably already be considered essentially mature industrial manufacturing capabilities, in the face of fundamental physical law constraints and again and again and again ….
• Expressed this way, Moore’s law and lock-in begin to sound more and more compatible with each other and even more and more fundamentally interconnected. The pressures inherent to Moore’s law compel quick decisions and solutions and that adds extra pressures limiting anything in the way of disruptively new technology approaches, except insofar as they might be separately developed and verified, independently from this flow of development, and over several of its next step cycles. The genuinely disruptively new and novel takes additional time to develop and bring to marketable form and for the added uncertainties that it brings with it if nothing else. The already known and validated, and prepared for are easier, less expensive and less risky to build into those next development and manufacturing cycles, where they have already been so deployed.
• But the chip manufacturer-perceived and marketplace-demanded requirement of reaching that next Moore’s law step in chip improvement, every time and on schedule, compels a correspondingly rapid development and even commoditization of next-step disruptively new innovations anyway. As noted in earlier installments to this series, continued adherence to the demands of Moore’s law has already brought chip design, development and manufacturing to a point where quantum mechanical effects, and the positions and behavior of individual atoms and even of individual electrons in current flows, have risen to chip success-defining importance.
• And all of this means decisions being made, and design and development steps being taken that rapidly and even immediately become fundamentally locked in as they are built upon in an essentially immediately started next-round of next-generation chip development. Novel and new have to be added into this flow of change in order to keep it going, but they of necessity have to be added into an ever-expanding and ever more elaborate locked-in chip design and development framework, and with all of the assumed but unconsidered details and consequences that that entails.
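
As a brief arithmetic sketch of the compounding doubling described in the first bullet of that list: the 24-month doubling period used below is the commonly quoted figure and, like everything else in this snippet, is an assumption offered for illustration only.

```python
# The 24-month doubling period is the commonly quoted figure; an assumption here.
DOUBLING_PERIOD_YEARS = 2.0

def density_multiplier(years):
    """Relative circuit density after the given number of years of on-schedule doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (2, 10, 20, 30):
    print(f"{years:>2} years -> x{density_multiplier(years):,.0f} circuit density")
# Thirty years of on-schedule doubling is roughly a 32,768-fold increase, which is
# why each next step becomes progressively harder and more expensive to reach.
```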

What I am writing of here amounts to an in-principle impasse. And ultimately, and both for computer science and its applications, and for artificial intelligence as a special categorical case there, this is an impasse that can only be resolved: that can only be worked around, by the emergence of disruptively new and novel that moves beyond semiconductor physics and technology, and the types of electronic circuitry that are grounded in it.

Integrated circuit technology as is currently available, and the basic semiconductor technology that underlies it have proven themselves to be quite sufficient for developing artificial specialized intelligence agents that can best be considered “smart tools,” and even at least low-end gray area agents that at the very least might arguably be developed in ways that could lead them beyond non-sentient, non-sapient tool status. (See my series: Reconsidering Information Systems Infrastructure, as can be found at Reexamining the Fundamentals, as its Section I, for a more complete discussion of artificial special and general intelligence agents, and gray area agents that would be found in the capabilities gap between them.) But advancing artificial intelligence beyond that gray area point might very well call for the development of stable, larger scale quantum computing systems, and certainly if the intended goal being worked toward is to achieve true artificial general intelligence and agents that can claim to have achieved that – just as it took the development of electronic computers, and integrated circuit-based ones at that, to realize the dreams inherent in Charles Babbage’s plans for his gear-driven, mechanical computers in creating general purpose analytical engines: general purpose computers per se.

I am going to continue this line of discussion in a next series installment where I will consider artificial intelligence software and hardware, from the perspective of breaking away from lock-ins that are for the most part automatically assumed as built-in requirements in our current technologies, but that serve as innovation barriers as well as speed of development enablers.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Innovation, disruptive innovation and market volatility 50: innovative business development and the tools that drive it 20

Posted in business and convergent technologies, macroeconomics by Timothy Platt on December 29, 2019

This is my 50th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-49.)

I have been discussing technology transfer and from both an innovation creator and an innovation acquiring business perspective, since Part 43. And as a part of that, I began discussing three topics points that all relate to the overall impact that these transfers (or lack thereof) can create:

• When adding in the complexities of scale, and for both the innovation selling or licensing business: the innovation provider, and for the innovation acquiring business.
• And the issues of direct and indirect competition, and how carefully planned and executed technology transfer transactions there, can in fact create or enhance business-to-business collaboration opportunities too (as well as creating or enhancing business versus business competitive positions),
• Where that may or may not create monopoly law risks in the process.

(Note: So far I have primarily been addressing the first of those topics points. And as part of that line of discussion I parsed product innovations as a more general category, as falling into any of three basic forms in Part 49, that I identified there as being: linchpin, collateral and endpoint in nature. I will continue making use of that set of definitions and distinctions as I proceed in discussing those here-repeated points, starting with a further discussion of the first of them.)

And with that offered, I begin this posting by repeating two bullet points from the end of Part 48, so I can expand upon them here, and both for completing my first topics point discussion and as preparation for addressing the second of those three main points as well:

• Larger, more diverse businesses can and do acquire innovation, and controlling rights to it in order to competitively fill gaps in what they can and do offer, and how. Technology transfer acquisitions as such, impact on aspects of their organization and of their market-facing production and sales puzzle and in ways that if effectively managed, can create significant new sources of value for them.
• For smaller businesses, these acquisitions impact on their organization as a whole and on their market-facing productive puzzle as a whole too.

Let’s consider the first of those points and its competitive positioning approach. Yes, a business can actively seek out and acquire access to outside created innovations in order to directly advance what they themselves can build and bring to market. They can seek out and acquire new innovations in order to integrate them into their systems and into their marketable products too. That is the more commonly followed pattern here. But a business can also actively seek to buy out exclusive rights, if not complete ownership of alternative approach innovations, that if pursued and exploited by their competition might mean their losing a significantly innovative edge for what they already offer now, too.

To clarify that, consider Business A: a manufacturer that came up with a disruptively novel and highly valuable innovation that has made its now-flagship product line possible. And at first, they are the only business in their sector or industry that is capable of developing or offering any products of that type, given their registered patents on that innovation. But this is so valuable an innovation and so significantly impactful to their competition, that every other business they compete with, that loses business because they cannot directly match those products, is trying to develop a work-around alternative innovation that would effectively accomplish the same thing for them that A’s innovation does but without impinging on their patent protections for it. Business B is A’s primary competitor there, most overtly capable of taking advantage of any alternative innovation that would not violate those patents, but that could lead to products that end user consumers would see as equivalent. But given the nature of disruptive innovation – and its shift to simple evolutionary change with time, A has to worry just as much about Businesses C, D, E and more too. And then an outsider comes up with an innovative development, and perhaps with very different intended uses in mind when they were initially devising it. But its real value is going to be more in what A and its competition do, than it is for what that innovation creating business does. That innovation as realized, does not in fact actually connect into the basic, core business model of that innovating business at all, arriving as a disruptively unexpected possibility. So they decide to sell it to the highest bidder, both to recoup their expenses for developing it in the first place and to make a profit from it at the same time.

B, C, D and more would all like to buy this innovation and probably outright, and with long-term exclusive use licensing a distant but still viable option for them too. This would make them direct competitors for A where it has held what amounts to a monopoly of opportunity from its consumer preferred, exclusively offered product line. And A would like to buy out this innovation too, and secure patent rights to it to protect their technological lead there and even if that means their burying it. (Nota bene: I cannot claim that I approve of this business strategy but it does occur so I report it as such.)

I make note of and at least briefly explain this type of scenario to illustrate that the range of options and the reasoning that would drive them in a technology transfer context can be more complex and varied than my Part 49 discussion of this might have suggested. Businesses seek out and develop, or otherwise acquire innovations both to advance what they can do and to protect what they are already doing in their here-and-now. And that takes me directly to the second of the to-address points that I began this posting by repeating, and the issues of direct and indirect competition. And I begin doing so by briefly saying what direct and indirect competition actually are:

• Direct competition is the competition that arises when businesses vie for the same completed sales from the same customer base in the same, or at least overlapping markets. Businesses A, B, C and so on in my above examples are direct competitors there and this is the form of competition that is more commonly referred to when the words competition and competitor are used in business performance discussions.
• Indirect competition is a more correlational phenomenon that arises in supply chain and related business-to-business systems, among other contexts. And to more fully explain what it is, I at least begin with a working example that I offer in counterpoint to my above-offered direct competition example. Here, let’s assume that businesses A and B are supply chain system partners and that B provides a support service that A benefits from in enabling its sales transactions (e.g. as a delivery and related logistics support service provider.) B succeeds in its business with A insofar as those sales transaction deliveries go through quickly and correctly, with an absolute minimum of damage or delay in shipments. And that means that B benefits from A’s success, and both for retaining their business and for maintaining their overall business reputation as a quality service provider. That alignment means that if C is a direct competitor with A, then it is an indirect competitor with B too – and with that certainly holding significance if C for whatever reason does not use B’s services too. But it holds true even if they do. Then B would be playing the role of indirect competitor in at least two directions at once and the impact of this type of relationship would be diluted accordingly. But in a more winner-takes-all, zero-sum situation as would apply in the innovation developer and acquirer example that I offered above, indirect competition applies to the business that is competitively marketing and selling its breakthrough, and certainly with respect to any potential buyers that it chooses not to sell or license to. And there, that competition is anything but trivial for its significance.

I am going to continue this discussion in a next series installment where I will more fully address the competitive significance of technology transfer in both a collaborative and a more traditionally considered single business versus single business, zero-sum context of the type offered by my working examples here, and how a technology providing business can be as competitively involved in that type of marketplace sale, as any potential buyers are. Then after completing my discussion of that second basic topic point I will turn to and address the third entry in that list:

• Where that (e.g. technology transfers) may or may not create monopoly law risks in the process.

But before I begin addressing any of that, I am going to return to and expand upon a more subsidiary topics point that I offered in passing earlier on in this posting, but did not then discuss. And that is where I will in fact make explicit use of the linchpin versus collateral versus endpoint innovation distinctions that I cited above too. This topic point, as expanded upon here in anticipation of discussion to come, is now:

• For smaller businesses, innovation acquisitions and divestitures impact on their organization as a whole and on their market-facing productive puzzle as a whole too, and largely independently of their precise nature (e.g. linchpin et al.)
• For larger and more diverse businesses, and businesses with larger reserves and operational spending capabilities in general, that type and level of impact would mostly be found in a true linchpin innovation context.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 10: a first draft discussion of general principles and practices 1

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on December 17, 2019

This is my 11th installment to a series on infrastructure as work on it, and as possible work on it are variously prioritized and carried through upon, or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-9, plus its supplemental posting Part 4.5.)

I have, up to here, successively raised and discussed a set of five case study examples of infrastructure development programs in this series – with a goal of arriving at and explaining a set of more general principles and practices that might be fruitfully employed in future such initiatives, moving forward. In that, think of the case study examples that I have included here, as learning curve opportunities for future infrastructure development or redevelopment efforts. And with that in mind I begin this posting by offering a general principle that would arguably derive from all of them, and that would likely belong in any infrastructure program planning guide and from its early planning-stage steps on:

• Effective next-step infrastructure planning and execution should always be grounded in a solidly reasoned, dispassionately analytical evaluation of what has been done before,
• And both for the specific context that a particular new and coming development program under consideration would explicitly build from, where there are relevant historical examples for that,
• And from prior development programs elsewhere and of other types that can still serve as role model examples, at least for key issues faced.
• And in that, “role model” can mean positive and a source of strategic and operational insight to follow, or it can mean negative and serve more as a cautionary note.
• Either way, it is important to think in terms of the long-term and in terms of development life cycles where they apply too. It is important to think in terms of how those role model learning curve examples under review took shape during their own development processes, and for what has become of them after their at least nominal completion. And it is important to think through their immediate and longer term impact, and for what has happened consequentially from them. (I will come back to this in subsequent installments to this series.)

This noted, in the course of writing this progression of postings leading up to this one, I have already at least preliminarily touched upon a number of more general points that might arguably enter into such an overarching infrastructure development approach. My goal here is to step back from the specifics of particular case in point examples, to at least begin to offer a first draft take on what would enter into such a general principles infrastructure development model as a whole, and into a best practices guide for that as a whole too. And I will do so by at least briefly considering each of the case studies that I have included up to now in this series, for more general principles that they raise.

• I write here of positive and even inspiring role model case study examples and of cautionary and warning examples, as I have intentionally offered both types in this series – as well as examples that arguably include elements of both of those more stereotypically framed types.

I begin this first draft take on general principles, lessons-learnable with the first two of the five case studies already offered here: one of which can be thought of as a largely pure example of the negative here, and one of which at the very least has negative aspects built into it. See:

• Hurricane Maria and Puerto Rico, and its aftermath: Part 1 and Part 2, and
• The New York City Metropolitan Transportation Authority (MTA), and its subway system in particular there: Part 2, Part 3, Part 4.5 (as cited above) and the addendum note appended to the end of Part 6.

Hurricane Maria and Puerto Rico: Essentially the entire island of Puerto Rico was devastated by a massive category 5 hurricane in September of 2017. And all of the island’s critical infrastructure was damaged by that; much of it was effectively destroyed. The expected disaster relief and subsequent rebuilding effort that American citizens had come to expect from their national government after a major natural disaster, effectively did not take place with a Donald Trump serving as president and with his fellow ideologues leading the federal governmental agencies that should have carried this out. And the Trump administration’s wholesale dismantling of regulatory oversight meant that private sector contractors and others who did agree to carry out what government funded work was done, were not background checked before being approved for that. And they were not monitored for what they did, or for how they used the funds that they received for that work. And even now as I write this: more than two years later, there is still much to be done that should have been completed by now. Even now, some of those private sector contractors are under scrutiny and facing legal action for diverting relief funds received, for other purposes.

• It is impossible to effectively carry out an infrastructure development, redevelopment or recovery effort if it is not actively supported by the people who would lead it.
• It is impossible to effectively carry out such an effort if that initiative is not actively planned and followed through upon with a goal of seeking to meet the genuine needs of the people directly affected.
• And even then, active oversight and accountability have to be in place too, and to make sure funds allocated to such an effort are not misdirected from it, and to ensure that the work agreed to is carried out and up to a sufficient quality standard so as to make anything built, viable and long-term.
• And even there, piece-by-piece efforts cannot offer overall comprehensive value if they are not planned out and carried through upon in an effectively coordinated, prioritized manner. Effective infrastructure development is always a large scale effort, that creates much of its long term value from the synergies that can be built from its component parts.

The New York City Metropolitan Transportation Authority subway system: The New York City MTA and its subway system do work. The trains run and the stations in that system basically function too. But this system has been a political football with the mayor of New York City and their city government, and the governor of New York State and their state legislature fighting for control over it, and generally to their own personal political career advantages. So according to the MTA’s own publically offered numbers, it would take over $60 billion of investment just to bring the MTA’s subway system up to date for switching technology, passenger accessibility and all of the other areas where it is burdened by the broken and the obsolete – and the missing (e.g. elevators needed to meet Americans with Disabilities Act (ADA) requirements.) And to pick up on that last detail, according to the MTA and their published information on this, only 24% of all subway stations in their system are currently ADA compliant. And even if this system is upgraded for that according to the full intended terms of current upgrade plans in place, the percentage of ADA compliant subway stations would only increase to approximately 35%!

• And meanwhile, some of the switching technology that manages train flow through this system and that is used to track where subway trains even are in it, between stations, dates back over 80 years now. That is where old and out of date legacy becomes all but paleontological.
• And their computer network technology, to cite another area of pronounced neglect, is riddled with system components that range from new if not cutting edge, to as old as networkable technology could be – think of the limitations that this brings, as overall systems are effectively reduced functionally, to a lowest common denominator of what the most limited and out of date of their component parts can do, as all of this has to be networked together!
• And meanwhile, competing politicians fight for credit for building the Second Avenue subway line extension, bringing the Q line up as far as 96th Street, while failing in practice to address less visible, but more crucially necessary infrastructure problems that the public rarely sees or hears of, but that affect the entire system and its safe reliability.

Large scale infrastructure programs should be seen as, and should be pursued as meeting overall societal needs. Partisan politics can only serve as poison there. This point of principle applies to both of the case study examples that I have cited here so far. And both examples serve to validate that principle as a source of significant risk-concern if nothing else, as other infrastructure programs are contemplated. And when competing politically motivated forces, in effect use such a development or redevelopment program as a battleground for advancing their own more personal interests, by for example showing how powerful their leaders are as they seek to advance their own personal careers, that has consequences. At the very least, that can only serve to skew any determination of how priorities would be set (as in my second example here), with mostly just the most politically marketable projects being pursued.

The next two case study examples to address here are the Marshall Plan as briefly discussed in Part 4 and Part 5, and the Molotov Plan as discussed in parallel with that in those same postings. And I will continue this line of discussion in a next series installment with a reconsideration of them as a source of general principles.

• In anticipation of that discussion to come, I will of necessity reconsider what “success” means in an infrastructure development or redevelopment context, where I wrote of both of those programs as being successful – but with caveats at least implied.

I will follow that with a similar reconsideration of the fifth and final case study example that I have addressed up to here: the New Deal (see Parts 7-9). And then after completing that phase of this first step offering of more general principles, I will continue on as outlined in Part 6 of this series, and discuss infrastructure development as envisioned by and carried out by the Communist Party of China and the government of the People’s Republic of China. And as part of that I will also discuss Soviet Union era, Russian infrastructure and its drivers. I will, of course, also touch on the issues of post-Soviet Russia too, and Vladimir Putin and his ambitions and actions there. And that, for both China and Russia, is where infrastructure development meets authoritarianism, and in a form and to a degree that has never been possible until now. And I raise, in anticipation of that discussion to come, a question that will of necessity arise in my immediately next installment to this series too.

• Who ultimately does own, and who should own, a massive infrastructure development undertaking as it creates massive societal impact, and in ways that can fundamentally shape what is even possible, and for many?

I fully expect to cite other case study examples at least in passing in the course of this overall narrative to come, with that including references to infrastructure programs and initiatives that I have already discussed in other series (e.g. see Planning Infrastructure to Meet Specific Goals and Needs, and not in Terms of Specific Technology Solutions, as can be found at United Nations Global Alliance for ICT and Development (UN-GAID) as its postings 25 and loosely following.) And my goal here, as of now, is to conclude this series with a second draft update to this general principles posting.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 18

This is my 18th installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-17.)

I have been developing this series in part in terms of specific historically and current events-grounded case study examples, with relevant more-general considerations included there as called for. And from a case study perspective, I have focused for the most part on Russia and the history of what is now that nation in this, since Part 13 (with a somewhat longer digression into more general principles and issues, added into that narrative as Part 15.) And to bring this orienting note up to date for where I am in offering this series, I devoted Part 17 of it to an at least initial orienting discussion of what I have come to see as the Putin Defense Policy and its underlying political and military doctrine. And I offered that in the context of also offering a cautionary note to the West as well.

I ended that first step introductory note to what could easily be a larger and more inclusive discussion of Russia’s current and emerging combined-use approach to conventional and cyber-capabilities in defensive and offensive systems, with a brief discussion of a fundamentally basic tool for strategic and tactical military planning, and of how the emergence of cyber-capabilities and cyber-assets of necessity changes how that tool would be used and how its findings would be interpreted: basic correlation of forces calculations.

I chose this example for two reasons. The first is that it really is basic to overall strategic and close-in tactical planning and execution that forces be maneuvered into position and used where they would have the greatest impact, and where they would hold a relative strength advantage wherever possible. The second is that this is also one of the key areas where the disruptively new and novel of cyber-warfare and its capabilities skews all such calculations, compromising all such planning and execution – if only more traditional, conventional forces-oriented approaches are used in thinking about correlations of forces available and deployed.

I begin this posting’s line of discussion with the last words offered in that immediately preceding paragraph: “forces available and deployed.” And I begin addressing them by repeating a bullet pointed note that I offered at the end of Part 17 that in fact serves as a beginning point for this posting’s discussion too:

• “Any conflict or potential conflict, and any use or possible use of force that in any significant way or degree includes use of cyber capabilities, automatically renders the theater of operations involved, global. And it is never going to be possible to meaningfully calculate correlation of forces or force symmetries or asymmetries or any related measures for such contexts if this simple fact is not taken into account and fully so.”

How do my above-noted four word phrase and this bullet point connect? There are in fact several answers to that. And I begin this posting’s main line of discussion here by at least briefly addressing the most traditionally obvious of them.

Conventional military doctrine and all of the correlation of forces and related calculations that would go into its planning, are predicated on an axiomatic presumption of supply chains and logistics, and a need to bring assets physically to where they would be needed and with all of the supplies required for them to be operationally effective. But all possible and potential cyber-assets that are available, or that could rapidly be made so, are in principle at least, more ubiquitously available as a matter of course. And they have to be considered to be automatically and reliably so available, and certainly when attempting to determine a possible adversary’s potential force levels and their distributions and accessibility.

When electrical signals propagate at an appreciable fraction of the speed of light, and electromagnetic transmissions through the air or through fiber optic cable do so too, the physical distances that such signals have to cover effectively collapse down to zero. The only meaningful counterparts to distance as a limiting factor that command, control and communications signals (or any other weaponizable information flows) face in this context are to be found in:

• Bandwidth limitations and
• Where signals would have to push through noisy channels, with message redundancy and similar measures needed to allow for that.

Effective cyber-capabilities can be brought to bear wherever there is sufficient bandwidth available to use them. And when key transmissions can be more surreptitiously, proactively sent in advance of any overt action, even bandwidth restrictions as a distance surrogate, become moot.
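
To put rough numbers on that bandwidth-and-noise point, the Shannon–Hartley relation is one standard way of expressing the ceiling involved. The sketch below is purely illustrative; the channel figures in it are my own assumptions, not anything drawn from this series or from any military source.

```python
import math

def channel_capacity_bps(bandwidth_hz: float, signal_to_noise: float) -> float:
    """Shannon-Hartley upper bound on error-free data rate over a noisy channel."""
    return bandwidth_hz * math.log2(1 + signal_to_noise)

# Illustrative (assumed) figures: the same 10 MHz of bandwidth at two noise levels.
print(f"{channel_capacity_bps(10e6, 1000.0) / 1e6:.1f} Mbit/s on a relatively clean channel")
print(f"{channel_capacity_bps(10e6, 1.0) / 1e6:.1f} Mbit/s once noise rivals the signal")
```

The point of the toy numbers is simply that what limits such a flow is bandwidth and the signal to noise ratio, with redundancy needed to push through the noisier case, and not the physical distance covered.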

Distance and physical position do not matter from a more strictly cyber-conflict perspective; bandwidth and ability to find the most effective physical, cyber or combined, vulnerabilities to exploit, mean everything. And “available” and “deployed” can and do become one, and certainly in the hands of any command, control and communications systems and their leadership, that would make effective use of their available cyber-forces.

And with that, I peel back the layers of the onion here, one more level by moving on to raise the question of what military assets even are, in a cyber or a mixed cyber plus conventional context. And this brings me to the question of dual-use technologies. And it also raises the issue of how the most effective cyber-weapon capabilities in a nation’s arsenal need not even actually be in their arsenal at all; they can be overtly owned by and seemingly fully controlled and managed by others.

In anticipation of what is to come here, I will discuss those issues, and symmetrical and asymmetrical forces and conflicts and how they are being fundamentally redefined, here too. And then I will turn back to reconsider the Putin Defense Policy and its underlying political and military doctrine, in light of the general issues and principles that all of this raises. But before doing so, and as a continuation of my Part 17 cautionary note, I offer the following point of observation: a point of judgment that I would assume even a casual study of history would show to be too likely true to ever safely overlook.

• Generals and Admirals and their political leaders, where they have them, all too often prepare to fight the last war, no matter how long ago it took place … and precisely because they do not and seemingly cannot recognize, acknowledge or understand the nature of the New and of disruptive change.
• But next wars essentially always bring such changes with them and both for the weapons and the tactics and strategies that a new adversary would bring to bear, and for the contexts in which they would do this.
• And ultimately, victory at least usually goes to whichever side can learn to both embrace and advance the New and the disruptive in their thoughts and actions, and the most quickly and effectively.
• (And it is the foot soldiers and their naval and other enlisted counterparts, and it is civilians who pay the bulk of the price for the learning curves that all of this brings. But that is food for thought for another posting and series.)

Look to the carnage of World War I, and the insistence of both sides on sending so many to their slaughter in Western Europe’s trench warfare, with machine guns and tanks and chemical warfare, military aircraft and their guns and bombs and more, as that generation’s glimpse at what the disruptively new and novel can mean in a next military conflict! And for a single, more focused example here, consider the aircraft-launched torpedoes that were used when Japan attacked the US forces stationed at Pearl Harbor in 1941, bringing the United States into what became a truly globally involving World War II. Traditional thinking, and all of the policy and planning that came from it, resoundingly said that such an attack was not possible there – even as warnings were offered that it was, and that the weapons for it were being developed, built and deployed. I offer both of these examples as cautionary note warnings as we see an emergence of, and at least a proof of principle test use of, this generation’s disruptively new and novel here, in cyber-weapons and cyber-weaponizable systems.

And with that, I turn to consider

• What military assets even are, in a cyber or a mixed cyber plus conventional context,
• Dual-use technologies and the evaporation of boundaries there, and
• Locality and ownership and how the most effective cyber-weapon capabilities in a nation’s arsenal need not even actually be in their arsenal at all.

Let’s consider the first of those bullet points and its issues by turning to what is probably the single most basic, fundamental question here: what is a cyber-weapon? And I begin addressing that by sharing two references:

• Sanger, D.E. (2018) The Perfect Weapon: war, sabotage and fear in the cyber age. Crown Publishing.
• Kello, L. (2017) The Virtual Weapon and International Order. Yale University Press.

And I begin here by offering two comments on these books. First, while both authors sought to offer accurate and up to date narratives on the issues and the threats faced in an emerging cyberspace-connected world, with an emerging weaponization of its capabilities as a part of that, both should be viewed as snapshots in time, with the flood of the new and disruptive continuing on regardless. And second, Kello is still very much correct when he argues the case that our understanding of the larger implications of all of this is still “primitive.” And so are our built-in assumptions and presumptions as we seek to proactively plan and prepare for what is possible in a cyber-weaponized and cyber-militarized world, where that can be used against a nation’s own population as a means of control as readily and fully as it can be used against outside nations, or against outside organizations of all types.

• Question: What is a cyber-weapon?
• Answer: Any cyber-tool or capability that can be used to connect and share information, can be used as a weaponized capability (where the ability to share information has implicit to it, a capacity to block it or to share carefully drafted disinformation too).

Consider conventional military weapons; there are very few if any examples of them that could be cited that arguably might or might not actually be weapons, or that might or might not have been designed and built for use as such. A machine gun or an aircraft-launched bomb, to take that out of the abstract, is a weapon and that is all it is. But in a cyber-context, essentially anything can be weaponized, and the only restriction on that is that it actually work as an effective cyber-capability or resource at all.

Dual use technologies are ones that can be used in a peaceful civilian context, or in a more explicitly military context and in support of military action. Traditionally, this type of determination has been hardware-based and at least relatively straightforward – even if controversial at times. To cite an example, precision low light image intensifiers, as used in night vision goggles, can be used by soldiers and special forces troops in night or other low-light, poor visibility combat operations. Or they can be used by search and rescue personnel in civilian contexts, as essential life saving tools there. This means gray area determinations arise, as for example when a foreign buyer seeks to purchase access to cutting edge technologies that are deemed to be dual-use in nature, where sale of military equipment or technology to them might be blocked for national security reasons. But if a tool or its underlying technology works and is cyber, it is essentially automatically going to be dual-use, at least in principle. And a consequence of this is that almost nothing cyber is treated as dual-use in practice. Cutting edge artificial intelligence technology is one of the few exceptions there, as of this writing.

But the third topics point that I am addressing here, with its challenges of locality and ownership, renders much of the above moot. Facebook and Twitter and more: essentially any and all of the globally impactful online social media sites can be and are weaponized, and on an ongoing basis, both by national governments as they seek to spread their weaponized messages, and by more private sector organizations and groups too, as they develop and disseminate theirs.

I am going to continue this line of discussion in a next series installment where I will add one more piece to this puzzle, with a reconsideration of symmetrical and asymmetrical forces and conflicts in a cyber context. And I will turn from that to further discuss the Putin Defense Policy and its underlying political and military doctrine in light of these considerations. And in anticipation of that, this will of necessity mean my addressing the power vacuum that United States president Trump has created, as a path forward for Putin and others to take advantage of, and by cyber as well as more conventional means. I will discuss Russian incursions into Ukraine and Trump’s effort to extort the government of that nation to help him attack his political enemies in the United States in that context, and Trump’s decision to pull American forces out of Northern Syria, enabling Turkey and Russia to in effect carve up that entire region as their new spheres of influence now.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 3 and also see that directory’s Page 1 and Page 2.

Innovation and product lifecycles, as coordinately considered from several perspectives – with a non-working example

Posted in business and convergent technologies by Timothy Platt on November 21, 2019

I have often said in this blog that my best, most insightful and compelling teachers have often come from bad example experiences. And they have also, with time at least, come to be among my most compelling sources of impetus for writing here too. And this posting at least starts from that launching point, in keeping with what others might consider my perhaps somewhat idiosyncratic way of thinking.

I am a member of a local gymnasium and make use of their swimming pool on a regular, ongoing basis, and usually some four times per week. I enjoy that and I see it as my way of keeping to a now ancient precept that is encompassed in a classical Roman expression: “mens sana in corpore sano” – a sound mind in a sound body. And that means I use their men’s locker room there and the lockers offered in it.

Those lockers, or at least the taller ones that members can hang their clothes in on hooks, are organized side by side in rows, with aisles – open spaces – provided between facing rows for changing one’s clothes. And the backs of all of these lockers are either snugly attached to the backs of next-row lockers that face an adjacent aisle, or they attach to a wall at either end of the locker room and its changing area. These lockers are also tightly connected together side by side in those rows too, so sets of lockers present themselves as single large indivisible units. These details prove to be important here, and that is because the inspirational example that I would write of here centers on the hooks that I mentioned above for hanging clothes in those lockers, and how they were set up.

Each of those lockers (supposedly) has hooks on both interior sides and on the back, facing into their interior too. And the hooks were installed by the simple expedient of machine screws, with the screw heads positioned outside of the lockers themselves and the screws sticking through holes into them. Hooks were slipped over those screws when the lockers were individually assembled, and bolts were tightened in place to hold all of that together. And then those rows of lockers were brought together and firmly attached to each other as briefly noted above – and from that point on it was all but impossible to reach or adjust anything if a well-used hook started to come loose and that fact was not caught in time.

In principle, if a bolt loosened and that was not picked up on and addressed in time (through hand tightening from inside the locker, most probably by a gym member using it), it might still be possible to replace an inside-locker hook from the outside of that individual locker. But that would only be possible – and certainly if that bolt came off completely and the screw itself fell back behind the locker – if that set of lockers were disassembled from each other: a large undertaking. So while that repair might be possible in principle, in practice it became an all but impossibility, from how those lockers were hooked together and made flush to each other along their user-facing edges.

These lockers were installed a couple of years ago when the locker room was last renovated, and the number of lockers there that are now missing one or even two of their three original hooks has continued to grow. A few probably don’t have any inside hooks left. And that pattern of loss will continue too, particularly as the alternatives attempted for replacing missing hooks have not proven satisfactory enough to the people who run that gym for them to be more generally deployed there.

I offer this example in what is perhaps at least somewhat excessive-seeming detail for a reason. It exemplifies in its details how a technical solution that seems to be effective early on, and quick and easy, can in fact have within it what amount to built-in points of failure and ones that will prove challenging to correct at the very least, and due to their fundamental designs.

In retrospect, even tightly screwed-on bolts eventually loosen, particularly when they are repeatedly and, I add, frequently tugged at, as would be expected as clothes are pulled onto and off of those hooks. And when the people using those lockers are not looking for or considering that type of change, those bolts will eventually fall off, and a next user will find that there really isn’t anything substantial holding that hook up. And then it will fall off, with its once-holding screw lost.

• “Eventually” … : product life cycles are all about “eventually.” And good design, of necessity, anticipates as many potentially adverse “eventuallys” as possible so as to limit or even eliminate their likelihood of occurrence as realized contingencies.

And this brings me to innovations and to disruptively new and novel innovations as an extreme source of case-in-point examples. Those hooks and their installation can be thought of as representing an off-the-shelf, standardized solution to a seemingly minor piece of a larger design and assembly problem – and one that no one anticipated might become a single point of failure issue there. The types of challenges that I write of here can arise in design and assembly circumstances that arguably might not even rise to the level of simple next-step evolutionary change in a pattern of product design and development. The greater uncertainties that arise in any developmental changes that would qualify as true innovations, both increase the chances of lifecycle and other challenges arising and of their doing so in unexpected ways and unexpected phases of those life cycles. And it can be considered something of a defining hallmark of truly disruptive innovation that this will happen. And I cite by way of example there, the progression of disruptively new and novel innovation steps that I outline for the evolutionary development of computer software, in my series: Rethinking the Dynamics of Software Development and Its Economics in Businesses, as can be found at Ubiquitous Computing and Communications – everywhere all the time 3 as postings 402 and loosely following.

Each and every one of the fundamental steps that I have been writing of in the development and evolution of computer software, from early machine language programming on to the still emerging next steps of artificial intelligence and quantum computing, hold two crucially important details in common as what amount to defining features. They all (successfully) address at least one major problem that had arisen from earlier computing paradigms and the then-revolutionary software developments that enabled them. And they have all turned out to have the seeds of a next round of such challenges fundamentally built into them, that a still further development, next step disruptive innovation would have to address too.

I intentionally conflate two sets of issues in this narrative, leading up to here:

• Individual product longevity and lifecycle, as such a product offering initially works and as it comes to require maintenance or repair that might or might not be cost-effective, and that might or might not even be possible, and
• Individual innovation-driven design longevity, at least as an effective and effectively current source of product options, and their lifecycle arcs that might start out on the cutting edge of new and emerging, to slip in time into legacy status and obsolescence.

My locker hook example fits the first of those two issues, while the software innovations of my above-cited series exemplify the second of them. There is, however, a great deal of overlap between them when they are considered in practice, as for example where lifecycle status in software is ultimately driven by how easy it is to code for real, specific, computationally accessible problems, how easy it is to run that code, how effectively it actually resolves those problems, and how easy it is to maintain and upgrade those programs as needed. Think of that as a software counterpart to the challenge of keeping hooks available in those lockers. So conflated or not as a matter of general principle, that can become more a distinction without a difference here. And it makes sense to consider all relevant design and implementation challenges in this type of overall context as being similarly lifecycle influenced, if not lifecycle driven.

And with all of this in place, I turn to consider one final source of defining detail here, as the issues raised in this brief note actually play out with real-world products:

• Unplanned and planned obsolescence and next step evolutionary product development as clean-up.

The above-offered discussion should make the second half of that bullet point sound familiar; my focus for what is to follow here will center on its first half and particularly on the words “planned” and “unplanned.” And to be at least as close to perfectly clear as I can be in that, I begin outlining this final thought to this posting by stating that what will follow does not in any way involve words such as “malicious” or “adversarial.” Yes, it is certainly possible to design and build a product in such a way that it has what amounts to a pre-programmed drop-dead date built into it. Choice of materials in key parts that would be difficult at best to replace comes to mind as a possibility there. But a phrase such as “predictably expectable” would probably be more accurate, and for a much larger percentage of cases where anything like prior planning might be considered, as a product dies in usability terms under circumstances where the timing of that might be questioned.

Sometimes a failure-prone (with time) component is the only way to make an overall product at a price point that consumers would find acceptable and competitively so. And if that “(with time)” is long enough so that an average, reasonable consumer would think that they have gotten their money’s worth from that product, that point of failure can become acceptable anyway. To take that out of the abstract, if the average warranty on a new home-use hot water heater is ten years and regardless of manufacturer, and the average hot water heater actually lasts just about exactly ten years and two months before it starts gushing water out onto the floor, and the average consumer thinks that spending an amortized $50 per year on that, to have a fully usable and reliable source of hot water in their home while it lasts, is acceptable, then that pattern will prevail. And a hot water heater that would last 18 years but cost three times as much up-front will not sell.
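
To make that amortization arithmetic explicit, here is a minimal sketch. The $500 purchase price is my own assumed figure, back-calculated from the roughly $50 per year and ten year numbers above; everything else in it is illustrative rather than drawn from any real product data.

```python
def amortized_cost_per_year(purchase_price: float, service_years: float) -> float:
    """Simple straight-line cost per year of service (ignores installation, repairs and interest)."""
    return purchase_price / service_years

# Assumed figures: a ~$500 heater lasting ten years, versus one at triple the price lasting 18.
baseline = amortized_cost_per_year(500.0, 10.0)
premium = amortized_cost_per_year(3 * 500.0, 18.0)
print(f"Baseline heater: about ${baseline:.0f} per year")   # roughly $50
print(f"Premium heater:  about ${premium:.0f} per year")    # roughly $83
```

On those assumed numbers the longer-lived heater still costs more per year of service, which is the consumer logic behind the pattern described above.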

And with that first half of this last point offered, I return to the locker and clothing hook example that I began this note with, and invoke the words “unconsidered” and “not thought through.” I expended over 500 words in my initial discussion of that example but I still left out a fundamentally important detail. It is that there are several possible work-arounds that could be used to prevent those hooks from irrevocably falling off, and even when they are attached to that type of locker in essentially the same way that they are in that gym, and even when those lockers are all but welded together as they are. And those work-arounds would not necessarily be costly, labor intensive or difficult to implement.

To cite a very simple and even simplistic solution there, that I am certain would work from having applied its basic approach elsewhere with success, when the factory installer first prepared to slip those machine screws in place to hold those hooks, they could have put a drop of shellac on their threading first. It would go on as a liquid and form a seal with the inside channel on the hooks that those screws are run through. It would also form a reliable if still liquid seal with the threading of the bolt screwed into place. Simple capillary action would ensure this, eliminating anything like possible air gaps. And then it would dry and harden and it would be all but impossible to take those hooks off again – which would not be a problem as the intention would be for them to stay exactly where they are and for many, many years to come.

I would leave it as a thought exercise for the reader to consider how any obsolescence example that they face might be planned or unplanned as variously discussed here, and how significant that is for them, and how and why.

You can find this and related postings, and series of them at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Rethinking the dynamics of software development and its economics in businesses 7

Posted in business and convergent technologies by Timothy Platt on November 14, 2019

This is my 7th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-6.)

I have been at least somewhat systematically discussing a series of historically grounded benchmark development steps in both the software that is deployed and used, and by extension the hardware that it is run on, since Part 2 of this series:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language-defining and a programming style-defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

And I have, over the course of this series, discussed the first five of those developmental step entries, as by-now fully mature technologies and their implementations. Then I turned to and began discussing Point 6 of that list as an example with a historical background that goes back to 1994, and that arguably might simply continue to be developed to the extent that it is, along a simpler, more predictably consistent evolutionary path. Then in Part 6, I set aside that assumption and updated that software development step so as to include possible, and I add realistic disruptive change too, with an updated version of it phrased in terms of what language-oriented programming explicitly seeks to do that would set it apart:

6.0 Prime: Language-oriented programming seeks to provide customized computer languages with specialized grammars and syntaxes that would make them more optimally efficient in representing and solving complex problems, that current more generically structured computer languages could not resolve as efficiently. In principle, a new, problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving it, or some class of such problems, might start out as a “factory standard” offering (n.b. as in Point 6 as originally offered in Part 6 of this series, where in that case, “factory standard” is what it would remain, barring human programming updates.) But there is no reason why a capability for self-learning and a within-implementation capacity for further ontological self-development and change, could not be built into that.

I expanded on the verbiage there from how I expressed this in Part 6, to accommodate how I have taken it out of the explanatory context that I provided there for it. And I stress the last sentence (as repeated verbatim here, from that earlier draft version of it) because that is where disruptively novel change with its uncertainties, and its potential for disruptively novel forms of risk, enter into this narrative. And I stress that new and novel range of risk potential here because I in fact began discussing this (here expanded) 6.0 Prime in explicit risk management terms. That is where this particular next developmental step in programming languages in fact begins to create whole new problems while addressing ones already established in earlier programming language approaches.
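
To make the “customized computer language” idea a little more concrete, here is a deliberately tiny, illustrative sketch in Python: a toy rule language for filtering records, with its own minimal grammar. It is not drawn from this series’ own examples, it has no self-learning in it, and every name and rule in it is an assumption of mine; it is only meant to show, in miniature, what tailoring syntax to one narrow problem class can look like.

```python
import re

# Tokens of the toy rule language: field names, comparison operators, quoted or bare values.
TOKEN = re.compile(r"\w+|[<>=]|'[^']*'")

def evaluate(rule: str, record: dict) -> bool:
    """Evaluate a rule such as "age > 30 and dept = 'sales'" against one record.

    Grammar (deliberately tiny): condition ('and' condition)*
                                 condition: FIELD OP VALUE
    """
    tokens = TOKEN.findall(rule)
    result = True
    for i in range(0, len(tokens), 4):          # each condition is 3 tokens, plus an 'and'
        field, op, raw_value = tokens[i], tokens[i + 1], tokens[i + 2]
        value = raw_value.strip("'")
        left = record[field]
        if isinstance(left, (int, float)):
            value = float(value)                # compare numerically where the record is numeric
        if op == "=":
            matched = left == value
        elif op == ">":
            matched = left > value
        elif op == "<":
            matched = left < value
        else:
            raise ValueError(f"unknown operator {op!r}")
        result = result and matched
    return result

if __name__ == "__main__":
    print(evaluate("age > 30 and dept = 'sales'", {"age": 42, "dept": "sales"}))  # True
```

A full language-oriented programming effort would of course go far beyond this, but even this toy shows the trade being made: a grammar that fits one problem class very closely, at the cost of being useless outside it.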

I wrote in Part 6 of three business clients that all purchase what begins as identical, line of code by line of code software that they would use in their own business operations for addressing a commonly held software problem: Businesses A, B and C. And it turns out that the instantiation of this that A purchased ontologically self-evolves into a usable but relatively inefficient “end product optimized” version, given the self-development path that it follows. B and its instantiation do somewhat better. And C’s instantiation does dramatically better, making it a real business asset and a real source of savings and of revenue value that A’s in particular could never even begin to match. (I suggest your reviewing that scenario as offered in Part 6, as I will cite it without more fully reiterating it in what follows here.)

• If the complete adaptive peaks landscape of my Part 6 discussion were fully known, or even just fully knowable in advance, and as a cost-effectively available pool of knowledge, the business that developed this custom programming language as a marketable product in a business-to-business context would probably not have bothered building self-learning per se into it, devolving this example back to a pure, originally stated Point 6 form – assuming of course that they could map positions on that adaptive landscape to specific language development coding decisions in creating and tuning this language (see the sketch that follows this list for a toy rendering of that adaptive landscape idea).
• But in practice, the best they could probably do in approaching that idealized goal, would be to build out and performance test according to their already-available best understandings as to what does and does not work, seeking out at least reasonably effective ontological self-development potential in what they would bring to market and sell there.
• And after building their basic alpha-tested and at least early beta-tested “factory standard” version of this as a marketable product, they would benchmark its performance against the performance standards that might be achieved with at least one major off-the-shelf programming language – where optimization would be framed in terms of improved performance achieved, when compared to the results obtained when this problem was coded and run in that language. But the software developers – here, computer language developers – who would create this product would not, and as a practical matter cannot, know a priori how to achieve a best possible optimization for this complex and comprehensive a coding challenge, and particularly as it requires at least some genuinely novel features.
• And collectively, and looking across all of the issues that I have raised here and more, that complex of decisions, each grounded in its own unknowns, creates a requirement that these programmers and their business operate with real and potentially impactful uncertainty. And that uncertainty creates sources of risk. (Once again, see Part 6, where I briefly make note of possible legal and litigation exposure risk, as that might arise in the types of situations exemplified by my three client-business example here.)
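
The sketch promised in the first bullet point above is a toy rendering of that adaptive peaks idea and nothing more: greedy, self-tuning search on a made-up three-peak landscape, showing how instances that start from different early conditions can settle on very differently performing local optima, much as Businesses A, B and C do in my Part 6 scenario. All of its numbers are illustrative assumptions.

```python
import random

# A made-up "adaptive landscape": three local peaks of very different heights.
PEAKS = [(1.0, 3.0), (5.0, 5.0), (9.0, 9.0)]   # (location, height), illustrative only

def fitness(x: float) -> float:
    """Height of the landscape at x: the dominating peak, falling off quadratically."""
    return max(height - (x - center) ** 2 for center, height in PEAKS)

def self_tune(start: float, steps: int = 300, step_size: float = 0.2) -> float:
    """Greedy local search: accept a small random change only when it improves fitness."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

if __name__ == "__main__":
    random.seed(7)
    # Identical "software", different early conditions: each settles on a different local peak.
    for client, start in [("A", 0.5), ("B", 4.0), ("C", 8.0)]:
        settled = self_tune(start)
        print(f"Business {client}: settled near x = {settled:.2f}, fitness = {fitness(settled):.2f}")
```

Because the search only ever accepts improvements, none of the three instances can cross the valleys between peaks, so A stays on the lowest peak, B on the middle one and C on the highest: a toy version of the divergent outcomes, and of the liability exposure, discussed above.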

I am going to continue this discussion in a next series installment where I will cite and discuss the role of the data that would be run on this programming language and its code, both as it is developed by the offering business and as it is used by specific purchasing businesses. I briefly noted this complex of issues in passing in Part 6 and will delve into its issues and complications in some detail in Part 8. And after rounding out that phase of this programming language-focused line of discussion, I will step back to consider how at least some of the risk issues that I discuss here might apply more widely to self-learning software as a sellable product too. And then, as already noted, I will continue on to discuss the above listed Points 7 and 8 too: artificial intelligence programming and quantum computing. And looking beyond that, my goal in all of this is to (at least somewhat) step back from the specifics of these individual example development stages, to identify and discuss some of the general principles that they both individually and collectively raise, as to the overall dynamics of software development and its economics.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 21

Posted in business and convergent technologies, strategy and planning by Timothy Platt on November 11, 2019

This is my 21st installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-20.)

I have been discussing the issues of innovation and what it is, and as it is perceived, identified and responded to since Part 16 of this series. And as a core element of that narrative, I have been discussing two fundamentally distinctive patterns of acceptance and pushback that can in fact arise and play out either independently and separately from each other or concurrently and for members of the same communities and the same at least potential markets:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adopters on to include eventual late and last adopters, and
• Patterns of global flattening and its global wrinkling, pushback alternative.

And I concluded Part 20 of this narrative by stating that I would:

• Continue this line of discussion in a next series installment, where I will more directly and fully discuss stakeholders as just touched on there, as change accepting and even change embracing advocates (nota bene or as change challenging and change resistance advocates.) And I will discuss the roles of reputation and history in all of that.

Note: I offered and discussed a social networking taxonomy in Part 20 that categorically identified and discussed specific types of networking strategy-defined “stakeholders,” as more collectively cited in the above to-address bullet point, and recommend reviewing that posting to put what follows into a clearer perspective.

I will in fact address the points raised in the above topics bullet point here, but begin doing so by noting and challenging a basic assumption that I made in Part 20 and that I have in fact repeated again here in my above-offered series installment-connecting comments. I have been following a more usual pattern here of presuming a fairly clear-cut distinction between standard innovation acceptance diffusion curve acceptance or rejection of the New and change, and global flattening and its global wrinkling alternative. But one of the consequences of always online and always connected, and as a largely interactive, social media driven phenomenon, is that the boundaries between the two have become blurred. And I bring that point of observation into focus by citing a statement that I made in Part 20, now challenging it:

• “Unless it is imposed from above, as for example through government embargo and nationalism-based threat, global wrinkling is a product of social networking.”

Individuals use social media both to connect and to influence. Businesses do too, and so do other organizations of all sorts. And governments are increasingly using these tools with the same basic goals too, but on a vastly larger and more impactful scale. For governments, social media has become a suite of tools for both creating and controlling societally-reaching and societally-impactful decisions, and for controlling their citizenry as a whole. Social media has in fact become both an inwardly facing, within-country toolset and a means of outwardly facing statecraft, and for an increasing number of nations.

For an earlier generation precursor example of this, and certainly as communications and information sharing can be used as tools of statecraft and diplomacy, the United States and its Western allies used radio as an internationally impactful route into Eastern Europe, when the nations of that larger region were still under Soviet Russian control as Warsaw Pact nations, during the old Cold War. And the Soviet Union in turn actively sought to sway and even shape public opinion, and certainly about that nation itself, essentially wherever it sought to gain a foothold of influence if not control. So think of current online social media and its use by national governments as a version 2.0 update to that.

How does this emerging version 2.0 reality blur the lines between the standard innovation acceptance curve, and global flattening versus wrinkling dynamics? When a nation state such as the People’s Republic of China or Russia actively controls the news as a governmental resource, and when it actively uses social media to both shape and control the message available through it, blocking any and all competing voices there as soon as they are identified as such, this controls how individuals would make their own decisions by limiting the options available to them. And this information access control makes all such decisions community shaped and from the top down – provided that you accept that top-down, government determined represents the voice of the community.

So I effectively begin this posting by expanding my Part 20 social networking taxonomy, with its corresponding categorical type by categorical type social media strategies, by adding state players as a new and increasingly important category there. And with that, I turn to my above (edited and …) repeated to-address topics point and its issues. And I begin doing so by noting and discussing a fundamental shift that is taking place in how influence arises and spreads, or is damped down, in publicly open online conversations and in online social media in general. I start there by citing a principle that I have invoked many times in this blog, and in just as wide a range of contexts for its overall significance: the Pareto principle.

According to that empirically grounded principle, a large majority of all effects (often stated as some 80% of them) arise from a much smaller percentage of the overall pool of possible causes (often found to be some 20%). So this is often called the 80/20 rule. Online contexts tend to shift those numbers to a more extreme position, with 90/10 and even more skewed distributions prevailing, and both for online marketing and sales, and for social media reach and impact, and more. So for example, a small percentage of all online businesses can effectively capture a vast majority of overall online sales activity, leaving just a tiny percentage of all of those sales transactions entered into, to all the rest. (Consider the impact and reach of a business such as Amazon.com there as a case in point example.) And a single opinion-creating voice can come to be an essential online forum for an entire politically motivated demographic, and even a numerically huge one. (Consider the outsized role that Breitbart holds in shaping the conservative, and particularly the “ultraconservative” voice in the United States, and certainly as its role there is marketed by the leadership of the Republican Party, and certainly as of this writing.)
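
As a purely illustrative rendering of that skew, the following sketch measures how much of a made-up, power-law style distribution of “sales” is captured by its top sellers; the exponents are my own assumptions, chosen only to show how a roughly 80/20 pattern steepens toward 90/10 and beyond as concentration increases.

```python
def top_share(values, top_fraction):
    """Fraction of the total accounted for by the top `top_fraction` of items."""
    ordered = sorted(values, reverse=True)
    cutoff = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:cutoff]) / sum(ordered)

# Two assumed, power-law style distributions over 100 "sellers" or "voices":
milder  = [1.0 / rank ** 1.2 for rank in range(1, 101)]   # roughly an 80/20 pattern
steeper = [1.0 / rank ** 1.8 for rank in range(1, 101)]   # roughly a 90/10 pattern

print(f"Milder skew:  top 20% capture {top_share(milder, 0.20):.0%} of the total")
print(f"Steeper skew: top 10% capture {top_share(steeper, 0.10):.0%} of the total")
```

The only point of the toy is that the steeper the underlying concentration, the smaller the slice of participants needed to account for nearly all of the activity or reach involved.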

Governments, and I have to add other larger organizational voices, can skew those numbers in their favor, increasing their reach and impact, simply because of their scale and presence as projected across what can become essentially all public information sharing channels – and even without the proactive efforts to manage and control the conversational flow that are so actively pursued by countries such as China and Russia. And this brings me to the issues of reputation and history in all of this, with its many complications, and with that including both individual evaluations and publicly shared reviews. I am going to continue this discussion in a next series installment where I will at least selectively discuss those issues. And in the course of doing so, I will also at least begin to discuss the social networking and social media issues that I have raised here, from a business and marketplace perspective and with regard to how they play out in those arenas.

As a source of cutting edge, bleeding edge influence, where otherwise separate phenomena such as the standard innovation acceptance diffusion curve and global flattening versus wrinkling can become blurred, this is where weaponized online social media, through the deployment of troll and artificial agent participants, enters this narrative. So I will discuss that complex set of phenomena moving forward here too.

And in anticipation of all of that, I note that I have been writing here of governments, but the basic principles that I have brought up in that context apply at least as fully in a business context, and certainly when considering multinationals that have valuations approaching or even exceeding a trillion dollars as of this writing: businesses that have come to hold a reach and power normally associated with governments and not with the private sector. That said, I will discuss smaller and more conventionally “larger” businesses too, and how they fit into the online business and commerce world that those mega-corporations have done so much to create.

And with that, I outline what will become a progression of next postings to this series to come, and with a goal of starting to address its issues in the next such installment. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.
