Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 5

Posted in business and convergent technologies by Timothy Platt on July 2, 2019

This is my 5th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-4.)

I have been working my way through a brief simplified history of computer programming in this series, as a foundation-building framework for exploring that complex of issues, starting in Part 2. And I repeat it here in its current form, as I have updated this list since then:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language defining and a programming style defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

I have in fact already offered at least preliminary orienting discussions in this series of the first six entries there, and of how they relate to each other, with each successive step in that progression simultaneously seeking to resolve challenges and issues that had arisen in prior steps, while opening up new possibilities in its own right.

I will in fact discuss steps seven and eight of that list as I proceed in this series too. But before I do that and in preparation for doing so, I will step back from this historical narrative to at least briefly start an overall discussion of the economics and efficiencies of software development as they have arisen and developed through the first six of those development steps.

• Topic Points 1-5 of the above list all represent mature technology steps at the very least, and Point 6 has deep historical roots, at least as a matter of long-considered principle. And while it is still to be more fully developed and implemented in a directly practical sense, at least current thinking about it would suggest that that will take a more step-by-step evolutionary route that is at least fundamentally consistent with what has come up to now, when and as it is brought into active ongoing use.
• Point 7: artificial intelligence programming has been undergoing a succession of dramatically, disruptively novel changes, and the scope and reach of that overall effort is certain to expand in the coming years. It has old and even relatively ancient roots, certainly by the standards and timeframes of electronic computing per se. But it is heading into a period of the unpredictably, disruptively new. And my discussion of this step in my above-listed progression will reflect that.
• And Point 8: quantum computing is still, as of this writing, at most just in its early embryonic stage of actual realization as a practical, working source of new computer systems technologies, and at both the software level and even just the fundamental proof-of-principle hardware level. So its future is certain to be essentially entirely grounded in what, as of this writing, would be an emerging, disruptively innovative flow of the new and of change.

My goal for this installment is to at least briefly discuss something of the economics and efficiencies of software development as they have arisen and developed through the first six of those development steps, where they collectively can be seen as representing a largely historically grounded starting point and frame of reference, for more fully considering the changes that will arise as artificial intelligence agents and their underlying technologies, and as quantum computing and its underlying technologies, come into fuller realization.

And I begin considering that historic, grounding framework and its economics and efficiencies by setting aside what for purposes of this discussion would qualify as disruptively innovative cosmetics, as they have arisen in its development progression. And yes, I am referring here to the steady flow of near-miraculous technological development that has taken place since the initial advent of the first electronic computers: a flow that, in a span of years essentially unparalleled in human history for its fast-paced brevity, has led from early vacuum tube computers to discrete transistor computers, to early integrated circuit technology, and on to the chip technology of today that can routinely and inexpensively pack billions of transistor gates onto a single small integrated circuit, and with all but flawless manufacturing quality control.

• What fundamental features or constraints reside in both the earliest ENIAC and similar vacuum tube computers, and even in their earlier electronic computer precursors, and in the most powerful supercomputers of today that can surpass petaflop performance speeds (meaning their being able to perform over one thousand million million, or 10^15, floating point operations per second), that would lead to fundamental commonalities in the business models and the economics of how they are made?
• What fundamental features or constraints underlie at least most of the various and diverse computer languages and programming paradigms that have been developed for and used on these increasingly diverse and powerful machines, that would lead to fundamental commonalities in the business models and the economics of how they are used?

I would begin approaching questions of economics and efficiencies here, for these widely diverse systems, by offering an at least brief and admittedly selective answer to those questions – noting that I will explicitly refer back to what I offer here when considering artificial intelligence programming and certainly its modern and still-developing manifestations, and when discussing quantum computing too. My response to this set of questions in this context will, in fact, serve as a baseline starting point for discussing new issues and challenges that Points 7 and 8 and their emerging technologies raise and will continue to raise.

Computer circuit design, and in fact overall computer design, has traditionally been largely fixed, at least within the design and construction of any given device or system, for computers developed according to the technologies and the assumptions of all of these first six steps. Circuit design and architecture, for example, have always been explicitly developed and built towards as fixed product development goals, finalized before any given hardware that employs them would be built and used. And even in the most actively mutable Point 6: language-oriented programming scenario per se as currently envisioned, a customized programming language and any supportive operating system and other software that would be deployed and used with it, is essentially always going to have been finalized and settled for form and functionality prior to its use in addressing any given computational or other information processing tasks that it would be developed and used for.

I am, of course, discounting hardware customization here, which usually consists of swapping different versions of also-predesigned and finalized modules into a standardized hardware framework. Yes, it has been possible to add in faster central processing unit chips out of a suite of different price and different capability offerings that would fit into some single, same name-branded computer design. And the same type and level of flexibility, and of purchaser and user choice, has allowed for standardized, step-wise increased amounts of RAM memory and cache memory, and of hard drive and other forms of non-volatile storage. And considering this from a computer systems perspective, this has meant buyers and users having the option of incorporating alternative peripherals, specialty chips and even entire add-on circuit boards for specialized functions such as improved graphics and more, and certainly since the advent of the personal computer. But these add-on and upgrade features and options only add expanded functionalities to what are essentially pre-established computer designs with, for them, settled overall architectures. The basic circuitry of these computers has never had the capability of ontological change based simply upon how it is used. And that change: a true capability for programming structure-level machine learning and adaptation, is going to become among the expected and even automatically assumed features of Point 7 and Point 8 systems.

My focus in this series is on the software side of this, even if it can be all but impossible to cleanly and clearly discuss that without simultaneously considering its hardware implementation context. So I stress here that computer languages, and the code that they convey in specific software instances, have been fundamentally set by the programmers who developed them, in ways and to degrees similar to any hardware lock-in that is built in at least by the assembly floor: a priori to their being loaded into any given hardware platform and executed there, and certainly prior to their being actively used – and even in the more dynamically mutable scenarios envisioned in a Point 6 context.

This fundamental underlying standardization led to and sustained a series of fundamental assumptions and practices that have collectively gone a long way toward shaping both these systems themselves and their underlying economics and cost-efficiencies:

This has led to the development and implementation of a standardized, paradigmatic approach that runs from initial concept to product design and refinement, prototyping as appropriate, and alpha and beta testing, certainly in any realized software context and its implementations, and with every step of this following what have become well understood and expected cost- and returns-based financial models. I am not saying here that problems cannot or do not arise, as specific New is and has been developed in this type of system. What I am saying here is that there is a business process and accounting-level microeconomic system that has arisen and that can be followed according to scalable, understandable risk management and due diligence terms. And a big part of that stability comes from the simple fact that when a business, hardware or software in focus, has developed a new product and brings it to market, it is bringing what amounts to a set and settled, finalized product to market, for which it can calculate all costs paid and all returns expected to be received.
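To make that last point more concrete with a deliberately minimal sketch: once a product is set and settled at release, its economics can be rolled up into standard, calculable figures of merit such as net present value. The cost and revenue numbers below are hypothetical and purely illustrative; the point is only that for a finalized product, such a calculation can be carried out at all.

```python
def net_present_value(cash_flows, discount_rate):
    """Standard NPV: discount each period's net cash flow back to the present.
    cash_flows[0] is the up-front (typically negative) development cost."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical figures for a finalized software product: a fixed development
# cost followed by a forecastable stream of license revenue, discounted at 10%.
flows = [-500_000, 150_000, 200_000, 220_000, 180_000, 120_000]
print(round(net_present_value(flows, 0.10), 2))
```

The question raised later in this posting is what happens to this kind of calculation when the product itself keeps changing after release.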

The basic steps and performance benchmarks that arise in these business and economic models and process flows, and that are followed in developing these products, can and do vary in detail of course, and certainly when considering computer technologies as drawn from different steps in my first five points, above. And the complexity of those steps has gone up, of necessity, as the computer systems under consideration have become more complex. But at least categorically, the basic types of business and supportive due diligence steps that I refer to here have become more settled, even in the face of the ongoing technological change they would manage.

But looking ahead for a moment, consider one step in that process flow, and from a software perspective. What happens to beta testing (as touched upon above) when any given computer system: any given artificial intelligence agent, can and most likely will continue to change and evolve on its own, starting the instant that it is first turned on and running, and with every one of a perhaps very large number of at least initially identical agents coming to individuate in its own potentially unique ontological development direction? How would this type of change impact upon economic modeling: microeconomic or macroeconomic, that might have to be determined for this type of New?

I am going to continue this discussion in my next installment to this series, with at least a brief discussion of the balancing that has to be built for, when managing both the in-house business model and financial management requirements of the companies that produce these technologies, and the pressures that they face if they are to be and remain effective when operating in demanding competitive markets. Then after that I will at least begin to discuss Point 7: artificial intelligence programming, with a goal of addressing the types of questions that I have begun raising here as to business process and its economics. And in anticipation of that, I will add more such questions to complement the basic one that I have just started that line of discussion with. Then I will turn to and similarly address the above Point 8: quantum computing and its complex of issues.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 19

Posted in business and convergent technologies, strategy and planning by Timothy Platt on June 29, 2019

This is my 19th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-18.)

I initially offered a set of to-address topics points in Part 16 that I have been discussing since then. And I repeat that list here as I continue doing so, noting in advance that I have in effect been simultaneously addressing its first three points up to here, due to their overlaps:

1. What does and does not qualify as a true innovation, and to whom in this overall set of contexts?
2. And where, at least in general terms could this New be expected to engender resistance and push-back, and of a type that would not simply fit categorically into the initial resistance patterns expected from a more standard cross-demographic innovation acceptance diffusion curve and its acceptance and resistance patterns?
3. How in fact would explicit push-back against globalization per se even be identified, and certainly in any real case-in-point, detail-of-impact example, where the co-occurrence of a pattern of acceptance and resistance that might arise from that might concurrently appear in combination with the types and distributions of acceptance and resistance that would be expected from marketplace adherence to a more standard innovation acceptance diffusion curve? To clarify the need to address this issue here, and the complexities of actually doing so in any specific-instance case, I note that the more genuinely disruptively new an innovation is, the larger the percentage of potential marketplace participants would be that would be expected to hold off on accepting it and at least for significant periods of time, and with their failure to buy and use it lasting throughout their latency-to-accept periods. But that failure to buy in on the part of these involved demographics and their members does not in and of itself indicate anything as to their underlying motivation for doing so, longer term and as they become more individually comfortable with its particular form of New. Their marketplace activity, or rather their lack of it would qualify more as noise in this system, and certainly when anything like a real-time analysis is attempted to determine underlying causal mechanisms in the market activity and marketplace behavior in play. As such, any meaningful analysis and understanding of the dynamics of the marketplace in this can become highly reactive and after the fact, and particularly for those truly disruptive innovations that would only be expected to appeal at first to just a small percentage of early and pioneer adaptor marketplace participants.
4. This leads to a core question of who drives resistance to globalization and its open markets, and how. And I will address that in social networking terms.
5. And it leads to a second, equally important question here too: how would globalization resistance-based failure to buy in on innovation peak and then drop off if it were tracked along an innovation disruptiveness scale over time?

My primary goal for this series installment is to focus on Points 3 and 4 of that list, but once again, given the overlaps implicit in this set of issues as a whole, I will also return to Point 1 again to add further to my discussion of that as well.

To more formally outline where this discussion is headed, I ended Part 18 with this anticipatory note as to what would follow, at least beginning here:

• I am going to continue this discussion in a next series installment where I will make use of an approach to social network taxonomy and social networking strategy that explicitly addresses the issues of who networks with and communicates with whom, and that also can be used to map out patterns of influence as well: important to both the basic innovation diffusion model and to understanding the forces and the dynamics of global flattening and wrinkling too. In anticipation of that discussion to come, that is where issues of agendas enter this narrative. Then after discussing that, I will explicitly turn to the above-repeated Point 3: a complex of issues that has been hanging over this entire discussion since I first offered the above topics list at the end of Part 16 of this series. And I will address a very closely related Point 4 and its issues too, as already briefly touched upon here.

I will in fact address all of that in what follows in this series. But to set the stage for that, I step back to add another layer of nuance if not outright complexity to the questions and possible answers of what innovation is in this context, and to whom. And I will very specifically use the points that I will make there, in what follows in addressing the issues of the above-added bullet point.

• As a first point that I raise here, a change might rise to the level of significance needed for it to be seen as an innovation because “at least someone might realistically be expected to see it as creating at least some new source or level of value or challenge, however small, at least by their standards and criteria” (with “…value or challenge” offered with their alternative valences because such change can be positively or negatively perceived.)
• But it is an oversimplifying mistake to only consider such changes individually and as if they only arose in a context-free vacuum. More specifically, a sufficient number of individually small changes: small and even more cosmetic-in-nature innovations, all arriving in a short period of time and all affecting the same individual or group, can have as great an impact upon them and their thinking as a single, stand-alone disruptively new innovation would have on them. And when those people are confronted with what they would come to see as an ongoing and even essentially ceaseless flood of New, and even if that just arrives as an accumulating mass of individually small forms of new, they can come to feel all but overwhelmed by it. Context can in fact be essentially everything here.
• Timing and cumulative impact are important here, and disruptive is in the eye of the beholder.

Let’s consider those points, at least to start, for how they impact upon and even shape the standard innovation acceptance diffusion curve as it empirically arises when studying the emergence and spread of acceptance of New, starting with pioneer and early adaptors and continuing on through late and last adaptors (a simple quantitative sketch of such a curve follows the bullet points below).

• Pioneer and early adaptors are both more tolerant of and accepting of new and the disruptively new, and more tolerant of and accepting of a faster pace of their arrival.
• Or to put this slightly differently, late and last adaptors can be as bothered by too rapid a pace of new and of change, as they would be bothered by pressure to adapt to and use any particular new innovation too quickly to be comfortable for them, and even just any new more minor one (more minor as viewed by others.)
• Just considering earlier adaptors again here, these details of acceptance or caution, or of acceptance and even outright rejection and resistance stem from how more new-tolerant and new-accepting individuals and the demographics they represent, have a higher novelty threshold for even defining a change in their own thinking as actually being significant enough to qualify as being more than just cosmetic. And they have a similarly higher threshold level for qualifying a change that they do see as being a significant innovation, as being a disruptively new and novel one too.
• What is seen as smaller to the earlier adaptors represented in an innovation acceptance diffusion curve, is essentially certain to appear much larger for later adaptors and for whatever individual innovative changes, or combinations and flows of them that might be considered.
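As a minimal quantitative stand-in for that standard innovation acceptance diffusion curve, the sketch below uses the Bass diffusion model, in which adoption is driven partly by independent-minded pioneers and partly by imitation of those who have already adopted. The parameter values are illustrative defaults only, not empirical estimates for any particular innovation or market.

```python
def bass_adoption(p=0.03, q=0.38, steps=50, dt=1.0):
    """Euler-step simulation of the Bass diffusion model:
    dF/dt = (p + q*F) * (1 - F), where F is the cumulative fraction of the
    market that has adopted, p the 'innovation' coefficient (pioneer and early
    adaptors acting on their own) and q the 'imitation' coefficient (later
    adaptors responding to those who have already adopted)."""
    F, curve = 0.0, []
    for _ in range(steps):
        rate = (p + q * F) * (1.0 - F)  # adoptions per unit time
        F += rate * dt
        curve.append((F, rate))
    return curve

# The adoption rate rises and then falls (the familiar bell-shaped curve),
# while cumulative adoption F traces out the matching S-curve.
for step, (F, rate) in enumerate(bass_adoption()):
    if step % 10 == 0:
        print(step, round(F, 3), round(rate, 4))
```

Raising q, or p, compresses that timeline: one crude way of picturing a population that is more tolerant of the new and of a faster pace of its arrival, as described in the list above.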

And with that continuation of my Point 1 (and by extension, Point 2) discussions, I turn to consider how a flow of new innovations would impact upon a global flattening versus global wrinkling dynamic.

While most if not all of the basic points that I have just raised here in my standard innovation acceptance curve discussion apply here too, at least insofar as its details can be mapped to corresponding features there too, there is one very significant difference that emerges in the flattening versus wrinkling context:

• Push-back and resistance, as exemplified by late and last adaptors in the standard acceptance curve pattern, are driven by questions such as “how would I use this?” or “why would I need this?”, as would arise at a more individual level. But resistance to acceptance as it arises in a wrinkling context is driven more by “what would adapting this New challenge and even destroy in my society and its culture?” It is more a response to perceived societal-level threat.

This is a challenge that is defined at, and that plays out at a higher, more societally based organizational level than would apply to a standard innovation acceptance curve context. And this brings me very specifically and directly to the heart of Point 4 of the above list and the question of who drives resistance to globalization and its open markets, and how. And I begin addressing that by noting a fundamentally important point of distinction:

• Both acceptance of change and resistance to it, in a global flattening and wrinkling context, can and do arise from two sometimes competing, sometimes aligned directions. They can arise from the bottom up and from the cumulative influence of local individuals, or they can arise from the top down.
• And to clarify what I mean there: local and bottom-up, and (perhaps) more centralized-in-source and top-down, can each mean a combination of two things too, as to the nature of the voice and the power of influence involved. This can mean societally shaped and society-shaping political authority and message, coming from or going to voices of power and influence there. Or this can mean the power of social media and of social networking reach. And that is where I will cite and discuss social networking taxonomies and networking reach and networking strategies as a part of this discussion.

I am going to continue this discussion in a next series installment where I will focus explicitly on the issues and challenges of even mapping out and understanding global flattening and its reactive counterpoint: global wrinkling. And as a final thought for here that I offer in anticipation of that line of discussion to come, I at least briefly summarize a core point that I made earlier here, regarding innovation and responses to it per se:

• Change and innovation per se can be disruptive, and for both the perceived positives and negatives that that can bring with it. And when a sufficiently high percentage of an overall population primarily sees positives, or at worst a neutral impact there, flattening is at least going to be more possible, and certainly as a “natural” path forward. But if a tipping point level of perceived negative impact arises, then the acceptance or resistance pressures that arise will favor wrinkling, and that will become a societally significant force and a significant part of the overall voice for those peoples too.

I will discuss the Who of this, both for who leads and for who follows, in the next installment to this narrative. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 10

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on June 20, 2019

This is the 10th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-9. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have been discussing artificial intelligence agents from a variety of perspectives in this series, turning in Part 9 for example, to at least briefly begin a discussion of neural network and related systems architecture approaches to hardware and software development in that arena. And my goal in that has been to present a consistently, logically organized discussion of a very large and still largely amorphous complex of issues, that in their simplest case implementations are coming to be more fully understood, but that are still open and largely undefined when moving significantly beyond that.

We now have a fairly good idea as to what artificial specialized intelligence is, certainly when it can be encapsulated into rigorously defined starter algorithms with tightly constrained self-learning capabilities added in, that would primarily just help an agent to “random walk” its way towards greater efficiency in carrying out its specifically defined end-goal tasks. But in a fundamental sense, we are still in the position of standing as if at the edge of an abyss of yet-to-be-acquired knowledge and insight, when it comes to dealing with genuinely open-ended tasks such as natural conversation, and the development of artificial agents that can master them.
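As a minimal sketch of what that kind of tightly constrained self-learning can look like, consider a random-walk, hill-climbing style parameter tuner: the task and its success measure stay fixed, and “learning” is limited to randomly perturbing a handful of tuning parameters and keeping only the changes that improve measured performance. The scoring function and parameter values here are hypothetical placeholders, not any particular production system.

```python
import random

def random_walk_tune(score, params, step=0.1, iterations=1000):
    """Tightly constrained self-learning: perturb a fixed set of tuning
    parameters at random and keep a change only when it improves the agent's
    score on its narrowly defined task. The task itself never changes; only
    execution efficiency does."""
    best = list(params)
    best_score = score(best)
    for _ in range(iterations):
        candidate = [p + random.uniform(-step, step) for p in best]
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep improvements only
            best, best_score = candidate, candidate_score
    return best, best_score

# Hypothetical toy task: the ideal parameter values are (0.7, 0.3), and the
# score is just the negative squared distance from that ideal.
if __name__ == "__main__":
    target = (0.7, 0.3)
    score = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))
    print(random_walk_tune(score, [0.0, 0.0]))
```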

I begin this posting by reiterating a basic paradigmatic approach that I have offered in other information technology development contexts, and both in this blog and as a consultant, that explicitly applies here too.

• Start with the problem that you seek to solve, and not with the tools that you might use in accomplishing that.

Start with the here-artificial intelligence problem itself that you seek to effectively solve or resolve: the information management and processing task that you seek to accomplish, and plan and design and develop from there. In a standard if perhaps at least somewhat complex-problem context and as a simple case ideal, this means developing an algorithm that would encapsulate and solve a specific, clearly stated problem in detail, and then asking necessary questions as they arise at the software level and then the hardware level, to see what would be needed to carry that out. And ultimately that will mean selecting, designing and building at the hardware level for data storage and accessibility, and for the raw computational power requirements and related capabilities that would be needed for this work. And at the software level this would mean selecting programming languages and related information encoding resources that are capable of encoding the algorithm in place and that can manage its requisite data flows as it is carried out. And it means actually encoding all of the functionalities required in that algorithm, in those software tools, so as to actually perform the task that it specifies. (Here, I presume in how I state this, as a simplest case scenario, a problem that can in fact be algorithmically defined up-front and without any need for machine learning and algorithm adjustment as better and best solutions are iteratively found for the problem at hand. And I arbitrarily represent the work to be done there as fitting into what might in fact be a very large and complex “single overall task”, even if carrying it out might lead to very different outcomes depending on what decision points have to be included and addressed there, and certainly at a software level. I will, of course, set aside these and several other similar more-simplistic assumptions as this overall narrative proceeds and as I consider the possibilities of more complex artificial intelligence challenges. But I offer this simplified developmental model approach here, as an initial starting point for that further discussion to come.)

• Stepping back to consider the design and development approach that I have just offered here, if just in a simplest application form, this basic task-first and hardware detail-last approach can be applied to essentially any task, problem or challenge that I might address here in this series. I present that point of judgment on my part as an axiomatic given, even when ontological and even evolutionary development, as self-organized and carried out by and within the artificial agents carrying out this work, is added into the basic design capabilities developed. There, How details might change but overall Towards What goals would not necessarily do so, unless the overall problem to be addressed is changed or replaced.

So I start with the basic problem-to-software-to-hardware progression that I began this line of discussion with, and continue building from there with it, though with a twist, and certainly for artificial intelligence oriented tasks that are of necessity going to be less settled up-front as to the precise algorithms that would ultimately be required. I step back from the more firmly stated a priori assumptions explicitly outlined above in my simpler case problem solving scenario: assumptions that I would continue to pursue as-is in more standard computational or data processing task-to-software-to-hardware systems analyses, and certainly where off the shelf resources would not suffice, to add another level of detail there.

• And more specifically here, I argue a case for building flexibility into these overall systems and with the particular requirements that that adds to the above development approach.
• And I argue a case for designing and developing and building overall systems – and explicitly conceived artificial intelligence agents in particular, with an awareness of a need for such flexibility in scale and in design from their initial task specifications step in this development process, and with more and more room for adjustment and systems growth added in, and for self-adjustment within these systems added in for each successive development step as carried out from there too.

I focused in Part 9 on hardware, and on neural network designs and their architecture, at least as might be viewed from a higher conceptual perspective. And I then began this posting by positing in effect, that starting with the hardware and its considerations might be compared to looking through a telescope – but backwards. And I now say that a prospective awareness of increasing resource needs, with next systems-development steps is essential. And that understanding needs to enter into any systems development effort as envisioned here, and from the dawn of any Day 1 in developing and building towards it. This flexibility and its requisite scope and scale change requirements, I add, cannot necessarily be anticipated in advance of its actually being needed, and at any software or hardware level, and certainly not in any detail. So I write here of what might be called flexible flexibility: flexibility that itself can be adjusted and updated for type and scope as changing needs and new forms of need arise. So on the face of things, this sounds like I have now reversed course here and that I am arguing a case for hardware then software then problem as an orienting direction of focused consideration, or at the very least hardware plus software plus problem as a simultaneously addressed challenge. There is in fact an element of truth to that final assertion, but I am still primarily just adding flexibility and capacity to change directions of development as needed, into what is still basically a same settled paradigmatic approach. Ultimately, the underlying problem to be resolved has to take center stage and the lead here.

And with that all noted and for purposes of narrative continuity from earlier installments to this series if nothing else, I add that I ended Part 9 by raising a tripartite point of artificial intelligence task characterizing distinction, that I will at least begin to flesh out and discuss here:

• Fully specified systems goals (e.g. chess rules as touched upon in Part 8 for an at least somewhat complex example, but with fully specified rules defining a win and a loss, etc. for it.),
• Open-ended systems goals (e.g. natural conversational ability as more widely discussed in this series and certainly in its more recent installments with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

My goal here as noted in Part 9, is to at least lay a more detailed foundation for focusing on that third, gray area middle-ground task category in what follows, and I will do so. But to explain why I would focus on that and to put this step in this overall series narrative into clearer perspective, I will at least start with the first two, as benchmarking points of comparison. And I begin that with fully specified systems and with the very definite areas of information processing flexibility that they still can require – and with the artificial agent chess grand master problem.

• Chess is a rigorously definable game as considered at an algorithm level. All games as properly defined involve two players. All involve functionally identical sets of game pieces and both for numbers and types of pieces that those players would start out with. All chess games are played on a completely standardized game board with opposing side pieces positioned to start in a single standard accepted pattern. And opposing players take turns moving pieces on that playing board, with rules in place that would determine who is to make the first move, going first in any given game played.
• The chess pieces that are employed in this all have specific rules associated with them as to how they can be moved on the board, and for how pieces can be captured and removed by an opposing player. A player whose move places the opposing king under direct attack declares “check.” Winning by definition in chess means checkmate: placing the opposing player’s king under attack in such a way that no legal move can remove that threat, at which point the winning player declares “checkmate”; the king itself is never actually captured. And if a situation arises in which both players realize that a checkmate cannot be achieved in a finite number of moves from how the pieces that remain in play are laid out on the board, preventing either player from being able to trap their opponent’s king and win, a draw is called.
• I have simplified this description for a few of the rules possibilities that enter into this game when correctly played, omitting a variety of at least circumstantially important details. But bottom line, the basic How of playing chess is fully and readily amenable to being specified within a single highly precise algorithm that can be in place and in use a priori to the actual play of any given chess game.
• Similar algorithmically defined specificity could be offered in explaining a much simpler game: tic-tac-toe, with its simple and limited range of moves and move combinations. Chess rises to the level of complexity and the level of interest that would qualify it for consideration here because of the combinatorial explosion in the number of possible distinct games of chess that can be played, each carrying out an at least somewhat distinct combination of moves when compared with any other of the overall set. All games start out the same with all pieces identically positioned. After the first set of moves, with each player moving once, there are 400 distinct board setups possible, with 20 possible white piece moves and 20 possible black piece moves. After two rounds of moves there are 197,742 possible games, and after three, that number expands out further to over 121 million (see the sketch just after this list). This range of possibilities arises at the very beginning of any actual game, with the numbers of moves and of board layouts continuing to expand from there, and with the overall number of move sequences growing to exceed and even vastly exceed the number of board positions possible, as differing move patterns can converge on the same realized board layouts. And this is where strategy and tactics enter chess, and in ways that would be meaningless for a game such as tic-tac-toe. And this is where the drive to develop progressively more effective chess playing algorithm-driven artificial agents enters this too, where those algorithms would just begin with the set rules of chess and extend out from there to include tactical and strategic chess playing capabilities as well – so agents employing them can play strongly competitive games and not just by-the-rules, “correct” games.
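The combinatorial explosion just described can be counted directly. The sketch below performs a depth-limited count of legal move sequences from the starting position (what chess programmers call a perft count); it assumes the third-party python-chess package for move generation, and its exact totals may differ slightly from the popularly quoted figures above depending on the counting conventions used.

```python
import chess  # assumes the third-party python-chess package is installed

def perft(board: chess.Board, depth: int) -> int:
    """Count distinct legal move sequences of the given length in plies
    (half-moves), starting from the given position."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += perft(board, depth - 1)
        board.pop()
    return total

if __name__ == "__main__":
    board = chess.Board()  # standard starting position
    for plies in range(1, 5):
        print(plies, perft(board, plies))
    # The counts grow from 20 after one half-move to hundreds of thousands
    # after two full rounds of moves, and keep exploding from there.
```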

So when I offer fully specified systems goals as a task category above, I assume as an implicit part of its definition that the problems that it would include all involve enough complexity so as to prove interesting, and that they be challenging to implement and certainly if best possible execution of the specific instance implementations involved in them (e.g. of the specific chess games played) is important. And with that noted I stress that for all of this complexity, the game itself is constrainable within a single and unequivocal rules-based algorithm, and even when effective strategically and tactically developed game play would be included.

That last point is going to prove important and certainly as a point of comparison when considering both open-ended systems goals and their so-defined tasks, and partly specified systems goals and their tasks. And with the above offered I turn to the second basic benchmark that I would address here: open-ended systems goals. And I will continue my discussion of natural conversation in that regard.

I begin with what might be considered simple, scale-of-needed-activity based complexity: the number of chess pieces on a board, and on one side of it in particular, when compared to the number of words as commonly used in wide-ranging, real-world natural conversation. Players start out with 16 chess pieces each, drawn from just six functionally distinct piece types; if you turn to resources such as the Oxford English Dictionary to benchmark English for its scale as a widely used language, it lists some 175,000 currently used words and another roughly 50,000 that are listed as obsolete but that are still at least occasionally used too. And this leaves out a great many specialized terms that would only arise when conversing about very specific and generally very technical issues. Assuming that an average person might in fact only actively use a fraction of this: let’s assume some 20,000 words on a more ongoing basis, that still adds tremendous new levels of complexity to any task that would involve manipulating and using them.

• Simple complexity of the type addressed there, can perhaps best be seen as an extraneous complication here. The basic algorithm-level processing of a larger scale piece-in-play set, as found in active vocabularies would not necessarily be fundamentally affected by that increase in scale beyond a requirement for better and more actively engaged sorting and filtering and related software as what would most probably be more ancillary support functions. And most of the additional workload that all of this would bring with it would be carried out by scaling up the hardware and related infrastructure that would carry out the conversational tasks involved and certainly if a normal rate of conversational give and take is going to be required.
• Qualitatively distinctive, emergently new requirements for actually specifying and carrying out natural conversation would come from a very different direction, that I would refer to here as emergent complexity. And that arises in the fundamental nature of the goal to be achieved itself.

Let’s think about conversation and the actual real-world conversations that we ourselves enter into and every day. Many are simple and direct and focus on the sharing of specific information between or concerning involved parties. “Remember to pick up a loaf of bread and some organic lettuce at the store, on the way home today.” “Will do, … but I may be a little late today because I have a meeting that might run late at work that I can’t get out of. I’ll let you know if it looks like I am going to be really delayed from that. Bread and lettuce are on the way so that shouldn’t add anything to any delays there.”

But even there, and even with a brief and apparently focused conversation like this, a lot of what was said and even more of what was meant and implied, depended on what might be a rich and complex background story, and with added complexities there coming from both of the two people speaking. And they might individually be hearing and thinking through this conversation in terms of at least somewhat differing background stories at that. What, for example, does “… be a little late today” mean? Is the second speaker’s boss, or whoever is calling this meeting known for doing this, and disruptively so for the end of workday schedules of all involved? Does “a little” here mean an actual just-brief delay or could this mean everyone in the room feeling stressed for being held late for so long, and with that simply adding to an ongoing pattern? The first half of this conversation was about getting more bread and lettuce, but the second half of it, while acknowledging that and agreeing to it, was in fact very different and much more open-ended for its potential implied side-messages. And this was in fact a very simple and very brief conversation.

Chess pieces can make very specific and easily characterized moves that fit into specific patterns and types of them. Words as used in natural conversations cannot be so simply characterized, and conversations – and even short and simple ones, often fit into larger ongoing contexts, and into contexts that different participants or observers might see very differently. And this is true even if none of the words involved have multiple possible dictionary definition meanings, if none of them can be readily or routinely used in slang or other non-standard ways, and if none of them have matching homophones – if there is no confusion as to precisely which word was being used, because two or more that differ by definition sound the same (e.g. knight or night, and to, two or too.)

And this, for all of its added complexities, does not even begin to address issues of euphemism, or agendas that a speaker might have with all of the implicit content and context that would bring to any conversation, or any of a wide range of other possible issues. It does not even address the issues of accent and its accurate comprehension. But more to the point, people can and do converse about any and every one of a seemingly entirely open-ended range of topics and issues, and certainly when the more specific details addressed are considered. Just consider the conversation that would take place if the shopper of the above-cited chat were to arrive home with a nice jar of mayonnaise and some carrots instead of bread and lettuce, after assuring that they knew what was needed and saying they would pick it up at the store. Did I raise slang here, or dialect differences? No, and adding them in here still does not fully address the special combinatorial explosions of meaning at least potentially expressed and at least potentially understood that actual wide-ranging, open-ended natural conversation brings with it.

And all of this brings me back to the point that I finished my above-offered discussion of chess with, and winning games in it as an example of a fully specified systems goal. Either one of the two players in a game of chess wins and the other loses, or they find themselves having to declare a draw for being unable to reach a specifically, clearly, rules-defined win/lose outcome. So barring draws that might call for another try that would at least potentially reach a win and loss, all chess games if completed lead to a single defined outcome. But there is no single conversational outcome that would meaningfully apply to all situations and contexts, all conversing participants and all natural conversation – unless you were to attempt to arrive at some overall principle that would of necessity be so vague and general as to be devoid of any real value. Open-ended systems goals, as the name implies, are open-ended. And a big part of developing and carrying through a realistic sounding natural conversational capability in an artificial agent has to be that of keeping it in focus in a way that is both meaningful and acceptable to all involved parties, where that would mean knowing when a conversation should be concluded and how, and in a way that would not lead to confusion or worse.

And this leads me – finally, to my gray area category: partly specified systems goals and the tasks and the task performing agents that would carry them out and on a specific instance by specific instance basis and in general. My goal for what is to follow now, is to start out by more fully considering my self-driving car example, then turning to consider partly specified systems goals and the agents that would carry out tasks related to them, in general. And I begin that by making note of a crucially important detail here:

• Partly specified systems goals can be seen as gateway and transitional challenges, and while solving them as a practical matter can be important in and of itself,
• Achieving effective problem resolutions there can perhaps best be seen as a best practices route for developing the tools and technologies that would be needed for better resolving open-ended systems challenges too.

Focusing on the learning curve potential of these challenge goals, think of the collective range of problems that would fit into this mid-range task set as taking the overall form of a swimming pool with a shallow and a deep end, and where deep can become profoundly so. On the shallow end of this continuum-of-challenge degree, partly specified systems merge into the perhaps more challenging end of fully specified systems goals and their designated tasks. So as a starting point, let’s address low-end, or shallow end partly specified artificial intelligence challenges. At the deeper end of this continuum, it would become difficult to fully determine if a proposed problem should best be considered partly specified or open-ended in nature, and it might in fact start out designated one way to evolve into the other.

I am going to continue this narrative in my next installment to this series, starting with a more detailed discussion of partly specified systems goals and their agents as might be exemplified by my self-driving car problem/example. I will begin with a focus on that particular case in point challenge and will continue from there to consider these gray area goals and their resolution in more general terms, and both in their own right and as evolutionary benchmark and validation steps that would lead to carrying out those more challenging open-ended tasks.

In anticipation of that line of discussion to come and as an opening orienting note for what is to come in Part 11 of this series, I note a basic assumption that is axiomatically built into the basic standard understanding of what an algorithm is: that all step by step process flows as carried out in it would ultimately lead to, or at least towards, some specific, at least conceptually defined goal. (I add “towards” there to include algorithms that, for example, seek to calculate the value of the number pi (π) to an arbitrarily large number of significant digits, where complete task resolution is by definition going to be impossible. And for a second type of ongoing example, consider an agent that would manage and maintain environmental conditions such as atmospheric temperature and quality within set limits in the face of complex ongoing perturbing forces, where “achieve and done” cannot apply.)
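As a minimal illustration of an algorithm that only ever works towards its goal rather than reaching it, the generator below produces successively better approximations of pi from the Leibniz series. It converges, but there is no finite step at which the task is “done”; a similar open-ended character applies, in a different way, to the environmental control example just given.

```python
def pi_approximations():
    """Yield ever-better approximations of pi via the Leibniz series
    (pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...). Each value is closer to pi than the
    last, but the computation has no natural end point."""
    total = 0.0
    sign = 1.0
    denominator = 1.0
    while True:
        total += sign / denominator
        sign = -sign
        denominator += 2.0
        yield 4.0 * total

# Sample the converging estimate at a few checkpoints.
for i, estimate in enumerate(pi_approximations(), start=1):
    if i in (10, 1_000, 100_000):
        print(i, estimate)
    if i >= 100_000:
        break
```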

Fully specified systems goals can in fact often be encapsulated within endpoint determinable algorithms that meet the definitional requirements of that axiomatic assumption. Open-ended goals as discussed here would arguably not always fit any single algorithm in that way. There, ongoing benchmarking and performance metrics that fit into agreed to parameters might provide a best alternative to any final goals specification as presumed there.

In a natural conversation, this might mean for example, people engaged in a conversation not finding themselves confused as to how their chat seems to have become derailed from a loss of focus on what is actually supposedly being discussed. But even that type and level of understanding can be complex, as perhaps illustrated with my “shopping plus” conversational example of above.

So I will turn to consider middle ground, partly specified systems goals and agents that might carry out tasks that would realize them in my next installment here. And after completing that line of discussion, at least for purposes of this series, I will turn back to reconsider open-ended goals and their agents again, and more from a perspective of general principles.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 7

This is my 7th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-6.) And this is also my third posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4, Part 5 and Part 6.)

I focused in Part 6 of this narrative on a briefly stated succession of development possibilities that all relate to how an overall next generation internet will take shape: one that is largely and even primarily driven, at least for the proportion of functional activity carried out in it, by artificial intelligence agents and devices: increasingly an internet of things, and of smart artifactual agents among them. And I began that with a continuation of a line of discussion that I began in earlier installments to this series, centering on four possible development scenarios as initially offered by David Rose in his book:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

I added something of a fifth such scenario, or rather a caveat-based acknowledgment of the unexpected in how this type of overall development will take shape, in Part 6. And I ended that posting with a somewhat cryptic anticipatory note as to what I would offer here in continuation of its line of discussion, which I repeat now for smoother continuity of narrative:

• I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

The core issues that I would continue addressing here as follow-through from that installment, fall into two categories. I am going to start this posting by adding another scenario to the set that I began presenting here, as initially set forth by Rose with his first four. And I will use that new scenario to make note of and explicitly consider an unstated assumption that was built into all of the other artificial intelligence proliferation and interconnection scenarios that I have offered here so far. And then, and with that next step alternative in mind, I will reconsider some of the more general issues that I raised in Part 6, further developing them too.

I begin all of this with a systems development scenario that I would refer to as the piecewise distributed model.

• The piecewise distributed model for how artificial intelligence might arise as a significant factor in the overall connectiverse that I wrote of in Part 6 is based on current understanding of how human intelligence arises in the brain as an emergent property, or rather set of them, from the combined and coordinated activity of simpler components that individually do not display anything like intelligence per se, and certainly not artificial general intelligence.

It is all about how neural systems-based intelligence arises from lower level, unintelligent components in the brain, and how that might be mimicked, or recapitulated if you will, through structurally and functionally analogous systems and their interconnections in artifactual systems. And I begin to more fully characterize this possibility by more explicitly considering scale, and to be more precise, the scale of the range of reach for the simpler components that might be brought into such higher level functioning totalities. And I begin that with a simple if perhaps somewhat odd sounding question:

• What is the effective functional radius of the human brain given the processing complexities and the numbers and distributions of nodes in the brain that are brought into play in carrying out a “higher level” brain activity, the speed of neural signal transmission in that brain as a parametric value in calculations here, and an at least order of magnitude assumption as to the timing latency to conscious awareness of a solution arrived at for a brain activity task at hand, from its initiation to its conclusion?

And with that as a baseline, I will consider the online and connected alternative that a piecewise distributed model artificial general intelligence, or even just a higher level but still somewhat specialized artificial intelligence would have to function within.

Let’s begin this side by side comparative analysis with what might be considered a normative adult human brain, and with a readily and replicably arrived at benchmark number: myelinated neurons in the brain send signals at a rate of approximately 120 meters per second (one meter being roughly three and a quarter feet). And for simplicity’s sake I will benchmark the latency from the starting point of a cognitively complex task to its consciously perceived completion at one tenth of a second. This would yield an effective functional radius for that brain of some 12 meters (about 40 feet), or less – assuming, as a functionally simplest extreme case for that outer range value, that the only activity required to carry out this task was the uninterrupted transmission of a neural impulse signal along a myelinated neuron for that period of time to achieve “task completion.”

An actual human brain is of course a lot more compact than that, and a lot more structurally complex too, with specialized functional nodes and complex arrays of structurally and functionally duplicated elements organized as parallel processors within them. And that structural and functional complexity, along with the time needed to access stored information from memory and add new information back into it as part of that task activity, slows actual processing down. An average adult human brain is some 15 centimeters, or six inches, front to back. So using that as an outside-value metric, with a radius of some three inches (roughly 7.5 centimeters) based on it, the structural and functional complexities in the brain that would be called upon to carry out that tenth of a second task effectively reduce its functional radius some 160-fold from the speedy transmission-only outer value that I began this brief analysis with.

Think of that as a speed and efficiency tradeoff imposed on the human brain’s overall possible maximum rate of activity by its basic structural and functional architecture and by the nature and functional parameters of its component parts, at least for tasks that fit the overall scale and complexity of my tenth of a second benchmark example. Now let’s consider the more artifactual example of the computer and network technology that would enter into my above-cited piecewise distributed model scenario, or in fact into essentially any network distributed alternative to it. And I begin that by noting that the speed of light in a vacuum is approximately 300 million meters per second, and that electrical signals propagate along copper conductors at a large fraction of that value – the electrons themselves drift slowly, but the signal can move at anywhere from roughly two thirds to nearly all of light speed, depending on the transmission medium.

I will assume for purposes of this discussion that the photons in the wireless and fiber optic connected aspects of such a system, and the electrical signals in its more strictly electronic components, all travel on average at roughly that same round number maximum speed, as any discrepancy from it in what is actually achieved would be immaterial here, given my rounding off and other approximations. Then, using the same one tenth of a second task timing parameter of my above-offered brain functioning analysis, an outer limit transmission-only value for this system and its physical dimensions would suggest a maximum radius of some 30,000 kilometers, encompassing all of the Earth and all of near-Earth orbit space and more. Even sped up a thousandfold, to one tenth of a millisecond per task as might better fit an electronic computer context, that transmission-only radius would still be some 30 kilometers: city scale rather than building scale. Here, in counterpart to my simplest case neural signal transmission as a means of carrying out the above brain task, I assume that its artificial intelligence counterpart might be completed simply by the transmission of a single pulse of electrons or photons, without any processing step delays required.
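To make that back-of-envelope arithmetic easy to check and to vary, here is a minimal sketch in Python. The parameter values (signal speeds, task latencies, an assumed brain radius of roughly 7.5 centimeters) are the rounded, order-of-magnitude assumptions used above, not measured constants.

```python
# Back-of-envelope comparison of "effective functional radius":
# how far a signal can travel, one way, within a benchmark task latency.
# All values are rounded, order-of-magnitude assumptions from the text above.

NEURAL_SIGNAL_SPEED = 120.0          # m/s, myelinated neuron (approximate)
LIGHT_SPEED = 3.0e8                  # m/s, vacuum; wired/optical links assumed close to this
BRAIN_TASK_LATENCY = 0.1             # s, benchmark "cognitively complex task"
FAST_NETWORK_TASK_LATENCY = 1.0e-4   # s, the same task sped up a thousandfold

def effective_radius(signal_speed_m_per_s: float, task_latency_s: float) -> float:
    """Outer-bound radius if the task were nothing but one uninterrupted transmission."""
    return signal_speed_m_per_s * task_latency_s

brain_outer_bound = effective_radius(NEURAL_SIGNAL_SPEED, BRAIN_TASK_LATENCY)    # ~12 m
actual_brain_radius = 0.075                                                      # m, ~3 inches
network_same_latency = effective_radius(LIGHT_SPEED, BRAIN_TASK_LATENCY)         # ~30,000 km
network_fast_latency = effective_radius(LIGHT_SPEED, FAST_NETWORK_TASK_LATENCY)  # ~30 km

print(f"Brain, transmission-only outer bound: {brain_outer_bound:.0f} m")
print(f"Reduction imposed by real brain structure: ~{brain_outer_bound / actual_brain_radius:.0f}-fold")
print(f"Network radius at the same 0.1 s latency: {network_same_latency / 1000:.0f} km")
print(f"Network radius at 0.1 ms latency: {network_fast_latency / 1000:.0f} km")
```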

Individual neurons can fire up to some 200 times per second, depending on the type of function carried out, and an average neuron in the brain connects to on the order of 1,000 other neurons through complex dendritic branching and the synaptic connections it leads to, with some neurons connecting to as many as 10,000 others and more. I assume that artificial networks can grow to that level of interconnectivity and beyond, with levels of nodal connectivity brought into any potentially emergent artificial intelligence activity arising in such a system that match and exceed the brain’s for complexity too. That, at least, is likely to prove true for what would with time become the myriad organizing and managing nodes that would arise in at least functionally defined areas of this overall system, and that would explicitly take on middle and higher level SCADA-like command and control roles there.

This would slow down the actual signal transmission rate achievable, and reduce the maximum physical size of the connected network space involved here too, though probably not as severely as is observed in the brain. Indeed, even today’s low cost, readily available laptop computers can carry out on the order of a billion operations per second, and that number continues to grow as Moore’s law continues to hold. So if we assume “slow” and lower priority tasks as well as more normatively fast ones for the artificial intelligence network systems that I write of here, it is hard to imagine restrictions realistically arising that would limit such systems to volumes of space smaller than the Earth as a whole, certainly when, of necessity, higher speed functions and activities could be carried out by much more local subsystems, closer to where their outputs would be needed.

And to increase the expected efficiencies of these systems, brain and artificial network alike, effectively re-expanding their functional radii again, I repeat and invoke a term and a design approach that I used in passing above: parallel processing. That approach, together with the inclusion of specialized subtask-performing nodes, breaks a complex task up into smaller, faster-to-complete subtasks whose individual outputs can be combined into a completed overall solution. For many types of tasks this can speed up overall completion by orders of magnitude, allowing more of them to be carried out within any given nominally expected benchmark time for a “single” task completion. It also, of course, allows for faster completion of larger tasks within that type of performance measuring timeframe window.
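The text above does not name a specific formula, but a standard way to put rough numbers on this kind of decomposition gain is Amdahl’s law, which bounds overall speedup by the fraction of a task that can actually be parallelized. A minimal sketch, with purely illustrative parameter values:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when a fraction of a task parallelizes perfectly
    across `workers` nodes and the remainder stays serial (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Illustrative only: a task that is 90% decomposable into independent subtasks.
for workers in (1, 10, 100, 1000):
    print(f"{workers:>5} workers -> at most {amdahl_speedup(0.9, workers):.1f}x faster")
```

The point of the bound is the same one made above: decomposition and specialized subtask nodes can buy large timing gains, but only up to the limit set by whatever portion of the task still has to run serially.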

• What I have done here, at least in significant part, is to lay out an overall maximum connected systems reach that could be applied to the completion of tasks at hand, in either a human brain or an artificial intelligence-including network. And the limitations of accessible volume of space there correspondingly set an outer limit on the maximum number of functionally connected nodes that might be available, given that they all of necessity have space filling volumes greater than zero.
• When you factor in the average maximum processing speed of any information processing nodes or elements included there, this in turn sets an overall maximum, outer limit value on the number of processing steps that could be applied in such a system to complete a task of any given duration, within such a physical volume of activity (see the sketch just below).
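Those two bullet points can be combined into a single crude upper bound: the reachable volume caps the number of nodes that can participate, and node count times per-node speed times the task’s time budget caps the total processing steps available. The node volume and per-node rate used in this sketch are placeholders for illustration, not estimates of any real hardware.

```python
import math

def max_processing_steps(radius_m: float,
                         node_volume_m3: float,
                         ops_per_node_per_s: float,
                         task_time_s: float) -> float:
    """Crude outer bound: (nodes that fit in the reachable sphere) x (operations each
    can contribute within the task's time budget). Ignores wiring, latency, cooling, etc."""
    reachable_volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    max_nodes = reachable_volume / node_volume_m3
    return max_nodes * ops_per_node_per_s * task_time_s

# Placeholder assumptions: one-liter nodes, a billion operations per second each,
# a 0.1 s task, and the ~30,000 km transmission-only radius from above.
bound = max_processing_steps(radius_m=3.0e7,
                             node_volume_m3=0.001,
                             ops_per_node_per_s=1.0e9,
                             task_time_s=0.1)
print(f"Outer bound on processing steps for the task: {bound:.2e}")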

What are the general principles beyond that set of observations that I would return to here, given this sixth scenario? I begin addressing that question by noting a basic assumption that is built into the first five scenarios as offered in this series, and certainly into the first four of them: that artificial intelligence per se resides as a significant whole in specific individual nodes. I fully expect that this will prove true in a wide range of realized contexts, as that possibility is already becoming part of our basic reality now with the emergence and proliferation of artificial specialized intelligence agents. But as this posting’s sixth scenario points out, that assumption is not the only one that might be realized. And in fact it will probably only account for part of what will come to be seen as artificial intelligence as it arises in these overall systems.

The second additional assumption that I would note here is that of scale and complexity, and how fundamentally different types of implementation solutions might arise, and might even be possible, strictly because of how they can be made to work with overall physical systems limitations such as the fixed and finite speed of light.

Looking beyond my simplified examples as outlined here, brain-based and artificial alike: what is the maximum effective radius of a wired AI network that would, as a distributed system, come to display true artificial general intelligence? How big a space would have to be tapped into for its included nodes to match a presumed benchmark human brain performance for threshold cognitive awareness and functionality? And how big a volume of functionally connected nodal elements could be brought to bear for this? Those are open questions, as are their corresponding scale parameter questions as to “natural” general intelligence per se. I would end this posting by simply noting that disruptively novel technologies and technology implementations that significantly advance the development of artificial intelligence per se, and of artificial general intelligence in particular, are likely to improve both the quality and functionality of the individual nodes involved and their capacity to synergistically network together, regardless of which overall development scenarios are followed.

I am going to continue this discussion in a next series installment where I will step back from considering specific implementation option scenarios, to consider overall artificial intelligence systems as a whole. I began addressing that higher level perspective and its issues here, when using the scenario offered in this posting to discuss overall available resource limitations that might be brought to bear on a networked task, within given time to completion restrictions. But that is only one way to parameterize this type of challenge, and in ways that might become technologically locked in and limited from that, or allowed to remain more open to novelty – at least in principle.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Addendum note: The above presumptive end note added at the formal conclusion of this posting aside, I actually conclude this installment with a brief update to one of the evolutionary development-oriented examples that I in effect began this series with. I wrote in Part 2 of this series, of a biological evolution example of what can be considered an early technology lock-in, or rather a naturally occurring analog of one: of an ancient biochemical pathway that is found in all cellular life on this planet: the pentose shunt.

I add a still more ancient biological systems lock-in example here that in fact had its origins in the very start of life itself as we know it, on this planet. And for purposes of this example, it does not even matter whether the earliest genetic material employed in the earliest life forms was DNA or RNA in nature for how it stored and transmitted genetic information from generation to generation and for how it used such information in its life functions within individual organisms. This is an example that would effectively predate that overall nucleic acid distinction as it involves the basic, original determination of precisely which basic building blocks would go into the construction and information carrying capabilities of either of them.

All living organisms on Earth, with a few viral exceptions, employ DNA as their basic archival genetic material, and use RNA as an intermediary in accessing and making use of the information stored there. Those exceptional viruses use RNA for their own archival genetic information storage, and rely on the DNA replicating and RNA fabricating machinery of the host cells they live in to reproduce. And the genetic information included in these systems, certainly at the DNA level, is all encoded in patterns of molecules called nucleotides that are laid out linearly along the DNA molecule. Life on Earth uses combinations of four possible nucleotide bases for this coding and decoding: adenine (A), thymine (T), guanine (G) and cytosine (C). And it was presumed, at least initially, that the specific chemistry of these four possibilities made them somehow uniquely suited to this task.

More recently it has been found that there are other possibilities that can be synthesized and inserted into DNA-like molecules, with the same basic structure and chemistry, that can also carry and convey this type of genetic information and stably, reliably so (see for example:

Hachimoji DNA and RNA: a genetic system with eight building blocks.)

And it is already clear that this represents only a small subset of the information coding possibilities that might have arisen as alternatives, before the A/T/G/C genetic code became locked in, in practice, in life on Earth.
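One way to quantify what a larger nucleotide alphabet would have bought, information-theoretically, is bits of storage per position: the base-2 logarithm of the alphabet size. A minimal sketch follows; the eight-letter case corresponds to the hachimoji system cited above, but the calculation itself is just standard information theory, not anything taken from that paper.

```python
import math

def bits_per_position(alphabet_size: int) -> float:
    """Maximum information content of one position in a linear code."""
    return math.log2(alphabet_size)

for name, size in (("A/T/G/C (standard DNA)", 4), ("hachimoji-style alphabet", 8)):
    print(f"{name}: {bits_per_position(size):.1f} bits per position")
```

The same locked-in-versus-possible gap shows up here as elsewhere in this series: a code that works well enough, early, tends to persist even after demonstrably richer alternatives are shown to exist.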

If I could draw one relevant conclusion from this still unfolding story to share here, it is that if you want to find technology lock-ins, or their naturally occurring counterparts, look to your most closely and automatically held developmental assumptions, certainly when you cannot rigorously justify them from first principles. Then question the scope of relevance and generality of those first principles themselves, for hidden assumptions that they carry within them.

Innovation, disruptive innovation and market volatility 47: innovative business development and the tools that drive it 17

Posted in business and convergent technologies, macroeconomics by Timothy Platt on June 5, 2019

This is my 47th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-46.)

I have been discussing innovation discovery, and the realization of value from it through applied development, throughout most of this series, as one of its primary topics. And I have sought to take that line of discussion at least somewhat out of the abstract since Part 43, through an at least selective discussion and analysis of a specific case in point example of how this can and does take place:

• The development of a new synthetic polymer-based outdoor paint type as an innovation example, as developed by one organization (a research lab at a university), that would be purchased or licensed by a second organization for profitable development: a large paint manufacturer.

I focused for the most part on the innovation acquiring business that is participating in this, from Part 43 through Part 45, and turned to more specifically consider the innovation creating organization and its functioning in Part 46. And at the end of that installment and with this and subsequent entries to this series in mind, I said that I would continue from there by:

• Completing at least for purposes of this series, my discussion of this university research lab and outside for-profit manufacturer scenario.
• And I added that I will then step back to at least briefly consider this basic two organization model in more general terms, where for example, the innovating organization there might in fact be another for profit business too – including one that is larger than the acquiring business and that is in effect unloading patents that do not fit into their own needs planning.
• I will also specifically raise and challenge an assumption that I built into Part 46 and its narrative, regarding the value of scale to the innovation acquiring business in being able to successfully compete in this type of innovation-as-product market.

And I begin addressing this topics list with the first of those bullet points and with my real world, but nevertheless very specialized university research lab-based example. And I do so by noting a point of detail in what I have offered here up to now, that anyone who has worked in a university-based research lab has probably noted, and whether that has meant their working there as a graduate student or post-doc or as a lead investigator faculty member. Up to here, I have discussed both the innovation acquiring, and the innovation providing organizations in these technology transfer transactions as if they were both simple monolithic entities. Nothing could be further from the truth, and the often competing dynamics that play out within these organizations are crucially important as a matter of practice, to everything that I would write of here.

I begin this next phase of this discussion with the university side to that, and with the question of how grant money that was competitively won from governmental and other funding sources is actually allocated. For a basic reference on this, see Understanding Cost Allocation and Indirect Cost Rates.

Research scientists who run laboratories at universities as faculty members there, write and submit grant proposals as to what they would do if funded. And they support their grant funding requests by outlining the history of the research that they would carry out, both to illustrate how their work would fit into ongoing research and discovery efforts in their field as a whole, and to make the case for the importance of the specific research problems that they seek funding to work on, as those fit into that larger context. As part of that, they argue the importance of what they seek to find or validate, and they seek to justify their doing this work in particular, and their receiving funding support for it, based on their already extant research efforts and the published record of their prior and ongoing relevant research as can be found in peer reviewed journals.

They do the work needed to successfully argue the case for receiving new grant funding for this research, and they carry out the voluminous and time consuming work needed to document that in grant applications. And they are generally the ones who have to find the funds needed to actually apply for this too (e.g. filing fees where they apply, and grant application related office expenses.) Then the universities that they work for demand and receive a percentage off the top of the overall funds actually received, which goes towards what are called indirect costs (and related administrative costs, etc. – though many funding agencies that will pay these types of expenses under one name will not do so under another, so labels are important here.)

My above-cited reference link points to a web page that focuses, in its working example, on a non-research grant-in-aid funding request and on how monies received there would be allocated. But it does offer basic definitions of some of the key terms involved, which tend to be similar regardless of what such outside-sourced grant funding would be applied to, and certainly where payment to the institution as a whole is permitted under the range of labels offered.

And with that noted as to nomenclatural detail, the question of how funds received would be allocated can set up some interesting situations: for example, where the university that a productive research lab is part of requires a larger percentage of the overall funds received for meeting its indirect costs than the funding agency offering those monies will allow. For a university-sourced reference to this, and to put those funding requirements in a numerically scaled perspective, see Indirect Costs Explanation as can be found, as of this writing, on the website of Northern Michigan University. Their approach and their particular fee scale are in fact representative of what is found in most research-supportive colleges and universities, certainly in the United States. And they, to be specific but still fairly representative here, apply an indirect cost rate of 36.5% as their basic standard.

The Bill and Melinda Gates Foundation, to cite a grant funding source that objects to that level of indirect costs expenditures, limits permitted indirect cost rates to 10% – a difference that can be hard to reconcile and certainly as a matter of anything like “rounding off.” And that leads to an interesting challenge. No university would willingly turn away outside grant money and certainly from a prestigious source. But if they agree to accept such funds under terms that significantly undercut their usual indirect costs funding guidelines, do they run the risk of facing challenge from other funding sources that might have accepted their rates in the past but that no longer see them as acceptable? Exceptions there and particularly when they are large-discrepancy exceptions, can challenge the legitimacy of higher indirect cost rates in place and in the eyes of other potential funding agencies too.
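To put rough numbers on that gap, here is a minimal sketch of how a fixed pool of grant money divides between direct research costs and institutional indirect costs under the two rates mentioned above. It assumes the common convention of charging the indirect rate on the direct cost base; actual policies (modified total direct cost bases, exclusions, negotiated rates) vary considerably, and the award size is purely hypothetical, so treat this as illustrative arithmetic only.

```python
def split_award(total_award: float, indirect_rate: float) -> tuple[float, float]:
    """Split a fixed total award into direct and indirect portions, assuming the
    indirect rate is charged on the direct cost base (indirect = direct * rate)."""
    direct = total_award / (1.0 + indirect_rate)
    indirect = total_award - direct
    return direct, indirect

TOTAL = 500_000.0  # hypothetical award size, for illustration only
for label, rate in (("36.5% institutional rate", 0.365), ("10% funder cap", 0.10)):
    direct, indirect = split_award(TOTAL, rate)
    print(f"{label}: ${direct:,.0f} to the research itself, ${indirect:,.0f} to the institution")
```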

• Funding agencies support research and have strong incentives to see as many pennies as possible of each dollar they send out go directly towards funding that research. Excessive, or even just perceived-excessive, diversion of granted funds to more general institutional support very specifically challenges that.

Universities that have and use the type of innovation development office that I wrote of in Part 46 – for managing the sale or licensing of innovation developed on campus to outside businesses and other organizations – generally fund those offices from monies gained from research grants-in-aid received, as payment made in support of allowed indirect expenses. And this makes sense, as they are university-wide, research lab and research program-supportive facilities. But indirect expenses also cover utilities and janitorial services and even what amounts to rent for the lab space used – among other expenses.

To round out this example, I add that one of the most important parts of any grant application is its budget documentation, which spells out as precisely as possible what the monies received will be spent on. That includes equipment, supplies and related expenses that would go directly towards fulfilling the grant application’s goals, but it also includes salaries for postdoctoral fellows who might work at the lab, usually at least part of the salary of the lead investigator faculty member who runs it, and the salaries of any technicians employed there. And I freely admit that I wrote the above with at least something of a bias towards the research lab side of this dynamic, at least in part because I find the one third or more cut taken by the universities involved for their own use to be excessive. That sentiment is reinforced by the simple fact that very little of the money coming into such a university as a result of innovation sales or licensing agreements actually goes back to the specific labs that came up with those innovations in the first place, certainly as earmarked shares of funds so received.

• Bottom line: even this brief and admittedly very simplified accounting of the funding dynamics of this example, as they take place within a research-supportive university and between that institution and its research labs and lead investigators, should be enough to indicate that these are not simple monolithic institutions, and that they are not free of internal conflict over funding and its allocation.

Innovation acquiring businesses are at least as complex, certainly insofar as different stakeholders and stakeholder groups view the cost-benefit dynamics of these agreements differently. And that just begins with the questions of which lines on their overall budget would pay for this innovation acquisition, in competition with what other funding needs supported there, and which budget lines (and functional areas of the business) would receive the income benefits of any positive returns on these investments.

• Neither of these institutions can realistically be considered a simple monolith in nature, or be thought of as if everyone involved in these agreements and possible agreements were always going to be in complete and automatic agreement as to their terms.
• And these complex dynamics, as they take place at least behind the scenes for both sides to any such technology transfer negotiations, shape the terms of the agreements discussed and entered into, and help determine who even gets a seat at those negotiating tables in the first place.

I am going to continue this discussion, as outlined towards the top of this posting by considering a wider range of organizational types and business models here, and for both the innovation source and the innovation acquisition sides to these transfer agreements. And as part of that, I will at least begin to discuss the third to-address bullet pointed topic that I listed there, and organizational scale as it enters into this complex of issues. Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 7: the New Deal and infrastructure development as recovery 1

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on May 27, 2019

This is my 8th posting (and 7th numbered part, its Part 4.5 supplement counted separately) to a series on infrastructure, as work on it and possible work on it are variously prioritized and carried through upon, or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-6, with its supplemental posting Part 4.5.)

I have already briefly discussed four infrastructure development case study examples in this narrative. And my goal here is to at least begin a similarly brief and selective discussion of a fifth such case study: a large scale infrastructure development (or redevelopment) initiative drawn from relatively recent history:

• US president Franklin Delano Roosevelt’s New Deal and related efforts to help bring his country out of a then actively unfolding Great Depression.

Then after discussing that, I will turn to consider a sixth such example, with an at least brief and selective accounting of how:

• A newly formed Soviet Union sought to move from being a backward agrarian society, or rather a disparate ethnically diverse collection of them that had all existed under a single monarchical rule, to become a single more monolithic modern industrial state.

And I will of necessity find myself referring back to two other infrastructure development examples that I have already cited and explicitly discussed here, continuing them for purposes of comparison to these two: the Marshall and Molotov Plans as can be found in Part 4 and Part 5 of this series respectively.

My overall goal for this series as a whole has been to successively explore a progression of such current and historic large scale infrastructure initiatives, with the aim of distilling out of them a set of guiding principles that might offer planning and execution value when moving forward on other such programs. I continue developing this narrative as a foundation building outline, based upon real world experience at infrastructure development, with that goal in mind. This type of more organized consideration, I add, will prove to be of crucial importance as we all face, societally, an imperative to more effectively address large-scale and even globe-spanning challenges that only begin with global warming and its already visible impact: challenges that history will all but certainly come to see as defining a large part of this 21st century experience and the legacy we will leave from it. Ad hoc and one-off approaches cannot offer lasting value there, so regardless of where a more organized understanding of large scale infrastructure development comes from, such an understanding is needed.

I write a lot in this blog about the new and unexpected and about the disruptively new as that would call for entirely new understandings of challenges and opportunities faced, and of best ways to address them. There were a variety of issues that led to the Great Depression that people in elected office and that people with training in economics and related fields thought they understood, from their apparent similarity to at least seemingly parallel events from their past. And they were correct on some of those points, even as they were hopelessly wrong about others and with great consequence for how they sought to correct for them.

I begin this posting’s main line of discussion here by citing two such factors: one effectively more understood at least in principle if not in practice, and the other much less so and certainly when effective action could have had a positive impact:

• Pre-Great Depression bank holding companies and their acceptance of their own stock shares as preferred collateral when making loans, creating vast liquidity and reserves gaps in their systems in times of stress (and also see Banking Panics of 1930-31.)
• And tariff barriers with their effect of killing overseas markets that American industry depended upon for its very survival, and particularly at a time when local and national markets were drying up for lack of available liquidity. See in particular the Hawley–Smoot Tariff Act of 1930 for an orienting discussion as to how these business challenging and economy breaking barriers were erected.

Both of these developments happened: the first as a leading cause for what became the Great Depression for its bad banking practices consequences, and the second as an ill-considered response to a deepening recession, that made it the Great Depression for its duration and for its depths of severity.

I just identified the first of those two contributing factors: bad banking practices, as having been understood in principle if not in practice, and the second of them: tariff barrier protectionism as being less fully understood of the two. But in all fairness, faulty assumptions and fundamental misunderstandings contributed to both, and both for their occurring and for contributing to their consequences. And as I will briefly note in what follows here, these and other related failures in understanding and action fed upon each other and over a course of overlapping timeframes. And those toxic synergies made the Great Depression into the systemic collapse that it turned into. But let’s start with the banking system and its challenges.

Even the leadership of the bank holding companies that failed during the Great Depression knew and understood that they should not make unsecured loans, and certainly not as a matter of routine practice. They required collateral from the businesses and other entities that took out loans from them, as assurance that if those loans defaulted they could recover at least a significant amount of their invested capital. Their mistake, or at least the fundamental one that I would cite here – the point where their understanding of this type of due diligence in principle broke with their understanding of it in practice – came from what in retrospect can only be called hubris. They saw themselves as absolutely secure and stable, and as a result saw themselves as organizations immune from market-based stress or volatility. So when their customers saw the economy they had invested in begin to collapse and started going, in increasing numbers, to the banks that held their savings, those banks were unprepared. And as customers started lining up at their doors to take their money out, more and more of them panicked and joined the lines, until the banks’ cash holdings were so depleted that they could no longer function.

I cited in my above bullet point how these holding companies had accepted shares in their own business as even preferred collateral for the loans they made. As their own systems began to fail, one member bank at a time, they found themselves having in effect made vast numbers of loans without requiring any real collateral at all – or at least any that still held outside-validated value. So a bank in such a holding company might fail with others in that system finding themselves in distress but still remaining recoverable, at least in principle. But the house of cards nature of how they had all managed their businesses – in-house, with their own stock as their collateral valuation standard – meant that those banks folded too. Their customers knew there was nothing real backing the loans they had entered into, and they knew that the banks they had entrusted their savings to were now unreliable for that. And there was no Federal Deposit Insurance Corporation (FDIC) or similar outside supportive agency in place, at least yet, that might have quelled the panic and stopped the cascades of failures that quickly brought down entire bank holding companies and all of their member banks.

The dynamics that led to these banking system collapses were understood in principle and in the abstract, even if no one in those bank holding companies seemed capable of turning that abstract understanding into prudent due diligence and risk management practice. The second of the pair of examples that I cite here was, and I have to add still is, a lot less well understood – certainly as the current presidential administration in the United States plays with the fire of trade wars and tariff barriers even as I write this.

Ultimately, there is no such thing as a national economy. Nations operate in larger contexts and they have financially for as long as there has been international trade, and with the roots to that going back to before the dawn of recorded history. And ultimately, economies are liquid reserve and cash flow determined, and with trade shaping and driving all of that. Stop trade and you stop the flow of money and you challenge and even kill the economies involved. And that holds when considering the overall and even global economy as a whole, or when considering the portions thereof that are based in some particular country or region, but that depend for their existence on the ongoing functioning health of the larger economy that they are part of.

As just noted, even today there are people in positions of real power and authority who do not understand this. And a lot fewer seem to have understood this in 1930 when the Hawley–Smoot Tariff Act was put into law and into practice, and the American economy and other national and regional components of the overall global economy began to really collapse.

I at least alluded to timing overlap and toxic synergies among these systemic challenges in the above text, and explicitly make note of that point of detail here. The stock market of the Roaring Twenties was impulse driven for the most part, with seemingly everyone investing in it looking for quick riches. Investing in it was comparable to walking a tightrope without a safety net. But the structural instabilities and the lack of anything like regulatory oversight to limit, if not prevent, the bad business and investment practices that characterized this early stock market constituted only one of several systemic sources of failure that all came to collapse together, with a pattern-setting start developing over a period of about one year, from October 1929 into the Fall of 1930.

• To explicitly connect two points of detail just touched upon here: stocks and other investment instruments that might be counted as representing saved and invested wealth were not necessarily considered reliable sources of collateral by individual banks or their parent holding companies, given their volatility and their potential for it. But these banking institutions thought that they at least knew and could trust their own paper: their own traded stock shares, as reliable collateral when making loans. So many if not most came to preferentially keep this “in-house,” asking for proof of ownership of their own stock and using it to back the loans signed for through them.
• My point here is that all of the factors and considerations that I make note of here, and a lot more that also contributed to the Great Depression, interacted and reinforced each other for their toxic potential and for their eventual consequences.

The American economy, and other national and regional economies in general, all took a real beating in late 1929, with the US stock market collapse serving as only one measure of that. But these markets were actually beginning what would have seemed a long slow path back to stability and recovery. Recessions end, and more recent ones of note in particular had generally begun to turn significantly towards recovery within about 15 months of their visibly impactful starts. As one admittedly limited and skewed measure in support of that claim for what might have happened here too, consider the collapse of the New York City based stock market itself, as tracked by the already relied upon and trusted Dow Jones Industrial Average as a measure of stock market performance and of public confidence in it. The stock market crash began on Thursday, October 24, 1929, with nervous investors trading a single day record 12.9 million shares and with many more trying to sell than to buy. The overall market valuation, as measured by the Dow Jones average, began falling precipitously.

The weekend that followed did not have a positive impact in giving investors time to reconsider and settle their nerves. Tuesday, October 29 – Black Tuesday – came, and by the end of that trading day the overall Dow Jones average had fallen close to another 12%. The stock market was effectively in freefall as investors panicked, losing their faith in the value of their investments and any sense of safety in their life savings insofar as those were invested in stock shares. (For further background information see, for example, this piece on the Wall Street Crash of 1929.) But even the stock market was beginning to recover by mid-March of 1930, as investors began buying stock shares again, looking for undervalued ones and real bargains to be gained from them. Then, with the Hawley–Smoot Tariff Act signed into law that June, international trade effectively died as nation after nation began raising their own retaliatory and presumed self-protective trade barriers in response to what was going on around them and to the actions of their erstwhile trading partners. And that, to my thinking, is when the actual Great Depression began. That is when what might have been just another significant recession became The Great Depression.

That overall systemic economic collapse did not take place all at once; it took a number of months for the real impact of this decision and action on the part of the US Congress to be realized. So for example, US based banks began to be stressed from panicked customer withdrawals and from larger numbers of their customers no longer being able to pay back loans, in late 1929. But many of the larger bank holding companies that failed from this onslaught of challenges, did so in the Fall of 1930 and over the next two years as the strangling of international trade really took hold with so many of their business customers – so many employers facing bankruptcy from loss of business and incoming revenue. And they continued to fail for years to come and at an incredibly impactful rate.

My goal for this posting has been to outline something of the challenge that Franklin Delano Roosevelt faced when first taking office as the 32nd president of the United States. I will continue its narrative thread in a next installment to this series where I will, among other things, quantify the bank failures and their timeline to put what I am writing here into clearer perspective. And I will then discuss Roosevelt’s New Deal as a massive recovery effort, and one that had within it a massive infrastructure redevelopment effort too. Then, after completing that narrative thread, I will continue in this series as a whole, as briefly noted above. But in anticipation of the next installment to come, I step back from the details to reframe this discussion in a second, and here crucially important, way. What Roosevelt faced, underlying and infusing all of the toxic details of policy and practice on so many fronts, was a complete failure of an underlying world view as to how businesses and economies run, and of how and why they fail when they do. Roosevelt faced the problem of a broken puzzle with its pieces having to be reconnected. But at least as importantly, he faced the problem of a broken and failed puzzle whose business as usual assumptions and understandings could no longer be made to apply. He had to find a way to fundamentally reshape and redefine the overall image in that puzzle too. And that is the challenge that I will write of in my next installment to this series.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 15

This is my 15th installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-14.)

My goal for this installment is to reframe what I have been offering up to here in this series, and certainly in its most recent postings up to now. And I begin that by offering a very specific and historically validated point of observation (that I admit up-front will have a faulty assumption built into it, that I will raise and discuss later on in this posting):

• It can be easily and cogently argued that the single greatest mistake that the civilian and military leadership of a nation can make, when confronting and preparing for possible future challenge and conflict,
• Is to simply think along familiar lines with that leading to their acting according to what is already comfortable and known – thinking through and preparing to fight a next war as if it would only be a repeat of the last one that their nation faced
• And no matter how long ago that happened, and regardless of whatever geopolitical change and technological advancement might have taken place since then.
• Strategic and tactical doctrine and the logistics and other “behind the lines” support systems that would enable them, all come to be set as if in stone: and in stone that was created the last time around in the crucible of their last conflict. And this has been the basic, default pattern followed by most and throughout history.
• This extended cautionary note applies in a more conventional military context where anticipatory preparation for proactively addressing threats is attempted, and when reactive responses to those threats are found necessary too. But the points raised here are just as cogently relevant in a cyber-conflict context too, or in a mixed cyber plus conventional context (as Russia has so recently deployed in the Ukraine as its leadership has sought to restore something of its old Soviet era protective buffer zone around the motherland if nothing else.)
• History shows more leaders and more nations that in retrospect have been unprepared for what is to come, than it does those who were ready to more actively consider and prepare for emerging new threats and new challenges, and in new ways.
• Think of the above as representing in outline, a strategic doctrine that is based on what should be more of a widening of the range and scope of what is considered possible, and the range and scope of how new possibilities might have to be addressed, but that by its very nature cannot be up to that task.

To take that out of the abstract, consider a very real world example of how the challenges I have just discussed, arise and play out.

• World War I with its reliance on pre-mechanized tactics and strategies, with its mass frontal assault charges and its horse cavalry among other “trusted traditions,” and with its reliance on trench warfare to set and hold front lines and territory in all of that:
• Traditions that had led to horrific loss of life even in less technologically enabled previous wars such as the United States Civil War,
• Arguably led to millions of what should have been completely avoidable casualties as foot soldiers faced walls of machine gun fire and tanks, aircraft bombardment and aerial machine gun attack, and even poison gas attacks, as they sought to prevail through long-outmoded military practice.

And to stress a key point that I have been addressing here, I would argue that cyber attacks, both as standalone initiatives and as elements in more complex offensives, hold potential for causing massive harm, and to all sides involved in them too. And proactively seeking to understand and prepare for what might come next there can be just as important as comparable preparation is in a more conventional warfare-oriented context. Think World War I if nothing else there, as a cautionary note example of the possible consequences, in a cyber-theatre of conflict, of making the mistakes outlined in the above bullet pointed preparation and response doctrine.

Looking back at this series as developed up to here, and through its more recent installments in particular, I freely admit that I have been offering what might be an appearance of taking a more reactive and hindsight-oriented perspective here. And the possibility of confusion there on the part of a reader begins in its Part 1 from the event-specific wording of its title, and with the apparent focus on a single now historical event that that conveys. But my actual overall intention here is in fact more forward thinking and proactively so, than retrospective and historical-narrative in nature.

That noted, I have taken an at least somewhat historical approach to what I have written in this series up to here and even as I have offered a few more general thoughts and considerations here too. But from this point on I will offer an explicitly dual narrative:

• My plan is to initially offer a “what has happened”, historically framed outline of at least a key set of factors and considerations that have led us to our current situation. That will largely follow the pattern that I have been pursuing here and certainly as I have discussed Russia as a source of working examples in all of this.
• Then I will offer a more open perspective that is grounded in that example but not constrained by it, for how we might better prepare for the new and disruptively novel and proactively so where possible, but with a better reactive response where that proves necessary too.

My goal in that will not be to second guess the decisions and actions of others, back in 2016 and leading up to it, or from then until now as of this writing. Nor is it to offer suggestions as to how to better prepare for a next 2016-style cyber-attack per se, and certainly not as a repeat of an old conflict somehow writ new. To clarify that with a specific, currently in-the-news example: Russian operatives, and others effectively operating under their control, exploited Facebook leading up to the 2016 US presidential and congressional elections, using armies of robo-Facebook members: fake accounts for posting false content that were set up to appear as coming from real people, and from real American citizens in particular. Facebook has supposedly tightened its systems to better identify and delete such fake, manipulative accounts and their online disinformation campaigns. And with that noted, I cite:

In Ukraine, Russia Tests a New Facebook Tactic in Election Tampering.

Yes, this new approach (as somewhat belatedly noted above) is an arms race advancement meant to circumvent the changes made at Facebook as they have attempted to limit or prevent how their platform can be used as a weaponized capability by Russia and others as part of concerted cyber attacks. No, I am not writing here of simply evolutionary next step work-arounds or similar more predictable advances in cyber-weapon capabilities of this type, when writing of the need to move beyond simply preparing for a next conflict as if it would just be a variation on the last one fought.

That noted, I add that yes, I do expect that the social media based disinformation campaigns will be repeated as an ongoing means of cyber-attack, and both in old and in new forms. But fundamentally new threats will be developed and deployed too that will not fit the patterns of anything that has come before. So my goal here is to take what might be learnable lessons from history: recent history and current events included, combined with a consideration of changes that have taken place in what can be done in advancing conflicts, and in trends in what is now emerging as new possibilities there, to at least briefly consider next possible conflicts and next possible contexts that they might have to play out in. My goal for this series as a whole is to discuss Proactive as a process and even as a strategic doctrine, and in a way that at least hopefully would positively contribute to the national security dialog and offer a measure of value moving forward in general.

With all of that noted as a reframing of my recent installments to this series at the very least, I turn back to its Part 14 and how I ended it, and with a goal of continuing its background history narrative as what might be considered to be a step one analysis.

I wrote in Part 13 and again in Part 14 of Russia’s past as a source of the fears and concerns, that drive and shape that nation’s basic approaches as to how it deals with other peoples and other nations. And I wrote in that, of how basic axiomatic assumptions that Russia and its peoples and government have derived from that history, shape their basic geopolitical policy and their military doctrine for now and moving forward too. Then at the end of Part 14 I said that I would continue its narrative here by discussing Vladimir Putin and his story. And I added that that is where I will of necessity also discuss the 45th president of the United States: Donald Trump and his relationship with Russia’s leadership in general and with Putin in particular. And in anticipation of this dual narrative to come, that will mean my discussing Russia’s cyber-attacks and the 2016 US presidential election, among other events. Then, as just promised here, I will step back to consider more general patterns and possible transferable insights.

Then I will turn to consider China and North Korea and their current cyber-policies and practices. And I will also discuss current and evolving cyber-policies and practices as they are taking shape in the United States as well, as shaped by its war on terror among other motivating considerations. I will use these case studies to flesh out the proactive paradigm that I would at least begin to outline here as a goal of this series. And I will use those real world examples at least in part to in effect reality check that paradigmatic approach too, as I preliminarily offer it here.

And with that, I turn back to the very start of this posting, and to the basic orienting text that I begin all of the installments to this series with. I have consistently begun these postings by citing “cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations.” To point out an obvious example, I have made note of the internet of things 15 times now in this way, but I have yet to discuss it at all in the lines of discussion that I have been offering up to here. I do not even mention artificial intelligence-driven cyber-weaponization in that opening paragraph text, where that is in fact one of the largest and most complex sources of new threats ever faced, at any time in history. Its very range and scope, and its rate of disruptively new advancement, will probably make it the single largest categorical source of weaponized threat that we will all face in this 21st century, and certainly as a source of weaponized capabilities that will be actively used. I will discuss these and related threat sources when considering the new and unexpected, and as I elaborate on the above noted proactive doctrine that I offer here.

And as a final thought here, I turn back to my bullet pointed first take outline of that possible proactive doctrine, to identify and address the faulty assumption that I said I would build into it, and certainly as stated above. And I do so by adding one more bullet point to that initial list of them:

• I have just presented and discussed a failure to consider the New when preparing for possible future conflict, and its consequences. And I prefaced that advisory note by acknowledging that I had built a massive blind spot into what I would offer there. I have written all of the above strictly in terms of nations and their leaders and decision makers. That might be valid in a more conventional military sense, but it is not and cannot be considered so in anything like a cyber-conflict setting, whether for thinking about or dealing with aggressors, or for thinking about and protecting, or remediating harm to, victims. Yes, nations can and do develop, deploy and use cyber-weapon capabilities, and other nations can be and have been their intended targets. But this is an approach that smaller organizations and even just skilled and dedicated individuals can acquire, if not develop on their own. And it is a capability that can be used against targets of any scale of organization, from individuals on up. That can mean attacks against specific journalists, or political enemies, or competing business executives or employees. It can mean attacks against organizations of any size or type, including nonprofits and political parties, small or large businesses and more. And on a larger than national scale, this can mean explicit attacks against international alliances such as the European Union. Remember, Russian operatives have been credited with sowing disinformation in Great Britain leading up to its initial Brexit referendum vote, to try to break that country away from the European Union and at least partly disrupt it. And they have arguably succeeded there. (See for example, Brexit Goes Back to Square One as Parliament Rejects May’s Plan a Third Time.)

If I were to summarize and I add generalize this first draft, last (for now) bullet point addition to this draft doctrine, I would add:

• The New, and the disruptively new in particular, breaks automatically presumed, unconsidered “axiomatic truths,” rendering them invalid moving forward. This can mean the New breaking and invalidating assumptions as to where threats might come from and where they might be directed, as touched upon here in this posting. But more importantly, it can mean the breaking and invalidating of assumptions that we hold to be so basic that we are fundamentally unaware of them in our planning – until they are proven wrong in an active attack, as a new but very real threat is realized in action. (Remember, as a conventional military historical example of that, how “everyone” knew that aircraft-launched anti-ship torpedoes could not be effectively deployed and used in shallow waters such as those found at Pearl Harbor – until, that is, they were.)

And with that, I will offer a book recommendation that I will be citing in upcoming installments to this series, adding it here in anticipation of doing so for anyone interested:

• Kello, L. (2017) The Virtual Weapon and International Order. Yale University Press.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Rethinking the dynamics of software development and its economics in businesses 4

Posted in business and convergent technologies by Timothy Platt on April 27, 2019

This is my 4th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-3.)

I have been working my way through a brief simplified history of computer programming in this series, as a foundation-building framework for exploring that complex of issues, starting in Part 2. And I repeat that list of benchmark software development steps here for smoother continuity of narrative, as I continue selectively discussing some of their computer history-defining details:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language defining and a programming style defining paradigm,
5. And object-oriented programming.

Each of these steps has arisen with a goal of addressing challenges that had reached threshold levels of problem-creating significance in how software was developed and coded before it. And that point of observation applies with full force, at least as a matter of general principle, even for Step 1 of that progression: machine language, the first software programming paradigm of the electronic computer age. Its defining improvement can be found in how it eliminated the need to literally rewire the hardware of a computer in order to run a new program on it. And along with making it possible to run programs on more general purpose, architecture-specified generic circuitry, this development can arguably be claimed to have made the first mass market (i.e. standardized design) computer possible: one that could be produced in multiple copies for individual sale and that could be built upon in a more standardized process of iterative, next-generation evolutionary development. The advent of machine language coding can arguably be considered a key development in making computers and their technology standardized products and not just one-off curiosities.

I cite in that regard a quote that is famously (infamously) attributed to one of the founders of the electronic computer age: the president of the company that would go on to design, build and license or sell the first truly “mass market” mainframe computers, the IBM 360 series systems: “I think there is a world market for maybe five computers.”

This prediction is usually attributed to Thomas Watson of IBM and dated to 1943, though the attribution is disputed. The companion prediction often paired with it – “… where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and weigh only 1.5 tons” – actually comes from a 1949 Popular Mechanics article. Either way, it is easy to forget but important to remember that these forecasts were offered at a time when all of the electronic computers on this planet were programmed by being physically rewired, using plug-in switchboards for the most part to literally set up new hardware-level circuits in them.

I shift back to reconsider the first step of that progression to stress how what is possible, and what is considered possible, are so strongly shaped by what is current and by what has led up to it. Each of the developmental steps of the above-repeated computer technology advancement list represents a disruptively novel breakthrough from its past and from the particular state of the art that it individually arose from. But at the same time, each of those development steps carries with it a rich legacy from its past too: both of what has worked and of what has come to break down and fail, certainly as the demands placed on computer technology have scaled up and advanced.

Cutting ahead here, Step 4 of that list: structured programming, was developed with a goal of standardizing large scale development of complex software that might come, functional part by functional part, from several or even many separate sources. But at least as importantly, it was developed with a goal of enabling code that follows a clearly, logically defined and constrained algorithm, even when – and particularly when – the range of logical paths that might be required in it, depending on the input data it would process, becomes complex and far reaching. I referred to such programs as being archetypally well formed and I will continue doing so here as I turn to consider the above-listed Step 5: object oriented programming, which I loosely identified in Part 3 of this series as representing a structured programming version 2.0 stage in software development.
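To make that more concrete, here is a minimal, hypothetical sketch of what I mean by an archetypally well formed program unit, written in Python purely for illustration: a single entry point, a single predictable exit point, and logic flow expressed only through sequence, selection and iteration rather than arbitrary jumps. The task and the thresholds in it are invented for this example and are not drawn from any particular system.

```python
# A minimal, hypothetical sketch of "archetypally well formed" structured code:
# one entry point, one exit point, and control flow expressed only through
# sequence, selection (if/elif/else) and iteration (for), never arbitrary jumps.
# The task and thresholds are illustrative only.

def classify_readings(readings, low=10.0, high=90.0):
    """Bucket numeric sensor readings into 'low', 'normal' and 'high' counts."""
    counts = {"low": 0, "normal": 0, "high": 0}
    for value in readings:           # iteration
        if value < low:              # selection
            counts["low"] += 1
        elif value > high:
            counts["high"] += 1
        else:
            counts["normal"] += 1
    return counts                    # single, predictable exit point

if __name__ == "__main__":
    print(classify_readings([3.2, 47.5, 95.1, 60.0]))
    # {'low': 1, 'normal': 2, 'high': 1}
```

The same logic written in an earlier, unstructured style would typically be expressed as a tangle of conditional jumps; the structured form makes the underlying algorithm legible at a glance, which is precisely the “well formed” quality at issue here.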

Step 1 was a first take at coding in software per se, at least for electronic computers. Step 2 reformatted and organized that software to make it more human readable and writable, and more user friendly as such – but it was still specific-hardware dependent. Step 3 was a “higher level” programming approach that sought to make software dramatically more user friendly on all counts, including its initial coding and later reading and debugging. This next step advance made it possible for programmers to think and code in larger, more complex programmatic blocks, with what had been complex functions requiring separate coding every time now available as coding library resources, invoked through predefined programming terms and their syntax. And this step also introduced program portability for the first time: it was now becoming possible to run the same higher level, human friendly code on different computers with different hardware architectures, with hardware specificity added in behind the scenes through what amounted to operating system add-ons in the form of programming language compilers and interpreters. Step 4 was then developed to address the challenges of programming scale that began to emerge and then predominate, as larger and more complex computational and related problems were converted over for processing and resolution on computers and computer systems. And Step 5 arose as a next step improvement on the basic Step 4 approach, and through that as a next evolutionary step in this entire progression.
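As a brief, hedged illustration of that Step 3 advance – library resources replacing routines that once had to be hand-coded for each program and each machine – consider the following Python sketch. The specific routine chosen (a square root) is my own invented example and is not tied to FORTRAN or COBOL per se; it only illustrates the shift from roll-your-own code to a single, portable, predefined call.

```python
import math

# What once had to be hand-coded for each program (and each machine) ...
def newton_sqrt(x, tolerance=1e-12):
    """Approximate a square root by Newton's method, the 'roll your own' way."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess

# ... versus what a higher level language's standard library now provides
# as a single, portable, predefined call.
print(newton_sqrt(2.0))   # approximately 1.4142135623730951
print(math.sqrt(2.0))     # same result, on any supported platform
```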

Computers still actually run machine code, so earlier steps can be hidden away within later ones. But it is very, very rare that anyone would ever have to hand-code in it anymore, where that was once the only way to code for an electronic computer at all.

• But historical development considerations aside, what makes object oriented programming that next big step forward iteration in this progression?
• Object oriented programming partitions larger and even vastly larger overall programs, which might include tens or even hundreds of millions of lines of code, into systems of controllably compartmentalized building blocks that take input from other such blocks in a highly structured and controlled way, that pass their output on in a similarly controlled, programmer-defined way to explicitly specified other building blocks, and that directly communicate only with these intentionally permissioned resources. And all of these programming building blocks are set up and coded as black boxes that carry out their internal code execution processes in ways that are invisible outside of themselves and to the rest of the program as a whole, as far as precisely how they convert the input they receive into the output that they might pass on and share. (More formally, objects are characterizable as fitting into distinct categorical processing abstractions called classes, and as exhibiting four basic properties: inheritance, polymorphism, encapsulation and abstraction; for a brief outline of this paradigmatic approach and its basic vocabulary, see this piece on object oriented programming.) A brief illustrative sketch of those four properties follows below.
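The following is a deliberately small, hypothetical Python sketch of those four properties in action; the class names, methods and the messaging scenario are all invented for this illustration and carry no significance beyond it.

```python
from abc import ABC, abstractmethod

class MessageChannel(ABC):                      # abstraction: a contract, not an implementation
    @abstractmethod
    def send(self, recipient: str, body: str) -> str: ...

class EmailChannel(MessageChannel):             # inheritance: specializes the abstraction
    def __init__(self, server: str):
        self._server = server                   # encapsulation: internal state kept to itself
    def send(self, recipient: str, body: str) -> str:
        return f"[email via {self._server}] to {recipient}: {body}"

class SMSChannel(MessageChannel):
    def send(self, recipient: str, body: str) -> str:
        return f"[sms] to {recipient}: {body}"

def notify_all(channels, recipient, body):
    # polymorphism: callers see only the shared interface; each object's internal
    # processing remains a black box to the rest of the program.
    return [channel.send(recipient, body) for channel in channels]

if __name__ == "__main__":
    for line in notify_all([EmailChannel("mail.example.com"), SMSChannel()], "pat", "build finished"):
        print(line)
```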

What is coming next after this as a possible Step 6? Higher level languages now prevail, and highly structured ones at that, in order to make both program development and coding, and program maintenance and upgrades as easy and reliable and standardized as possible. And we can all safely assume that at least updated and improved versions of the object oriented languages that prevail now, and certainly in open internet and related globally reaching contexts will continue to serve vital roles in computers and in networks of them, everywhere. But that does not mean that this Step 5 cannot or will not be rolled into a new Step 6 paradigm, any more than it means Step 5 approaches would not be used in conjunction with parallel supportive processes, such as markup languages (that for example, give form to the human user interfaces that we all see online whenever we open a web browser.)

• What will come next? Let’s consider the challenges that begin to emerge as object oriented programs scale up and continue to do so and for both overall code volume required and more importantly, for the complexities of the algorithms that all of this code would have to encompass.

Object oriented programming languages are used to produce tightly, hierarchically organized programs with logical process flow branching and interconnections that are both defined and controlled by the basic structure of that programming paradigm as a whole, and from the level of its specific building blocks themselves, on up. But the real world that complex programs increasingly need to be able to model, and descriptively and predictively emulate, is not always amenable to hierarchical modeling with simple, untangled, tree branching-like abstractions.

Metaphorically:

• If object oriented programming is built around developing and organizing a conversation
• Around what might be considered at least a relatively open-ended vocabulary for scale and complexity,
• Where objects (as specific executable instances), and the classes that abstractly define them as mentioned in passing above, serve as the words here,
• Then one proposed next step advancement beyond Step 5 programming as discussed here: a structured programming language paradigm version 3.0 if you will, might be built around more formally developing and expanding the grammar of large scale and logically complex programs.

And one such approach, which I add informs how and why I have worded that description the way that I just did, is referred to as language oriented programming (see Language-oriented programming: an evolutionary step beyond object-oriented programming?).

This approach has roots that go back at least as far as a research paper by M.P. Ward that was originally published in 1994 in Software Concepts and Tools, and its possibilities have been known in the programming language development community since then. Has its time now come? On the one hand, this approach does offer at least one possible avenue for addressing what can become progressively more complexly tangled branching and interconnecting logic flows in overall algorithm designs. But it accomplishes that at the cost of a very large and significant trade-off. The language oriented programming paradigm, as currently conceived, hinges on developing explicitly problem-specific custom languages for specific computational needs: languages that would map programming needs to programming resources as isomorphically as possible, in order to solve that one specific computational challenge with its very specific logic flow complexities.
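To make that idea more tangible, here is a toy, hypothetical sketch of the language oriented approach: a tiny problem-specific “language” for one narrow task, with its own minimal vocabulary and grammar, interpreted by ordinary host-language code. The rule syntax, field names and routing task are all invented for this illustration and do not come from Ward’s paper or from any particular language workbench.

```python
# A toy, hypothetical problem-specific mini-language for routing support tickets.
# The "language" maps the problem's own vocabulary onto executable behavior.

RULES = """
when priority is high route to oncall
when product is billing route to finance
otherwise route to helpdesk
"""

def compile_rules(source):
    compiled = []
    for line in source.strip().splitlines():
        words = line.split()
        if words[0] == "when":
            # grammar: when <field> is <value> route to <queue>
            _, field, _, value, _, _, queue = words
            compiled.append((field, value, queue))
        else:
            # grammar: otherwise route to <queue>
            compiled.append((None, None, words[-1]))
    return compiled

def route(ticket, compiled):
    for field, value, queue in compiled:
        if field is None or ticket.get(field) == value:
            return queue

if __name__ == "__main__":
    rules = compile_rules(RULES)
    print(route({"priority": "high", "product": "search"}, rules))   # oncall
    print(route({"priority": "low", "product": "billing"}, rules))   # finance
    print(route({"priority": "low", "product": "search"}, rules))    # helpdesk
```

Note how closely the mini-language mirrors the one problem it was built for: that isomorphism is its strength, and, as discussed next, also the source of its trade-off.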

This reliance on problem-specific custom languages can, in a way, be thought of as a higher level software regression back to challenges that computer scientists have been seeking to resolve for decades now, and even from the very beginning of programming for electronic computers: breaking away from custom solutions and custom, locally usable-only approaches, and toward generally and even openly applicable, portable, scalable software that, to the user, can run on essentially any platform and that can be applied to resolving as wide a range of computerizable problems as possible.

A language oriented programming solution might be portable, but it would be single purpose, single programming problem-specific in its portability, and not necessarily usable – certainly not without significant redevelopment – for other problems that might also have to be addressed. And this brings up the last, more general detail that I would add to this narrative as a whole: the simple fact that every step-by-step advance that I have raised and discussed here, and every other one that I could have raised depending on which details I chose to focus on as I parse this history, has involved making trade-offs.

Each step in this type of developmental progression has been designed and developed with a goal of solving challenges, and certainly ones stemming from needs-required scalability increases. But each and every one of them has also had limitations built into it, and ones that with time have created new problems, even as they have sought to address old ones and ones that are newly emerging for having just reached threshold levels of significance.

I am going to continue this discussion in a next series installment where I will at least begin to discuss the economics and efficiencies of all of this. And in the process of developing that narrative, I will at least touch on some of the issues and challenges of artificial intelligence programming, and quantum computing and how their forms of disruptively New are already beginning to reframe the issues that the historical progression that I have been discussing here, raises.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 18

Posted in business and convergent technologies, strategy and planning by Timothy Platt on April 24, 2019

This is my 18th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-17.)

I began discussing a set of topically related, and in fact interconnected points in Part 16 and Part 17 that I repeat here as I continue exploring and developing the narrative that they collectively raise:

1. What does and does not qualify as a true innovation, and to whom in this overall set of contexts?
2. And where, at least in general terms could this New be expected to engender resistance and push-back, and of a type that would not simply fit categorically into the initial resistance patterns expected from a more standard cross-demographic innovation acceptance diffusion curve and its acceptance and resistance patterns?
3. How in fact would explicit push-back against globalization per se even be identified, and certainly in any real case-in-point, detail-of-impact example, given the co-occurrence of a pattern of acceptance and resistance that would be expected from marketplace adherence to a more standard innovation acceptance diffusion curve? To clarify the need to address this issue here, and the complexities of actually doing so in any specific-instance case, I note that the more genuinely disruptively new an innovation is, the larger the percentage of potential marketplace participants that would be expected to hold off on accepting it, at least for significant periods of time, with their failure to buy and use it lasting throughout their latency-to-accept periods. But that failure to buy in on the part of these demographics and their members does not in and of itself indicate anything about their underlying motivation for holding back, or about how they will behave longer term as they become more individually comfortable with this particular form of New. Their marketplace activity, or rather their lack of it, would qualify more as noise in this system, and certainly when anything like a real-time analysis is attempted to determine the underlying causal mechanisms of the market activity and marketplace behavior in play. As such, any meaningful analysis and understanding of the dynamics of the marketplace here can become highly reactive and after the fact, and particularly for those truly disruptive innovations that would only be expected to appeal at first to a small percentage of early and pioneer adopter marketplace participants.
4. This leads to a core question of who drives resistance to globalization and its open markets, and how. And I will address that in social networking terms.
5. And it leads to a second, equally important question here too: how would globalization resistance-based failure to buy in on innovation peak and then drop off if it were tracked along an innovation disruptiveness scale over time?

More specifically, I offered an at least preliminary working answer in Part 17 to the question raised in the above Point 1. And I began addressing Point 2 there too, with that of necessity meaning at least touching on Point 3 as well, reframing it in terms of my Points 1 and 2 comments. My goal for this posting is to more fully develop my preliminary responses to Points 1 and 2 as already offered, and to continue laying a foundation for more systematically discussing Point 3 as well. And as I noted at the end of Part 17, that means my offering, at least in part, a reconsideration here of:

• The demographic groups involved here and their members, and how they are behaviorally defined: from within or by outside shaping pressures or both.

I begin all of this by repeating a bullet pointed detail that I initially offered in a Point 1 context, that:

• Innovation is change that at least someone might realistically be expected to see as creating at least some new source or level of value, however small, at least by their standards and criteria.

And I begin expanding upon that by adding in some key additional words: “… or challenge”, as in “… at least some new source or level of value or challenge, however small.” Perception and response in that are of necessity grounded in the minds of the “at least someone” who would make such a determination.

I at least implicitly raised two possibilities as to the nature of the Who of this in Point 2, when I cited two possible patterns that response of this sort might categorically fit into:

• A standard innovation acceptance diffusion curve and its various demographics, with those cohorts ranging from those predisposed towards being among the earliest adopters of New, on out to those more predisposed towards waiting until any innovation that they are confronted with has proven itself first, and until it is no longer a current new innovation at all.
• And a more societally defined pattern of acceptance or denial that is based more on a distinction as to whether a new source of change supports or threatens an already existing order.

Note that the former of these possibilities posits that individuals behave in ways that can be characterized as fitting into and consistently following demographic-level patterns. But it is still the individuals involved who make the determinative decisions there as to whether to buy in now or wait. And beyond that, such determinations are made based on the change faced: the innovative New itself, and how its adoption would support or challenge the individuals who collectively comprise the demographics at play here. But the latter: the second of those patterns with its global flattening versus global wrinkling dynamics, is at least as much about where this New came from. Global flattening as a basic paradigm seeks to rapidly and geographically diffuse out access to and use of new, best of breed solutions and approaches, and as widely as possible, while global wrinkling pushes back against that as a threat to the communities and societies that would have to change as a consequence of this influx of new. That can mean challenges to local businesses and industries, and to the local economies in place that these new solutions enter into. But at least as importantly, that can mean push-back responses to perceived threats to local cultures and traditions too. So in this second paradigm’s perspective, reaction, and certainly push-back reaction, is at least as much about what would be pushed aside and lost as it is about what would come in as new and arguably supplant or at the very least marginalize that.
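For readers who prefer a more quantitative frame, the first of those two patterns is commonly formalized as a Bass-style diffusion model. The following minimal Python sketch uses invented coefficients and is offered only to illustrate the general shape involved: the more disruptive the innovation (modeled here as a smaller early adopter coefficient), the longer the latency before wider uptake. None of the numbers come from the discussion above.

```python
# A minimal sketch of a Bass-style innovation diffusion model; the market size
# and coefficients are invented for illustration only.

def bass_adoption(p, q, market_size=1000, periods=30):
    """p: innovation (early adopter) coefficient; q: imitation (social influence) coefficient."""
    adopted = 0.0
    history = []
    for _ in range(periods):
        remaining = market_size - adopted
        new_adopters = (p + q * adopted / market_size) * remaining
        adopted += new_adopters
        history.append(round(adopted))
    return history

if __name__ == "__main__":
    # A more incremental innovation: relatively many willing early adopters.
    print(bass_adoption(p=0.05, q=0.4)[:10])
    # A more disruptive innovation: fewer early adopters, longer latency before uptake.
    print(bass_adoption(p=0.005, q=0.4)[:10])
```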

And this brings me directly and specifically to the questions and the challenges of Who is determinatively shaping and driving all of this.

• I just stated that the dynamics of the innovation acceptance diffusion curve is more individually grounded. But people do both learn from others, and influence others as well – and certainly when they more directly communicate with each other, or take recourse to the same more centrally published news and opinion sources that might address the change possibilities that they face, and that might influence or even shape their views on that.
• And I just stated that the dynamics of global flattening and wrinkling are more societal in nature, but even there, they are actually carried out by individuals, who might or might not go along with the more expected community and societal norms that might be playing out around them. Individuals in a more actively global flattening society can still say “but I am not interested in buying into that” and individuals who live in societies that are more resistant to change, and supportive of preserving and advancing their status quo and their local systems, can still say “but I am interested in buying into that and I want to.”

And this brings me to the Point 4 (from the above list) questions and issues of the demographic groups involved here and their members, and how they are behaviorally defined: from within or by outside shaping pressures or both. And that is where social networking and its taxonomy of key influencers enters this narrative too.

First of all, I have to note that while I more usually discuss social networking and social influence pressures from an online-connected perspective in this blog, the forces and factors that I address there have richly detailed and nuanced histories that both precede and still exist independently of any given technologies or technology-enabled or created connectivity platforms. Humans are a social species and seek out opportunities to converse and to share, and to organize and to build inclusive groups. And at the same time they can just as actively build boundaries between groups, defining their local communities from that too – and regardless of which specific networking and communicating forms and forums they would use to create and enable all of this. So what follows is not technology dependent and it is certainly not specific-technology dependent (e.g. it is not dependent on Facebook, Instagram or Twitter or any other sharing platforms more currently in vogue and use.) What follows addresses processes and activities that have roots that run as deep as humanity does as a species, and perhaps even deeper and with older precedents than that.

To clarify that last point I cite one of many possible research paper references on primate learning behavior, which originally appeared in the journal Primates in May, 2013: Food Washing and Placer Mining in Captive Great Apes. While primate groups have been observed to wash sand and dirt off of food such as fruits or vegetables before eating them, as more widely held and shared traits in communities and for primates of a variety of species, there is also a set of learning curve examples that suggest how this type of behavior can start and spread in such a community. One example that comes to mind involved a community that did not wash the sand off of food tossed to them on sandy soil near flowing water, until one juvenile member of that community tried doing so. Other juveniles began doing this too. Then their mothers began washing their food. And then other adults began trying this new innovation too, and its use continued to spread until essentially the entire community was washing sand off of their food before eating it. There, the innovation was a food handling process and this example might perhaps best fit a classical innovation acceptance diffusion curve. But just as tellingly, it follows the patterns laid down by social networking and by community-level power and influence held by social networkers and influencers. Interestingly, it was the more dominant males in this community who were the last holdouts to finally begin washing their food too: the members of this community whom others turned to for leadership, and who were least likely to turn to others themselves for new ideas or possibilities.

And with that, I return to consider humans again, and their responses to the challenges and opportunities of New and of innovation that drives it. I am going to continue this discussion in a next series installment where I will make use of an approach to social network taxonomy and social networking strategy that explicitly addresses the issues of who networks with and communicates with whom, and that also can be used to map out patterns of influence as well: important to both the basic innovation diffusion model and to understanding the forces and the dynamics of global flattening and wrinkling too. In anticipation of that discussion to come, that is where issues of agendas enter this narrative. Then after discussing that, I will explicitly turn to the above-repeated Point 3: a complex of issues that has been hanging over this entire discussion since I first offered the above topics list at the end of Part 16 of this series. And I will address a very closely related Point 4 and its issues too, as already briefly touched upon here.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Reconsidering Information Systems Infrastructure 9

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on April 18, 2019

This is the 9th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-8. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I stated towards the beginning of Part 8 of this series that I have been developing a foundation in it for thinking about neural networks and their use in artificial intelligence agents. And that has in fact been one of my primary goals here, as a means of exploring and analyzing more general issues regarding artificial agents and their relationships to humans and to each other, and particularly in a communications and an information-centric context and when artificial agents can change and adapt. Then at the end of Part 8, I said that I would at least begin to specifically discuss neural network architectures per se and systems built according to them in this complex context, starting here.

The key area of consideration that I would at least begin to address in this posting as a part of that narrative, is that of flexibility in range and scope for adaptive ontological change, where artificial intelligence agents would need that if they are to self-evolve new types of, or at least expanded levels of functional capabilities for more fully realizing the overall functional goals that they would carry out. I have been discussing natural conversation as a working artificial general intelligence-validating example of this type of goal-directed activity in this series. And I have raised the issues and challenges of chess playing excellence in Part 8, with its race to create the best chess player agent in the world as an ongoing computational performance benchmark-setting goal too, and with an ongoing goal beyond that of continued improvement in chess playing performance per se. See in that regard, my Part 8 discussion of the software-based AlphaZero artificial intelligence agent: the best chess player on the planet as of this writing.

Turning to explicitly consider neural networks and their emerging role in all of this: considered at a hardware level, they are more generically wired systems that can flexibly adapt themselves on a task-performance basis, determining which specific possible circuit paths are actually developed and used within them, and which of them are downgraded and in effect functionally removed. These are self-learning systems that in effect rewire themselves to more effectively carry out the data processing flows behind their targeted functions, developing and improving circuit paths that work for them and culling out and eliminating ones that do not – and at a software level, and de facto at a hardware level too.

While this suggestion is cartoonish in nature, think of these systems as blurring the lines between hardware and software, and think of them as being at least analogous to self-directed and self-evolving software-based hardware emulators in the process, where at any given point in time and stage in their ongoing development, they emulate through the specific pattern of preferred hardware circuitry used and their specific software in place, an up to that point most optimized “standard” hardware and software computer for carrying out their assigned task-oriented functions. It is just that neural networks can continue to change and evolve, testing and refining themselves, instead of being locked into a single fixed overall solution as would be the case in a “standard” design more conventional computer, and certainly when it is run between software upgrades.
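As a deliberately tiny and admittedly cartoonish illustration of that self-adjusting quality, consider the following Python sketch: a single sigmoid unit iteratively strengthens the connection weights that help it fit its training data, and weights that stay small relative to the strongest connection are then pruned away, loosely echoing the culling of unused circuit paths described above. The task, learning rate and pruning rule are all invented for this example and stand in for nothing in any real system.

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: the target depends only on the first input; the second is noise.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
learning_rate = 0.5

for _ in range(5000):                      # iterative self-adjustment
    for (x1, x2), target in data:
        output = sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
        error = output - target
        weights[0] -= learning_rate * error * x1
        weights[1] -= learning_rate * error * x2
        bias -= learning_rate * error

print("learned weights:", [round(w, 3) for w in weights])

# Prune connections that training left comparatively weak (threshold chosen
# arbitrarily for this sketch), loosely echoing the culling of circuit paths.
strongest = max(abs(w) for w in weights)
pruned = [w if abs(w) >= 0.25 * strongest else 0.0 for w in weights]
print("after pruning:  ", [round(w, 3) for w in pruned])
```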

• I wrote in Part 8 of human-directed change in artificial agent design, both for overall systems architecture and for component-and-subsystem by component-and-subsystem scaling. A standard, fixed design paradigmatic approach, as found in the more conventional computers just noted here, fits into and fundamentally supports that kind of evolution of fixed, standard systems, and in its pure form cannot in general self-change either ontologically or evolutionarily.
• And I wrote in Part 8 of self-directed, emergent capabilities in artificial intelligence agents, citing how they might arise as preadapted capabilities: capabilities that arose without regard to a particular task or functional goal now faced, but that might be directly usable for such a functional requirement now – or that might be readily adapted for such use with more targeted adjustment of the type noted here. And I note here that this approach really only becomes fundamentally possible in a neural network or similar, self-directed ontological development context, with that development taking place within the hardware and software system under consideration.

Exaptation (pre-adaptation) is an evolutionary development option that would specifically arise in neural network or similarly self-changing and self-learning systems. And with that noted I invoke a term that has been running through my mind as I write this, and that I have been directing this discussion towards reconsidering here: an old software development term that in a strictly-human programmer context is something of a pejorative: spaghetti code. See Part 6 of this series where I wrote about this phenomenon in terms of a loss of comprehensibility as to the logic flow of whatever underlying algorithm a given computer program is actually running – as opposed to the algorithm that the programmer intended to run in that program.

I reconsider spaghetti code and its basic form here for a second reason, this time positing it as an alternative to lean code that would seek to carry out specific programming tasks in very specific ways, as quickly and as efficiently as possible, as far as specific hardware architecture, system speed as measured by clock cycles per unit time, and other resource usage requirements and metrics are concerned. Spaghetti code and its similarly more loosely structured counterparts are what you should expect, and what you get, when you set up and let loose self-learning, neural network-based or similar artificial agent systems and let them change and adapt without outside guidance – or interference, if you will.

• These systems do not specifically, systematically seek to ontologically develop as lean systems, as that would most likely mean their locking in less than optimal hardware and software solutions, relative to what they could otherwise achieve.
• They self-evolve with slack and laxity in their systems, while iteratively developing towards next step improvements in what they are working on now, and in ways that can create pre-adaptation opportunities – and particularly as these systems become larger and more complex and as the tasks that they would carry out and optimize towards become more complex and even open-endedly so (as emerges when addressing problems such as chess, but that would come fully into its own for tasks such as development of a natural conversation capability.)

If more normative, step-by-step ontological development of incremental performance improvements in task completion can be compared to gradual evolutionary change within some predictable overall pattern, then the type of slack allowance that I write of here, with its capacity for creating fertile ground for possible pre-adaptation opportunity, can perhaps best be compared to disruptive change, or at least to opportunity for it – at least as judged by the visible outcome consequences observed when a pre-adapted capability that has not proven particularly relevant up to now is converted from a possibility into a realized, currently functionally significant actuality.

And with this noted, I raise a tripartite point of distinction that I will at least begin to flesh out and discuss as I continue developing this series:

• Fully specified systems goals (e.g. chess, as touched upon in Part 8 for an at least somewhat complex example, with fully specified rules defining a win, a loss, etc.),
• Open-ended systems goals (e.g. natural conversational ability as more widely discussed in this series and certainly in its more recent installments with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals (as in self-driving cars, which can be programmed with the legal rules of the road, but not with a correspondingly detailed, algorithmically definable understanding of how real people in their vicinity actually drive – sometimes according to and sometimes contrary to the traffic laws in place; see the sketch that follows this list.)
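The following small, hypothetical Python sketch is only meant to make that three-way distinction more concrete, contrasting a fully specified success test with a partly specified decision that has to fall back on a probabilistic model of other agents’ behavior; every rule, probability and name in it is invented for illustration and none of it describes any actual system.

```python
import random

def fully_specified_goal_met(board_state):
    # Chess-like case: the success condition is exhaustively defined by the rules.
    return board_state.get("opponent_king_checkmated", False)

def partly_specified_decision(light_is_green, oncoming_driver_profile, trials=10000):
    """Self-driving style case: the legal rule is explicit, but other drivers'
    behavior can only be estimated probabilistically (stochastic modeling)."""
    if not light_is_green:
        return "wait"                                  # the fully specified part: traffic law
    # The partly specified part: estimate whether oncoming traffic will actually yield.
    p_yields = oncoming_driver_profile.get("p_yields", 0.95)
    observed_yields = sum(random.random() < p_yields for _ in range(trials))
    return "proceed" if observed_yields / trials > 0.99 else "proceed with caution"

if __name__ == "__main__":
    print(fully_specified_goal_met({"opponent_king_checkmated": True}))   # True
    print(partly_specified_decision(True, {"p_yields": 0.999}))           # most likely 'proceed'
    print(partly_specified_decision(True, {"p_yields": 0.90}))            # 'proceed with caution'
```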

I am going to discuss partly specified systems goals and agents, and overall systems that would include them and that would seek to carry out those tasks in my next series installment. And I will at least start that discussion with self-driving cars as a source of working examples and as an artificial intelligence agent goal that is still in the process of being realized, as of this writing. In anticipation of that discussion to come, this is where stochastic modeling enters this narrative.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.
