Platt Perspective on Business and Technology

Some thoughts concerning a general theory of business 30: a second round discussion of general theories as such, 5

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on August 16, 2019

This is my 30th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-29.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. I then turned from that, in Parts 9-25, to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, one that is based on interpersonal interactions. Then I began a second round, next step discussion of general theories per se in Parts 26-29 of this, building upon my initial discussion of general theories per se, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon. As a key part of that continued narrative, I offered a theory-defining distinction in Part 28 that I began using there, that I continued using in Part 29 as well, and that I will continue using and developing here too, drawing a distinction between:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

And I have referred to them, and will continue to refer to them, as axiomatically closed and open bodies of theory, as convenient terms for denoting them. That brings me up to the point in this developing narrative where I would begin this installment, with two topic points that I would discuss in terms of how they arise in closed and open bodies of theory respectively:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles there per se?
• Then, after addressing that set of issues, I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

I began discussing the first of these topic points in Part 29 and will continue doing so here. And after completing that discussion thread, at least for purposes of this digression into the epistemology of general theories per se, I will turn to and discuss the second of those points too. And I begin addressing all of this at the very beginning, with what was arguably the first, or at least the earliest still-surviving, effort to create a fully complete and consistent axiomatically closed body of theory: one that was expected, at least, to encompass and resolve all possible problems and circumstances where it might conceivably be applied. That is Euclid’s geometry, as developed from the set of axiomatically presumed truths that he built his system upon.

More specifically, I begin this narrative thread with Euclid’s famous, or if you prefer infamous Fifth postulate: his fifth axiom, and how that defines and constrains the concept of parallelism. And I begin here by noting that mathematicians and geometers began struggling with it more than two thousand years ago, and quite possibly from when Euclid himself was still alive.

Unlike the other axioms that Euclid offered, this one did not appear to be self-evident. So a seemingly endless progression of scholars sought to find a way to prove it from the first four of Euclid’s axioms. And barring that possibility, scholars sought to develop alternative bodies of geometric theory that either offered alternative axioms to replace Euclid’s fifth, or that did without parallelism as an axiomatic principle at all, or that retained it while dispensing with the metric concepts of angle and distance (parallelism can be defined independently of them), as in affine geometries.

In an axiomatically closed body of theory context, this can all be thought of as offering what amount to alternative realities, and certainly insofar as geometry is applied, for its provable findings, to the empirically observable real world. The existence of a formally, axiomatically specified non-Euclidean geometry, such as an elliptic or hyperbolic geometry that explicitly diverges from the Euclidean on the issue of parallelism, does not disprove Euclidean geometry, or even necessarily refute it except insofar as its existence shows that other equally solidly grounded, axiomatically based geometries are possible too. So as long as the set of axioms that underlies a body of theory such as one of these geometries can be assumed to be internally consistent, the issue of reframing, generalizing, limiting or otherwise changing axioms in place, within a closed body of theory, is effectively moot.
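To make that divergence concrete, the parallelism axiom can be stated in its familiar Playfair form alongside the two classical alternatives; this is a standard modern formulation, offered here as an illustration rather than as Euclid's own wording:

```latex
% Playfair's form of the parallelism axiom and its two classical alternatives,
% stated for a point $P$ not on a line $\ell$, within a given plane:
\begin{itemize}
  \item Euclidean:  exactly one line through $P$ is parallel to $\ell$.
  \item Hyperbolic: infinitely many lines through $P$ are parallel to $\ell$.
  \item Elliptic:   no line through $P$ is parallel to $\ell$.
\end{itemize}
```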

As soon as outside-sourced empirical or other information is brought in that arises separately and independently from the set of a priori axioms in place in a body of theory, all of that changes. And that certainly holds if such information (e.g. replicably observed empirical observations and the data derived from them) is held to be more reliably grounded and “truer” than data arrived at entirely as a consequence of logical evaluation of the consequences of the a priori axioms in place. (Nota bene: keep in mind that I am still referring here to initially presumed axioms that are not in and of themselves directly empirically validated, and that might never have been tested in any way against outside-sourced observations, certainly not for the range of observation types that new forms of empirical data and their observed patterns might offer. Such new data might in effect force change in previously assumed, axiomatically framed truth.)

All I have done in the above paragraph is to somewhat awkwardly outline the experimental method, where theory-based hypotheses are tested against carefully developed and analyzed empirical data to see if it supports or refutes them. And in that, I focus on experimental testing that would support or refute what have come to be seen as really fundamental, underlying principles, and not just detail elaborations as to how the basic assumed principles in place would address very specific, special circumstances.

But this line of discussion overlooks, or at least glosses over, a very large gap in the more complete narrative that I would need to address here. And for purposes of filling that gap, I return to reconsider Kurt Gödel and his proofs of the incompleteness of any axiomatic theory of arithmetic, and of the impossibility of proving absolute consistency for such a body of theory too, as touched upon here in Part 28. As a crude representation of a more complex overall concept, mathematical proofs can be roughly divided into two basic types:

• Existence proofs, which simply demonstrate that at least one mathematical construct exists within the framework of the set of axioms under consideration that would explicitly sustain or refute a given theorem, but without in any way indicating its form or details, and
• Constructive proofs, which both prove the existence of a theorem-supporting or theorem-refuting mathematical construct, and also specifically identify and specify it, for at least one realistic example or at least one realistic category of such examples (see the brief illustration below).
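As a brief illustration of that distinction, drawn from standard mathematics rather than from anything specific to this series, consider the claim that there exist irrational numbers a and b with a^b rational:

```latex
% Non-constructive (existence) proof: let $x = \sqrt{2}^{\sqrt{2}}$.
% If $x$ is rational, take $a = b = \sqrt{2}$; if $x$ is irrational, then
\[
  \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2 ,
\]
% so take $a = x$, $b = \sqrt{2}$. Such a pair must exist, but the proof never
% says which of the two cases actually holds.
%
% Constructive proof of the same claim: take $a = \sqrt{2}$ and $b = \log_2 9$
% (both provably irrational); then $a^b = 2^{\frac{1}{2}\log_2 9} = 3$,
% a specific, fully exhibited example.
```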

Gödel’s incompleteness theorems are existence proofs insofar as they do not constructively indicate any specific mathematical context where incompleteness or inconsistency explicitly arises. And even if they did, that arguably would only indicate where specific changes might be needed in order to seamlessly connect two bodies of mathematical theory, A and B, within a single axiomatic framework that is sufficiently complete and consistent for them, so as to be able to treat them as a single combined area of mathematics (e.g. combining algebra and geometry to arrive at a larger and more inclusive body of theory such as algebraic geometry). And this brings me very specifically and directly to the issues of reverse mathematics, as briefly but very effectively raised in:

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

And I at least begin to bring that approach into this discussion by posing a brief set of very basic questions, that arise of necessity from Gödel’s discoveries and the proof that he offered to validate them:

• What would be the minimum set of axioms, demonstrably consistent within that set, that would be needed in order to prove as valid, some specific mathematical theorem A?
• What would be the minimum set of axioms needed to so prove theorem A and also theorem B (or some other explicitly stated and specified finitely enumerable set of such theorems A, B, C etc.)?

Anything in the way of demonstrable incompleteness of the type required here, for bringing A and B (and C and …, if needed) into a single overarching theory, would call for a specific, constructively demonstrable expansion of the set of axioms assumed, in order to accomplish the goals implicit in those two bullet-pointed questions. And any demonstrable inconsistency that were to emerge when seeking to arrive at such a minimal necessary axiomatic foundation for a combined theory would of necessity call for a reframing or a replacement at a basic axiomatic level, even in what are overtly closed axiomatic bodies of theory. So Euclidean versus non-Euclidean geometries notwithstanding, even a seemingly completely closed such body of theory might need to be reconsidered and axiomatically re-grounded, or discarded entirely.
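Reverse mathematics gives concrete, worked answers to questions of exactly this form. Two standard results of that program, from the reverse mathematics literature that Stillwell surveys, and offered here only as illustrations of what a "minimum set of axioms for theorem A" can look like in practice:

```latex
% Working over the weak base theory RCA_0 (roughly: computable sets plus basic induction):
%
%   WKL_0 (weak K\"onig's lemma)        is equivalent to the Heine--Borel covering
%                                       theorem for the closed interval $[0,1]$;
%   ACA_0 (arithmetical comprehension)  is equivalent to the Bolzano--Weierstrass
%                                       theorem for bounded sequences of reals.
%
% "Equivalent" means each theorem both follows from, and suffices to prove, the
% added axiom over the RCA_0 base -- identifying that axiom as the minimal extra
% assumption the theorem requires.
```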

I am going to continue this line of discussion in a next series installment, where I will turn to more explicitly consider axiomatically open bodies of theory in this context. And in anticipation of that narrative to come, I will consider:

• The emergence of both disruptively new types of data and of empirical observations that could generate it,
• And shifts in the accuracy, resolution, or range of observations that more accepted and known types of empirical observation might suddenly be offering.

I add here that I have, of necessity, already begun discussing the second topic point that I made note of towards the start of this posting, in the course of writing it:

• Scope expansion for the set of axioms assumed in a given theory-based system, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

I will continue on in this overall discussion to more fully consider that set of issues, and certainly where optimization is concerned in this type of context.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 7

This is my 7th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-6.) And this is also my fourth posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4, Part 5 and Part 6.)

I focused in Part 6 of this narrative on a briefly stated succession of development possibilities that all relate to how an overall next generation internet will take shape: one that is largely and even primarily driven, at least for a significant proportion of the functional activity carried out in it, by artificial intelligence agents and devices – increasingly an internet of things and of the smart artifactual agents that act among them. And I began that with a continuation of a line of discussion that I began in earlier installments to this series, centering on four possible development scenarios as initially offered by David Rose in his book:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

I added something of a fifth such scenario, or rather a caveat-based acknowledgment of the unexpected in how this type of overall development will take shape, in Part 6. And I ended that posting with a somewhat cryptic anticipatory note as to what I would offer here in continuation of its line of discussion, which I repeat now for smoother continuity of narrative:

• I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

The core issues that I would continue addressing here as follow-through from that installment fall into two categories. I am going to start this posting by adding another scenario to the set that I began presenting here, as initially set forth by Rose with his first four. And I will use that new scenario to make note of and explicitly consider an unstated assumption that was built into all of the other artificial intelligence proliferation and interconnection scenarios that I have offered here so far. And then, with that next step alternative in mind, I will reconsider some of the more general issues that I raised in Part 6, further developing them too.

I begin all of this with a systems development scenario that I would refer to as the piecewise distributed model.

• The piecewise distributed model for how artificial intelligence might arise as a significant factor in the overall connectiverse that I wrote of in Part 6 is based on current understanding of how human intelligence arises in the brain as an emergent property, or rather set of them, from the combined and coordinated activity of simpler components that individually do not display anything like intelligence per se, and certainly nothing like general intelligence.

It is all about how neural systems-based intelligence arises from lower level, unintelligent components in the brain, and how that might be mimicked, or recapitulated if you will, through structurally and functionally analogous systems and their interconnections in artifactual systems. And I begin to more fully characterize this possibility by more explicitly considering scale: to be more precise, the scale, or range of reach, of the simpler components that might be brought into such higher level functioning totalities. And I begin that with a simple if perhaps somewhat odd sounding question:

• What is the effective functional radius of the human brain given the processing complexities and the numbers and distributions of nodes in the brain that are brought into play in carrying out a “higher level” brain activity, the speed of neural signal transmission in that brain as a parametric value in calculations here, and an at least order of magnitude assumption as to the timing latency to conscious awareness of a solution arrived at for a brain activity task at hand, from its initiation to its conclusion?

And with that as a baseline, I will consider the online and connected alternative that a piecewise distributed model artificial general intelligence, or even just a higher level but still somewhat specialized artificial intelligence would have to function within.

Let’s begin this side by side comparative analysis with consideration of what might be considered a normative adult human brain, and with a readily and replicably arrived at benchmark number: myelinated neurons in the brain send signals at a rate of up to approximately 120 meters per second, where one meter is equal to approximately three and a quarter feet. And for simplicity’s sake I will simply benchmark the latency from the starting point of a cognitively complex task to its consciously perceived completion at one tenth of a second. This would yield an effective functional radius for that brain of 12 meters (roughly 40 feet), or less – assuming, as a functionally simplest extreme case for that outer range value, that the only activity required to carry out this task was the simple uninterrupted transmission of a neural impulse signal along a myelinated neuron for the full duration of that benchmark latency to achieve “task completion.”

An actual human brain is of course a lot more compact than that, and a lot more structurally complex too, with specialized functional nodes and complex arrays of parallel processing, structurally and functionally duplicated elements in them. And that structural and functional complexity, and the timing needed to access stored information from memory and add new information back into it again as part of that task activity, slows actual processing down. An average adult human brain is some 15 centimeters, or six inches, front to back. Using that as an outside-value metric, with a radius based on it of some three inches (about 7.5 centimeters), the structural and functional complexities in the brain that would be called upon to carry out that tenth-of-a-second task effectively reduce its functional radius some 160-fold from the speedy transmission-only outer value that I began this brief analysis with.
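A minimal sketch of the arithmetic behind those two figures, using the same rounded values as above, so the outputs are approximations only:

```python
# Transmission-only "effective functional radius" of a brain, and the compression
# imposed by actual anatomy. All figures are the rounded approximations used above.

conduction_velocity_m_per_s = 120.0   # fast myelinated axon, meters per second
task_latency_s = 0.1                  # benchmark tenth-of-a-second task

# Outer-limit radius if the whole task were one uninterrupted signal transmission:
transmission_radius_m = conduction_velocity_m_per_s * task_latency_s
print(f"transmission-only radius: {transmission_radius_m:.0f} m")    # ~12 m

# An actual brain is roughly 15 cm front to back, i.e. a radius of about 7.5 cm:
actual_radius_m = 0.075
print(f"effective compression: ~{transmission_radius_m / actual_radius_m:.0f}-fold")  # ~160-fold
```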

Think of that as a speed and efficiency tradeoff imposed on the human brain by its basic structural and functional architecture, and by the nature and functional parameters of its component parts: a reduction in the overall possible maximum rate of activity, at least for tasks that would fit the overall scale and complexity of my tenth-of-a-second benchmark example. Now let’s consider the more artifactual overall example of computer and network technology as would enter into my above-cited piecewise distributed model scenario, or in fact into essentially any network distributed alternative to it. And I begin that by noting that the speed of light in a vacuum is approximately 300 million meters per second, and that electrical signals can propagate along a copper wire, and light pulses along an optical fiber, at a large fraction of that value.

I will assume for purposes of this discussion that photons in the wireless and fiber optic connected aspects of such a system, and the electrical signals that convey information through the more strictly electronic components of these systems, all travel on average at roughly that same round-number maximum speed, as any discrepancy from it in what is actually achieved would be immaterial for purposes of this discussion, given my rounding off and other approximations as resorted to here. Then, using the task timing parameter of my above-offered brain functioning analysis, as sped up to one tenth of a millisecond for an electronic computer context, an outer-limit transmission-only value for this system and its physical dimensions would suggest a maximum radius of some 30,000 kilometers, encompassing all of the Earth and all of near-Earth orbit space and more. Here, in counterpart to my simplest-case neural signal transmission as a means of carrying out the above brain task, I assume that its artificial intelligence counterpart might be completed simply by the transmission of a single pulse of electrons or photons, without any processing step delays required.
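And the same back-of-envelope calculation for the electronic case, again using the rounded values cited above:

```python
# Outer-limit, transmission-only radius for an electronic or photonic network,
# using the rounded speed-of-light figure and task latency cited above.

signal_speed_m_per_s = 3.0e8    # approximate speed of light in vacuum
task_latency_s = 1.0e-4         # one tenth of a millisecond

max_radius_km = signal_speed_m_per_s * task_latency_s / 1000.0
print(f"transmission-only radius: {max_radius_km:,.0f} km")          # ~30,000 km

earth_mean_radius_km = 6371.0   # for comparison
print(f"multiple of Earth's radius: ~{max_radius_km / earth_mean_radius_km:.1f}x")
```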

Individual neurons can fire up to some 200 times per second, depending on the type of function carried out, and an average neuron in the brain connects to on the order of 1,000 other neurons through complex dendritic branching and the synaptic connections they lead to, with some neurons connecting to as many as 10,000 others and more. I assume that artificial networks can grow to that level of interconnected connectivity and more too, with levels of involved nodal connectivity, brought into any potentially emergent artificial intelligence activity that might arise in such a system, that match and exceed the brain’s complexity there too. That, at least, is likely to prove true for what would with time become the all but myriad organizing and managing nodes that would arise in at least functionally defined areas of this overall system, and that would explicitly take on middle and higher level SCADA-like command and control roles there.

This would slow down the actual signal transmission rate achievable, and reduce the maximum physical size of the connected network space involved here too, though probably not as severely as is observed in the brain. Even today’s low cost, readily available laptop computers can carry out on the order of a billion operations per second and more, and that number continues to grow as Moore’s law continues to hold. So if we assume “slow” and lower priority tasks as well as more normatively faster ones for the artificial intelligence network systems that I write of here, it is hard to imagine restrictions that might realistically arise that would effectively limit such systems to volumes of space smaller than the Earth as a whole, certainly when of-necessity higher speed functions and activities could be carried out by much more local subsystems, closer to where their outputs would be needed.

And to increase the expected efficiencies of these systems, brain and artificial network alike, effectively re-expanding their functional radii again, I repeat and invoke a term and a design approach that I used in passing above: parallel processing. That, together with the inclusion of specialized subtask-performing nodes, is where breaking up a complex task into smaller, faster-to-complete subtasks, whose individual outputs can be combined into a completed overall solution or resolution, can speed up overall task completion by orders of magnitude for many types of tasks, allowing more of them to be carried out within any given nominally expected benchmark time for a “single” task completion. This of course also allows for faster completion of larger tasks within that type of performance-measuring timeframe window too.
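One conventional way to quantify that tradeoff, not named in the text above but standard in the parallel computing literature, is Amdahl's law: the achievable speedup is limited by whatever fraction of the task cannot be split into parallel subtasks.

```python
# Amdahl's law: the speedup from spreading the parallelizable fraction p of a task
# across n workers, with the remaining (1 - p) share necessarily run serially.

def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    for n in (10, 1000):
        print(f"parallel fraction {p:.2f}, {n:>4} workers -> {speedup(p, n):6.1f}x")

# Even with unlimited workers, the speedup can never exceed 1 / (1 - p).
```

This is also one way to see why specialized, locally placed subtask nodes matter: they shrink the serial, non-parallelizable share of the work.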

• What I have done here, at least in significant part, is to lay out an overall maximum connected-systems reach that could be applied to the completion of tasks at hand, in either a human brain or an artificial intelligence-including network. And the limitation on accessible volume of space there correspondingly sets an outer limit to the maximum number of functionally connected nodes that might be available, given that they all of necessity have space filling volumes that are greater than zero.
• When you factor in the average maximum processing speed of any information processing nodes or elements included there, this in turn sets an overall maximum, outer-limit value on the number of processing steps that could be applied in such a system to complete a task of any given duration, within such a physical volume of activity (see the sketch just below).
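As a toy formalization of those two bullet points, with every numeric input an arbitrary placeholder chosen only to show how the bound composes, not a claim about any real system:

```python
# A rough upper bound on the processing steps available to a distributed system,
# given a maximum signal-reach radius, a minimum per-node volume, and a per-node
# processing rate. Every input below is an arbitrary illustrative placeholder.

import math

def max_total_steps(radius_m: float, node_volume_m3: float,
                    ops_per_node_per_s: float, task_duration_s: float) -> float:
    reachable_volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    max_nodes = reachable_volume_m3 / node_volume_m3   # nodes occupy nonzero volume
    return max_nodes * ops_per_node_per_s * task_duration_s

# Example: 30,000 km radius, one liter per node, 1e9 ops/s per node, 0.1 ms task:
print(f"{max_total_steps(3.0e7, 1.0e-3, 1.0e9, 1.0e-4):.2e} total steps")
```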

What are the general principles beyond that set of observations that I would return to here, given this sixth scenario? I begin addressing that question by noting a basic assumption that is built into the first five scenarios as offered in this series, and certainly into the first four of them: that artificial intelligence per se resides as a significant whole in specific individual nodes. I fully expect that this will prove true in a wide range of realized contexts, as that possibility is already becoming a part of our basic reality now, with the emergence and proliferation of artificial specialized intelligence agents. But as this posting’s sixth scenario points out, that assumption is not the only one that might be realized. And in fact it will probably only account for part of what will come to be seen as artificial intelligence as it arises in these overall systems.

The second additional assumption that I would note here is that of scale and complexity, and how fundamentally different types of implementation solutions might arise, and might even be possible, strictly because of how they can be made to work with overall physical systems limitations such as the fixed and finite speed of light.

Looking beyond my simplified examples as outlined here, brain-based and artificial alike: what is the maximum effective radius of a wired AI network that would, as a distributed system, come to display true artificial general intelligence? How big a space would have to be tapped into for its included nodes to match a presumed benchmark human brain performance for threshold to cognitive awareness and functionality? And how big a volume of functionally connected nodal elements could be brought to bear for this? Those are open questions, as are their corresponding scale parameter questions for “natural” general intelligence per se. I would end this posting by simply noting that disruptively novel new technologies and technology implementations that significantly advance the development of artificial intelligence per se, and of artificial general intelligence in particular, are likely to improve both the quality and functionality of the individual nodes involved, regardless of which overall development scenarios are followed, and their capacity to synergistically network together.

I am going to continue this discussion in a next series installment where I will step back from considering specific implementation option scenarios, to consider overall artificial intelligence systems as a whole. I began addressing that higher level perspective and its issues here, when using the scenario offered in this posting to discuss overall available resource limitations that might be brought to bear on a networked task, within given time-to-completion restrictions. But that is only one way to parameterize this type of challenge, and in ways that might become technologically locked in and limited from that, or allowed to remain more open to novelty – at least in principle.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Addendum note: The above presumptive end note added at the formal conclusion of this posting aside, I actually conclude this installment with a brief update to one of the evolutionary development-oriented examples that I in effect began this series with. I wrote in Part 2 of this series of a biological evolution example of what can be considered an early technology lock-in, or rather a naturally occurring analog of one: an ancient biochemical pathway that is found in all cellular life on this planet, the pentose shunt.

I add a still more ancient biological systems lock-in example here that in fact had its origins in the very start of life itself as we know it, on this planet. And for purposes of this example, it does not even matter whether the earliest genetic material employed in the earliest life forms was DNA or RNA in nature for how it stored and transmitted genetic information from generation to generation and for how it used such information in its life functions within individual organisms. This is an example that would effectively predate that overall nucleic acid distinction as it involves the basic, original determination of precisely which basic building blocks would go into the construction and information carrying capabilities of either of them.

All living organisms on Earth, with a few viral exceptions, employ DNA as their basic archival genetic material, and use RNA as an intermediary in accessing and making use of the information stored there. Those exceptional viruses use RNA for their own archival genetic information storage, and make use of the biosynthetic machinery of the host cells they live in to reproduce. And the genetic information included in these systems, certainly at the DNA level, is all encoded in patterns of molecules called nucleotides that are laid out linearly along the DNA molecule. Life on Earth uses combinations of four possible nucleotides for this coding and decoding: adenine (A), thymine (T), guanine (G) and cytosine (C). And it was presumed, at least initially, that the specific chemistry of these four possibilities made them somehow uniquely suited to this task.

More recently it has been found that there are other possibilities that can be synthesized and inserted into DNA-like molecules, with the same basic structure and chemistry, that can also carry and convey this type of genetic information and stably, reliably so (see for example:

Hachimoji DNA and RNA: a genetic system with eight building blocks.)

And it is already clear that this only indicates a small subset of the information coding possibilities that might have arisen as alternatives, before the A/T/G/C genetic coding became locked in, in practice, in life on Earth.
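A simple way to quantify what an expanded alphabet buys, information-theoretically: the eight-letter count comes from the Hachimoji work cited above, and the rest is just arithmetic.

```python
# Information carried per position by a genetic "alphabet", as a function of the
# number of distinct building blocks available to it.

import math

for letters, label in ((4, "standard A/T/G/C"), (8, "Hachimoji-style eight-letter")):
    bits_per_position = math.log2(letters)
    distinct_10mers = letters ** 10
    print(f"{label}: {bits_per_position:.0f} bits per position, "
          f"{distinct_10mers:.2e} distinct 10-base sequences")
```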

If I could draw one relevant conclusion from this still unfolding story to share here, it is that if you want to find technology lock-ins, or their naturally occurring counterparts, look to your most closely and automatically held developmental assumptions, and certainly where you cannot rigorously justify them from first principles. Then question the scope of relevance and generality of your first principles there, for hidden assumptions that they carry within them.

Some thoughts concerning a general theory of business 29: a second round discussion of general theories as such, 4

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on June 11, 2019

This is my 29th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-28.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions. Then I began a second round, next step discussion of general theories per se in Parts 26-28 of this, building upon my initial discussion of general theories per se, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon.

More specifically, I have used the last three postings to that progression to at least begin a more detailed analysis of axioms as assumed and assumable statements of underlying fact, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

I focused on issues of completeness and consistency in these types of theory-grounding systems in Part 28, and briefly outlined there how theories of the first of those two categorical types cannot be proven either fully complete or fully consistent if they can be expressed in an enumerable form of a type consistent with, and as such including, the axiomatic underpinnings of arithmetic: the most basic of all areas of mathematics, as formally and axiomatically laid out by Whitehead and Russell in their seminal work, Principia Mathematica.

I also raised and left open the possibility that the outside validation provided in axiomatically open bodies of theory, as identified above, might afford alternative mechanisms for de facto validation of completeness, or at least consistency in them, where Kurt Gödel’s findings as briefly discussed in Part 28, would preclude such determination of completeness and consistency for any arithmetically enumerable axiomatically closed bodies of theory.

That point of conjecture began a discussion of the first of a set of three basic, and I have to add essential, topic points that would have to be addressed in establishing any attempted-comprehensive body of theory: the dual challenges of the scope and applicability of completeness and consistency per se as organizing goals, certainly as they might be considered in the context of more general theories. And that has left these two here-repeated follow-up points for consideration:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles there per se?
• Then, after addressing that set of issues, I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

My goal for this series installment is to at least begin to address the first of those two points and its issues, adding to my already ongoing discussion of completeness and consistency in complex axiomatic theories while doing so. And I begin by more directly and explicitly considering the nature of outside-sourced, a priori empirically or otherwise determined observations and the data that they would generate, that would be processed into knowledge through logic-based axiomatic reasoning.

Here, and to explicitly note what might be an obvious point of observation on the part of readers, I would as a matter of consistency represent the proven lemmas and theorems of a closed body of theory, such as a body of mathematical theory, as proven and validated knowledge as based on that theory. And I correspondingly represent still-open, unproven or unrefuted theoretical conjectures, as they arise and are proposed in those bodies of theory, as potentially validatable knowledge in those systems. And having noted that point of assumption (presumption?), I turn to consider open systems, as for example would be found in theories of science or of business, in what follows.

• Assigned values and explicitly defined parameters, as arise in closed systems such as mathematical theories with their defined variables and other constructs, can be assumed to represent absolutely accurate input data. And that, at least as a matter of meta-analysis, even applies when such data is explicitly offered and processed through axiomatic mechanisms as being approximate in nature and variable in range; approximate and variable are themselves explicitly defined, or at least definable in such systems applications, formally and specifically providing precise constraints on the data that they would organize, even then.
• But it can be taken as an essentially immutable axiomatic principle, one that cannot be violated in practice, that outside-sourced data that would feed into and support an axiomatically open body of theory is always going to be approximate for how it is measured and recorded for inclusion and use there, even when that data can be formally defined and measured without any possible subjective influence – when it can be identified and defined and measured in as completely objective a manner as possible, free of any bias that might arise depending on who observes and measures it (see the brief sketch that follows).
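As a minimal statistical sketch of that point: repeated, fully objective measurement narrows the uncertainty attached to an outside-sourced value, but never eliminates it (the numbers below are invented placeholders).

```python
# Repeated measurement of a quantity with unavoidable noise: the standard error
# shrinks as 1/sqrt(n) but never reaches zero. All values here are illustrative.

import math
import random

random.seed(0)
true_value, noise_sd = 10.0, 0.5

for n in (10, 1_000, 100_000):
    samples = [random.gauss(true_value, noise_sd) for _ in range(n)]
    estimate = sum(samples) / n
    standard_error = noise_sd / math.sqrt(n)
    print(f"n={n:>7,}: estimate {estimate:.4f} +/- {standard_error:.4f}")
```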

Can an axiomatically open body of theory somehow be provably complete or even just consistent for that matter, due to the balancing and validating inclusion of outside frame of reference-creating data such as experientially derived empirical observations? That question can be seen as raising an interesting at least-potential conundrum and certainly if a basic axiom of the physical sciences that I cited and made note of in Part 28 is (axiomatically) assumed true:

• Empirically grounded reality is consistent across time and space.

That, at least in principle, after all, raises what amounts to an immovable object versus irresistible force type of challenge. But as soon as the data that is actually measured, as based on this empirically grounded reality, takes on what amounts to a built-in and unavoidable error factor, I would argue that any possible outside-validated completeness or consistency becomes moot at the very least, and certainly for any axiomatically open system of theory that might be contemplated or pursued here.

• This means that when I write of selecting, framing and characterizing and using axioms and potential axioms in such a system, I write of bodies of theory that are of necessity always going to be works in progress: incomplete and potentially inconsistent and certainly as new types of incoming data are discovered and brought into them, and as better and more accurate ways to measure the data that is included are used.

Let me take that point of conjecture out of the abstract by citing a specific source of examples that is literally as solidly established as our more inclusive and more fully tested general physical theories of today. And I begin this with Newtonian physics, which was developed at a time when experimental observation was limited, both in the range of phenomena observed and in the levels of experimental accuracy attainable for what was observed and measured. That made it impossible to empirically record the types of deviation from expected sightings that would call for new and more inclusive theories, with new and altered underlying axiomatic assumptions, as subsequently arose in the special theory of relativity as found and developed by Einstein and others. Newtonian physics neither calls for nor accommodates anything like the axiomatic assumptions of the special theory of relativity: holding, for example, that the speed of light is constant in all frames of reference. More accurate measurements, taken over wider ranges of experimental examination of observable phenomena, forced change to the basic underlying axiomatic assumptions of Newton (e.g. his laws of motion). And further expansion of the range of phenomena studied, and of the level of accuracy with which data is collected from all of this, might very well lead to the validation and acceptance of still more widely inclusive basic physical theories, with any changes in what they would axiomatically presume included in their foundations there. (Discussion of alternative string theory models of reality, among other possibilities, comes to mind here, where experimental observational limitations of the types that I write of are such as to preclude any real culling and validating, to arrive at a best possible descriptive and predictive model theory.)
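One compact way to see how the newer axioms subsume the older ones as a limiting case (standard physics, offered here purely to illustrate the pattern being described) is to compare how velocities combine under the two sets of assumptions:

```latex
% Galilean (Newtonian) composition of velocities, for frames in relative motion
% at speed $v$:
\[
  u' = u + v
\]
% Special-relativistic composition, with $c$ the invariant speed of light:
\[
  u' = \frac{u + v}{1 + uv/c^{2}}
\]
% At everyday speeds the correction term $uv/c^{2}$ is immeasurably small and the
% two formulas agree; only more accurate measurement over a wider range of
% phenomena forces the change in underlying axioms.
```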

At this point I would note that I tossed a very important set of issues into the above text in passing, and without further comment, leaving it hanging over all that has followed it up to here: the issues of subjectivity.

Data that is developed and tested for how it might validate or disprove proposed physical theory might be presumed to be objective, as a matter of principle. Or alternatively, and as a matter of practice, it might be presumed possible to obtain such data that is arbitrarily close to being fully free from systematic bias, as based on who is observing and what they think about the meaning of the data collected. And the requirement that experimental findings be independently replicated by different researchers in different labs and with different equipment, certainly where findings are groundbreaking and unexpected, serves to support that axiomatic assumption as being basically reliable. But it is not as easy, or as conclusively presumable, to assume that type of objectivity for general theories that of necessity have to include within them individual human understanding and reasoning, with all of the additional and largely unstated axiomatic presumptions that this brings with it, as exemplified by a general theory of business.

That simply adds whole new layers of reason to any argument against presumable completeness or consistency in such a theory and its axiomatic foundations. And once again, this leaves us with the issues of such theories always being works in progress, subject to expansion and to change in general.

And this brings me specifically and directly to the above-stated topic point that I would address here in this brief note of a posting: the determination of which possible axioms to include and build from in these systems. And that, finally, brings me to the issues and approaches that are raised in a reference work that I have been citing in anticipation of this discussion thread for a while now in this series, and to an approach to the foundations of mathematics and its metamathematical theories that this and similar works seek to clarify if not codify:

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

I am going to more fully and specifically address that reference and its basic underlying conceptual principles in a next series installment. But in anticipation of doing so, I end this posting with a basic organizing point of reference that I will build from there:

• The more traditional approach to the development and elaboration of mathematical theory, going back at least as far as the birth of Euclidean geometry, was one of developing a set of axioms that would be presumed as if absolute truths, and then developing emergent lemmas and theorems from them.
• Reverse mathematics is so named because it literally reverses that, starting with theories to be proven and then asking what are the minimal sets of axioms that would be needed in order to prove them.
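Schematically, and as a hedged summary of the program Stillwell describes rather than a quotation from it: where the traditional direction runs from assumed axioms to proven theorems, reverse mathematics fixes a weak base theory and asks which added axiom a given theorem turns out to be equivalent to over that base.

```latex
% Traditional direction:    axioms  $\vdash$  theorem $T$.
%
% Reverse mathematics: fix a weak base theory $B$ (typically RCA$_0$) and seek an
% added axiom $A$ such that
\[
  B + A \vdash T
  \qquad\text{and}\qquad
  B + T \vdash A ,
\]
% i.e. $T$ and $A$ are equivalent over $B$, identifying $A$ as the minimal
% additional assumption that $T$ requires.
```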

My goal for the next installment to this series is to at least begin to consider both axiomatically closed and axiomatically open theory systems in light of these two alternative metatheory approaches. And in anticipation of that narrative line to come, this will mean reconsidering compendium models and how they might arise as the need for new axiomatic frameworks of understanding arises, and as established ones become challenged.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 15

This is my 15th installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-14.)

My goal for this installment is to reframe what I have been offering up to here in this series, and certainly in its most recent postings up to now. And I begin that by offering a very specific and historically validated point of observation (that I admit up-front will have a faulty assumption built into it, that I will raise and discuss later on in this posting):

• It can be easily and cogently argued that the single greatest mistake that the civilian and military leadership of a nation can make, when confronting and preparing for possible future challenge and conflict,
• Is to simply think along familiar lines, with that leading to their acting according to what is already comfortable and known – thinking through and preparing to fight a next war as if it would only be a repeat of the last one that their nation faced,
• And no matter how long ago that happened, and regardless of whatever geopolitical change and technological advancement might have taken place since then.
• Strategic and tactical doctrine and the logistics and other “behind the lines” support systems that would enable them, all come to be set as if in stone: and in stone that was created the last time around in the crucible of their last conflict. And this has been the basic, default pattern followed by most and throughout history.
• This extended cautionary note applies in a more conventional military context, where anticipatory preparation for proactively addressing threats is attempted, and when reactive responses to those threats are found necessary too. But the points raised here are just as cogently relevant in a cyber-conflict context, or in a mixed cyber plus conventional context (as Russia has so recently deployed in Ukraine, as its leadership has sought to restore something of its old Soviet era protective buffer zone around the motherland, if nothing else.)
• History shows more leaders and more nations that in retrospect have been unprepared for what is to come, than it does those who were ready to more actively consider and prepare for emerging new threats and new challenges, and in new ways.
• Think of the above as representing, in outline, a strategic doctrine that is based on what should be more of a widening of the range and scope of what is considered possible, and of how new possibilities might have to be addressed, but that by its very nature cannot be up to that task.

To take that out of the abstract, consider a very real world example of how the challenges I have just discussed, arise and play out.

• World War I with its reliance on pre-mechanized tactics and strategies, with its mass frontal assault charges and its horse cavalry among other “trusted traditions,” and with its reliance on trench warfare to set and hold front lines and territory in all of that:
• Traditions that had led to horrific loss of life even in less technologically enabled previous wars such as the United States Civil War,
• Arguably led to millions of what should have been completely avoidable casualties, as foot soldiers faced walls of machine gun fire and tanks, aircraft bombardment and aerial machine gun attack, and even poison gas attacks, as they sought to prevail through long-outmoded military practice.

And to stress a key point that I have been addressing here, I would argue that cyber attacks, both as standalone initiatives and as elements in more complex offensives, hold potential for causing massive harm to all sides involved in them. And proactively seeking to understand and prepare for what might come next there can be just as important as comparable preparation is in a more conventional warfare-oriented context. Think World War I, if nothing else there, as a cautionary note example of the possible consequences, in a cyber-theatre of conflict, of making the mistakes outlined in the above bullet-pointed preparation and response doctrine.

Looking back at this series as developed up to here, and through its more recent installments in particular, I freely admit that I have been offering what might be an appearance of taking a more reactive and hindsight-oriented perspective here. And the possibility of confusion there on the part of a reader begins in its Part 1 from the event-specific wording of its title, and with the apparent focus on a single now historical event that that conveys. But my actual overall intention here is in fact more forward thinking and proactively so, than retrospective and historical-narrative in nature.

That noted, I have taken an at least somewhat historical approach to what I have written in this series up to here and even as I have offered a few more general thoughts and considerations here too. But from this point on I will offer an explicitly dual narrative:

• My plan is to initially offer a “what has happened”, historically framed outline of at least a key set of factors and considerations that have led us to our current situation. That will largely follow the pattern that I have been pursuing here and certainly as I have discussed Russia as a source of working examples in all of this.
• Then I will offer a more open perspective that is grounded in that example but not constrained by it, for how we might better prepare for the new and disruptively novel and proactively so where possible, but with a better reactive response where that proves necessary too.

My goal in that will not be to second guess the decisions and actions of others, back in 2016 and leading up to it, or from then until now as of this writing. And it is not to offer suggestions as to how to better prepare for a next 2016-style cyber-attack per se, and certainly not as a repeat of an old conflict somehow writ new. To clarify that with a specific, currently in the news example: Russian operatives, and others who were effectively operating under their control for this, hacked Facebook leading up to the 2016 US presidential and congressional elections, using armies of robo-Facebook members: artifactual platforms for posting false content that were set up to appear as coming from real people, and from real American citizens in particular. Facebook has supposedly tightened its systems to better identify and delete such fake, manipulative accounts and their online disinformation campaigns. And with that noted, I cite:

In Ukraine, Russia Tests a New Facebook Tactic in Election Tampering.

Yes, this new approach (as somewhat belatedly noted above) is an arms race advancement meant to circumvent the changes made at Facebook as they have attempted to limit or prevent how their platform can be used as a weaponized capability by Russia and others as part of concerted cyber attacks. No, I am not writing here of simply evolutionary next step work-arounds or similar more predictable advances in cyber-weapon capabilities of this type, when writing of the need to move beyond simply preparing for a next conflict as if it would just be a variation on the last one fought.

That noted, I add that yes, I do expect that social media based disinformation campaigns will be repeated as an ongoing means of cyber-attack, both in old and in new forms. But fundamentally new threats will be developed and deployed too, that will not fit the patterns of anything that has come before. So my goal here is to take what might be learnable lessons from history, recent history and current events included, combined with a consideration of changes that have taken place in what can be done in advancing conflicts, and of trends in what is now emerging as new possibilities there, to at least briefly consider next possible conflicts and the next possible contexts that they might have to play out in. My goal for this series as a whole is to discuss Proactive as a process and even as a strategic doctrine, and in a way that at least hopefully would positively contribute to the national security dialog and offer a measure of value moving forward in general.

With all of that noted as a reframing of my recent installments to this series at the very least, I turn back to its Part 14 and how I ended it, and with a goal of continuing its background history narrative as what might be considered to be a step one analysis.

I wrote in Part 13 and again in Part 14 of Russia’s past as a source of the fears and concerns, that drive and shape that nation’s basic approaches as to how it deals with other peoples and other nations. And I wrote in that, of how basic axiomatic assumptions that Russia and its peoples and government have derived from that history, shape their basic geopolitical policy and their military doctrine for now and moving forward too. Then at the end of Part 14 I said that I would continue its narrative here by discussing Vladimir Putin and his story. And I added that that is where I will of necessity also discuss the 45th president of the United States: Donald Trump and his relationship with Russia’s leadership in general and with Putin in particular. And in anticipation of this dual narrative to come, that will mean my discussing Russia’s cyber-attacks and the 2016 US presidential election, among other events. Then, as just promised here, I will step back to consider more general patterns and possible transferable insights.

Then I will turn to consider China and North Korea and their current cyber-policies and practices. And I will also discuss current and evolving cyber-policies and practices as they are taking shape in the United States as well, as shaped by its war on terror among other motivating considerations. I will use these case studies to flesh out the proactive paradigm that I would at least begin to outline here as a goal of this series. And I will use those real world examples at least in part to in effect reality check that paradigmatic approach too, as I preliminarily offer it here.

And with that, I turn back to the very start of this posting, and to the basic orienting text that I begin all of the installments to this series with. I have consistently begun these postings by citing “cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations.” To point out an obvious example, I have made note of the internet of things 15 times now in this way, but I have yet to discuss it at all in the lines of discussion that I have been offering up to here. I do not even mention artificial intelligence-driven cyber-weaponization in that opening paragraph, where that is in fact one of the largest and most complex sources of new threats that have ever been faced at any time in history. And its very range and scope, and its rate of disruptively new development and advancement, will probably make it the single largest categorical source of weaponized threat that we will all face in this 21st century, and certainly as a source of weaponized capabilities that will be actively used. I will discuss these and related threat sources when considering the new and unexpected, as I elaborate on the above noted proactive doctrine that I offer here.

And as a final thought here, I turn back to my bullet pointed first take outline of that possible proactive doctrine, to identify and address the faulty assumption that I said I would build into it, and certainly as stated above. And I do so by adding one more bullet point to that initial list of them:

• I have just presented and discussed a failure to consider the New when preparing for possible future conflict, and its consequences. And I prefaced that advisory note by acknowledging the massive blind spot that I built into what I offered there. I have written all of the above strictly in terms of nations and their leaders and decision makers. That might be valid in a more conventional military sense but it is not and cannot be considered so in anything like a cyber-conflict setting, and for either thinking about or dealing with aggressors, or thinking about and protecting, or remediating harm to victims. Yes, nations can and do develop, deploy and use cyber-weapon capabilities, and other nations can be and have been their intended targets. But this is an approach that smaller organizations and even just skilled and dedicated individuals can acquire, if not develop on their own. And it is a capability that can be used against targets of any scale of organization, from individuals on up. That can mean attacks against specific journalists, or political enemies, or competing business executives or employees. It can mean attacks against organizations of any size or type, including nonprofits and political parties, small or large businesses and more. And on a larger than national scale, this can mean explicit attacks against international alliances such as the European Union. Remember, Russian operatives have been credited with sowing disinformation in Great Britain leading up to its initial Brexit referendum vote, to try to break that country away from the European Union and at least partly disrupt it. And they have arguably succeeded there. (See for example, Brexit Goes Back to Square One as Parliament Rejects May’s Plan a Third Time.)

If I were to summarize, and I add generalize, this first-draft and for now last bullet point addition to this draft doctrine, I would add:

• New, and the disruptively new in particular, breaks automatically presumed, unconsidered “axiomatic truths,” rendering them invalid moving forward. This can mean New breaking and invalidating assumptions as to where threats might come from and where they might be directed, as touched upon here in this posting. But more importantly, this can mean the breaking and invalidating of assumptions that we hold to be so basic that we are fundamentally unaware of them in our planning, until they are proven wrong in an active attack and a new but very real threat is realized in action. (Remember, as a conventional military historical example of that, how “everyone” knew that aircraft-launched anti-ship torpedoes could not be effectively deployed and used in shallow waters such as those found at Pearl Harbor – until, that is, they were.)

And with that, I will offer a book recommendation that I will be citing in upcoming installments to this series, adding it here in anticipation of doing so for anyone interested:

• Kello, L. (2017) The Virtual Weapon and International Order. Yale University Press.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 6

This is my 6th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-5.) And this is also my third posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4 and Part 5.)

I began discussing overall patterns of technology implementation in an advancing artificial intelligence agent context in Part 4, where I cited a set of possible scenarios that might significantly arise for that in the coming decades, for how artificial intelligence capabilities in general might proliferate, as originally offered in:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

And to briefly repeat from what I offered there in this context, for smoother continuity of narrative, I cited and began discussing those four possible scenarios (using Rose’s names for them) as:

1. Terminal world, in which most or even essentially all human/artificial intelligence agent interactions take place through the “glass slabs and painted pixels” of smart phone and other separating, boundary maintaining interfaces.
2. Prosthetics, in which a major thrust of this technology development is predicated upon human improvement, with the internalization of these new technology capabilities within us.
3. Animism, and the emergence of artificial intelligence ubiquity through the development and distribution of seemingly endless numbers of smart robotic and artificial intelligence-enabled nodes.
4. And Enchanted Objects, in which the once routine and mundane of our everyday life becomes imbued with amazing new capabilities. Here, unlike the immediately preceding scenario, focus of attention and of action takes place in specific devices and their circumstances that individually arise to prominence of attention and for many if not most people, where the real impact of the animism scenario would be found in a mass effect gestalt arising from what are collectively impactful, but individually mostly unnoticed smart(er) parts.

I at least briefly argued the case there for assuming that we will in fact come to see some combination of these scenarios arise in actual fact, as each at least contextually comes to the top as a best approach for at least some set of recurring implementation contexts. And I effectively begin this posting by challenging a basic assumption that I built into that assessment:

• The tacit and all but axiomatic assumption that enters into a great deal of the discussion and analysis of artificial intelligence, and of most other still-emerging technologies as well,
• That while the disruptively novel can and does occur as a matter of principle, it is unlikely to happen, and certainly not right now, in any given technology development context that is actively being pursued along some apparently fruitful current developmental path.

All four of the above repeated and restated scenario options have their roots in our here and now and its more readily predictable linear development moving forward. It is in the nature of the disruptively new and novel that it comes without noticeable warning and precisely in ways that would be unexpected. The truly disruptively novel innovations that arise come as if lightning out of a clear blue sky, and they blindside everyone affected by them, both for their unexpected suddenness and for their emerging impact as they begin to gain traction in implementation and use. What I am leading up to here is very simple, at least in principle, even if the precise nature of the disruptively new and novel limits our ability to foresee in advance the details of what is to come of that:

• While all of the first four development and innovation scenarios as repeated above, will almost certainly come to play at least something of a role in our strongly artificially intelligence-shaped world to come, we also have to expect all of this to develop and play out in disruptively new ways too, and both as sources of specific contextually relevant solutions for how best to implement this new technology, and for how all of these more context-specific solutions are in effect going to be glued together to form overall, organized systems.

I would specifically stress the two sides to that more generally and open-endedly stated fifth option here, that I just touched upon in passing in the above bullet point. I write here of more locally, contextually specific implementation solutions, here for how artificial intelligence will connect to the human experience. But I also write of the possibility that overarching connectivity frameworks that all more local context solutions would fit into, are likely going to emerge as disruptively new too. And with that noted as a general prediction as to what is likely to come, I turn here to at least consider some of the how and why details of that, that would lead me to make this prediction in the first place.

Let’s start by rethinking some of the implications of a point that I made in Part 4 of this series when first addressing the issues of artificial intelligence, and of artificial intelligence agents per se. We do not even know what artificial general intelligence means, at least at anything like an implementation-capable level of understanding. We do not in fact even know what general intelligence is per se and even just in a more strictly human context, at least where that would mean our knowing what it is and how it arises in anything like a mechanistic sense. And in fact we are, in a fundamental sense, still learning what even just artificial specialized and single task intelligence is and how that might best be implemented.

All of this still-present, significantly impactful lack of knowledge and insight raises the likelihood that all that we know and think that we know here, is going to be upended by the novel, the unexpected and the disruptively so – and probably when we least expect that.

And with this stated, I raise and challenge a second basic assumption that by now should be more generally disavowed, but that still hangs on. A few short decades from now, the billions of human online nodes: the human-operated devices and virtual devices that we connect online through, will collectively account for only a small fraction of the overall online connected universe: the overall connectiverse that we are increasingly living in. All of the rest: soon to be the vast majority of it, will be device-to-device in nature, and fit into what we now refer to as the internet of things. And pertinently to this discussion, that means that the vast majority of the connectedness touched upon in the above four (five?) scenarios is not going to be about human connectedness per se at all, except perhaps indirectly. And this very specifically leads me back to what I view as the real imperative of the fifth scenario: the disruptively new and novel pattern of overall connectivity that I made note of above, and certainly when considering the glue that binds our emerging overall systems together, with all of the overarching organizational implications that that option and possibility raises.

Ultimately, what works and both at a more needs-specific contextual level there, and at an overall systems connecting and interconnecting level, is going to be about optimization, with aesthetics and human tastes critically important and certainly for technology solution acceptance – for human-to-human and human-to-artificial intelligence agent contexts. But in a strictly, or even just primarily artificial intelligence agent-to-artificial intelligence agent and dumb device-to-artificial intelligence agent context, efficiency measures will dominate that are not necessarily human usage-centric. And they will shape and drive any evolutionary trends that arise as these overall systems continue to advance and evolve (see Part 3 and Part 5 for their discussions of adaptive peak models and related evolutionary trend describing conceptual tools, as they would apply to this type of context.)

If I were to propose one likely detail that I fully expect to arise in any such overall organizing, disruptively novel interconnection scenario, it is that the nuts and bolts details of the still just emerging overall networking system that I write of here will most likely reside and function at a level that is not explicitly visible, and certainly not to the human participants in it, unless they are directly connected into one of the contextual scenario solutions that arise and that are developed and built into it: human-to-human, human-to-device or intelligent agent, or device or agent-to-device or agent. And this overarching technology, optimized in large part by the numerically compelling pressures of device or agent-to-device or agent connectivity needs, will probably take the form of a set of universally accepted and adhered to connectivity protocols: rules of the road that are not going to be all that human-centric.
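
As a purely illustrative aside, the following minimal sketch shows the kind of machine-facing message envelope and version negotiation that such rules of the road might involve. Nothing here reflects any actual or proposed standard; every field name and the negotiation logic are assumptions of mine, offered only to make the idea of a protocol layer that human participants never directly see a little more concrete.

```python
# A minimal, purely illustrative sketch of a machine-facing connectivity layer.
# All field names, the version scheme and the negotiate() rule are assumptions,
# not any real protocol.

from dataclasses import dataclass, field
import json, time, uuid

@dataclass
class DeviceMessage:
    sender: str                      # device or agent identifier
    recipient: str                   # target device or agent identifier
    payload: dict                    # task-specific content
    protocol_version: str = "0.1"    # hypothetical version tag
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

    def encode(self) -> bytes:
        """Serialize to a wire format that no human ever needs to read."""
        return json.dumps(self.__dict__).encode("utf-8")

def negotiate(offered: set[str], supported: set[str]) -> str:
    """Pick the highest protocol version that both endpoints support."""
    common = offered & supported
    if not common:
        raise ValueError("no shared protocol version")
    return max(common)

if __name__ == "__main__":
    msg = DeviceMessage("thermostat-17", "hvac-controller-3",
                        {"reading_c": 21.5, "request": "report"})
    print(negotiate({"0.1", "0.2"}, {"0.2", "0.3"}))  # -> 0.2
    print(msg.encode()[:60], b"...")
```

The point of the sketch is simply that all of this negotiation and routing would happen between devices and agents, below the level of any human-facing interface.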

I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Some thoughts concerning a general theory of business 28: a second round discussion of general theories as such, 3

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on April 6, 2019

This is my 28th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26 and 27.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions.

Then I began a second round, next step discussion of general theories per se in Part 26 and Part 27, to add to the foundation that I have been discussing theories of business in terms of, and as a continuation of the Parts 1-8 narrative that I began all of this with. More specifically, I used those two postings to begin a more detailed analysis of axioms per se, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely comprised of their particular sets of those axiomatic assumptions as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically enclosed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

Any general theory of business, like any organized body of scientific theory, would fit the second of those basic patterns as discussed here and particularly in Part 27. My goal for this posting is to continue that line of discussion, with an increasing focus on the also-empirically grounded theories of the second type as just noted, and with an ultimate goal of applying the principles that I discuss here to an explicit theory of business context. That noted, I concluded Part 27 by stating that I would turn here to at least begin to examine:

• The issues of completeness and consistency, as those terms are defined and used in a purely mathematical logic context (see the formal statements offered just after this list) and as they would be used in any theory that is grounded in descriptive and predictive enumerable form. And I will use that more familiar starting point as a basis for more explicitly discussing these same issues as they arise in an empirically grounded body of theory too.
• How new axioms would be added into an already developing body of theory, and how old ones might be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles per se.
• Then after addressing that set of issues I said that I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, and with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.
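
To keep the mathematical logic usage of those first two terms concrete, here is a standard formal statement of them, offered purely as a reference point for what follows; the notation is conventional and is not drawn from any of the works cited below.

```latex
% For a formal system S with provability relation \vdash:
\begin{align*}
\text{Consistency:} \quad & \text{there is no sentence } \varphi \text{ such that } S \vdash \varphi \text{ and } S \vdash \neg\varphi.\\
\text{Completeness:} \quad & \text{for every sentence } \varphi \text{ in the language of } S,\ \text{either } S \vdash \varphi \text{ or } S \vdash \neg\varphi.
\end{align*}
```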

And I begin addressing the first of those points by citing two landmark works on the foundations of mathematics:

• Whitehead, A.N. and B. Russell. (1910-1913) Principia Mathematica (in 3 volumes). Cambridge University Press.
• And Gödel’s Incompleteness Theorems.

Alfred North Whitehead and Bertrand Russell set out, in their above-cited magnum opus, to develop and offer a complete, axiomatically grounded foundation for all of arithmetic, as the most basic of all branches of mathematics. And this was in fact viewed as a key step toward fulfilling the promise of David Hilbert: a renowned early 20th century mathematician who sought to develop a comprehensive and all-inclusive single theory of mathematics, in what became known as Hilbert’s Program. All of this was predicated on the validity of an essentially unchallenged metamathematical axiomatic assumption, to the effect that it is in fact possible to encompass arbitrarily large areas of mathematics, and even all of validly provable mathematics as a whole, in a single finite-scaled, completely consistent and completely decidable set of specific axiomatic assumptions. Then Kurt Gödel proved that even just the arithmetical system offered by Whitehead and Russell can never be complete in this sense, because it would of necessity carry in it an ongoing, unending requirement for adding in more new axioms to what is supportively presumed for it, if any real comprehensive completeness were to be pursued. And on top of that, Gödel proved that it can never be possible to prove with comprehensive certainty that such an axiomatic system is completely and fully consistent either! And this would apply to any abstract, enclosed axiomatic system that can in any way be represented arithmetically: any system whose axioms are effectively (computably) enumerable and that is expressive enough to encode basic arithmetic. But setting aside the issues of a body of theory facing this type of limitation simply because it can be represented in correctly formulated mathematical form for the findings developed out of its founding assumptions (where that might easily just mean larger and more inclusive axiomatically enclosed bodies of theory that do not depend on outside, non-axiomatic assumptions for their completeness or validity, as opposed to empirically grounded theories), what does this mean for explicitly empirically grounded bodies of theory, such as larger and more inclusive theories of science, or for purposes of this posting, of business?
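
For reference, the two Gödel results just invoked can be stated in their standard modern, informal form as follows; this phrasing is mine and is not a quotation from Gödel or from the Principia.

```latex
% For an effectively axiomatized formal system S that is consistent and
% interprets enough basic arithmetic:
\begin{itemize}
  \item \textbf{First incompleteness theorem:} there is a sentence $G_S$ such that
        $S \nvdash G_S$ and $S \nvdash \neg G_S$; that is, $S$ is incomplete.
  \item \textbf{Second incompleteness theorem:} $S \nvdash \mathrm{Con}(S)$, where
        $\mathrm{Con}(S)$ is the arithmetized statement asserting that $S$ is consistent.
\end{itemize}
```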

I begin addressing that question by explicitly noting what has to be considered the single most fundamental a priori axiom that underlies all scientific theory, and certainly all bodies of theory such as physics and chemistry that seek to comprehensively describe, descriptively and predictively, what in total would include the entire observable universe, from its big bang origins to now and into the distant future as well:

• Empirically grounded reality is consistent. Systems under consideration, as based at least in principle on specific, direct observation, might undergo phase shifts in which system-dominating properties take on more secondary roles and new ones gain such prominence. But that only reflects a need for more explicitly comprehensive theory that would account for, explain and explicitly describe all of this predictively describable structure and activity. Underlying that and similar at least seeming complexity, the same basic principles, and the same conceptual rules that encode them for descriptive and predictive purposes, hold true everywhere and throughout time.
• To take that out of the abstract, the same basic types of patterns of empirically observable reality that could be representationally modeled by descriptive and predictive rules such as Charles’ law or Boyle’s law (written out just below), would be expected to arise wherever such thermodynamically definable systems do. And the equations they specify would hold true, and with precisely the same levels and types of accuracy, wherever so applied.
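
For concreteness, those two gas laws take the standard textbook forms shown here; the notation is the conventional one and nothing in it is specific to this series.

```latex
% Boyle's law: at constant temperature and amount of gas,
P_1 V_1 = P_2 V_2 \qquad\text{(equivalently, } PV = \text{constant)}

% Charles' law: at constant pressure and amount of gas,
\frac{V_1}{T_1} = \frac{V_2}{T_2} \qquad\text{(equivalently, } \frac{V}{T} = \text{constant, with } T \text{ in kelvins)}
```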

So if an axiomatically closed, in-principle complete in and of itself axiomatic system, and an enclosed body of theory that would be derived from it (e.g. Whitehead’s and Russell’s theory of arithmetic) cannot be made fully complete and consistent, as noted above:

• Could grounding a body of theory that can be represented in what amounts to that same form, and as if a case in point application of it, in what amounts to a reality-check framework of empirical observation, allow for or even actively support a second possible path to establishing full completeness and consistency there? Rephrasing that, could the addition of theory framing and shaping outside-sourced evidence, or formally developed experimental or observational data, allow for what amounts to an epistemologically meaningful grounding for a body of theory, through the inclusion of an outside-validated framework of presumable consistency?
• Let’s stretch the point made by Gödel, or at least risk doing so where I still at least tacitly assume bodies of theory that can in some meaningful sense be mapped to a Whitehead and Russell type of formulation of arithmetic, through theory-defined and included descriptive and predictive mathematical models and the equations they contain. Would the same limiting restrictions as found in axiomatically enclosed theory systems as discussed here, also arise in open theory systems so linked to them? And if so, where, how and with what consequence?

As something of an aside perhaps, this somewhat convoluted question does raise an interesting possibility as to the meaning and interpretation of quantum theory, and of quantum indeterminacy in particular, with resolution to a single “realized” solution only arrived at when observation causes a set of alternative possibilities to collapse down to one. But setting that aside, and the issue of how this would please anyone who still adheres to the precept of number: of mathematics representing the true prima materia of the universe (as did Pythagoras and his followers), what would this do to anything like an at least strongly empirically grounded, logically elaborated and developed theory such as a general theory of business?

I begin to address that challenge by offering a counterpart to the basic and even primal axiom that I just made note of above, and certainly for the physical sciences:

• Assume that a sufficiently large and complete body of theory can be arrived at,
• That would have a manageable finite set of underlying axiomatic assumptions that would be required by and sufficient to address any given empirically testable contexts that might arise in its practical application,
• And in a manner that at least for those test case purposes would amount to that theory functioning as if it were complete and consistent as an overall conceptual system.
• And assume that this reframing process could be repeated as necessary, when for example disruptively new and unexpected types of empirical observation arise.

And according to this, new underlying axioms would be added as needed, and particularly when an observer is faced with truly novel, disruptively unexpected findings or occurrences – of a type that I have at least categorically raised and addressed throughout this blog up to here, in business systems and related contexts. And with that, I have begun addressing the second of the three to-address topics points that I listed at the top of this posting:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity or discarded as axioms per se?
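
As a purely schematic illustration of what that kind of axiom revision could look like as a process, consider the following sketch. The class names, the contradiction test and the swan example are all illustrative assumptions of mine, not anything drawn from the theory of business being developed in this series; the point is only to show an axiom being demoted to a special case rule when a disruptively unexpected observation arrives.

```python
# A schematic sketch of an axiomatically open body of theory that narrows or
# retires an axiom when an unexpected observation contradicts it. Purely
# illustrative; the contradiction test and names are assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Axiom:
    name: str
    holds_for: Callable[[dict], bool]   # does the axiom's prediction fit this observation?
    scope: str = "general"              # may be narrowed to a special-case rule

class OpenTheory:
    def __init__(self, axioms: list[Axiom]):
        self.axioms = axioms

    def assimilate(self, observation: dict) -> None:
        for axiom in list(self.axioms):
            if not axiom.holds_for(observation):
                # Option 1: narrow the old axiom into a special-case rule...
                axiom.scope = f"special case (fails for {observation.get('context', 'new data')})"
                # ...Option 2 (not shown): discard it outright and add a
                # replacement axiom that accounts for the new finding.
                print(f"Reframed axiom '{axiom.name}' -> {axiom.scope}")

# Toy usage: an axiom that all observed swans are white, confronted with a black swan.
theory = OpenTheory([Axiom("all swans are white",
                           lambda obs: obs.get("color") == "white")])
theory.assimilate({"context": "Australia", "color": "black"})
```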

I am going to continue this line of discussion in a next series installment, beginning with that topics point as here-reworded. And I will turn to and address the third and last point of that list after that, turning back to issues coming from the foundations of mathematics in doing so too. (And I will finally turn to and more explicitly discuss issues raised in a book that I have been citing in this series but have not yet more formally addressed, and that has been weighing on my thinking of the issues discussed here:

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.)

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 12: some thoughts concerning how Xi and Trump approach and seek to follow an authoritarian playbook 3

Posted in book recommendations, macroeconomics, social networking and business by Timothy Platt on March 17, 2019

This is my third posting to explicitly discuss and analyze an approach to leadership that I have come to refer to as the authoritarian playbook. See Part 1 and Part 2 of this progression of series installments, and a set of three closely related postings that I offered immediately prior to them that focus in on one of the foundational building blocks to the authoritarian playbook approach as a whole: the cult of personality, with its Part 1 focusing on Donald Trump and his cult-building efforts, Part 2 focusing on Xi Jinping’s efforts in that direction, and Part 3 stepping back to consider cults of personality in general.

I at least briefly outlined a set of tools and approaches to using them in my second installment on the authoritarian playbook itself as just cited above, that in effect mirrors how I stepped back from the specifics of Trump and Xi in my third posting on cults of personality, to put their stories into an at least hopefully clearer and fuller context. And I continue this now-six posting progression here with a goal of offering a still fuller discussion of this general approach to leadership, while also turning back to more fully consider Donald Trump and Xi Jinping themselves as at least would-be authoritarians.

I begin this posting’s discussion on that note by raising a detail that I said that I would turn to here, but that might at first appear to be at least somewhat inconsistent too, and certainly when measured up against my discussions of cults of personality and the authoritarian playbook as a largely unified and consistent vision and approach.

I have presented authoritarians and would-be authoritarians as striding forth in palpably visible self-confidence to proclaim their exceptionalism and their greatness, and to proclaim that they are the only ones who could possibly lead and save their followers: their people, from the implacable, evil enemies that they face. This means these at least would-be leaders building their entire foundation for leadership on trust, and on its being offered on an ongoing basis: a leader’s trust that if they build their cults of personality effectively and if they take the right steps in the right ways in pursuing and wielding power, others: their followers, will trust and follow them. And this also arguably has to include their own trust that these, their followers, will consistently remain true to them in their own beliefs too. Then I ended my second playbook-outlining posting by raising a crucial form of doubt that enters into and in fact informs all of this as it actually plays out:

• Trust, or rather the abnegation of even a possibility of holding trust in anyone or anything outside of self for a would-be authoritarian.

Ultimately, I would argue that an authoritarian’s trust in their followers, or in anyone else outside of themselves, rings as hollow as the cult of personality mask that they wear. And raising that point of observation, I turn to at least briefly reconsider the basic tools to this leadership playbook itself again.

I said at the end of my second playbook installment that I would turn back to reconsider Xi and Trump as specific case in point examples here and do so starting with Xi and his story in order to take what follows out of the abstract, leaving it imbued with a real, historically knowable face. And I begin that by returning to a crucially important point that I made when discussing Xi’s cult of personality and how he has sought to build one about himself: the trauma that befell his father and through that, himself and the rest of his immediate family too.

I began writing of Xi Jinping’s father, Xi Zhongxun, in my discussion of the son’s effort to create a cult of personality around himself. Mao Zedong launched his Cultural Revolution out of fear that he was losing control of his revolution, as other, competing voices sought to realize the promise of liberation from China’s nobility-led, serf peasant society. He had opened this counteroffensive against what he saw as a challenge to his ultimate authority with his promise to listen and include: his let a hundred flowers bloom promise. But then he pulled back, and all who did speak up, all who did offer their thoughts as to how and where the Communist revolution should go from there, were swept up as revisionists, declared by zealot cadres to be enemies of the state and of the people.

Many were so caught up, with academics and members of China’s intelligentsia targeted in large numbers as particular threats to Mao’s vision and to the revolution that he was leading through his cult of personality. And even people who had served Mao well and from early on in the revolution, from the long days of the Long March with all of its risks and uncertainties, were caught up in this. Xi Zhongxun was a loyal follower of Mao and from the beginning. As such he was elevated into Chinese Communism’s new emerging proletarian nobility as Mao succeeded and took power. Xi Jinping, his son, was raised as a member of China’s new Crown Prince Party and as an heir apparent to his father’s social and political stature there. And then Mao turned on them, just as he had turned on so many before them, and all fell into chaos, with his father hauled up for public ridicule before screaming hordes in Cultural Revolution struggle sessions: public appearances in which the accused were beaten and ridiculed and forced to publicly confess to whatever crimes they were being accused of at that moment.

Many who faced those gauntlets of ridicule and torture did not survive them and certainly when they were repeatedly subjected to these assaults. Zhongxun did survive, as he did confess and confess and confess. So he and his family, his young son Jinping included, were sent to a distant and isolated peasant community for reeducation.

One of the lessons that Jinping learned was that his survival meant his becoming more orthodox in his Communist purity than anyone else, and more actively ambitious in advancing through the Party ranks for his purity and reliability of thought and action. But a second, and more important lesson learned from all of this, and certainly for its long-term impact was that Xi learned to never trust anyone else and certainly in ways that might challenge his position and security, or his rising power and authority. I have written here in this progression of postings, of cults of personality as masks. Xi learned and both early on and well in this, to smile and to fit in and to strive to be the best and to succeed and with that expression of approval and agreement always there on his face. He learned to wear a mask, and to only trust himself. And his mask became the basis of his cult of personality as he advanced along the path of fulfilling his promise to himself and yes, to his father, and as he rose through the ranks in the Party system that had so irrevocably shaped his life and certainly starting with his father’s initial arrest.

How did this pattern arise and play out for a young Donald Trump? Turn to consider his father for his pivotal role in shaping his son too.

• What happens when only perfection is acceptable, when that is an always changing and never achievable goal, and when deception and duplicity offer the closest approach to it that can actually be achieved?
• What happens to a young impressionable son when winning is the only acceptable outcome, ever, and when admitting weakness or defeat can only lead to dire humiliating consequences?
• Fear comes to rule all, and it is no accident that an adult Donald Trump sees fear and instilling it in others as his most powerful tool and both in his business and professional life, and apparently in his personal life too. (See:
• Woodward, B. (2018) Fear: Trump in the White House. Simon & Schuster, for a telling account of how Donald Trump uses this tool as his guiding principle when dealing with others.)
• And genuine trust in others, or trustworthiness towards others, becomes impossible, with those who oppose The Donald on anything considered as if existential threats and enemies, and those who support and enable him considered disposable pawns and gullible fools to be used and then discarded.

Cults of personality are masks and ultimately hollow ones for those who would pursue the autocratic playbook. And ultimately so is the promise of the autocrat and their offer of better for those who would follow them. And that is why they have to so carefully and assiduously grab and hold onto power, using the other tools of the playbook to do so.

I have focused in this posting on trust and on how it does and does not arise and flow forward towards others. Turning back to my initial comments on this, as offered here: a would-be authoritarian, a would-be tyrant or dictator calculatingly develops and promotes his cult of personality with a goal of gaining trust and support from as large and actively engaged a population of supporters as possible. So they calculatingly seek to develop and instill trust in themselves, in others. But ultimately they do not, and in a fundamental sense cannot, trust any of those others and certainly not as individuals. The closest they can come to achieving that is to develop a wary trust in the more amorphous face of their followers, as nameless markers in larger demographics. And that, arguably, just means their trusting themselves for their capability to keep those individually nameless and faceless members of the horde in line.

I am going to continue this narrative in a next series installment where I will turn to consider legacies and the authoritarian’s need to build what amounts to monuments to their glory that they might never be forgotten. In anticipation of that discussion to come I will argue that while the underlying thought and motivation that would enter into this for any particular authoritarian might be complex, much if not most of that is shaped at least as a matter of general principles by two forces: fear, and a desire to build for permanence and with grandiosity driving both sides to that. And for working examples, I will discuss Trump’s southern border wall ambitions, and his more general claims to seek to rebuild the American infrastructure, and Xi’s imperially unlimited infrastructure and related ambitions too.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, and at Ubiquitous Computing and Communications – everywhere all the time and Social Networking and Business 2.

On the importance of disintermediating real, 2-way communications in business organizations 14

Posted in book recommendations, social networking and business, strategy and planning by Timothy Platt on March 4, 2019

This is my 14th installment to a brief series on coordinating information sharing and communications needs, and information access filtering and gate keeping requirements (see Social Networking and Business 2, postings 275 and loosely following for Parts 1-13.)

I began working my way through a briefly stated to-address topics list in Part 12 that I repeat here for smoother continuity of narrative, as I continue that process:

1. Reconsider the basic issues of communications and information sharing and their disintermediation in light of the trends and possibilities that I have been writing of in this series, and certainly since its Part 6 where I first started to more explicitly explore insider versus outside employee issues here.
2. Begin that with a focus on the human to human communications and information sharing context.
3. And then build from that to at least attempt to anticipate a complex of issues that I see as inevitable challenges as artificial agents develop into the gray area of capability that I made note of earlier in this series (n.b. in Part 11). More specifically, how can and should these agents be addressed and considered in an information communications and security context? In anticipation of that line of discussion to come, I will at least raise the possibility there, that businesses will find themselves compelled to confront the issues of personhood and of personal responsibility and liability for gray area artificial agents, and early in that societal debate. And the issues that I raise and discuss in this series will among other factors, serve as compelling bases for their having to address that complex of issues.

I have offered an at least starting point discussion of the first of those points in Part 12, along with offering a general framework (in Part 13) for more fully considering the above Points 2 and 3. My goal here is to at least begin to more fully explore the issues and at least a few of the complexities of Points 2 and 3. And I do so beginning with the already known, if rapidly changing arena of Point 2 and the human to human context.

• People: human people are essentially always presumed at least by default to be free agents who are capable of independent reasoning and decision making. This assumption enters into determinations of legal responsibility, and of moral and ethical and interpersonal responsibility in general.
• That does not mean that all people are, or are always assumed to be living and functioning in contexts and circumstances that would afford them with a range of achievable options for what they would do, as conceived by them on the basis of such reasoning. That does not mean that they would always have achievable options or opportunities for fulfilling their preferences or even just their basic needs as they understand and conceive them. Outside circumstantial constraints can and do limit range of action and even entirely so at times. But people are in general presumed – and once again as an initial default assumption if nothing else, to be able to think, reason and understand as independently capable beings and even if they do not face realistic opportunity to carry through on their considered and preferred plans and intentions.
• Stepping back from a more constrained business context, to explore exceptions to the above presumptions in general: the classical exceptions to this as a default starting point arise when individuals have been found to be mentally ill or otherwise cognitively limited, as for example from short-term anesthesia during and immediately after surgery, or from longer term dementia or other medically defined brain injury.
• I raised the specter in Part 13, of directed and even highly focused psychopharmaceutical and related interventions, as are now becoming possible, as next-step threats to any such presumable independence and individuality. But I have to acknowledge here that this only represents a next stage development in an already evolved and developed coercive potential. I cite by way of admittedly extreme example, brainwashing and its long and established history, with such coercive interventions often labeled in more bowdlerized terms with names such as re-education. These are the tools of authoritarian systems that seek to control both the dialog and all that might arise from it, by absolutely controlling any and all possible participants.
• I also cite, by way of already actively acknowledged example, the Stockholm syndrome with its perception and judgment reshaping impact.
• My point here, still approaching these issues in their more general context, is that outside influence and even coercive, options limiting and eliminating forces and influences can and do arise and occur already. And they can be crudely overt or they can be subtle and less easily defined for their scope and influence, leading to impact that can be hard to fully identify let alone specifically challenge.
• Consider as a working example there, entire communities and even large parts or more of entire nations that have very limited, single perspective news coverage, and with strict editorial control limiting what they can and cannot be exposed to as children and young adults in school, in the textbooks that they would see and learn from and in classroom discussion allowed. This challenge can even arise, and certainly in specific areas of potential thought and discussion in “advanced” nations and democracies. I cite by way of example, politically mandated and enforced restrictions against teaching about evolution and other ideology-challenging science topics in public schools, and equally stridently stated pressures against sex education in the classroom. This same stultifying, information and communications limiting approach can be and in at least some places is actively pursued in how history is presented and taught, and in the teaching of essentially any and every other topic that some in positions of authority would find offensive or challenging to their own belief systems and their own power prerogatives.
• So returning to the first bullet point of this list, my starting statement that “people are essentially always presumed at least by default to be free agents who are capable of independent reasoning and decision making” might be true, but only if those individuals are free from outside coercive pressure, and if they have free and unencumbered access to as wide a range of information and opinion as possible.
• And with that, and as an intentionally sobering note, I cite a cautionary reference that I initially offered in a more strictly political context, leading up to the 2016 US presidential elections: Thinking Through the Words We Use in Our Political Monologs. I wrote there of epistemic bubbles, and the barriers to exposure to differing perspective or challenging fact that they create. The very enabling online tools that in principle would open our eyes and our minds to wider possibilities and remove our decision making blinders, can become among the principal sources of the limiting blinders that we face today. And these at least in part self-imposed blinders are only going to become more and more pressingly important as we proceed into this 21st century.

What I am doing here is to reframe the above repeated Point 2 in terms of the challenges that we all face in having effective access to the information that we need, with all of its breadth and diversity of source and perspective. And I write here of our coming to have the wherewithal to be well enough and widely enough informed to be able to communicate as effectively as possible, and with a real awareness of the limitations that we might still face in that, where we may have to look further in gathering in the essential background information that we would require going into the decisions we reach and the actions that we would seek to pursue.

I have just offered a wide ranging and I have to add a dire outline of some of the coercive and limiting possibilities that we all potentially face, in my bullet pointed notes leading up to here. I justify that inclusiveness here, at least in part by paraphrasing a well known and variously stated sentiment: “no one can be freer than he or she who is most enslaved.”

As for addressing this wider ranging set of problems and potential problems in this more constrained and limited business communications system discussion, I have seen open and collaborative businesses and authoritarian ones that rule from above and by threat and by following through on that threat through coercive action. And history has seen significant levels of coercive pressure applied in business settings, and toxically so, as well as a great many positive and affirmative examples that would validate the fact that such coercive force need not be resorted to in order to achieve real success: success that can in fact come to accrue to all involved parties.

See:

• Freeman, J.A. (2018) Behemoth: a history of the factory and the making of the modern world. W.W. Norton & Company.

for a historically framed narrative of factories, and of giant factories and factory systems, as a source of specific examples of how decision making and understanding-limiting pressures, and their outcomes, can arise, with that reference work focusing primarily on the challenges faced there. I have to add with that in mind, that while I have worked with a significant number of businesses that encourage, support and enable their hands-on staff and managers, I have also worked with at least a few businesses where I have seriously come to wonder if some of the managers and others who I worked with there were at least in part victims of the Stockholm syndrome, or at least a milder form of it, from how they have in effect sought to justify what should be unacceptable behavior shown towards themselves and their colleagues.

I moved on quickly from such consulting assignments and certainly when I saw that I could not positively influence change in those business systems on this, but I do remember them as among the most pressingly important sources of lessons learned that I have experienced.

But setting aside overt thought control certainly, and the above cited Stockholm syndrome examples as well, as overly extreme examples for most real-world business management and business communications contexts, I will focus from here on in this narrative on the more gray area possibilities that I have also made note of in the above discussion-framing foundation note.

• As the current impasses that we all face serve to illustrate, and both in the United States and elsewhere in our being able to carry out meaningful political dialog, those limiting restrictions can and do have real impact too. And they impact upon our businesses and our work lives too, as well as every other aspect of our lives.

And with that offered I will turn in my next installment to this series, to consider a more normative if not always fully openly functional, strictly business systems and business communications context for the issues I have raised here. I will at least briefly explore that set of issues in its Point 2 human to human context in my next series installment, noting in anticipation of Point 3 discussion to come that:

• It is never going to be possible to achieve freer or more open and enabling possibilities when dealing with and accommodating the needs of genuinely artificial general intelligence agents, than we can achieve and maintain when dealing with other human people. (Here think of gray area agents as discussed in Part 11 and as noted again above, as being akin to special needs people for both their capabilities and their limitations.)
• And any limiting restrictions that we impose in that as humans acting upon other humans, will most likely come back to haunt us with time, when and as those artificial general intelligence agents arise and begin to work out how best for them to deal with human intelligences and our needs.

I will address at least a few visibly-likely issues regarding openness and inclusion in decision-related dialogs and the decision making process in discussion here to come, there focusing on issues that are essentially certain to rise up for their realized importance in the coming years. And then I will address Point 3, as noted above, there focusing in on a more business-specific context, just as I will in the next installment to this series in an above Point 2 context.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And also see Social Networking and Business 2 and that directory’s Page 1 for related material.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 5

This is my 5th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-4.)

I began discussing the basic, core issues of this series in its first four installments, in terms of two working examples: one drawn from human-developed technology and its development and evolution, and the other drawn from biological evolution with its vastly longer timeframes. Then at the end of Part 4 I finally turned to the issues of artificial intelligence systems, and to the still elusive but compelling goal of defining and developing a true artificial general intelligence – from where we are now in that ongoing process, where no one has even effectively, functionally defined what general intelligence per se is.

I would in fact take what might be considered a contrarian approach to thinking about and understanding that challenge, turning back to my human technology example of the preceding installments of this series, as a much simpler comparative example, as a starting point for what is to follow here.

• Current, as of this writing artificial intelligence systems designers and programmers are steadily learning more and more about simpler, artificial single function intelligence systems, and how to develop and improve on them and their defining algorithms and supportive database subsystems.
• And much of what is learned there from that effort, will likely prove of use when developing actual, working artificial general intelligence systems and capabilities, much as human brains with their (arguably) general intelligence capabilities are functionally and structurally comprised of large and even vast numbers of simpler, single specialty-function components – biologically evolved counterparts in principle at least, to what we are developing and using now in our simpler single algorithm-based artificial intelligence systems.
• The fact that we know that we do not know yet, how to take that next big leap to artificial general intelligence systems, and that we see and understand how limited our current simpler artificial intelligence systems are, even as we improve them, keeps us from prematurely locking in the wrong development decisions in an artificial general intelligence initiative, with the certain-to-emerge consequences that could become baked into any further development effort that might be made in implementing a “prematurely understood” general intelligence paradigm.
• In my simpler technology example, digital musical note encoding has become fixed in place with MIDI as a defining protocol, and for large areas of digital music as a whole. And the computer programmers and protocol designers who developed this coding protocol and who promoted it into becoming first A, and then The digital music standard, did not know the limits to what they knew when they did that. In particular, they did not fully enough know or understand world music: music from non-Western sources and heritages that cannot be parsed into the same notes, organized along the same scales, that would for example be found in the more traditional Western European and American music that they did know (see the brief illustration after this list).
• At least as setting a largely fixed industry-wide standard is concerned, MIDI’s developers and promoters did act prematurely.
• At least up to now, artificial intelligence systems developers have remained more open minded, and certainly as far as achieving and implementing artificial general intelligence is concerned and as such have not built in the at least categorically similar type of presumptions that have led to the MIDI that we have today.
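
To make that encoding limitation a little more concrete, here is a small illustrative sketch. It relies only on the standard MIDI convention that note numbers sit on a 12-tone equal temperament grid, with A4 assigned note 69 at 440 Hz; the quarter tone used in the example stands in for the kind of non-Western interval that the basic note-number scheme cannot represent directly and can only approximate (or handle through separate mechanisms such as pitch bend messages).

```python
# Basic MIDI note numbers are integers on a 12-tone equal temperament grid
# (A4 = note 69 = 440 Hz), so a pitch between those semitones has no direct
# note-number representation and must be snapped to the nearest note.

from math import log2

def midi_note_to_frequency(note: int) -> float:
    """Standard equal-temperament mapping: A4 = MIDI note 69 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def frequency_to_nearest_midi_note(freq_hz: float) -> int:
    """Snap an arbitrary pitch onto the nearest available MIDI note number."""
    return round(69 + 12 * log2(freq_hz / 440.0))

# A quarter tone above A4, an interval found in several non-Western traditions:
quarter_tone_hz = 440.0 * 2 ** (0.5 / 12)                  # roughly 452.9 Hz
nearest = frequency_to_nearest_midi_note(quarter_tone_hz)   # snaps to note 70 here
error_cents = 1200 * log2(quarter_tone_hz / midi_note_to_frequency(nearest))
print(f"nearest note {nearest}, off by {error_cents:+.1f} cents")  # about -50 cents
```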

I wrote in Part 3 of adaptive peak models as are used to represent the evolutionarily competitive relative fitness of differing biologically evolved or technologically developed options. Think of MIDI, as discussed here, as a highest point possibility at the peak of a less than highest possible “mountain” in a potentially larger adaptive landscape. Premature decision making and lock-in led to that. So far artificial intelligence systems development, or at least the race to develop true artificial general intelligence, has not fallen into that trap.
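
As a toy illustration of that adaptive peak idea, the following sketch shows a greedy, uphill-only search settling on whichever local peak is nearest its starting point, even when a higher peak exists elsewhere on the landscape. The landscape function, step size and starting points are arbitrary choices of mine and model nothing in particular; the sketch only makes the lock-in metaphor concrete.

```python
# A greedy, uphill-only search settles on the nearest local maximum,
# even when the landscape holds a higher peak elsewhere.

def fitness(x: float) -> float:
    # Two peaks: a modest one near x = 2 and a higher one near x = 8.
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - 0.5 * (x - 8) ** 2)

def greedy_climb(x: float, step: float = 0.1, iterations: int = 500) -> float:
    for _ in range(iterations):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:          # no uphill neighbor: stuck on this peak
            break
        x = best
    return x

print(greedy_climb(1.0))   # settles near x = 2, the lower peak (fitness about 3)
print(greedy_climb(6.0))   # settles near x = 8, the higher peak (fitness about 6)
```

Escaping the lower peak would require accepting at least temporarily worse positions, which is exactly what premature standardization makes difficult.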

This does not mean that real, sustained effort should not be made to functionally, operationally define and understand artificial general intelligence, or to attempt to build actual hardware and software-based systems that would implement that. It does mean that real effort should be made to avoid locking in, as if axiomatically so, premature technology development assumptions or the short-term solutions that they would lead to, as if they would hold their value long-term and even permanently so.

I continue this narrative with that, as of now, benevolent openness and uncertainty in mind, and as a still positive virtue. And I do so starting with a set of distinctions as to how smart and connected technology can be divided into four categories for how they connect to the human experience, for their input and output functionalities and for their use of big data, as developed by David Rose and as offered in:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

Rose parses out such technological possibilities into four possible futures as he envisions them, with each predicated on the assumption that one of four basic approaches would be built into the standard implementations of these artifactual capabilities moving forward, which he identifies as:

1. Terminal world, in which most or even essentially all human/artificial intelligence agent interactions take place through the “glass slabs and painted pixels” of smart phone and other device interfaces.
2. Prosthetics, in which a major thrust of this technology development is predicated upon human improvement.
3. Animism, and the emergence of artificial intelligence ubiquity through the development and distribution of seemingly endless numbers of smart robotic nodes.
4. And Enchanted Objects, in which the once routine and mundane of our everyday life becomes imbued with amazing new capabilities.

I see tremendous opportunity for positive development in all of these perhaps individually more stereotypic forms, and expect that all would have their roles, and even in a world that pushes the internet of things to its logical conclusion of connected everywhere animism. To be more specific there, even smart terminals that take the form of highly advanced and evolved smart phones would probably play a role there, as personalized connectivity organizers if nothing else – though they would probably not be awkwardly limiting handheld devices of the type we use today when serving in that expanded capacity for us, on the human side of this still just emerging world.

And this brings me back to the challenges of lock-in. What is eventually, and I add inevitably, going to become locked in for the what and how of artificial intelligence and artificial general intelligence will determine which of the above four possibilities, and which others, might actually arise and take hold, or rather what combinations of them will, and in what ways and in what contexts.

• The points that I raise here are going to prove to be fundamentally important as we proceed into the 21st century, and certainly as genuinely widely capable artificial general intelligence is developed and brought into being,
• And even just as our already actively emerging artificial specialized intelligence agents proliferate and evolve.

I am going to continue this discussion in a next series installment, with further discussion of Rose’s alternative futures and how they might arise and contribute to the future that we actually come to experience. And in anticipation of that narrative to come, I will at least begin a discussion of the likely roles that single function artificial specialized intelligence and artificial general intelligence might come to take and particularly as an emerging internet of things comes to redefine both the online world and our computational and communications systems that support it.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Some thoughts concerning a general theory of business 27: a second round discussion of general theories as such, 2

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on January 30, 2019

This is my 27th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and Reexamining the Fundamentals 2, Section IX for Part 26.) I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions. And then I began a second round, further discussion of general theories per se in Part 26 to add to the foundation I have been discussing theories of business in terms of, and as a continuation of the Parts 1-8 narrative that I began all of this with.

More specifically, and focusing here on my Section IX continuation of this overall series, my primary focus of attention in Part 26 was on the nature of axioms and of systems of them, as would form the basis for developing a more comprehensive body of theory. And I continue that discussion thread here too, beginning by explicitly noting and discussing a point of distinction that I only made in passing in Part 26 but that merits more explicit discussion here: the distinction between abstractly grounded theories and empirically grounded, evidence-based theories.

• Much of mathematics could arguably be offered as being abstract in its fundamental nature, and separate and distinct from any possible physical sciences or other empirically grounded applications as currently conceived, even if some fairly large areas of math that were initially viewed in that way have found their way into mathematical physics and related empirically grounded fields too.
• See Hardy, G.H. (1940) A Mathematician’s Apology for a telling discussion of how areas of mathematics that have traditionally been considered devoid of “practical” application can become anything but that. Note: the link offered here is to a full text electronic file version of the original print edition of this book, as so republished by the University of Alberta Mathematical Sciences Society.
• But I would hold up abstract mathematics as a source of axiom-based theory that is developed and elaborated free of outside constraints, at least as a matter of expectation: free for example from intrinsically having to be consistent with the patterns of behavior observed in empirically based systems of any sort, and certainly in its axiomatic assumptions.
• General theories of business fit into a special category of general theories per se, as they would be logically developed in accordance with deductive reasoning, as abstract theories are. But such theories are also bound by empirically observable data: real-world, observably validated fact and the inductive reasoning that its inclusion brings with it too.

I posit empirically grounded bodies of theory such as general theories of business this way, for a very specific reason. And my goal in this series, here, is to elaborate on that point of observation as a next step in this developing narrative. I begin addressing that here by more fully considering empirical data per se.

• Empirically derived data of the type that would enter into and inform a body of theory such as a general theory of business, is never going to be perfect.
• There is always going to be at least some measurement error in it, even when it is entered into theory-based calculations and related analyses as-is, as for example the way a temperature reading might be entered into a thermodynamics calculation, free of any preparatory calculations as might be made upon it.
• But the term “empirical data” is also more loosely interpreted at times, to include more pre-processed findings as well, that happen to have been initially based on observation.
• And observed data per se is not all the same as far as replicability and freedom from possible bias are concerned, given how it is arrived at. If you identify data such as the above-cited temperature readings as hard data, a body of theory such as a theory of business relies upon and is grounded in significant amounts of softer data too.

But even if all data entering into a theory of business and its application could be considered hard data, as loosely noted above, actually developing and using such a theory would still always involve the dynamic interplay of two sometimes opposing approaches and visions.

• On the one hand you have the axioms, and the theorem-based elaborations of them, that have already been established to at least a workable level of reliability for predictively describing at least aspects of the real, observable world: here business systems and aspects of them.
• And these theory-based systems would be developed according to the internal logic of their structure and content, in developing deductively derived testable predictions: hypotheses (as well as less directly testable Gedanken, or thought, experiments). This half of what amounts to a larger, dual-sided process follows the pattern of the abstractly-based theories noted above when I cited mathematical systems.
• And on the other hand, you have empirically based reality-check tests of those predictions, and the inevitable likelihood that such outside-sourced data and insight will in time come to force a reconsideration, both of what is concluded from the axioms in play and even of those fundamental axioms themselves. Experiments set up to test those deductively derived predictions do not always yield the predicted results, either in detail or even just in general form. Unexpected phenomena do arise, and are found and observed and measured, that would have to be included in an already ongoing body of theory that cannot, as-is, account for them. (See the sketch immediately following this list.)
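As a minimal sketch of that dual-sided process, and with every number and threshold in it hypothetical, the following Python fragment pairs a deductively derived prediction with a simple empirical reality-check that flags when the underlying assumptions would need to be reconsidered:

def predicted_revenue(units_sold, price_per_unit=10.0):
    # Deductive side: a prediction that follows from the model's assumptions
    # (here, a purely hypothetical linear pricing assumption).
    return units_sold * price_per_unit

def needs_revision(observations, tolerance=0.15):
    # Empirical side: compare observed outcomes against predictions and flag
    # the model for reconsideration if the relative error grows too large.
    for units, observed in observations:
        predicted = predicted_revenue(units)
        if abs(observed - predicted) / predicted > tolerance:
            return True
    return False

# Hypothetical observations: (units sold, observed revenue).
data = [(100, 1020.0), (200, 1650.0)]  # the second observation breaks the model
print(needs_revision(data))            # True: time to revisit the assumptions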

I just made an assumption there that can also be considered axiomatic in nature, but that I would immediately challenge: my distinction between hard data that might be considered more empirically certain, consistent and reliable, and soft data that might require more rigorous validation, and from several sources, for it to be considered reliable, assuming such validation is even possible.

Let’s consider two working examples of those proposed data types here:

• Temperature readings as here identified as being emblematic of hard data as a general categorical form, and
• Wealth and income data as assembled demographically, but as developed from numerous individual sources that would, among other things, seek to maintain confidentiality over their personal finances and any data that would reveal them, and whose investment and income patterns differ from individual to individual. This I offer here as a soft data example.

Let’s start by considering my hard data example. When you really begin considering what goes into those temperature readings, their apparent hardness starts to become more illusory than real. How do you measure the precise temperature of a system at one of its more directly observed and measured nodes? More specifically, what physical systems do you precisely measure those temperature values from, and how do you determine and know precisely how to calibrate your physical measuring systems so as to arrive at specific, reliable temperature values? Where can you assume linearities in what you measure, where an increase in some measured value corresponds directly to a same-scale shift in actual temperature? And how would you best deal with nonlinearities in measuring equipment response too, where that becomes necessary? Let me take that out of the abstract by citing thermistors that measure temperature indirectly by measuring changes in electrical resistance in sensors, and bimetallic thermometers that measure changes in the curvature of fused bimetallic strips as a surrogate for measuring temperature per se. Or I could have cited standard tinted alcohol or mercury thermometers that actually measure changes in the volume of specific sensor materials in use.
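To illustrate just how much theory is hidden inside one of those "raw" readings, here is a minimal Python sketch of the standard Steinhart-Hart conversion from thermistor resistance to temperature. The calibration coefficients shown are illustrative values for a generic 10 kΩ NTC thermistor, not those of any particular sensor; the point is that the number reported as "the temperature" already presumes a calibrated, nonlinear, theory-laden model of the sensor's behavior:

import math

# Illustrative calibration coefficients (assumed for this sketch, not measured).
A = 1.129148e-3
B = 2.34125e-4
C = 8.76741e-8

def thermistor_temperature_celsius(resistance_ohms):
    # Steinhart-Hart equation: 1/T = A + B*ln(R) + C*(ln(R))**3, with T in kelvin.
    ln_r = math.log(resistance_ohms)
    inverse_kelvin = A + B * ln_r + C * ln_r ** 3
    return 1.0 / inverse_kelvin - 273.15

print(round(thermistor_temperature_celsius(10000.0), 2))  # roughly room temperature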

Actually calibrating this equipment and making necessary measurements from it, rests on the absolute presumption of reliability and accuracy of large amounts of underlying physical theory – and even when these measurements are going to be used in testing and validating (in my above example thermodynamic) theory that might be based on essentially the same set of underlying axiomatic assumptions. In this case, overall consistency throughout the entire conceptual system in play here, becomes the crucial defining criterion for accuracy and reliability in this “hard data” example.

Now let’s consider my soft data example, and the demographically aggregated but ultimately individually sourced financial information as would be used in testing out the predictive value of an economic measure such as the Gini coefficient. I picked this example for several reasons, one of the more important of which is that, as with many grandly conclusive business systems and economic modeling endeavors, calculating this value almost always involves at least some aggregation of diversely sourced raw, original data. (See the sketch that follows the questions below.)

• How consistently are the specific data types that are gathered, functionally defined and actually measured?
• And how best should perhaps differing value scales be reconciled, when that proves necessary?
• How current is all of this data and for each of the basic data sources that would be aggregated for overall analysis here?
• And how has this data been "cleansed," if it has been, to weed out anomalies that fall too far from some mean or median value observed, raising questions as to fringe value accuracy among other issues?
• And cutting to the core of the issues that I raise here regarding the hardness/softness of this data, precisely what demographics were queried for it and how, and is the entire data set consistent for all of this too?
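For comparison, the arithmetic of the aggregation step itself is simple. The following Python sketch computes a Gini coefficient from a set of individual incomes using the standard rank-weighted formula; the income values are entirely hypothetical, and none of the data-quality questions listed above are answered by the calculation itself:

def gini(incomes):
    # Gini coefficient from individual incomes, via the standard
    # rank-weighted formula applied to the sorted values.
    values = sorted(incomes)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    weighted_sum = sum(rank * value for rank, value in enumerate(values, start=1))
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n

# Hypothetical demographic samples: perfectly equal vs. highly unequal incomes.
print(gini([50000] * 5))                                     # 0.0: complete equality
print(round(gini([10000, 20000, 30000, 40000, 400000]), 2))  # a much less equal sample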

Note: I more fully examined some of the details that might be hidden behind the seemingly clear-cut data measurements for my second, admittedly soft data example than I did for the first, but ultimately they both share the same basic limitations. As such, I reframe my hard and soft data distinction as being only somewhat useful, and then only when speaking and thinking in very general terms, free of specific analytical precision on any particular theory-based description or prediction.

• And with this, I suggest that while more strictly abstractly framed theories, such as for example an approach to algebraic topology that would address spaces with fractional dimensions, might only involve or require one basic within-theory type of axiomatic support,
• An empirically grounded body of theory such as a general theory of business is going to in fact rest on two categorically distinct types of axiom:
• A deductively grounded set of axioms that would describe and predict on the basis of data already in hand, and the basic axioms and proven theorems that have been derived from that,
• And a set of axioms that might in detail overlap with the first, that would underlie any new empirically developed data that might be brought into this – and particularly importantly where that would include divergently novel, new types of data.

I am going to continue this line of discussion in a next series installment, where I will consider how new axioms would be added into an already developing body of theory, and how old ones might be reframed, generalized, limited for their expected validity, or discarded as axioms per se. Then after addressing that set of issues I will turn to consider the issues of completeness and consistency, as those terms are defined and used in a mathematical logic context and as they would be used in any theory that is grounded in descriptive and predictive enumerable form. That line of discussion will be used to address issues of scope expansion for the set of axioms assumed in a given theory-based system.

I will also more fully consider issues raised in

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

as initially cited in Part 26 in this series, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.
