Platt Perspective on Business and Technology

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 9: the New Deal and infrastructure development as recovery 3

This is my 10th installment to a series on infrastructure, as work on it and as possible work on it are variously prioritized and carried through upon, or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-8, with its supplemental posting Part 4.5.)

I have, up to here, made note of and selectively analyzed a succession of large scale infrastructure development and redevelopment initiatives in this series, with a goal of distilling out of them, a set of guiding principles that might offer planning and execution value when moving forward on other such programs. And as a part of that and as a fifth such case study example, I have been discussing an historically defining progression of events and responses to them from American history:

• The Great Depression and US president Franklin Delano Roosevelt’s New Deal and related efforts that he envisioned, argued for and led, in order to help bring his country out of that seemingly existential crisis.

I began this line of discussion in Part 7 and Part 8, focusing there on what the Great Depression was and on how it arose and played out in the United States. And my goal here is to at least begin to discuss what Roosevelt did and sought to do, and how, in response to all of that turmoil and challenge. And I begin doing so by offering a background reference that I would argue holds significant relevance for better understanding the context and issues that I would focus upon here:

• Goodwin, D.K. (2018) Leadership in Turbulent Times. Simon & Schuster. (And see in particular, this book’s Chapter 11 for purposes of further clarifying the issues raised here.)

As already noted in the two preceding installments to this series, the Great Depression arguably began in late 1929, with its “official” starting date usually set as October 29 of that year: Black Tuesday, when the US stock market completed an initial crash that had started the previous Thursday. But realistically it only became a true depression, and the Great Depression, in 1930, as the Smoot–Hawley Tariff Act was signed into law that June and its consequences took hold. And Herbert Hoover was president of the United States as the nation as a whole, and much of the world around it, spiraled down into chaos.

There are those who revile Hoover for his failure to effectively deal with, or even fully understand and acknowledge, the challenges that the United States and American citizens and businesses faced during his administration, and certainly after his initial pre-depression honeymoon period in office. And there are those who exalt him, particularly on the more extreme political right as they speak out against the New Deal – even against how its programs helped to pull the country back from its fall. All of that, while interesting and even important, is irrelevant for purposes of this discussion. The important point of note coming out of it is that unemployment was rampant, a great many American citizens had individually lost whatever life savings they had been able to accumulate, and seemingly endless numbers of businesses, banks and other basic organizational structures that helped form American society were now unstable and at extreme risk of failure, or already gone. And the level of morale in the United States, and of public confidence in both public and private sector institutions, was for many one of all but despair. That was the reality in the United States, and in fact in much of the world as a whole, that Franklin Delano Roosevelt faced as he took his first oath of office as the 32nd president of the United States on March 4, 1933.

Roosevelt knew that if he was to succeed in any real way in addressing and remediating these challenges, he had to begin acting immediately. And he began laying out his approach to doing that, and following through on it, in his first inaugural address, where he declared war on the depression and where he uttered one of his most oft-remembered statements: “the only thing we have to fear is fear itself.”

Roosevelt did not wait until March 5th to begin acting on the promise of action that he made to the nation in that inaugural address. He immediately began reaching out to key members of the US Congress and to members of both political parties there to begin a collaborative effort that became known as the 100 Days Congress, for the wide ranging legislation that was drafted, refined, voted upon, passed and signed into law during that brief span of time (see First 100 Days of Franklin D. Roosevelt’s Presidency.) This ongoing flow of activity came to include passage of 15 major pieces of legislation that collectively reshaped the country, setting it on a path that led to an ultimate recovery from this depression. And that body of legislation formed the core of Roosevelt’s New Deal as he was able to bring it into effect.

• What did Roosevelt push for and get passed in this way, starting during those first 100 days?
• I would reframe that question in terms of immediate societal needs. What were the key areas that Roosevelt had to address and at least begin to resolve through legislative action, if he and his new presidential administration were to begin to effectively meet the challenge of this depression and as quickly as possible?

Rephrased in those terms, his first 100 days and their legislative push sought to grab public attention and support by simultaneously addressing a complex of what had seemed to be intractable challenges that included:

• Reassuring the public that their needs and their fears were understood and that they were being addressed,
• And building safeguards into the economy and into the business sector that drives it, to ensure their long-term viability and stability.
• Put simply, Roosevelt sought to create a new sense of public confidence, and put people back to work and with real full time jobs at long-term viable businesses.

Those basic goals were, and I add still are, all fundamentally interconnected. And to highlight that in an explicitly Great Depression context, I turn back to a source of challenge that I raised and at least briefly discussed in Part 8 of this series: banks and the banking system, to focus on their role in all of this.

• The public at large had lost any trust that they had had in banks and in their reliability, and with good reason given the number of them that had gone under in the months and first years immediately following the start of the Great Depression. And when those banks failed, all of the people and their families, and all of the businesses that had money tied up in accounts with them, lost all of it.
• So regulatory law was passed to prevent banks and financial institutions in general, from following a wide range of what had proven to be high-risk business practices that made them vulnerable to failure.
• And the Federal Deposit Insurance Corporation (FDIC) was created to safeguard customer savings in the event that a bank were to fail anyway, among other consumer-facing and supporting measures passed.

The goal there was to both stabilize banks and make them sounder, safer and more reliable as financial institutions, while simultaneously reassuring the private sector and its participants: individuals and businesses alike, that it was now safe to put their money back into those banks again. And rebuilding the banking system as a viable and used resource would make monies available through them for loans again, and that would help to get the overall economy moving and recovering again.

• Banks and the banking system in general, can in a fundamental sense be seen as constituting the heart of an economy, and for any national economy that is grounded in the marketplace and its participants, and that is not simply mandated from above, politically and governmentally as a command economy. Bank loans and the liquidity reserves and cash flows that they create, drive growth and make all else possible, and for both businesses, large and small and for their employees and for consumers of all sorts.
• So banks and banking systems constitute a key facet of a nation’s overall critical infrastructure, and one that was badly broken by the Great Depression and that needed to be fixed for any real recovery from it.

This is a series about infrastructure, and the banking system of a nation is one of the most important and vital structural components of its overall infrastructure system, if for nothing else than for how banks collectively create vast pools of liquid funds from the monies saved in them, funds that can be turned back to their communities for a wide range of personal and business uses. But the overall plan put forth and enacted into law by the 100 Days Congress (which adjourned on June 16, 1933) went way beyond simply reinforcing and rebuilding, as needed, banks and other behind the scenes elements of the overall American infrastructure. It went on to address rebuilding and expansion needs for more readily visible aspects of the overall infrastructure in place too, and for systems that essentially anyone would automatically see as national infrastructure, such as dams and highways. Roosevelt’s New Deal impacted upon and even fundamentally reshaped virtually every aspect of the basic large-scale infrastructure that had existed in the United States. And to highlight a more general principle here that I will return to in subsequent installments to this series, all of this effort had at least one key point of detail in common:

• It was all organized according to an overarching pattern rather than simply arising ad hoc, piece by piece as predominantly happened before the Great Depression.
• Ultimately any large scale infrastructure development or redevelopment effort has to be organized and realized as a coherent whole, even if that means developing it as an evolving effort, if coherent and gap-free results are to be realized and with a minimum of unexpected complications.

That noted, what did the New Deal, and the fruits of Roosevelt’s efforts and the 100 Days Congress actually achieve? I noted above that this included passage of 15 major pieces of legislation and add here that this included enactment of such programs as:

• The Civilian Conservation Corps as a jobs creating program that brought many back into the productive workforce in the United States,
• The Tennessee Valley Authority – a key regional development effort that made it possible to spread the overall national electric power grid into a large unserved part of the country while creating new jobs there in the process,
• The Emergency Banking Act, that sought to stop the ongoing cascade of bank failures that was plaguing the country,
• The Farm Credit Act that sought to provide relief to family farms and help restore American agriculture,
• The Agricultural Adjustment Act, that was developed coordinately with that, and that also helped to stabilize and revitalize American agriculture,
• The National Industrial Recovery Act,
• The Public Works Administration, which focused on creating jobs through construction of water systems, power plants and hospitals among other societally important resources,
• The Federal Deposit Insurance Corporation as cited above, and
• The Glass–Steagall Act – legislation designed to limit if not block the high-risk, institutional-failure-creating practices of banks and financial institutions in general.

Five of the New Deal agencies that were created in response to the Great Depression and that contributed to ending it, still exist today, including the Federal Deposit Insurance Corporation, the Securities and Exchange Commission, the National Labor Relations Board, the Social Security Administration and the Tennessee Valley Authority. And while subsequent partisan political efforts have eroded some of the key features of the Glass–Steagall Act, much of that legislation is still in effect today too.

And with that noted, I conclude this posting by highlighting what might in fact be the two most important points that I could make here:

• I wrote above of the importance of having a single, more unified vision when mapping out and carrying out a large scale infrastructure program, and that is valid. But flexibility in the face of the unexpected, and in achieving the doable, is vital there too. And so is a willingness to experiment and simply try things out, certainly when faced with novel and unprecedented challenges that cannot be addressed by anything like tried-and-true methods – coupled with a willingness to step back from those experiments and try something new if they do not work.
• And seeking out and achieving buy-in is essential if any of that is going to be possible. This meant reaching out to politicians and public officials, as Roosevelt did when he organized and led his New Deal efforts. But more importantly, this meant his reaching out very directly to the American public and right in their living rooms, through his radio broadcast fireside chats, with his first of them taking place soon after he was first sworn into office as president. (He was sworn into office on March 4 and he gave his first fireside chat of what would become an ongoing series of 30, eight days later on March 12.)
• Franklin Delano Roosevelt most definitely did not invent the radio. But he was the first politician and the first government leader who figured out how to effectively use that means of communication and connections building, to promote and advance his policies and his goals. He was the first to use this new tool in ways that would lead to the type and level of overall public support that would compel even his political opponents to seek out ways to work with and compromise with him, on the issues that were important to him. So I add to my second bullet point here, the imperative of reaching out as widely and effectively as possible when developing that buy-in, and through as wide and effective a span of possible communications channels and venues as possible.

I am going to step back in my next installment to this series, from the now five case-in-point examples that I have been exploring in it up to here. And I will offer an at least first draft of the more general principles that I would develop out of all of this, as a basis for making actionable proposals as to how future infrastructure development projects might be carried out. And in anticipation of what is to follow here, I write all of this with the future, and the near-future and already emerging challenges of global warming, in mind as a source of infrastructure development and redevelopment imperatives. Then after offering that first draft note, I am going to return to my initial plans for how I would further develop this series, as outlined in Part 6, and discuss infrastructure development as envisioned by and carried out by the Communist Party of China and the government of the People’s Republic of China. And as part of that I will also discuss Russian, and particularly Soviet Union era, Russian infrastructure and its drivers. And my intention for now, as I think forward about this, is that after completing those two case study example discussions, I will offer a second, draft-refined update to the first draft version that I will offer as Part 10 here.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 20: some thoughts concerning how Xi and Trump approach and seek to create lasting legacies to themselves 8

Posted in book recommendations, macroeconomics, social networking and business by Timothy Platt on September 16, 2019

This is my 20th installment in a progression of comparative postings about Donald Trump’s and Xi Jinping’s approaches to leadership, as they have both turned to authoritarianism and its tools in their efforts to succeed there. And it is my 8th installment in that, to specifically address their legacy-building visions, ambitions and actions.

I have primarily addressed Xi Jinping’s narrative in this since Part 4, building from a preparatory start to that, that I offered towards the end of Part 3. And my primary focus in all of this has been on Xi’s China Dream: his Zhōngguó Mèng (中国梦), as it has served as an historically grounded, and historically justified foundation for all that he seeks to do.

And then Hong Kong erupted into protest again, in response to actions taken by Xi Jinping himself and by his hand-picked administrative leadership in that city, with the most egregiously visible of that carried out and pushed forward by Carrie Lam: Hong Kong’s most senior administrator – whom Xi himself had explicitly put into office there. So I changed directions in what I offer here, to focus on that immediate here-and-now, actual-legacy-realized news story and its history and context. Xi’s dreams and ambitions are one thing, representing his long-term and overall intentions; Hong Kong and its unfolding events are another, as they represent the legacy that he is actually building. (See my now six postings in the series: Xi Jinping and His China, and Their Conflicted Relationship with Hong Kong, as can be found at Macroeconomics and Business 2, as postings 343 and following.)

I have three specific follow-up postings to that Hong Kong-related series in mind, as of this writing, and may very well add more to that list as ongoing events continue to unfold there. I will simply say here in that regard that Xi’s brinksmanship approach to dealing with Hong Kong is both fueling a questioning of his judgment and his leadership in Beijing now, and fueling ambitions towards full independence in Hong Kong itself. But I will turn to that in future postings, and not today.

My goal for this posting is to turn back to Xi’s Zhōngguó Mèng and to the partly historically real, partly stereotypically fantasy foundation that it is built upon. Think of this as my turning back to more fully consider Xi’s and China’s here-and-now, and both in terms of that Dream itself as it has become Xi’s road map, and in terms of how he seeks to follow it.

Xi’s Dream is built upon two pillars: one positive insofar as it affirms what a China that is effectively led can achieve, and the other negative and grounded in the historical setbacks and humiliations that a great Chinese leader could undo and remediate, while restoring his nation to its rightful, golden age path. I briefly outlined that positive image, golden age side of this Dream, as a perceived past glory to be restored, in Part 16 of this series. And I began writing of China’s fall from that golden age in Part 17, Part 18, and Part 19.

It is no accident that I have devoted more time and effort to presenting that darker side of Xi’s vision and narrative here, as it is clear that he has focused more on righting perceived wrongs than on the details of that golden age. And I continue that side of this narrative here, with a goal of moving it forward along its timeline, from the 1830s to at least the birth of Communist China and the system that Xi himself leads: the system that he is at least as constrained by too.

I have already at least briefly raised and discussed the challenges that foreign powers, and their commerce and profits oriented manifestations, created for China during this troubling period. Predatory behavior on the part of business-oriented enterprises such as the British East India Company, and of more entirely private enterprises such as Jardine, Matheson & Company and Lancelot Dent and Company, fundamentally shaped British foreign policy and how it was executed throughout Asia, and for generations, with the British military intervening as needed to support that. And in China, this first commercial and then military intervention and domination led to the Opium Wars and to the unequal treaties, as they came to be known, that ended them, with first Hong Kong and then neighboring Kowloon being ceded to Great Britain as foreign owned colonies.

The historic emperors of China lived and ruled under a Mandate of Heaven, and according to a fundamental requirement that they maintain stability throughout their lands and for all of their peoples. The golden age of the Qing Dynasty ended and that mandate unraveled.

Provincial governments no longer turned to or fully supported the Emperor or their court in Beijing and the Forbidden City that was to be found at the heart of that larger urban center. Local governments no longer turned to or fully supported their provincial leadership, as had always been both required and expected of them. The Qing Dynasty had a numerically small, lean and agile bureaucracy that developed a tradition of working collaboratively with their provincial governmental counterparts to create and maintain stability and order. And it was those provincial level officials who directly worked with and managed local government officials in a similar manner. But all of this began to unravel, from foreign-sourced pressures, from environmental challenges as already touched upon here, and from challenges to the food supply, and unrest began to grow.

I could write here of war lords and others who set up local and sometimes not so local enclaves within China where the Emperor and his officials had no voice or influence. China became rife with them. And I could write of larger and more individually notable outright rebellions as they arose and played out and particularly during the later Qing Dynasty as it spiraled into decline and failure. This list of rebellions in China at least briefly notes nine of them, and that is in fact an incomplete list, only touching upon more notable possible entries. All of these upheavals, all of this unrest had long-term, debilitating impact and all contributed to the death of both the Qing Dynasty and of dynastic rule per se in China. But perhaps arbitrarily, I cite three of these catastrophes by name here:

• The White Lotus Rebellion of 1796-1804 as a direct attack upon the Qing Dynasty and its legitimacy,
• A messianic uprising that came to be known as the Taiping Rebellion of 1850-1864 that was led by Hong Xiuquan: a self-proclaimed younger brother of Christianity’s Jesus Christ, come to Earth just like his older brother, and
• The Boxer Rebellion of 1899-1901, which was among other things an anti-Christian, anti-foreign influence uprising.

For a fuller and more detailed discussion of Hong Xiuquan and his uprising, see:

• Spence, J.D. (1996) God’s Chinese Son: the Taiping Heavenly Kingdom of Hong Xiuquan. W.W. Norton & Company.

The end result of all of this chaos, as arising from within China and as imposed from the outside was the abdication of China’s last imperial ruler, its last dynastic emperor: Pu Yi, or Henry as he was also called. And dynastic empire gave way to the Republic of China of 1912-1949, with its chaos, including Japan’s invasion and conquest of much of what is now China, with the Second Sino-Japanese War of 1937-1945. And I end this so briefly sketched historical timeline by citing Mao Zedong and his ultimately successful war against the Republic as he established his People’s Republic of China to replace it and all that had gone before, at least in mainland China itself. And then Mao’s version of chaos began.

Xi Jinping has built his Dream: his Zhōngguó Mèng out of this, as he seeks to return his nation and his peoples to the partly real, partly imaginary glory days of China’s golden age past, and with a goal of completing the dreams and ambitions of his country’s past great leaders and to their fullest possible extent. And Xi’s adversaries of today, are cast into the molds of adversaries past, from the years and decades of humiliation that he seeks to redress. And the adversities that these modern day versions of China’s past repressors create, mirror the adversities of that same image of China’s past too.

• Who is United States president Donald Trump in this? He is a wicked reincarnation of China’s foreign tormentors of those troubled and troubling years, that China’s Communism has sought to block and that Xi sees himself as finally completely ending as a source of threat. And Trump’s trade wars against China and the tariffs that drive them are simply a next generation iteration of what past foreign tormentors have inflicted upon China, as they attempted to subjugate that nation as a vassal state.
• And the “agreement”: the treaty that finally returned Hong Kong and Kowloon back to China from British colonial rule, is, at least according to this imagining, a next-generation repetition and continuation of the affronting humiliations that an earlier generation’s British government imposed on China, certainly as far as those lands are concerned, with their Transfer of Sovereignty over Hong Kong.

And this brings me both to the uprisings taking place in Hong Kong as I write this, and Xi’s response to Donald Trump and other foreign aggressors. And this is where Xi’s efforts to control his people and his country, enter this narrative as his Dream plays out both within the borders of his nation and beyond them too. I am going to continue this narrative with a 21st installment to this series, and with a goal of pursuing that complex of issues. And I will also continue my Hong Kong-related series as briefly outlined for moving forward, towards the start of this posting.

In anticipation of that, I add here that I will at least briefly discuss the house that Mao built and that Xi now seeks to rule over: the Communist Party of China and the government that that Party leads and controls, and at least something of the history that this, Mao’s legacy, has created. That history constitutes Xi’s fundamental grounding reality as a leader, and it would be impossible to meaningfully discuss his Dream or his legacy building efforts without taking that narrative into account too.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2 and its Page 3 continuation. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, at Ubiquitous Computing and Communications – everywhere all the time, and at Social Networking and Business 2 and its Page 3 continuation.

Reconsidering Information Systems Infrastructure 11

This is the 11th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-10. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I conceptually divide artificial intelligence tasks and goals into three loosely defined categories in this series. And I have been discussing artificial intelligence agents and their systems requirements in a goals and requirements-oriented manner that is consistent with that, since Part 9 with those categorical types partitioned out from each other as follows:

• Fully specified systems goals and their tasks (e.g. chess with its fully specified rules defining a win and a loss, etc. for it),
• Open-ended systems goals and their tasks (e.g. natural conversational ability with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals and their tasks (as in self-driving cars, where they can be programmed with the legal rules of the road, but not with a correspondingly detailed, algorithmically definable understanding of how real people in their vicinity actually drive – sometimes in keeping with those rules and sometimes in spite of them.)

And I have focused up to here in this developing narrative on the first two of those task and goals categories, only noting the third of them as a transition category, where success in resolving tasks there would serve as a bridge from developing effective artificial specialized intelligence agents (that can carry out fully specified tasks and that have become increasingly well understood, both in principle and in practice) to the development of true artificial general intelligence agents (that can carry out open-ended tasks and that are still only partly understood for how they would be developed.)

And to bring this orienting starting note for this posting up to date for what I have offered regarding that middle ground category, I add that in Part 10 I further partitioned that general category by its included degrees of task performance difficulty, according to what I identify as a swimming pool model:

• With its simpler, shallow end tasks that might arguably in fact belong in the fully specified systems goals and tasks category, as difficult entries there, and
• Deep end tasks that might arguably belong in the above-repeated open-ended systems goals and tasks category.

I chose self-driving vehicles and their artificial intelligence agent drivers as an intermediate, partly specified systems goal because it at least appears to belong in this category and with a degree of difficulty that would position it at least closer to the shallow end than the deep end there, and probably much closer.

Current self-driving cars have performed successfully (reaching their intended destinations and without accidents), both in controlled settings and on the open road in the presence of actual real-world drivers and their driving. And their guiding algorithms do seek to at least partly control for and account for what might be erratic circumambient driving on the part of others on the road around them, by for example allowing extra spacing between their vehicles and others ahead of them on the road. But even there, an “aggressive” human driver might suddenly squeeze into that space without signaling a lane change, suddenly leaving the self-driving vehicle following too closely as well. So this represents a task that might be encoded into a single if complex overarching algorithm, as supplemented by a priori sourced expert systems data and insight based on real-world human driving behavior. But it is one that would also require ongoing self-learning and improvement on the part of the artificial intelligence agent drivers involved too, both within these specific vehicles and between them as well.
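To make that spacing tradeoff more concrete, here is a minimal, purely illustrative Python sketch of a time-headway following-gap rule, and of how a cut-in by a human driver suddenly shrinks the actual gap. Every function name and parameter value here is a hypothetical assumption offered for discussion, not anything drawn from an actual self-driving system:

```python
def target_gap_m(speed_mps: float, headway_s: float = 2.0, standstill_margin_m: float = 5.0) -> float:
    """Target following gap from a simple time-headway rule.
    headway_s and standstill_margin_m are illustrative assumptions only."""
    return speed_mps * headway_s + standstill_margin_m


def gap_after_cut_in(original_gap_m: float, cut_in_position: float = 0.5,
                     vehicle_length_m: float = 4.5) -> float:
    """Gap left to the new lead vehicle if a human driver squeezes into the space
    ahead, settling (by assumption) at cut_in_position of the original gap."""
    return max(original_gap_m * cut_in_position - vehicle_length_m, 0.0)


if __name__ == "__main__":
    v = 27.0                             # ~97 km/h, roughly highway speed
    planned = target_gap_m(v)            # 59 m of intended spacing
    actual = gap_after_cut_in(planned)   # ~25 m left once a car cuts in
    print(f"planned gap {planned:.0f} m, gap after cut-in {actual:.0f} m")
    # The self-driving car is now well under its own headway target and must
    # brake or change lanes to restore it: the very spacing that was meant to
    # add safety is what invited the cut-in in the first place.
```

The point of this sketch is only that a rule chosen to reduce one risk (tailgating) creates an opening for another (cut-ins), and that is exactly the kind of tradeoff that pushes this task out of the fully specified category.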

• If all cars and trucks on the road were self-driving and all of them were actively sharing action and intention information with at least nearby vehicles in that system, all the time and real-time, self-driving would qualify as a fully specified systems task, and for all of the vehicles on the road. As soon as the wild card of human driving enters this narrative, that ceases to hold true. And the larger the percentage of human drivers actively on the road, the more statistically likely it becomes that one or more in the immediate vicinity of any given self-driving vehicle will drive erratically, making this a distinctly partly specified task challenge.

Let’s consider what that means in at least some detail. And I address that challenge by posing some risk management questions that this type of concurrent driving would raise, where the added risk that those human drivers bring with them moves this out of a fully specified task category (a brief illustrative sketch follows this list of questions):

• What “non-standard” actions do real world drivers make?

This would include illegal lane changes, running red lights and stop signs, illegal turns, speeding and more. But more subtly perhaps, this would also include driving at, for example, a posted speed limit but under road conditions (e.g. in fog or during heavy rain) where that would not be safe.

• Are there circumstances where such behavior might arguably be more predictably likely to occur, and if so what are they and for what specific types of problematical driving?
• Are there times of the day, or other identifiable markers for when and where specific forms of problematical driving would be more likely?
• Are there markers that would identify problem drivers approaching, and from the front, the back or the side? Are there risk-predictive behaviors that can be identified before a possible accident, that a self-driving car and its artificial intelligence agent can look for and prepare for?
• What proactive accommodations could a self-driving car or truck make to lessen the risk of accident if, for example its sensors detect a car that is speeding and weaving erratically from lane to lane in the traffic flow, and without creating new vulnerabilities from how it would respond to that?
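As one hedged illustration of how questions like these might be operationalized, the following Python sketch maps a few observable risk markers for a nearby human-driven vehicle to a rough risk score and a proactive accommodation. The markers, weights and thresholds are all hypothetical values chosen for discussion only, not anything drawn from a real system:

```python
from dataclasses import dataclass


@dataclass
class ObservedVehicle:
    """A few illustrative risk markers for a nearby human-driven vehicle.
    The fields, weights and thresholds used below are hypothetical."""
    speed_over_limit_mps: float   # how far above the posted limit it is travelling
    lane_weaving: bool            # repeated lane changes without signaling
    close_following: bool         # tailgating the vehicle ahead of it
    poor_visibility: bool         # fog, heavy rain, nighttime glare, etc.


def risk_score(v: ObservedVehicle) -> float:
    """Combine the markers into a rough 0-to-1 score (weights are assumptions)."""
    score = min(v.speed_over_limit_mps / 10.0, 1.0) * 0.4
    score += 0.3 if v.lane_weaving else 0.0
    score += 0.2 if v.close_following else 0.0
    score += 0.1 if v.poor_visibility else 0.0
    return min(score, 1.0)


def accommodation(score: float) -> str:
    """Map the score to a proactive accommodation, per the questions above."""
    if score >= 0.7:
        return "increase gap, avoid the adjacent lane, prepare for hard braking"
    if score >= 0.4:
        return "increase gap and keep monitoring"
    return "normal following policy"


# Example: a speeding, weaving vehicle approaching in poor visibility.
erratic = ObservedVehicle(8.0, True, False, True)
print(round(risk_score(erratic), 2), "->", accommodation(risk_score(erratic)))
```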

Consider, in that “new vulnerabilities” category, the example that I have already offered in passing above, when noting that increasing the distance between a self-driving car and the vehicle directly ahead of it might in effect invite a third driver to squeeze in between them – even if that meant the squeezing-in car was now tailgating that leading vehicle, while the self-driving car now behind it was left following too closely in turn. A traffic light ahead suddenly changing to red, or any other driving circumstance that would force the lead car in all of this to suddenly hit its brakes, could cause a chain reaction accident.

What I am leading up to here in this discussion is a point that is simple to explain and justify in principle, even as it remains difficult to operationally resolve as a challenge in practice:

• The difficulty of these less easily rules-defined challenges increases as the tasks that they arise in fit into deeper and deeper areas of that swimming pool, in my above-cited analogy.

Fully specified systems goals and their tasks might be largely or even entirely deterministic in nature and rules-determinable, where condition A always calls for action and response B, or at least for a selection from among a specifiable set of such actions, chosen to meet the goals-oriented needs of the agent taking them. But partly specified systems goals and their tasks are of necessity significantly stochastic in nature, with probabilistic evaluations of changing task context becoming more and more important as the tasks involved fit more and more into the deep end of that pool. And they become more open-endedly flexible in their response and action requirements too, no longer fitting cleanly into any given set of a priori if-A-then-B rules.
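That contrast can be sketched in a few lines of illustrative Python: a deterministic lookup table for the fully specified flavor, set against an expected-cost weighing of possible human behavior for the partly specified one. The probabilities and costs shown are invented placeholders, not measured values from any real driving system:

```python
# Fully specified flavor: condition A always maps to one response B.
RULES = {
    "red_light_ahead": "stop",
    "green_light_ahead": "proceed",
    "stop_sign_ahead": "stop_then_proceed",
}


def deterministic_policy(condition: str) -> str:
    return RULES[condition]


# Partly specified flavor: the same observation admits several possible behaviors
# by the human driver ahead, so the agent weighs actions by expected cost instead
# of looking up a single fixed response. Probabilities and costs are invented.
P_LEAD_BRAKES_HARD = 0.15

COSTS = {  # (our_action, lead_brakes_hard) -> notional cost
    ("keep_speed", True): 100.0,   # risk of rear-ending the lead vehicle
    ("keep_speed", False): 0.0,
    ("ease_off", True): 5.0,
    ("ease_off", False): 1.0,      # mild loss of travel time
}


def expected_cost(action: str) -> float:
    return (P_LEAD_BRAKES_HARD * COSTS[(action, True)]
            + (1 - P_LEAD_BRAKES_HARD) * COSTS[(action, False)])


def stochastic_policy() -> str:
    return min(("keep_speed", "ease_off"), key=expected_cost)


print(deterministic_policy("red_light_ahead"))  # -> stop
print(stochastic_policy())                      # -> ease_off (expected cost 1.6 vs 15.0)
```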

Airplanes have had autopilot systems for years and even for human generations now, with the first of them dating back as far as 1912: more than a hundred years ago. But these systems have essentially always had human pilot back-up if nothing else, and have for the most part been limited to carrying out specific tasks, and under circumstances where the planes involved were in open air and without other aircraft coming too close. Self-driving cars have to be able to function on crowded roads and without human back-up – even when a person is sitting behind the wheel, where it has to be assumed that they are not always going to be attentive to what the car or truck is doing, taking its self-driving capabilities for granted.

And with that noted, I add here that this is a goal that many are actively working to perfect, at least to a level of safe efficiency that matches the driving capabilities of an average safe driver on the road today. See, for example:

• The DARPA autonomous vehicle Grand Challenge, and
• Burns, L.D. and Shulgan, C. (2018) Autonomy: the quest to build the driverless car – and how it will reshape our world. HarperCollins.

I am going to continue this discussion in a next series installment where I will turn back to reconsider open-ended goals and their agents again, and more from a perspective of general principles. Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Some thoughts concerning a general theory of business 30: a second round discussion of general theories as such, 5

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on August 16, 2019

This is my 30th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-29.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions. Then I began a second round, next step discussion of general theories per se in Parts 26-29 of this, building upon my initial discussion of general theories per se, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon. As a key part of that continued narrative, I offered a point of theory defining distinction in Part 28, that I began using there in this discussion, and that I continued using in Part 29 as well, and that I will continue using and developing here too, drawing a distinction between:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

And I have referred, and will continue to refer, to them as axiomatically closed and open bodies of theory, as convenient terms for denoting them. And that brings me up to the point in this developing narrative where I would begin this installment, with two topic points that I would discuss in terms of how they arise in closed and open bodies of theory respectively:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity and made into special-case rules as a result, or be entirely discarded as organizing principles there per se?
• Then, after addressing that set of issues, I said that I would turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

I began discussing the first of these topic points in Part 29 and will continue doing so here. And after completing that discussion thread, at least for purposes of this digression into the epistemology of general theories per se, I will turn to and discuss the second of those points too. And I begin addressing all of this at the very beginning, with what was arguably the first, at least still-existing effort to create a fully complete and consistent axiomatically closed body of theory that was expected, at least, to encompass and resolve all possible problems and circumstances where it might conceivably be applied: Euclid’s geometry as developed from the set of axiomatically presumed truths that he built his system upon.

More specifically, I begin this narrative thread with Euclid’s famous, or if you prefer infamous Fifth postulate: his fifth axiom, and how that defines and constrains the concept of parallelism. And I begin here by noting that mathematicians and geometers began struggling with it more than two thousand years ago, and quite possibly from when Euclid himself was still alive.

Unlike the other axioms that Euclid offered, this one did not appear to be self-evident. So a seemingly endless progression of scholars sought to find a way to prove it from the first four of Euclid’s axioms. And barring that possibility, scholars sought to develop alternative bodies of geometric theory that either offered alternative axioms to replace Euclid’s fifth, or that did without parallelism as an axiomatic principle at all, or that explicitly focused on it even if that meant dispensing with the metric concepts of angle and distance (parallelism can be defined independently of them), as with affine geometries.

In an axiomatically closed body of theory context, this can all be thought of as offering what amounts to alternative realities, and certainly insofar as geometry is applied, for its provable findings, to the empirically observable real world. The existence of a formally, axiomatically specified non-Euclidean geometry, such as an elliptic or hyperbolic geometry that explicitly diverges from the Euclidean on the issue of parallelism, does not disprove Euclidean geometry, or even necessarily refute it, except insofar as its existence shows that other equally solidly grounded, axiomatically-based geometries are possible too. So as long as the set of axioms that underlies a body of theory such as one of these geometries can be assumed to be internally consistent, the issue of reframing, generalizing, limiting or otherwise changing axioms in place within a closed body of theory is effectively moot.
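For concreteness, the point of divergence can be stated in Playfair's familiar form of Euclid's fifth postulate, alongside the standard non-Euclidean alternatives. This is a textbook summary, offered here only to make that divergence explicit:

```latex
\textbf{Euclidean (Playfair's form):}\ \text{given a line } \ell \text{ and a point } P \notin \ell,
\text{ there is exactly one line through } P \text{ that does not meet } \ell.

\textbf{Hyperbolic:}\ \text{there are at least two (and hence infinitely many) such lines.}

\textbf{Elliptic:}\ \text{there are no such lines: every pair of coplanar lines meets.}
```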

As soon as outside-sourced empirical or other information is brought in that arises separately from and independently of the set of a priori axioms in place in a body of theory, all of that changes. And that certainly holds if such information (e.g. replicably observed empirical observations and the data derived from them) is held to be more reliably grounded and “truer” than data arrived at entirely as a consequence of logical evaluation of the consequences of the a priori axioms in place. (Nota bene: Keep in mind that I am still referring here to initially presumed axioms that are not in and of themselves directly empirically validated, and that might never have been in any way tested against outside-sourced observations – certainly not against the range of observation types that new forms of empirical data and their observed patterns might offer. Such new data might in effect force change in previously assumed, axiomatically framed truth.)

All I have done in the above paragraph is to somewhat awkwardly outline the experimental method, where theory-based hypotheses are tested against carefully developed and analyzed empirical data to see if that data supports or refutes them. And in that, I focus in the above paragraph on experimental testing that would support or refute what have come to be seen as really fundamental, underlying principles, and not just detail elaborations as to how the basic assumed principles in place would address very specific, special circumstances.

But this line of discussion overlooks, or at least glosses over a very large gap in the more complete narrative that I would need to address here. And for purposes of filling that gap, I return to reconsider Kurt Gödel and his proofs of the incompleteness of any consistent axiomatic theory of arithmetic, and of the impossibility of any such theory proving its own consistency, as touched upon here in Part 28. As a crude representation of a more complex overall concept, mathematical proofs can be roughly divided into two basic types:

• Existence proofs, that simply demonstrate that at least one mathematical construct exists within the framework of a set of axioms under consideration that would explicitly sustain or refute that theory, but without in any way indicating its form or details, and
• Constructive proofs, that both prove the existence of a theorem-supporting or refuting mathematical construct, and also specifically identify and specify it for at least one realistic example, or at least one realistic category of such examples.
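A classic textbook illustration of that distinction, offered here from outside this series’ own subject matter, is the non-constructive proof that two irrational numbers can combine to yield a rational power:

```latex
\textbf{Claim:}\ \exists\, a, b \notin \mathbb{Q}\ \text{with}\ a^{b} \in \mathbb{Q}.

\textbf{Proof (existence only):}\ \text{let } x = \sqrt{2}^{\sqrt{2}}.
\ \text{If } x \in \mathbb{Q}, \text{ take } a = b = \sqrt{2}.
\ \text{If } x \notin \mathbb{Q}, \text{ take } a = x,\ b = \sqrt{2}, \text{ so that }
a^{b} = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}. \qquad \blacksquare
```

The argument establishes that a suitable pair exists without deciding which of the two candidate pairs actually works; a constructive proof would have to settle that. (The Gelfond–Schneider theorem in fact shows that the second case is the one that holds, but nothing in the argument above requires or provides that.)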

Gödel’s incompleteness theorem is an existence proof insofar as it does not constructively indicate any specific mathematical context where incompleteness or inconsistency explicitly arises. And even if it did, that arguably would only indicate where specific changes might be needed in order to seamlessly connect two bodies of mathematical theory: A and B, within a single axiomatic framework that is sufficiently complete and consistent for them both, so as to be able to treat them as a single combined area of mathematics (e.g. combining algebra and geometry to arrive at a larger and more inclusive body of theory such as algebraic geometry.) And this brings me very specifically and directly to the issues of reverse mathematics, as briefly but very effectively raised in:

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

And I at least begin to bring that approach into this discussion by posing a brief set of very basic questions, that arise of necessity from Gödel’s discoveries and the proof that he offered to validate them:

• What would be the minimum set of axioms, demonstrably consistent within that set, that would be needed in order to prove as valid, some specific mathematical theorem A?
• What would be the minimum set of axioms needed to so prove theorem A and also theorem B (or some other explicitly stated and specified finitely enumerable set of such theorems A, B, C etc.)?
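To give a sense of how reverse mathematics answers questions of exactly this kind, here is a brief summary of standard results from that field, offered from general knowledge of it and of Stillwell's account rather than from anything earlier in this series. Theorems of ordinary mathematics tend to calibrate against a short hierarchy of axiom systems of second-order arithmetic, with, for example:

```latex
\mathrm{RCA}_0 \;\subsetneq\; \mathrm{WKL}_0 \;\subsetneq\; \mathrm{ACA}_0

\text{Over the base system } \mathrm{RCA}_0:\quad
\text{(Heine--Borel covering theorem for } [0,1]\text{)} \;\Longleftrightarrow\; \mathrm{WKL}_0,
\qquad
\text{(Bolzano--Weierstrass theorem)} \;\Longleftrightarrow\; \mathrm{ACA}_0 .
```

In that sense, each such theorem A is matched to the weakest standard axiom system that can prove it, which is precisely the minimum-axiom question posed in the two bullet points above.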

Anything in the way of demonstrable incompleteness of a type required here, for bringing A and B (and C and …, if needed) into a single overarching theory would call for a specific, constructively demonstrable expansion of the set of axioms assumed in order to accomplish the goals implicit in those two bullet pointed questions. And any demonstrable inconsistency that were to emerge when seeking to arrive at such a minimal necessary axiomatic foundation for a combined theory, would of necessity call for a reframing or a replacement at a basic axiomatic level and even in what are overtly closed axiomatic bodies of theory. So Euclidean versus non-Euclidean geometries notwithstanding, even a seemingly completely closed such body of theory might need to be reconsidered and axiomatically re-grounded, or discarded entirely.

I am going to continue this line of discussion in a next series installment, where I will turn to more explicitly consider axiomatically open bodies of theory in this context. And in anticipation of that narrative to come, I will consider:

• The emergence of both disruptively new types of data and of empirical observations that could generate it,
• And shifts in the accuracy or resolution, or in the range, of observations that more accepted and known types of empirical observation might suddenly be offering.

I add here that I have, of necessity, already begun discussing the second to-address topic point that I made note of towards the start of this posting:

• Scope expansion for the set of axioms assumed in a given theory-based system, and with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

I will continue on in this overall discussion to more fully consider that set of issues, and certainly where optimization is concerned in this type of context.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 7

This is my 7th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-6.) And this is also my fourth posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4, Part 5 and Part 6.)

I focused in Part 6 of this narrative, on a briefly stated succession of development possibilities that all relate to how an overall next generation internet will take shape: one that is largely and even primarily driven, at least for a significant proportion of the functional activity carried out in it, by artificial intelligence agents and devices – an increasingly large internet of things and of smart artifactual agents that act among them. And I began that with a continuation of a line of discussion that I began in earlier installments to this series, centering on four possible development scenarios as initially offered by David Rose in his book:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

I added something of a fifth such scenario, or rather a caveat-based acknowledgment of the unexpected in how this type of overall development will take shape, in Part 6. And I ended that posting with a somewhat cryptic anticipatory note as to what I would offer here in continuation of its line of discussion, which I repeat now for smoother continuity of narrative:

• I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

The core issues that I would continue addressing here as follow-through from that installment, fall into two categories. I am going to start this posting by adding another scenario to the set that I began presenting here, as initially set forth by Rose with his first four. And I will use that new scenario to make note of and explicitly consider an unstated assumption that was built into all of the other artificial intelligence proliferation and interconnection scenarios that I have offered here so far. And then, and with that next step alternative in mind, I will reconsider some of the more general issues that I raised in Part 6, further developing them too.

I begin all of this with a systems development scenario that I would refer to as the piecewise distributed model.

• The piecewise distributed model for how artificial intelligence might arise as a significant factor in the overall connectiverse that I wrote of in Part 6 is based on current understanding of how human intelligence arises in the brain as an emergent property, or rather set of them, from the combined and coordinated activity of simpler components that individually do not display anything like intelligence per se, and certainly not artificial general intelligence.

It is all about how neural systems-based intelligence arises from lower level, unintelligent components in the brain and how that might be mimicked, or recapitulated if you will through structurally and functionally analogous systems and their interconnections, in artifactual systems. And I begin to more fully characterize this possibility by more explicitly considering scale, and to be more precise the scale of range of reach for the simpler components that might be brought into such higher level functioning totalities. And I begin that with a simple if perhaps somewhat odd sounding question:

• What is the effective functional radius of the human brain given the processing complexities and the numbers and distributions of nodes in the brain that are brought into play in carrying out a “higher level” brain activity, the speed of neural signal transmission in that brain as a parametric value in calculations here, and an at least order of magnitude assumption as to the timing latency to conscious awareness of a solution arrived at for a brain activity task at hand, from its initiation to its conclusion?

And with that as a baseline, I will consider the online and connected alternative that a piecewise distributed model artificial general intelligence, or even just a higher level but still somewhat specialized artificial intelligence would have to function within.

Let’s begin this side by side comparative analysis with consideration of what might be considered a normative adult human brain, and with a readily and replicably arrived at benchmark number: myelinated neurons as found in the brain send signals at a rate of approximately 120 meters per second, where one meter is equal to approximately three and a quarter feet in distance. And for simplicity’s sake I will simply benchmark the latency from the starting point of a cognitively complex task to its consciously perceived completion at one tenth of a second. This would yield an effective functional radius of that brain at 12 meters or 40 feet, or less – assuming as a functionally simplest extreme case for that outer range value that the only activity required to carry out this task was the simple uninterrupted transmission of a neural impulse signal along a myelinated neuron for some minimal period of time to achieve “task completion.”

An actual human brain is of course a lot more compact than that, and a lot more structurally complex too, with specialized functional nodes and complex arrays of parallel processor organized, structurally and functionally duplicated elements in them. And that structural and functional complexity, and the timing needed to access stored information from, and add new information back into, memory as part of that task activity, slows actual processing down. An average adult human brain is some 15 centimeters long, or six inches front to back, so using that as an outside-value metric and a radius based on it of some three inches (roughly 7.5 centimeters), the structural and functional complexities in the brain that would be called upon to carry out that tenth of a second task would effectively reduce its effective functional radius some 160-fold from the speedy transmission-only outer value that I began this brief analysis with.

Think of that as a speed and efficiency tradeoff reduction imposed on the human brain by its basic structural and functional architecture and by the nature and functional parameters of its component parts, on the overall possible maximum rate of activity, at least for tasks performed that would fit the overall scale and complexity of my tenth of a second benchmark example. Now let’s consider the more artifactual overall example of computer and network technology as would enter into my above-cited piecewise distributed model scenario, or in fact into essentially any network distributed alternative to it. And I begin that by noting that the speed of light in a vacuum is approximately 300 million meters per second, and that electrical signals can propagate along copper wiring at up to approximately 99% of that value (it is the signal, not the individual electrons themselves, that travels at anything like that speed).

I will assume for purposes of this discussion that the photons in the wireless networked and fiber optic connected aspects of such a system, and the electrical signals that convey information through the more strictly electronic components of these systems, all travel on average at roughly that same round number maximum speed, as any discrepancy from it in what is actually achieved would be immaterial for purposes of this discussion, given my rounding off and other approximations as resorted to here. Then, using the same one tenth of a second task timing parameter of my above-offered brain functioning analysis, an outer limit transmission-only value for this system and its physical dimensions would suggest a maximum radius of some 30,000 kilometers, encompassing all of the Earth and all of near-Earth orbit space and more. There, in counterpart to my simplest case neural signal transmission processing as a means of carrying out the above brain task, I assume that its artificial intelligence counterpart might be completed simply by the transmission of a single pulse of electrons or photons, without any processing step delays required.
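The same arithmetic, applied to the networked case under the same rounded assumptions (signal propagation at roughly the speed of light, and the same one tenth of a second task latency benchmark), gives the 30,000 kilometer figure cited above; the sketch also shows, for comparison, what a far tighter one tenth of a millisecond budget would still allow.

```python
# A companion sketch for the networked, artifactual case discussed above, using
# the same rounded assumptions as the text: signal propagation at roughly the
# speed of light, and the same one tenth of a second task latency benchmark.

signal_speed_m_per_s = 3.0e8   # assumed effective propagation speed (~speed of light)
task_latency_s = 0.1           # same benchmark latency as in the brain example

radius_km = signal_speed_m_per_s * task_latency_s / 1000.0
print(f"Transmission-only radius: {radius_km:,.0f} km")              # 30,000 km

# Even a task budget 1,000 times tighter (one tenth of a millisecond) would still
# allow a purely transmission-limited radius of some 30 km.
print(f"At 0.1 ms: {signal_speed_m_per_s * 1e-4 / 1000.0:,.0f} km")  # 30 km
```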

Individual neurons can fire up to some 200 times per second, depending on the type of function carried out, and an average neuron in the brain connects to on the order of 1,000 other neurons through complex dendritic branching and the synaptic connections they lead to, with some neurons connecting to as many as 10,000 others and more. I assume that artificial networks can grow to that level of interconnectedness and more, and with levels of nodal connectivity brought into any potentially emergent artificial intelligence activity that might arise in such a system that match and exceed those of the brain for complexity too. That, at least, is likely to prove true for the myriad organizing and managing nodes that would arise with time in at least functionally defined areas of this overall system, and that would explicitly take on middle and higher level SCADA-like command and control roles there.

This would slow down the actual end-to-end processing rate achievable, and reduce the maximum physical size of the connected network space involved here too, though probably not as severely as observed in the brain. After all, even today’s low cost, readily available laptop computers can carry out billions of operations per second, and that number continues to grow as Moore’s law continues to hold. So if we assume “slow” and lower priority tasks as well as more normatively faster ones for the artificial intelligence network systems that I write of here, it is hard to imagine restrictions that might realistically arise that would effectively limit such systems to volumes of space smaller than the Earth as a whole, and certainly when of-necessity higher speed functions and activities could be carried out by much more local subsystems, closer to where their outputs would be needed.

And to increase the expected efficiencies of these systems, brain as well as artificial network in nature, effectively re-expanding their functional radii, I repeat and invoke a term and a design approach that I used in passing above: parallel processing. That, together with the inclusion of subtask-performing specialized nodes, is where breaking a complex task up into smaller, faster-to-complete subtasks, whose individual outputs can be combined into a completed overall solution or resolution, can speed up overall task completion by orders of magnitude for many types of tasks, allowing more of them to be carried out within any given nominally expected benchmark time for “single” task completions. This of course also allows for faster completion of larger tasks within that type of performance measuring timeframe window too.
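As a way of making that tradeoff concrete, here is a deliberately simple timing model of the decomposition just described: a task split into independent subtasks across parallel workers, with a final combining step. All of the numbers are illustrative assumptions, not measurements of any real system.

```python
import math

# A toy timing model (not a benchmark) of the parallel decomposition described above:
# a complex task split into independent subtasks whose outputs are then combined.

def completion_time(num_subtasks: int, subtask_time_s: float,
                    workers: int, combine_time_s: float) -> float:
    """Time to finish all subtasks across parallel workers, plus a final combining step."""
    rounds = math.ceil(num_subtasks / workers)
    return rounds * subtask_time_s + combine_time_s

serial = completion_time(num_subtasks=100, subtask_time_s=0.001, workers=1, combine_time_s=0.002)
parallel = completion_time(num_subtasks=100, subtask_time_s=0.001, workers=25, combine_time_s=0.002)

print(f"Serial:   {serial:.3f} s")     # 0.102 s
print(f"Parallel: {parallel:.3f} s")   # 0.006 s
print(f"Speedup:  {serial / parallel:.1f}x")
```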

• What I have done here, at least in significant part, is to lay out an overall maximum connected systems reach that could be applied to the completion of tasks at hand, in either a human brain or an artificial intelligence-including network. And the limitation on the accessible volume of space there correspondingly sets an outer limit on the maximum number of functionally connected nodes that might be available, given that they all of necessity have space filling volumes that are greater than zero.
• When you factor in the average maximum processing speed of any information processing nodes or elements included there, this in turn sets an overall maximum, outer limit value on the number of processing steps that could be applied in such a system to complete a task of any given duration, within such a physical volume of activity. (A brief numerical sketch of these two bounds follows this list.)
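Purely as a way of seeing how those two bounds interact numerically, the following sketch turns them into arithmetic. Every input value here (the reachable radius, the minimum per-node volume, the per-node processing rate and the task time budget) is an assumed, order-of-magnitude illustration rather than a claim about any actual system.

```python
import math

# A rough numerical rendering of the two bounds just outlined, using assumed inputs.

def max_nodes(reach_radius_m: float, min_node_volume_m3: float) -> float:
    """Upper bound on functionally connected nodes that fit in the reachable sphere."""
    reachable_volume = (4.0 / 3.0) * math.pi * reach_radius_m ** 3
    return reachable_volume / min_node_volume_m3

def max_processing_steps(nodes: float, ops_per_node_per_s: float, task_time_s: float) -> float:
    """Upper bound on processing steps that could be applied to one task."""
    return nodes * ops_per_node_per_s * task_time_s

# Assumed values: a 12 m reach, nodes each filling a (0.1 mm)^3 volume,
# 200 operations per node per second, and a 0.1 second task budget.
nodes = max_nodes(reach_radius_m=12.0, min_node_volume_m3=1e-12)
steps = max_processing_steps(nodes, ops_per_node_per_s=200.0, task_time_s=0.1)

print(f"Node bound: {nodes:.2e}")
print(f"Step bound: {steps:.2e}")
```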

What are the general principles beyond that set of observations that I would return to here, given this sixth scenario? I begin addressing that question by noting a basic assumption that is built into the first five scenarios as offered in this series, and certainly into the first four of them: that artificial intelligence per se resides as a significant whole in specific individual nodes. I fully expect that this will prove true in a wide range of realized contexts, as that possibility is already becoming a part of our basic reality now with the emergence and proliferation of artificial specialized intelligence agents. But as this posting’s sixth scenario points out, that assumption is not the only one that might be realized. And in fact it will probably only account for part of what will come to be seen as artificial intelligence as it arises in these overall systems.

The second additional assumption that I would note here is that of scale and complexity, and how fundamentally different types of implementation solutions might arise, and might even be possible, strictly because of how they can be made to work with overall physical systems limitations such as the fixed and finite speed of light.

Looking beyond my simplified examples as outlined here, brain-based and artificial alike: what is the maximum effective radius of a wired AI network that would, as a distributed system, come to display true artificial general intelligence? How big a space would have to be tapped into for its included nodes to match a presumed benchmark human brain performance for threshold to cognitive awareness and functionality? And how big a volume of functionally connected nodal elements could be brought to bear for this? Those are open questions, as are their corresponding scale parameter questions as to “natural” general intelligence per se. I would end this posting by simply noting that disruptively novel new technologies and technology implementations that significantly advance the development of artificial intelligence per se, and the development of artificial general intelligence in particular, are likely to improve both the quality and functionality of the individual nodes involved, regardless of which overall development scenarios are followed, and their capacity to synergistically network together.

I am going to continue this discussion in a next series installment where I will step back from considering specific implementation option scenarios, to consider overall artificial intelligence systems as a whole. I began addressing that higher level perspective and its issues here, when using the scenario offered in this posting to discuss overall available resource limitations that might be brought to bear on a networked task, within given time-to-completion restrictions. But that is only one way to parameterize this type of challenge, and in ways that might become technologically locked in and limited from that, or allowed to remain more open to novelty – at least in principle.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Addendum note: The above presumptive end note added at the formal conclusion of this posting aside, I actually conclude this installment with a brief update to one of the evolutionary development-oriented examples that I in effect began this series with. I wrote in Part 2 of this series of a biological evolution example of what can be considered an early technology lock-in, or rather a naturally occurring analog of one: an ancient biochemical pathway that is found in all cellular life on this planet, the pentose shunt.

I add a still more ancient biological systems lock-in example here that in fact had its origins in the very start of life itself as we know it, on this planet. And for purposes of this example, it does not even matter whether the earliest genetic material employed in the earliest life forms was DNA or RNA in nature for how it stored and transmitted genetic information from generation to generation and for how it used such information in its life functions within individual organisms. This is an example that would effectively predate that overall nucleic acid distinction as it involves the basic, original determination of precisely which basic building blocks would go into the construction and information carrying capabilities of either of them.

All living organisms on Earth, with a few viral exceptions, employ DNA as their basic archival genetic material, and use RNA as an intermediary in accessing and making use of the information stored there. Those exceptional viruses use RNA for their own archival genetic information storage, and the DNA replicating and RNA fabricating machinery of the host cells they live in to reproduce. And the genetic information included in these systems, certainly at a DNA level, is all encoded in linearly laid out patterns of molecular building blocks called nucleotides. Life on Earth uses combinations of four possible nucleotide bases for this coding and decoding: adenine (A), thymine (T), guanine (G) and cytosine (C). And it was presumed, at least initially, that the specific chemistry of these four possibilities made them somehow uniquely suited to this task.

More recently it has been found that there are other possibilities that can be synthesized and inserted into DNA-like molecules, with the same basic structure and chemistry, that can also carry and convey this type of genetic information stably and reliably (see for example: Hachimoji DNA and RNA: a genetic system with eight building blocks).

And it is already clear that this represents only a small subset of the information coding possibilities that might have arisen as alternatives, before the A/T/G/C genetic coding became locked in, in practice, in life on Earth.

If I could draw one relevant conclusion to this still unfolding story that I would share here, it is that if you want to find technology lock-ins, or their naturally occurring counterparts, look to your most closely and automatically held developmental assumptions, and certainly when you cannot rigorously justify them from first principles. Then question the scope of relevance and generality of your first principles there, for hidden assumptions that they carry within them.

Some thoughts concerning a general theory of business 29: a second round discussion of general theories as such, 4

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on June 11, 2019

This is my 29th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-28.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions. Then I began a second round, next step discussion of general theories per se in Parts 26-28 of this, building upon my initial discussion of general theories per se, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon.

More specifically, I have used the last three postings to that progression to at least begin a more detailed analysis of axioms as assumed and assumable statements of underlying fact, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

I focused on issues of completeness and consistency in these types of theory grounding systems in Part 28, and briefly outlined there how bodies of theory of the first of those two categorical types cannot be both fully complete and fully consistent, and cannot prove their own consistency, if they can be expressed in enumerable form of a type consistent with, and as such including, the axiomatic underpinnings of arithmetic: the most basic of all areas of mathematics, as formally axiomatically laid out by Whitehead and Russell in their seminal work: Principia Mathematica.

I also raised and left open the possibility that the outside validation provided in axiomatically open bodies of theory, as identified above, might afford alternative mechanisms for de facto validation of completeness, or at least consistency in them, where Kurt Gödel’s findings as briefly discussed in Part 28, would preclude such determination of completeness and consistency for any arithmetically enumerable axiomatically closed bodies of theory.

That point of conjecture began a discussion of the first of a set of three basic, and I have to add essential, topic points that would have to be addressed in establishing any attempted-comprehensive body of theory: the dual challenges of the scope and applicability of completeness and consistency per se as organizing goals, and certainly as they might be considered in the contexts of more general theories. And that has left these two here-repeated follow-up points for consideration:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited in their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles there per se?
• Then after addressing that set of issues I said that I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, and with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

My goal for this series installment is to at least begin to address the first of those two points and its issues, adding to my already ongoing discussion of completeness and consistency in complex axiomatic theories while doing so. And I begin by more directly and explicitly considering the nature of outside-sourced, a priori empirically or otherwise determined observations and the data that they would generate, that would be processed into knowledge through logic-based axiomatic reasoning.

Here, and to explicitly note what might be an obvious point of observation on the part of readers, I would as a matter of consistency represent the proven lemmas and theorems of a closed body of theory, such as a body of mathematical theory, as proven and validated knowledge as based on that theory. And I correspondingly represent still-open, unproven and unrefuted theoretical conjectures, as they arise and are proposed in those bodies of theory, as potentially validatable knowledge in those systems. And having noted that point of assumption (presumption?), I turn to consider open systems, as for example would be found in theories of science or of business, in what follows.

• Assigned values and explicitly defined parameters, as arise in closed systems such as mathematical theories with their defined variables and other constructs, can be assumed to represent absolutely accurate input data. And that, at least as a matter of meta-analysis, even applies when such data is explicitly offered and processed through axiomatic mechanisms as being approximate in nature and variable in range; there, “approximate” and “variable” are themselves explicitly defined, or at least definable, in such systems and their applications, formally and specifically providing precise constraints on the data that they would organize, even then.
• But it can be taken as an essentially immutable axiomatic principle: one that cannot be violated in practice, that outside sourced data that would feed into and support an axiomatically open body of theory, is always going to be approximate for how it is measured and recorded for inclusion and use there, and even when that data can be formally defined and measured without any possible subjective influence – when it can be identified and defined and measured in as completely objective a manner as possible and free of any bias that might arise depending on who observes and measures it.

Can an axiomatically open body of theory somehow be provably complete, or even just consistent for that matter, due to the balancing and validating inclusion of outside, frame of reference-creating data such as experientially derived empirical observations? That question can be seen as raising an interesting, at least potential conundrum, and certainly if a basic axiom of the physical sciences that I cited and made note of in Part 28 is (axiomatically) assumed true:

• Empirically grounded reality is consistent across time and space.

That, at least in principle, after all, raises what amounts to an immovable object versus an irresistible force type of challenge. But as soon as the data that is actually measured, as based on this empirically grounded reality, takes on what amounts to a built in and unavoidable error factor, I would argue that any possible outside-validated completeness or consistency becomes moot at the very least, and certainly for any axiomatically open system of theory that might be contemplated or pursued here.

• This means that when I write of selecting, framing and characterizing and using axioms and potential axioms in such a system, I write of bodies of theory that are of necessity always going to be works in progress: incomplete and potentially inconsistent and certainly as new types of incoming data are discovered and brought into them, and as better and more accurate ways to measure the data that is included are used.

Let me take that point of conjecture out of the abstract by citing a specific source of examples that are as solidly established as our more inclusive and more fully tested general physical theories of today. And I begin this with Newtonian physics, which was developed at a time when experimental observation was limited both in the range of phenomena observed and in the levels of accuracy attainable for what was observed and measured, making it impossible to empirically record the types of deviation from expected observations that would call for new and more inclusive theories, with new and altered underlying axiomatic assumptions, as subsequently developed in the special theory of relativity by Einstein and others. Newtonian physics neither calls for nor accommodates anything like the axiomatic assumptions of the special theory of relativity, holding for example that the speed of light is constant in all frames of reference. More accurate measurements, taken over wider ranges of experimental examination of observable phenomena, forced change to the basic underlying axiomatic assumptions of Newton (e.g. his laws of motion). And further expansion of the range of phenomena studied, and of the level of accuracy with which data is collected from all of this, might very well lead to the validation and acceptance of still more widely inclusive basic physical theories, with whatever changes in their axiomatically presumed foundations that would be included there. (Discussions of alternative string theory models of reality, among other possibilities, come to mind here, where experimental and observational limitations of the types that I write of here are such as to preclude any real culling and validating, to arrive at a best possible descriptive and predictive model theory.)
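To make the measurement-accuracy side of that argument concrete, here is a small illustrative calculation of my own (not drawn from the works being discussed): the relativistic time-dilation factor at several speeds. At the speeds and instrument accuracies available to Newton-era and even nineteenth century experimenters, the correction sits so close to one that no deviation from the Newtonian picture could have been recorded.

```python
import math

# Illustrative only: why Newtonian mechanics looked complete for so long.
# At everyday and astronomical speeds the relativistic correction is far below
# any accuracy attainable by early experimenters. Speeds are rounded examples.

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(speed_m_per_s: float) -> float:
    """Relativistic time-dilation factor; a value of 1.0 recovers the Newtonian picture."""
    return 1.0 / math.sqrt(1.0 - (speed_m_per_s / C) ** 2)

for label, v in [("cannonball, ~300 m/s", 300.0),
                 ("Earth's orbital speed, ~30 km/s", 3.0e4),
                 ("half the speed of light", 0.5 * C)]:
    print(f"{label:35s} gamma = {lorentz_gamma(v):.12f}")
```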

At this point I would note that I tossed a very important set of issues into the above text in passing, and without further comment, leaving it hanging over all that has followed it up to here: the issues of subjectivity.

Data that is developed and tested for how it might validate or disprove proposed physical theory might be presumed to be objective, as a matter of principle. Or alternatively, and as a matter of practice, it might be presumed possible to obtain such data that is arbitrarily close to being fully free from systematic bias, as based on who is observing and what they think about the meaning of the data collected. And the requirement that experimental findings be independently replicated by different researchers in different labs and with different equipment, and certainly where findings are groundbreaking and unexpected, serves to support that axiomatic assumption as being basically reliable. But it is not as easy or as conclusively presumable to assume that type of objectivity for general theories that of necessity have to include within them individual human understanding and reasoning, with all of the additional and largely unstated axiomatic presumptions that this brings with it, as exemplified by a general theory of business.

That simply adds whole new layers of reason to any argument against presumable completeness or consistency in such a theory and its axiomatic foundations. And once again, this leaves us with the issues of such theories always being works in progress, subject to expansion and to change in general.

And this brings me specifically and directly to the above-stated topic point that I would address here in this brief note of a posting: the determination of which possible axioms to include and build from in these systems. And that, finally, brings me to the issues and approaches that are raised in a reference work that I have been citing for a while now in this series in anticipation of this discussion thread, and to an approach to the foundations of mathematics and its metamathematical theories that this and similar works seek to clarify if not codify:

• Stillwell, J. (2018) Reverse Mathematics: Proofs from the Inside Out. Princeton University Press.

I am going to more fully and specifically address that reference and its basic underlying conceptual principles in a next series installment. But in anticipation of doing so, I end this posting with a basic organizing point of reference that I will build from there:

• The more traditional approach to the development and elaboration of mathematical theory, going back at least as far as the birth of Euclidean geometry, was one of developing a set of axioms that would be presumed as if absolute truths, and then deriving emergent lemmas and theorems from them.
• Reverse mathematics is so named because it literally reverses that, starting with theorems to be proven and then asking what minimal sets of axioms would be needed in order to prove them.

My goal for the next installment to this series is to at least begin to consider both axiomatically closed and axiomatically open theory systems in light of these two alternative metatheory approaches. And in anticipation of that narrative line to come, this will mean reconsidering compendium models and how they might arise as needs for new axiomatic frameworks of understanding arise, and as established ones become challenged.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 15

This is my 15th installment to a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-14.)

My goal for this installment is to reframe what I have been offering up to here in this series, and certainly in its most recent postings up to now. And I begin that by offering a very specific and historically validated point of observation (that I admit up-front will have a faulty assumption built into it, that I will raise and discuss later on in this posting):

• It can be easily and cogently argued that the single greatest mistake that the civilian and military leadership of a nation can make, when confronting and preparing for possible future challenge and conflict,
• Is to simply think along familiar lines, and with that to act according to what is already comfortable and known – thinking through and preparing to fight a next war as if it would only be a repeat of the last one that their nation faced,
• And no matter how long ago that happened, and regardless of whatever geopolitical change and technological advancement might have taken place since then.
• Strategic and tactical doctrine and the logistics and other “behind the lines” support systems that would enable them, all come to be set as if in stone: and in stone that was created the last time around in the crucible of their last conflict. And this has been the basic, default pattern followed by most and throughout history.
• This extended cautionary note applies in a more conventional military context, where anticipatory preparation for proactively addressing threats is attempted and when reactive responses to those threats are found necessary too. But the points raised here are just as cogently relevant in a cyber-conflict context, or in a mixed cyber plus conventional context (as Russia has so recently deployed in Ukraine, as its leadership has sought to restore something of its old Soviet era protective buffer zone around the motherland if nothing else).
• History shows more leaders and more nations that in retrospect have been unprepared for what is to come, than it does those who were ready to more actively consider and prepare for emerging new threats and new challenges, and in new ways.
• Think of the above as representing, in outline, a default strategic doctrine: one that should call for a widening of the range and scope of what is considered possible, and of how new possibilities might have to be addressed, but that by its very nature cannot be up to that task.

To take that out of the abstract, consider a very real world example of how the challenges I have just discussed, arise and play out.

• World War I with its reliance on pre-mechanized tactics and strategies, with its mass frontal assault charges and its horse cavalry among other “trusted traditions,” and with its reliance on trench warfare to set and hold front lines and territory in all of that:
• Traditions that had led to horrific loss of life even in less technologically enabled previous wars such as the United States Civil War,
• Arguably led to millions of what should have been completely avoidable casualties as foot soldiers faced walls of machinegun fire and tanks, aircraft bombardment and aerial machinegun attack and even poison gas attacks as they sought to prevail through long-outmoded military practice.

And to stress a key point that I have been addressing here, I would argue that cyber attacks, both as standalone initiatives and as elements in more complex offensives, hold potential for causing massive harm, and to all sides involved in them. And proactively seeking to understand and prepare for what might come next there can be just as important as comparable preparation is in a more conventional warfare-oriented context. Think World War I, if nothing else, as a cautionary example of the possible consequences in a cyber-theatre of conflict of making the mistakes outlined in the above bullet pointed preparation and response doctrine.

Looking back at this series as developed up to here, and through its more recent installments in particular, I freely admit that I have been offering what might be an appearance of taking a more reactive and hindsight-oriented perspective here. And the possibility of confusion there on the part of a reader begins in its Part 1 from the event-specific wording of its title, and with the apparent focus on a single now historical event that that conveys. But my actual overall intention here is in fact more forward thinking and proactively so, than retrospective and historical-narrative in nature.

That noted, I have taken an at least somewhat historical approach to what I have written in this series up to here and even as I have offered a few more general thoughts and considerations here too. But from this point on I will offer an explicitly dual narrative:

• My plan is to initially offer a “what has happened”, historically framed outline of at least a key set of factors and considerations that have led us to our current situation. That will largely follow the pattern that I have been pursuing here and certainly as I have discussed Russia as a source of working examples in all of this.
• Then I will offer a more open perspective that is grounded in that example but not constrained by it, for how we might better prepare for the new and disruptively novel and proactively so where possible, but with a better reactive response where that proves necessary too.

My goal in that will not be to second guess the decisions and actions of others, back in 2016 and leading up to it, or from then until now as of this writing. And it is not to offer suggestions as to how to better prepare for a next 2016-style cyber-attack per se, and certainly not as a repeat of an old conflict somehow writ new. To clarify that with a specific, currently in-the-news example: Russian operatives, and others who were effectively operating under their control for this, exploited Facebook leading up to the 2016 US presidential and congressional elections, using armies of robo-Facebook members: artifactual accounts for posting false content that were set up to appear as coming from real people, and from real American citizens in particular. Facebook has supposedly tightened its systems to better identify and delete such fake, manipulative accounts and their online disinformation campaigns. And with that noted, I cite:

In Ukraine, Russia Tests a New Facebook Tactic in Election Tampering.

Yes, this new approach (as somewhat belatedly noted above) is an arms race advancement meant to circumvent the changes made at Facebook as they have attempted to limit or prevent how their platform can be used as a weaponized capability by Russia and others as part of concerted cyber attacks. No, I am not writing here of simply evolutionary next step work-arounds or similar more predictable advances in cyber-weapon capabilities of this type, when writing of the need to move beyond simply preparing for a next conflict as if it would just be a variation on the last one fought.

That noted, I add that yes, I do expect that the social media based disinformation campaigns will be repeated as an ongoing means of cyber-attack, and both in old and in new forms. But fundamentally new threats will be developed and deployed too that will not fit the patterns of anything that has come before. So my goal here is to take what might be learnable lessons from history: recent history and current events included, combined with a consideration of changes that have taken place in what can be done in advancing conflicts, and in trends in what is now emerging as new possibilities there, to at least briefly consider next possible conflicts and next possible contexts that they might have to play out in. My goal for this series as a whole is to discuss Proactive as a process and even as a strategic doctrine, and in a way that at least hopefully would positively contribute to the national security dialog and offer a measure of value moving forward in general.

With all of that noted as a reframing of my recent installments to this series at the very least, I turn back to its Part 14 and how I ended it, and with a goal of continuing its background history narrative as what might be considered to be a step one analysis.

I wrote in Part 13 and again in Part 14 of Russia’s past as a source of the fears and concerns, that drive and shape that nation’s basic approaches as to how it deals with other peoples and other nations. And I wrote in that, of how basic axiomatic assumptions that Russia and its peoples and government have derived from that history, shape their basic geopolitical policy and their military doctrine for now and moving forward too. Then at the end of Part 14 I said that I would continue its narrative here by discussing Vladimir Putin and his story. And I added that that is where I will of necessity also discuss the 45th president of the United States: Donald Trump and his relationship with Russia’s leadership in general and with Putin in particular. And in anticipation of this dual narrative to come, that will mean my discussing Russia’s cyber-attacks and the 2016 US presidential election, among other events. Then, as just promised here, I will step back to consider more general patterns and possible transferable insights.

Then I will turn to consider China and North Korea and their current cyber-policies and practices. And I will also discuss current and evolving cyber-policies and practices as they are taking shape in the United States as well, as shaped by its war on terror among other motivating considerations. I will use these case studies to flesh out the proactive paradigm that I would at least begin to outline here as a goal of this series. And I will use those real world examples at least in part to in effect reality check that paradigmatic approach too, as I preliminarily offer it here.

And with that, I turn back to the very start of this posting, and to the basic orienting text that I begin all of the installments to this series with. I have consistently begun these postings by citing “cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations.” To point out an obvious example, I have made note of the internet of things 15 times now in this way, but I have yet to discuss it at all up to here in the lines of discussion that I have been offering. I do not even mention artificial intelligence-driven cyber-weaponization there in that first paragraph opening text, where that is in fact one of the largest and most complex sources of new threats that have ever been faced and at any time in history. And its very range and scope, and its rate of disruptively new development advancement will probably make it the single largest categorical source of weaponized threat that we will all face in this 21st century, and certainly as a source of weaponized capabilities that will be actively used. I will discuss these and related threat sources when considering the new and unexpected and as I elaborate on the above noted proactive doctrine that I offer here.

And as a final thought here, I turn back to my bullet pointed first take outline of that possible proactive doctrine, to identify and address the faulty assumption that I said I would build into it, and certainly as stated above. And I do so by adding one more bullet point to that initial list of them:

• I have just presented and discussed a failure to consider the New when preparing for possible future conflict, and its consequences. And I prefaced that advisory note by acknowledging that I had built a massive blind spot into what I would offer there: I have written all of the above strictly in terms of nations and their leaders and decision makers. That might be valid in a more conventional military sense but it is not and cannot be considered so in anything like a cyber-conflict setting, and for either thinking about or dealing with aggressors, or thinking about and protecting, or remediating harm to victims. Yes, nations can and do develop, deploy and use cyber-weapon capabilities, and other nations can be and have been their intended targets. But this is a capability that smaller organizations and even just skilled and dedicated individuals can acquire, if not develop on their own. And it is a capability that can be used against targets of any scale of organization, from individuals on up. That can mean attacks against specific journalists, or political enemies, or competing business executives or employees. It can mean attacks against organizations of any size or type, including nonprofits and political parties, small or large businesses and more. And on a larger than national scale, this can mean explicit attack against international alliances such as the European Union. Remember, Russian operatives have been credited with sowing disinformation in Great Britain leading up to its initial Brexit referendum vote, to try to break that country away from the European Union and at least partly disrupt it. And they have arguably succeeded there. (See for example, Brexit Goes Back to Square One as Parliament Rejects May’s Plan a Third Time.)

If I were to summarize and, I add, generalize this first-draft, last (for now) bullet point addition to this draft doctrine, I would add:

• New and the disruptively new in particular, break automatically presumed, unconsidered “axiomatic truths,” rendering them invalid moving forward. This can mean New breaking and invalidating assumptions as to where threats might come from and where they might be directed, as touched upon here in this posting. But more importantly, this can mean the breaking and invalidating of assumptions that we hold to be so basic that we are fundamentally unaware of them in our planning – until they are proven to be wrong in an active attack and as a new but very real threat is realized in action. (Remember, as a conventional military historical example of that, how “everyone” knew that aircraft-launched anti-ship torpedoes could not be effectively deployed and used in shallow waters as found in places such as Pearl Harbor – until, that is, they were.)

And with that, I will offer a book recommendation that I will be citing in upcoming installments to this series, adding it here in anticipation of doing so for anyone interested:

• Kello, L. (2017) The Virtual Weapon and International Order. Yale University Press.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 6

This is my 6th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-5.) And this is also my third posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4 and Part 5.)

I began discussing overall patterns of technology implementation in an advancing artificial intelligence agent context in Part 4, where I cited a set of possible scenarios that might significantly arise for that in the coming decades, for how artificial intelligence capabilities in general might proliferate, as originally offered in:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

And to briefly repeat from what I offered there in this context, for smoother continuity of narrative, I cited and began discussing those four possible scenarios (using Rose’s names for them) as:

1. Terminal world, in which most or even essentially all human/artificial intelligence agent interactions take place through the “glass slabs and painted pixels” of smart phone and other separating, boundary maintaining interfaces.
2. Prosthetics, in which a major thrust of this technology development is predicated upon human improvement, with the internalization of these new technology capabilities within us.
3. Animism, and the emergence of artificial intelligence ubiquity through the development and distribution of seemingly endless numbers of smart robotic and artificially intelligence-enabled nodes.
4. And Enchanted Objects, in which the once routine and mundane of our everyday life becomes imbued with amazing new capabilities. Here, unlike in the immediately preceding scenario, the focus of attention and of action rests in specific devices and their circumstances, which individually rise to prominence of attention for many if not most people, whereas the real impact of the animism scenario would be found in a mass effect gestalt arising from what are collectively impactful, but individually mostly unnoticed, smart(er) parts.

I at least briefly argued the case there for assuming that we will in fact come to see some combination of these scenarios arise, as each at least contextually comes out on top as a best approach for at least some set of recurring implementation contexts. And I effectively begin this posting by challenging a basic assumption that I built into that assessment:

• The tacit and all but axiomatic assumption that enters into a great deal of the discussion and analysis of artificial intelligence, and of most other still-emerging technologies as well,
• That while the disruptively novel can and does occur as a matter of principle, it is unlikely to happen, and certainly right now, in any given technology development context that is actively being pursued along some apparently fruitful current developmental path.

All four of the above repeated and restated scenario options have their roots in our here and now and in its more readily predictable linear development moving forward. It is of the nature of the disruptively new and novel that it comes without noticeable warning and precisely in ways that would be unexpected. The truly disruptively novel innovations that arise come as if lightning out of a clear blue sky, and they blindside everyone affected by them for their unexpected suddenness and for their emerging impact, as they begin to gain traction in implementation and use. What I am leading up to here is very simple, at least in principle, even if the precise nature of the disruptively new and novel limits our ability to foresee in advance the details of what is to come of it:

• While all of the first four development and innovation scenarios as repeated above will almost certainly come to play at least something of a role in our strongly artificial intelligence-shaped world to come, we also have to expect all of this to develop and play out in disruptively new ways too, both as sources of specific contextually relevant solutions for how best to implement this new technology, and for how all of these more context-specific solutions are in effect going to be glued together to form overall, organized systems.

I would specifically stress the two sides to that more generally and open-endedly stated fifth option, which I just touched upon in passing in the above bullet point. I write here of more locally, contextually specific implementation solutions for how artificial intelligence will connect to the human experience. But I also write of the possibility that the overarching connectivity frameworks that all more local context solutions would fit into are likely going to emerge as disruptively new too. And with that noted as a general prediction as to what is likely to come, I turn here to at least consider some of the how and why details that would lead me to make this prediction in the first place.

Let’s start by rethinking some of the implications of a point that I made in Part 4 of this series when first addressing the issues of artificial intelligence, and of artificial intelligence agents per se. We do not even know what artificial general intelligence means, at least at anything like an implementation-capable level of understanding. We do not in fact even know what general intelligence is per se and even just in a more strictly human context, at least where that would mean our knowing what it is and how it arises in anything like a mechanistic sense. And in fact we are, in a fundamental sense, still learning what even just artificial specialized and single task intelligence is and how that might best be implemented.

All of this still-present, significantly impactful lack of knowledge and insight raises the likelihood that all that we know and think that we know here, is going to be upended by the novel, the unexpected and the disruptively so – and probably when we least expect that.

And with this stated, I raise and challenge a second basic assumption that by now should be more generally disavowed, but that still hangs on. A few short decades from now, all of the billions of human online nodes: the human-operated devices and virtual devices that we connect online through, will collectively account for only a small fraction of the overall online connected universe: the overall connectiverse that we are increasingly living in. All of the rest: the soon to be vast majority of this, will be device-to-device in nature, and fit into what we now refer to as the internet of things. And pertinently to this discussion, that means that the vast majority of the connectedness that is touched upon in the above four (five?) scenarios is not going to be about human connectedness per se at all, except perhaps indirectly. And this very specifically leads me back to what I view as the real imperative of the fifth scenario: the disruptively new and novel pattern of overall connectivity that I made note of above, and certainly when considering the glue that binds our emerging overall systems together, with all of the overarching organizational implications that that option and possibility raises.

Ultimately, what works, both at a more needs-specific contextual level there and at an overall systems connecting and interconnecting level, is going to be about optimization, with aesthetics and human tastes critically important, and certainly for technology solution acceptance, in human-to-human and human-to-artificial intelligence agent contexts. But in a strictly, or even just primarily artificial intelligence agent-to-artificial intelligence agent and dumb device-to-artificial intelligence agent context, efficiency measures will dominate that are not necessarily human usage-centric. And they will shape and drive any evolutionary trends that arise as these overall systems continue to advance and evolve (see Part 3 and Part 5 for their discussions of adaptive peak models and related evolutionary trend-describing conceptual tools, as they would apply to this type of context).

If I were to propose one likely detail that I fully expect to arise in any such overall organizing, disruptively novel interconnection scenario, it is that the nuts and bolts details of the still just emerging overall networking system that I write of here will most likely reside and function at a level that is not explicitly visible, and certainly not to the human participants in it, unless they are directly connected into any of the contextual scenario solutions that arise and that are developed and built into it: human-to-human, human-to-device or intelligent agent, or device or agent-to-device or agent. And this overarching technology, optimized in large part by the numerically compelling pressures of device or agent-to-device or agent connectivity needs, will probably take the form of a set of universally accepted and adhered-to connectivity protocols: rules of the road that are not going to be all that human-centric.
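To suggest what such machine-centric rules of the road might look like, here is a purely hypothetical sketch; none of the field names or structures below correspond to any existing protocol or standard, and they are invented here solely to illustrate how such conventions would be optimized for agent-to-agent and device-to-device negotiation rather than for human readability.

```python
from dataclasses import dataclass, field
from typing import Optional
import time
import uuid

# Hypothetical illustration only: an invented message envelope for agent-to-agent
# and device-to-agent traffic. No existing protocol is being described here.

@dataclass
class AgentMessage:
    sender_id: str                 # stable machine identity, not a human-readable name
    receiver_id: str
    priority: int                  # scheduling hint negotiated between agents
    latency_budget_ms: float       # how stale this message may become and still matter
    payload: bytes                 # opaque to the transport layer
    message_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    sent_at: float = field(default_factory=time.time)

    def is_expired(self, now: Optional[float] = None) -> bool:
        """A drop rule that an intermediate node could apply with no human involvement."""
        now = time.time() if now is None else now
        return (now - self.sent_at) * 1000.0 > self.latency_budget_ms

# Example: a sensor node handing a reading to a local controller node.
msg = AgentMessage("sensor-7f3a", "controller-0c12", priority=3,
                   latency_budget_ms=50.0, payload=b"\x01\x02")
print(msg.message_id, msg.is_expired())
```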

I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Some thoughts concerning a general theory of business 28: a second round discussion of general theories as such, 3

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on April 6, 2019

This is my 28th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26 and 27.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions.

Then I began a second round, next step discussion of general theories per se in Part 26 and Part 27, to add to the foundation that I have been discussing theories of business in terms of, and as a continuation of the Parts 1-8 narrative that I began all of this with. More specifically, I used those two postings to begin a more detailed analysis of axioms per se, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely comprised of their particular sets of those axiomatic assumptions as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically enclosed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

Any general theory of business, like any organized body of scientific theory would fit the second of those basic patterns as discussed here and particularly in Part 27. My goal for this posting is to continue that line of discussion, and with an increasing focus on the also-empirically grounded theories of the second type as just noted, and with an ultimate goal of applying the principles that I discuss here to an explicit theory of business context. That noted, I concluded Part 27 stating that I would turn here to at least begin to examine:

• The issues of completeness and consistency, as those terms are defined and used in a purely mathematical logic context and as they would be used in any theory that is grounded in descriptive and predictive enumerable form. And I will use that more familiar starting point as a basis for more explicitly discussing these same issues as they arise in an empirically grounded body of theory too.
• How new axioms would be added into an already developing body of theory, and how old ones might be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles per se.
• Then after addressing that set of issues I said that I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, and with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

And I begin addressing the first of those points by citing two landmark works on the foundations of mathematics:

• Whitehead, A.N. and B. Russell. (1910) Principia Mathematica (in 3 volumes). Cambridge University Press.
• And Gödel’s Incompleteness Theorems.

Alfred North Whitehead and Bertrand Russell set out in their above-cited magnum opus to develop and offer a complete axiomatically grounded foundation for all of arithmetic, as the most basic of all branches of mathematics. And this was in fact viewed as a key step realized in fulfilling the program of David Hilbert: a renowned early 20th century mathematician who sought to develop a comprehensive and all-inclusive single theory of mathematics in what became known as Hilbert’s program. All of this was predicated on the validity of an essentially unchallenged metamathematical axiomatic assumption, to the effect that it is in fact possible to encompass arbitrarily large areas of mathematics, and even all of validly provable mathematics as a whole, in a single finite scaled, completely consistent and completely decidable set of specific axiomatic assumptions. Then Kurt Gödel proved that even just the arithmetical system offered by Whitehead and Russell can never be complete in this sense, from how it would of necessity carry in it an ongoing requirement for adding in new axioms to what is supportively presumed for it, unendingly so, if any real comprehensive completeness were to be pursued. And on top of that, Gödel proved that such an axiomatic system can never prove, from within itself and with comprehensive certainty, that it is completely and fully consistent either! And this would apply to any abstract, enclosed axiomatic system that can in any way be represented arithmetically: as being calculably enumerable. But setting aside the issues of a body of theory facing this type of limitation simply because it can be represented in correctly formulated mathematical form, for the findings developed out of its founding assumptions (where that might easily just mean larger and more inclusive axiomatically enclosed bodies of theory that do not depend on outside non-axiomatic assumptions for their completeness or validity – as opposed to, e.g., empirically grounded theories), what does this mean for explicitly empirically grounded bodies of theory, such as larger and more inclusive theories of science, or for purposes of this posting, of business?

I begin addressing that question by explicitly noting what has to be considered the single most fundamental a priori axiom that underlies all scientific theory, and certainly all bodies of theory such as physics and chemistry that seek to comprehensively describe, both descriptively and predictively, what in total would include the entire observable universe, from its big bang origins to now and into the distant future as well:

• Empirically grounded reality is consistent. Systems under consideration, as based at least in principle on specific, direct observation, might undergo phase shifts in which system-dominating properties take on more secondary roles and new ones gain such prominence. But that only reflects a need for more explicitly comprehensive theory that would account for, explain and explicitly describe all of this predictively describable structure and activity. And underlying that and similar at least seeming complexity, the same basic principles, and the same conceptual rules that encode them for descriptive and predictive purposes, hold true everywhere and throughout time.
• To take that out of the abstract, the same basic types of patterns of empirically observable reality that can be representationally modeled by descriptive and predictive rules such as Charles’ law or Boyle’s law, would be expected to arise wherever such thermodynamically definable systems do. And the equations they specify would hold true, and with precisely the same levels and types of accuracy, wherever so applied (see the equations offered just below this list).
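To make that example explicit, these are the standard textbook forms of the two rules just cited, stated for a fixed quantity of an ideal gas, where the subscripts 1 and 2 simply label any two observed states of the same system:

\[ P_1 V_1 = P_2 V_2 \quad \text{(Boyle’s law, temperature held constant)} \]

\[ \frac{V_1}{T_1} = \frac{V_2}{T_2} \quad \text{(Charles’ law, pressure held constant)} \]

\[ \frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2} \quad \text{(their combined form)} \]

The consistency axiom offered above amounts to the claim that relationships of this type, once correctly formulated, can be relied upon to hold for any comparable system, anywhere and at any time.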

So if an axiomatically closed, in principle self-contained system, and the enclosed body of theory that would be derived from it (e.g. Whitehead’s and Russell’s formulation of arithmetic), cannot be both fully complete and provably consistent, as noted above:

• Could grounding a body of theory that could be represented in what amounts to that same form, and as if a case in point application of it, in what amounts to a reality check framework of empirical observation, allow for or even actively support a second possible path to establishing full completeness and consistency there? Rephrasing that, could the addition of outside-sourced evidence that frames and shapes a theory, or of formally developed experimental or observational data, allow for what amounts to an epistemologically meaningful grounding of a body of theory, through inclusion of an outside-validated framework of presumable consistency?
• Let’s stretch the point made by Gödel, or at least risk doing so, where I still at least tacitly assume bodies of theory that can in some meaningful sense be mapped to a Whitehead and Russell type of formulation of arithmetic, through theory-defined and included descriptive and predictive mathematical models and the equations they contain. Would the same limiting restrictions that are found in axiomatically enclosed theory systems as discussed here, also arise in open theory systems so linked to them? And if so, where, how and with what consequence?

As something of an aside perhaps, this somewhat convoluted question does raise an interesting possibility as to the meaning and interpretation of quantum theory, and of quantum indeterminacy in particular, with resolution to a single “realized” solution only arrived at when observation causes a set of alternative possibilities to collapse down to one. But setting that aside, and the issue of how this would please anyone who still adheres to the precept of number: of mathematics representing the true prima materia of the universe (as did Pythagoras and his followers), what would this do to anything like an at least strongly empirically grounded, logically elaborated and developed theory such as a general theory of business?

I begin to address that challenge by offering a counterpart to the basic and even primal axiom that I just made note of above, and certainly for the physical sciences:

• Assume that a sufficiently large and complete body of theory can be arrived at,
• That would have a manageable, finite set of underlying axiomatic assumptions that would be required by, and sufficient to address, any given empirically testable context that might arise in its practical application,
• And in a manner that at least for those test case purposes would amount to that theory functioning as if it were complete and consistent as an overall conceptual system.
• And assume that this reframing process could be repeated as necessary, when for example disruptively new and unexpected types of empirical observation arise (see the illustrative sketch that follows this list).
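As a purely illustrative aside, the following toy Python sketch captures, in the most schematic terms possible, the bare logic of that reframing loop. Every name in it (Axiom, Theory, the naive demand rule and the Veblen-style exception that revises it) is a hypothetical placeholder used only to show the account-for, revise and repeat pattern described in the list above, and not a formal system in its own right.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Axiom:
    # One assumed rule, with a predicate that checks an observation against it.
    name: str
    holds_for: Callable[[Dict[str, float]], bool]


@dataclass
class Theory:
    # A body of theory modeled, very crudely, as a finite set of axioms.
    axioms: List[Axiom] = field(default_factory=list)

    def accounts_for(self, observation: Dict[str, float]) -> bool:
        # The theory accounts for an observation only if no axiom is violated by it.
        return all(axiom.holds_for(observation) for axiom in self.axioms)

    def reframe(self, observation: Dict[str, float], new_axiom: Axiom) -> None:
        # Drop any axiom that the disruptive observation violates (a fuller treatment
        # would demote it to a special case rule instead), then add the new axiom.
        self.axioms = [a for a in self.axioms if a.holds_for(observation)]
        self.axioms.append(new_axiom)


# A naive starting axiom: demand should not rise when price rises.
naive_demand = Axiom(
    "demand does not rise with price",
    lambda obs: not (obs["price_change"] > 0 and obs["demand_change"] > 0),
)
theory = Theory([naive_demand])

# A disruptive observation: a status good whose demand rose as its price rose.
veblen_case = {"price_change": 0.10, "demand_change": 0.04}

if not theory.accounts_for(veblen_case):
    theory.reframe(
        veblen_case,
        Axiom("status goods can see demand rise with price", lambda obs: True),
    )

print([axiom.name for axiom in theory.axioms])
# -> ['status goods can see demand rise with price']

The point of the sketch is not the toy economics; it is that the revision step only runs when an observation arrives that the current axiom set cannot account for, which is exactly the trigger condition described in the list above.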

And according to this, new underlying axioms would be added as needed, and once again particularly when an observer is faced with truly novel, disruptively unexpected findings or occurrences, of a type that I have at least categorically raised and addressed throughout this blog up to here, in business systems and related contexts. And with that, I have begun addressing the second of the three to-address topics points that I listed at the top of this posting:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity or discarded as axioms per se?

I am going to continue this line of discussion in a next series installment, beginning with that topics point as here-reworded. And I will turn to and address the third and last point of that list after that, turning back to issues coming from the foundations of mathematics in doing so too. (And I will finally turn to and more explicitly discuss issues raised in a book that I have been citing here, but that I have not more formally gotten to in this discussion up to here, and that has been weighing on my thinking about the issues that I address here:

• Stillwell, J. (2018) Reverse Mathematics: Proofs from the Inside Out. Princeton University Press.)

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 12: some thoughts concerning how Xi and Trump approach and seek to follow an authoritarian playbook 3

Posted in book recommendations, macroeconomics, social networking and business by Timothy Platt on March 17, 2019

This is my third posting to explicitly discuss and analyze an approach to leadership that I have come to refer to as the authoritarian playbook. See Part 1 and Part 2 of this progression of series installments, and a set of three closely related postings that I offered immediately prior to them that focus in on one of the foundational building blocks to the authoritarian playbook approach as a whole: the cult of personality, with its Part 1 focusing on Donald Trump and his cult-building efforts, Part 2 focusing on Xi Jinping’s efforts in that direction, and Part 3 stepping back to consider cults of personality in general.

I at least briefly outlined a set of tools, and approaches to using them, in my second installment on the authoritarian playbook itself as just cited above; that outline in effect mirrors how I stepped back from the specifics of Trump and Xi in my third posting on cults of personality, to put their stories into an at least hopefully clearer and fuller context. And I continue this now six posting progression here with a goal of offering a still fuller discussion of this general approach to leadership, while also turning back to more fully consider Donald Trump and Xi Jinping themselves as at least would-be authoritarians.

I begin this posting’s discussion on that note by raising a detail that I said I would turn to here, but that might at first appear to be at least somewhat inconsistent too, certainly when measured up against my discussions of cults of personality and the authoritarian playbook as a largely unified and consistent vision and approach.

I have presented authoritarians and would-be authoritarians as striding forth in palpably visible self-confidence to proclaim their exceptionalism and their greatness, and to proclaim that they are the only ones who could possibly lead and save their followers: their people, from the implacable, evil enemies that they face. This means these at least would-be leaders building their entire foundation for leadership on trust, and on its being offered on an ongoing basis: a leader’s trust that if they build their cults of personality effectively, and if they take the right steps in the right ways in pursuing and wielding power, then others: their followers, will trust and follow them. And this also arguably has to include their own trust that these followers will consistently remain true to them in their own beliefs too. Then I ended my second playbook-outlining posting by raising a crucial form of doubt that enters into and in fact informs all of this as it actually plays out:

• Trust, or rather the abnegation of even a possibility of holding trust in anyone or anything outside of self for a would-be authoritarian.

Ultimately, I would argue that an authoritarian’s trust in their followers, or in anyone else outside of themselves, rings as hollow as the cult of personality mask that they wear. And raising that point of observation, I turn to at least briefly reconsider the basic tools to this leadership playbook itself again.

I said at the end of my second playbook installment that I would turn back to reconsider Xi and Trump as specific case in point examples here, and do so starting with Xi and his story, in order to take what follows out of the abstract, leaving it imbued with a real, historically knowable face. And I begin that by returning to a crucially important point that I made when discussing Xi’s cult of personality and how he has sought to build one about himself: the trauma that befell his father and, through that, himself and the rest of his immediate family too.

I began writing of Xi Jinping’s father, Xi Zhongxun, in my discussion of the son’s effort to create a cult of personality around himself. Mao Zedong launched his Cultural Revolution out of fear that he was losing control of his revolution, as other, competing voices sought to realize the promise of liberation from China’s nobility-led, serf-bound peasant society. He began this counteroffensive against what he saw as a challenge to his ultimate authority with his promise to listen and include: his “let a hundred flowers bloom” promise. But then he pulled back, and all who did speak up, all who did offer their thoughts as to how and where the Communist revolution should go from there, were swept up as revisionists by the Cultural Revolution, declared by its zealot cadres to be enemies of the state and of the people.

Many were so caught up, with academics and members of China’s intelligentsia targeted in large numbers as particular threats to Mao’s vision and to the revolution that he was leading through his cult of personality. And even people who had served Mao well and from early on in the revolution, from the long days of the Long March with all of its risks and uncertainties, were caught up in this. Xi Zhongxun was a loyal follower of Mao, and from the beginning. As such he was elevated into Chinese Communism’s newly emerging proletarian nobility as Mao succeeded and took power. Xi Jinping, his son, was raised as a member of China’s new Crown Prince Party and as an heir apparent to his father’s social and political stature there. And then Mao turned on them, just as he had turned on so many before them, and all fell into chaos, with his father hauled up for public ridicule before screaming hordes in Cultural Revolution struggle sessions: public appearances in which the accused were beaten and ridiculed and forced to publicly confess to whatever crimes they were being accused of at that moment.

Many who faced those gauntlets of ridicule and torture did not survive them, certainly not when they were repeatedly subjected to these assaults. Zhongxun did survive, as he did confess and confess and confess. So he and his family, his young son Jinping included, were sent to a distant and isolated peasant community for reeducation.

One of the lessons that Jinping learned was that his survival meant becoming more orthodox in his Communist purity than anyone else, and more actively ambitious in advancing through the Party ranks on the strength of his purity and reliability of thought and action. But a second, and more important lesson learned from all of this, certainly for its long-term impact, was that Xi learned to never trust anyone else, certainly not in ways that might challenge his position and security, or his rising power and authority. I have written here in this progression of postings of cults of personality as masks. Xi learned in this, both early on and well, to smile and to fit in and to strive to be the best and to succeed, and with that expression of approval and agreement always there on his face. He learned to wear a mask, and to only trust himself. And his mask became the basis of his cult of personality as he advanced along the path of fulfilling his promise to himself and yes, to his father, and as he rose through the ranks in the Party system that had so irrevocably shaped his life, certainly starting with his father’s initial arrest.

How did this pattern arise and play out for a young Donald Trump? Turn to consider his father, and his pivotal role in shaping his son, too.

• What happens when only perfection is acceptable and when that is an always changing and never achievable goal, and when deception and duplicity can be the closest to achieving it that is actually attainable?
• What happens to a young impressionable son when winning is the only acceptable outcome, ever, and when admitting weakness or defeat can only lead to dire humiliating consequences?
• Fear comes to rule all, and it is no accident that an adult Donald Trump sees fear, and instilling it in others, as his most powerful tool, both in his business and professional life, and apparently in his personal life too. (See:
• Woodward, B. (2018) Fear: Trump in the White House. Simon & Schuster, for a telling account of how Donald Trump uses this tool as his guiding principle when dealing with others.)
• And genuine trust in others, or trustworthiness towards others, becomes impossible, with those who oppose The Donald on anything considered as if existential threats and enemies, and those who support and enable him considered disposable pawns and gullible fools to be used and then discarded.

Cults of personality are masks and ultimately hollow ones for those who would pursue the autocratic playbook. And ultimately so is the promise of the autocrat and their offer of better for those who would follow them. And that is why they have to so carefully and assiduously grab and hold onto power, using the other tools of the playbook to do so.

I have focused in this posting on trust, and on how it does and does not arise and flow forward towards others. Turning back to my initial comments on this, as offered here, a would-be authoritarian, a would-be tyrant or dictator, calculatingly develops and promotes his cult of personality with a goal of gaining trust and support from as large and actively engaged a population of supporters as possible. So they calculatingly seek to develop and instill trust in themselves, in others. But ultimately they do not, and in a fundamental sense cannot, trust any of those others, and certainly not as individuals. The closest they can come to achieving that is to develop a wary trust in the more amorphous face of their followers as nameless markers in larger demographics. And that, arguably, just means their trusting themselves for their capability to keep those individually nameless and faceless members of the horde in line.

I am going to continue this narrative in a next series installment, where I will turn to consider legacies and the authoritarian’s need to build what amount to monuments to their glory, that they might never be forgotten. In anticipation of that discussion to come, I will argue that while the underlying thought and motivation that would enter into this for any particular authoritarian might be complex, much if not most of it is shaped, at least as a matter of general principle, by two forces: fear, and a desire to build for permanence, with grandiosity driving both sides of that. And for working examples, I will discuss Trump’s southern border wall ambitions and his more general claims to seek to rebuild the American infrastructure, and Xi’s imperially unlimited infrastructure and related ambitions too.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, and at Ubiquitous Computing and Communications – everywhere all the time and Social Networking and Business 2.
