Platt Perspective on Business and Technology

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 7

This is my 7th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-6.) And this is also my third posting to this series to explicitly discuss emerging and still forming artificial intelligence technologies, as they are and will be shaped by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4, Part 5 and Part 6.)

I focused in Part 6 of this narrative on a briefly stated succession of development possibilities, all relating to how an overall next generation internet will take shape: one that is largely and even primarily driven, at least for the proportion of functional activity carried out in it, by artificial intelligence agents and devices – an increasingly large internet of things, with smart artifactual agents among them. And I began that with a continuation of a line of discussion begun in earlier installments to this series, centering on four possible development scenarios as initially offered by David Rose in his book:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

I added something of a fifth such scenario, or rather a caveat-based acknowledgment of the unexpected in how this type of overall development will take shape, in Part 6. And I ended that posting with a somewhat cryptic anticipatory note as to what I would offer here in continuation of its line of discussion, which I repeat now for smoother continuity of narrative:

• I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

The core issues that I would continue addressing here as follow-through from that installment fall into two categories. I am going to start this posting by adding another scenario to the set that I began presenting here, as initially set forth by Rose with his first four. And I will use that new scenario to make note of, and explicitly consider, an unstated assumption that was built into all of the other artificial intelligence proliferation and interconnection scenarios that I have offered here so far. And then, with that next step alternative in mind, I will reconsider some of the more general issues that I raised in Part 6, further developing them too.

I begin all of this with a systems development scenario that I would refer to as the piecewise distributed model.

• The piecewise distributed model for how artificial intelligence might arise as a significant factor in the overall connectiverse that I wrote of in Part 6 is based on current understanding of how human intelligence arises in the brain as an emergent property, or rather set of them, from the combined and coordinated activity of simpler components that individually do not display anything like intelligence per se, and certainly not artificial general intelligence.

It is all about how neural systems-based intelligence arises from lower level, unintelligent components in the brain, and how that might be mimicked, or recapitulated if you will, through structurally and functionally analogous artifactual systems and their interconnections. And I begin to more fully characterize this possibility by more explicitly considering scale: to be more precise, the scale of the range of reach for the simpler components that might be brought into such higher level functioning totalities. And I begin that with a simple if perhaps somewhat odd-sounding question:

• What is the effective functional radius of the human brain given the processing complexities and the numbers and distributions of nodes in the brain that are brought into play in carrying out a “higher level” brain activity, the speed of neural signal transmission in that brain as a parametric value in calculations here, and an at least order of magnitude assumption as to the timing latency to conscious awareness of a solution arrived at for a brain activity task at hand, from its initiation to its conclusion?

And with that as a baseline, I will consider the online and connected alternative that a piecewise distributed model artificial general intelligence, or even just a higher level but still somewhat specialized artificial intelligence would have to function within.

Let’s begin this side by side comparative analysis with consideration of what might be considered a normative adult human brain, and with a readily and replicably arrived at benchmark number: myelinated neurons as found in the brain send signals at a rate of approximately 120 meters per second (one meter is roughly three and a quarter feet). And for simplicity’s sake I will benchmark the latency from the starting point of a cognitively complex task to its consciously perceived completion at one tenth of a second. This would yield an effective functional radius for that brain of 12 meters (roughly 40 feet) or less – assuming, as a functionally simplest extreme case for that outer range value, that the only activity required to carry out this task was the simple uninterrupted transmission of a neural impulse signal along a myelinated neuron for that period of time to achieve “task completion.”
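The back-of-envelope arithmetic here can be made explicit with a short calculation; the conduction speed and latency figures are the rounded benchmark values assumed above, not precise physiological measurements:

```python
# Transmission-limited "effective functional radius" of the brain:
# the farthest a signal could travel within the benchmark task latency,
# assuming pure uninterrupted transmission with no processing steps.
SIGNAL_SPEED_M_PER_S = 120.0  # benchmark myelinated axon conduction velocity
TASK_LATENCY_S = 0.1          # benchmark: task start to conscious completion

radius_m = SIGNAL_SPEED_M_PER_S * TASK_LATENCY_S
print(radius_m)               # 12.0 meters
print(radius_m * 3.28084)     # roughly 39.4 feet
```

Any processing steps along the way would only shrink this radius further, which is the point the next paragraphs develop.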

An actual human brain is of course a lot more compact than that, and a lot more structurally complex too, with specialized functional nodes and with complex arrays of structurally and functionally duplicated elements organized as parallel processors. And that structural and functional complexity, plus the timing needed to access stored information from memory and add new information back into it as part of that task activity, slows actual processing down. An average adult human brain is some 15 centimeters, or six inches, front to back. Using that as an outside-value metric, with a radius based on it of some three inches (7.5 centimeters), the structural and functional complexities in the brain that would be called upon to carry out that tenth of a second task would effectively reduce its effective functional radius roughly 160-fold from the speedy transmission-only outer value that I began this brief analysis with.

Think of that as a speed and efficiency tradeoff imposed on the human brain by its basic structural and functional architecture and by the nature and functional parameters of its component parts: a reduction in the overall possible maximum rate of activity, at least for tasks that fit the overall scale and complexity of my tenth of a second benchmark example. Now let’s consider the more artifactual overall example of computer and network technology as would enter into my above-cited piecewise distributed model scenario, or in fact into essentially any network distributed alternative to it. And I begin that by noting that the speed of light in a vacuum is approximately 300 million meters per second, and that electrical signals can propagate along copper wire at up to approximately 99% of that value (the electrons themselves drift far more slowly; it is the signal they carry that approaches light speed).

I will assume for purposes of this discussion that the photons in the wireless and fiber optic connected aspects of such a system, and the electrical signals that convey information through its more strictly electronic components, all travel on average at roughly that same rounded-off maximum speed, as any discrepancy from it in what is actually achieved would be immaterial here, given my rounding off and other approximations. Then, keeping the one tenth of a second task timing parameter of my above-offered brain functioning analysis for direct comparability, an outer limit transmission-only value for this system and its physical dimensions would suggest a maximum radius of some 30,000 kilometers, encompassing all of the Earth, all of near-Earth orbit space and more. (Even speeding that task budget up a thousandfold, to one tenth of a millisecond as might better suit an electronic computing context, would still leave a radius of some 30 kilometers.) There, in counterpart to my simplest case neural signal transmission processing as a means of carrying out the above brain task, I assume that its artificial intelligence counterpart might be completed simply by the transmission of a single pulse of electrons or photons, without any processing step delays required.
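Since the transmission-limited radius scales linearly with the latency budget, the two scales worth distinguishing here can be checked directly. Note that a 0.1 millisecond budget yields a radius of roughly 30 kilometers; the Earth-encompassing 30,000 kilometer figure corresponds to the brain’s one tenth of a second benchmark:

```python
# Light-speed-limited radius for a networked system, under the text's
# assumption that signals travel end to end at roughly c (3e8 m/s).
C_M_PER_S = 3e8

def transmission_radius_m(latency_s: float) -> float:
    """Farthest a signal can travel within the given task latency budget."""
    return C_M_PER_S * latency_s

# At the brain's 0.1 s benchmark: ~30,000 km, enclosing Earth and near-Earth orbit.
print(transmission_radius_m(0.1) / 1000.0)   # 30000.0 (km)
# At a thousandfold faster 0.1 ms budget: ~30 km, metropolitan-area scale.
print(transmission_radius_m(1e-4) / 1000.0)  # 30.0 (km)
```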

Individual neurons can fire up to some 200 times per second, depending on the type of function carried out. An average neuron in the brain connects to on the order of 1,000 other neurons through complex dendritic branching and the synaptic connections it leads to, with some neurons connecting to as many as 10,000 others and more. I assume that artificial networks can grow to that level of interconnection and more, and with levels of involved nodal connectivity, brought into any potentially emergent artificial intelligence activity that might arise in such a system, that match and exceed the brain’s complexity there too. That, at least, is likely to prove true for what with time would become the all but myriad number of organizing and managing nodes that would arise in at least functionally defined areas of this overall system, and that would explicitly take on middle and higher level SCADA-like command and control roles there.
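For a sense of the aggregate scale these per-neuron figures imply, they can be combined with a total neuron count. The roughly 86 billion figure below is a widely cited estimate for the adult human brain, my addition rather than a number from the text:

```python
# Rough aggregate connectivity and activity estimates from the figures above.
NEURONS = 8.6e10          # ~86 billion: standard estimate (assumption, not from the text)
AVG_CONNECTIONS = 1000    # average connections per neuron (from the text)
MAX_FIRING_HZ = 200       # upper-end firing rate per neuron (from the text)

synapses = NEURONS * AVG_CONNECTIONS          # ~8.6e13 connections overall
peak_events_per_s = NEURONS * MAX_FIRING_HZ   # upper bound on firing events/second
print(f"{synapses:.1e} synapses, {peak_events_per_s:.2e} peak firings/s")
```

An artificial network matching this connectivity would need a comparable count of links among its nodes, which is the comparison the paragraph above is setting up.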

This would slow down the actual signal transmission rate achievable, and reduce the maximum physical size of the connected network space involved here too, though probably not as severely as observed in the brain. Indeed, even today’s low cost, readily available laptop computers can carry out on the order of a billion operations per second, and that number continues to grow as Moore’s law continues to hold. So if we assume “slow” and lower priority tasks, as well as more normatively faster ones, for the artificial intelligence network systems that I write of here, it is hard to imagine restrictions realistically arising that would limit such systems to volumes of space smaller than the Earth as a whole – certainly when of-necessity higher speed functions and activities could be carried out by much more local subsystems, closer to where their outputs would be needed.

And to increase the expected efficiencies of these systems, brain and artificial network alike, effectively re-expanding their effective functional radii, I repeat and invoke a term and a design approach that I used in passing above: parallel processing. That approach, together with the inclusion of subtask-performing specialized nodes, breaks a complex task into smaller, faster-to-complete subtasks whose individual outputs can be combined into a completed overall solution or resolution. For many types of tasks this can speed up overall completion by orders of magnitude, allowing more of them to be carried out within any given nominally expected benchmark time for “single” task completions. This of course also allows for faster completion of larger tasks within that same performance measuring timeframe window.
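The speedup available from this kind of task splitting is commonly formalized as Amdahl’s law; the framing and numbers below are my illustration, not the text’s. If a fraction p of a task parallelizes across n subtask-performing nodes while the rest must run serially, the overall speedup is bounded:

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Overall speedup when parallel_fraction of a task splits across n workers,
    per Amdahl's law; the serial remainder caps the achievable gain."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# A task that is 90% parallelizable, spread over 10 specialized nodes:
print(round(amdahl_speedup(0.9, 10), 2))      # 5.26 (x faster)
# Even with effectively unlimited nodes, the serial 10% caps the gain near 10x:
print(round(amdahl_speedup(0.9, 10_000), 2))  # 9.99
```

This is why subtask-performing specialized nodes matter: the less of a task that must remain serial, the closer the network gets to full use of its node count.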

• What I have done here, at least in significant part, is to lay out an overall maximum connected systems reach that could be applied to the completion of tasks at hand, in either a human brain or an artificial intelligence-including network. And the limitation on accessible volume of space there correspondingly sets an outer limit to the maximum number of functionally connected nodes that might be available, given that all of them of necessity have space filling volumes greater than zero.
• When you factor in the average maximum processing speed of any information processing nodes or elements included there, this in turn sets an overall maximum, outer limit value to the number of processing steps that could be applied in such a system, within such a physical volume of activity, to complete a task of any given time-requiring duration.
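Taken together, those two bullet points define a simple outer-bound calculation, which might be sketched as follows. Every numeric input here is an illustrative assumption of mine, not a value from the text:

```python
from math import pi

def max_processing_steps(radius_m: float, node_volume_m3: float,
                         node_ops_per_s: float, task_seconds: float) -> float:
    """Outer-limit count of processing steps available for one task:
    (space-filling node count) x (per-node speed) x (time budget)."""
    reachable_volume = (4.0 / 3.0) * pi * radius_m ** 3  # spherical reach
    max_nodes = reachable_volume / node_volume_m3        # nodes have nonzero volume
    return max_nodes * node_ops_per_s * task_seconds

# Illustrative only: 30 km radius, one-liter nodes at 1e9 ops/s, 0.1 ms task budget.
bound = max_processing_steps(3e4, 1e-3, 1e9, 1e-4)
print(f"{bound:.2e}")  # on the order of 1e22 steps
```

The point is not the specific number but the structure of the bound: finite volume times finite node density times finite speed times finite time.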

What are the general principles beyond that set of observations that I would return to here, given this sixth scenario? I begin addressing that question by noting a basic assumption that is built into the first five scenarios as offered in this series, and certainly into the first four of them: that artificial intelligence per se resides as a significant whole in specific individual nodes. I fully expect that this will prove true in a wide range of realized contexts, as that possibility is already becoming a part of our basic reality now with the emergence and proliferation of artificial specialized intelligence agents. But as this posting’s sixth scenario points out, that assumption is not the only one that might be realized. And in fact it will probably account for only part of what will come to be seen as artificial intelligence as it arises in these overall systems.

The second additional assumption that I would note here is that of scale and complexity: how fundamentally different types of implementation solutions might arise, and might even be possible, strictly because of how they can be made to work within overall physical systems limitations such as the fixed and finite speed of light.

Looking beyond my simplified examples as outlined here, brain-based and artificial alike: what is the maximum effective radius of a wired AI network that would, as a distributed system, come to display true artificial general intelligence? How big a space would have to be tapped into for its included nodes to match a presumed benchmark human brain performance, for threshold to cognitive awareness and functionality? And how big a volume of functionally connected nodal elements could be brought to bear for this? Those are open questions, as are their corresponding scale parameter questions for “natural” general intelligence per se. I would end this posting by simply noting that disruptively novel new technologies and technology implementations that significantly advance the development of artificial intelligence per se, and of artificial general intelligence in particular, are likely both to improve the quality and functionality of the individual nodes involved, regardless of which overall development scenarios are followed, and to improve their capacity to synergistically network together.

I am going to continue this discussion in a next series installment where I will step back from considering specific implementation option scenarios, to consider overall artificial intelligence systems as a whole. I began addressing that higher level perspective and its issues here, when using the scenario offered in this posting to discuss overall available resource limitations that might be brought to bear on a networked task, within given time to completion restrictions. But that is only one way to parameterize this type of challenge, and in ways that might become technologically locked in and limited from that, or allowed to remain more open to novelty – at least in principle.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Addendum note: The above presumptive end note added at the formal conclusion of this posting aside, I actually conclude this installment with a brief update to one of the evolutionary development-oriented examples that I in effect began this series with. I wrote in Part 2 of this series of a biological evolution example of what can be considered an early technology lock-in, or rather a naturally occurring analog of one: an ancient biochemical pathway found in all cellular life on this planet, the pentose shunt (also known as the pentose phosphate pathway).

I add a still more ancient biological systems lock-in example here that in fact had its origins in the very start of life itself as we know it, on this planet. And for purposes of this example, it does not even matter whether the earliest genetic material employed in the earliest life forms was DNA or RNA in nature for how it stored and transmitted genetic information from generation to generation and for how it used such information in its life functions within individual organisms. This is an example that would effectively predate that overall nucleic acid distinction as it involves the basic, original determination of precisely which basic building blocks would go into the construction and information carrying capabilities of either of them.

All living organisms on Earth, with a few viral exceptions, employ DNA as their basic archival genetic material, and use RNA as an intermediary in accessing and making use of the information stored there. Those exceptional viruses use RNA for their own archival genetic information storage, relying on the DNA replicating and RNA fabrication machinery of the host cells they live in to reproduce. And the genetic information included in these systems, certainly at the DNA level, is all encoded in patterns of molecules called nucleotides that are laid out linearly along the DNA molecule. Life on Earth uses combinations of four possible nucleotides for this coding and decoding: adenine (A), thymine (T), guanine (G) and cytosine (C). And it was presumed, at least initially, that the specific chemistry of these four possibilities made them somehow uniquely suited to this task.

More recently it has been found that there are other possibilities that can be synthesized and inserted into DNA-like molecules, with the same basic structure and chemistry, that can also carry and convey this type of genetic information stably and reliably (see for example: Hachimoji DNA and RNA: a genetic system with eight building blocks).

And it is already clear that this represents only a small subset of the information coding possibilities that might have arisen as alternatives, before the A/T/G/C genetic coding became locked in, in practice, in life on Earth.
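One way to get a feel for the scale of what was locked out: the coding capacity of a nucleotide alphabet grows as a power of its size, so three-letter codons over the standard four letters admit 64 possibilities, while an eight-letter hachimoji-style alphabet would admit 512. A minimal sketch of that arithmetic:

```python
# Coding capacity of a nucleotide alphabet: distinct codons of length k
# that can be formed over an alphabet of n letters is n ** k.
def codon_count(alphabet_size: int, codon_length: int = 3) -> int:
    return alphabet_size ** codon_length

print(codon_count(4))  # 64  - the standard A/T/G/C code
print(codon_count(8))  # 512 - an eight-letter hachimoji-style alphabet
```

And that counts only alphabet-size alternatives; different codon lengths or entirely different backbone chemistries would multiply the possibility space further.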

If I could draw one relevant conclusion from this still unfolding story to share here, it is that if you want to find technology lock-ins, or their naturally occurring counterparts, look to your most closely and automatically held developmental assumptions, and certainly when you cannot rigorously justify them from first principles. Then question the scope of relevance and generality of those first principles, for hidden assumptions that they carry within them.

Some thoughts concerning a general theory of business 29: a second round discussion of general theories as such, 4

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on June 11, 2019

This is my 29th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-28.)

I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se, with an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. And I then turned from that, in Parts 9-25 to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, that is based on interpersonal interactions. Then I began a second round, next step discussion of general theories per se in Parts 26-28 of this, building upon my initial discussion of general theories per se, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon.

More specifically, I have used the last three postings to that progression to at least begin a more detailed analysis of axioms as assumed and assumable statements of underlying fact, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

I focused on issues of completeness and consistency in these types of theory grounding systems in Part 28, and briefly outlined there how the first of those two categorical types of theory cannot be proven both fully complete and fully consistent, if it can be expressed in an enumerable form of a type consistent with, and as such including, the axiomatic underpinnings of arithmetic: the most basic of all areas of mathematics, as formally axiomatically laid out by Whitehead and Russell in their seminal work: Principia Mathematica.
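For reference, the finding of Kurt Gödel that grounds this claim is standardly stated along the following lines; this is my paraphrase of the textbook formulation, not language from Part 28:

```latex
\textbf{G\"odel's first incompleteness theorem} (standard statement):
for any consistent, effectively axiomatized formal system $F$ strong
enough to express elementary arithmetic, there is a sentence $G_F$ in
the language of $F$ such that
\[ F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F . \]
The second incompleteness theorem adds that no such system can prove
its own consistency:
\[ F \nvdash \mathrm{Con}(F). \]
```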

I also raised and left open the possibility that the outside validation provided in axiomatically open bodies of theory, as identified above, might afford alternative mechanisms for de facto validation of completeness, or at least consistency in them, where Kurt Gödel’s findings as briefly discussed in Part 28, would preclude such determination of completeness and consistency for any arithmetically enumerable axiomatically closed bodies of theory.

That point of conjecture began a discussion of the first of a set of three basic, and I have to add essential topics points that would have to be addressed in establishing any attempted-comprehensive bodies of theory: the dual challenges of scope and applicability of completeness and consistency per se as organizing goals, and certainly as they might be considered in the contexts of more general theories. And that has left these two here-repeated follow-up points for consideration:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles there per se.
• Then, after addressing that set of issues, I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, with a goal of more fully analytically discussing optimization of the set of axioms presumed, and what that even means.

My goal for this series installment is to at least begin to address the first of those two points and its issues, adding to my already ongoing discussion of completeness and consistency in complex axiomatic theories while doing so. And I begin by more directly and explicitly considering the nature of outside-sourced, a priori empirically or otherwise determined observations and the data that they would generate, that would be processed into knowledge through logic-based axiomatic processing.

Here, and to explicitly note what might be an obvious point of observation on the part of readers, I would as a matter of consistency represent the proven lemmas and theorems of a closed body of theory such as a body of mathematical theory, as proven and validated knowledge as based on that theory. And I correspondingly represent open question still-unproven or unrefuted theoretical conjectures as they arise and are proposed in those bodies of theory, as potentially validatable knowledge in those systems. And having noted that point of assumption (presumption?), I turn to consider open systems as for example would be found in theories of science or of business, in what follows.

• Assigned values and explicitly defined parameters, as arise in closed systems such as mathematical theories with their defined variables and other constructs, can be assumed to represent absolutely accurate input data. And that, at least as a matter of meta-analysis, even applies when such data is explicitly offered and processed through axiomatic mechanisms as being approximate in nature and variable in range; approximate and variable are themselves explicitly defined, or at least definable in such systems applications, formally and specifically providing precise constraints on the data that they would organize, even then.
• But it can be taken as an essentially immutable axiomatic principle, one that cannot be violated in practice, that outside-sourced data that would feed into and support an axiomatically open body of theory is always going to be approximate in how it is measured and recorded for inclusion and use there. This holds even when that data can be formally defined and measured without any possible subjective influence: when it can be identified, defined and measured in as completely objective a manner as possible, free of any bias that might arise depending on who observes and measures it.

Can an axiomatically open body of theory somehow be provably complete or consistent for that matter, due to the balancing and validating inclusion of outside frame of reference-creating data such as experientially observable empirical observations? That question can be seen as raising an interesting at least-potential conundrum and certainly if a basic axiom of the physical sciences that I cited and made note of in Part 28 is (axiomatically) assumed true:

• Empirically grounded reality is consistent across time and space.

That, at least in principle, raises what amounts to an immovable object versus irresistible force type of challenge. But as soon as the data actually measured, as based on this empirically grounded reality, takes on what amounts to a built-in and unavoidable error factor, I would argue that any possible outside-validated completeness or consistency becomes moot at the very least, and certainly for any axiomatically open system of theory that might be contemplated or pursued here.

• This means that when I write of selecting, framing and characterizing and using axioms and potential axioms in such a system, I write of bodies of theory that are of necessity always going to be works in progress: incomplete and potentially inconsistent and certainly as new types of incoming data are discovered and brought into them, and as better and more accurate ways to measure the data that is included are used.

Let me take that point of conjecture out of the abstract by citing a specific source of examples, ones literally as solidly established as our more inclusive and more fully tested general physical theories of today. I begin with Newtonian physics, which was developed at a time when experimental observation was limited, both in the range of phenomena observed and in the levels of accuracy attainable for what was observed and measured. It was then impossible to empirically record the types of deviation from expected observations that would call for new and more inclusive theories, with new and altered underlying axiomatic assumptions, as subsequently arose in the special theory of relativity as found and developed by Einstein and others. Newtonian physics neither calls for nor accommodates anything like the axiomatic assumptions of the special theory of relativity: holding, for example, that the speed of light is constant in all frames of reference. More accurate measurements, taken over wider ranges of experimental examination of observable phenomena, forced change to Newton’s basic underlying axiomatic assumptions (e.g. his laws of motion.) And further expansion of the range of phenomena studied, and of the accuracy with which data is collected, might very well lead to the validation and acceptance of still more widely inclusive basic physical theories, with whatever changes in axiomatic foundations they would require. (Discussion of alternative string theory models of reality, among other possibilities, comes to mind here, where experimental and observational limitations of the types that I write of are such as to preclude any real culling and validation, toward arriving at a best possible descriptive and predictive model theory.)

At this point I would note that I tossed a very important set of issues into the above text in passing, and without further comment, leaving it hanging over all that has followed it up to here: the issues of subjectivity.

Data that is developed and tested for how it might validate or disprove proposed physical theory might be presumed to be objective, as a matter of principle. Or alternatively, and as a matter of practice, it might be presumed possible to obtain such data that is arbitrarily close to being fully free from systematic bias, as based on who is observing and what they think about the meaning of the data collected. And the requirement that experimental findings be independently replicated by different researchers in different labs with different equipment, certainly where findings are groundbreaking and unexpected, serves to support that axiomatic assumption as being basically reliable. But it is not as easy or as conclusively presumable to assume that type of objectivity for general theories that of necessity have to include within them individual human understanding and reasoning, with all of the additional and largely unstated axiomatic presumptions that this brings with it, as exemplified by a general theory of business.

That simply adds whole new layers of reason to any argument against presumable completeness or consistency in such a theory and its axiomatic foundations. And once again, this leaves us with the issues of such theories always being works in progress, subject to expansion and to change in general.

And this brings me specifically and directly to the above-stated topics point that I would address here in this brief note of a posting: the determination of which possible axioms to include and build from in these systems. And that, finally, brings me to the issues and approaches raised in a reference work that I have been citing in anticipation of this discussion thread for a while now in this series, and to an approach to the foundations of mathematics and its metamathematical theories that this and similar works seek to clarify if not codify:

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

I am going to more fully and specifically address that reference and its basic underlying conceptual principles in a next series installment, but in anticipation of doing so end this posting with a basic organizing point of reference that I will build from there:

• The more traditional approach to the development and elaboration of mathematical theory, going back at least as far as the birth of Euclidean geometry, was one of developing a set of axioms that would be presumed as if absolute truths, and then deriving emergent lemmas and theorems from them.
• Reverse mathematics is so named because it literally reverses that, starting with the theorems to be proven and then asking what minimal set of axioms would be needed in order to prove them.
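To make that contrast concrete, here is a standard illustration from the reverse mathematics literature (my own choice of example, not a claim as to which examples Stillwell emphasizes): over the weak base system RCA₀, the Heine–Borel covering theorem for the unit interval is not just provable from Weak König's Lemma; it proves that lemma back, so WKL₀ is in a precise sense the minimal axiomatic addition needed.

```latex
% Forward (classical) direction: axioms prove the theorem.
\mathrm{WKL}_0 \;\vdash\; \text{Heine--Borel theorem for } [0,1]

% Reverse direction: over the base system, the theorem proves the axiom back.
\mathrm{RCA}_0 + \text{Heine--Borel} \;\vdash\; \mathrm{WKL}_0

% Hence, over RCA_0, the two are equivalent:
\mathrm{RCA}_0 \;\vdash\; \mathrm{WKL}_0 \leftrightarrow \text{Heine--Borel}
```

It is this equivalence, rather than a one-way derivation, that lets reverse mathematics identify which axioms a given theorem genuinely requires.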

My goal for the next installment to this series is to at least begin to consider both axiomatically closed and axiomatically open theory systems in light of these two alternative metatheory approaches. And in anticipation of that narrative line to come, this will mean reconsidering compendium models and how they might arise as needs for new axiomatic frameworks of understanding arise, and as established ones become challenged.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Dissent, disagreement, compromise and consensus 31 – the jobs and careers context 30

This is my 31st installment to a series on negotiating in a professional context, starting with the more individually focused side of that as found in jobs and careers, and going from there to consider the workplace and its business-supportive negotiations (see Guide to Effective Job Search and Career Development – 3 and its Page 4 continuation, postings 484 and following for Parts 1-30.)

I have been working my way through a to-address topics list since Part 25 that addresses a succession of workplace challenges and opportunities that can and often do arise when working for a business for any significant period of time. And my goal for this posting is to continue that process, completing my discussion, at least for purposes of this series, of Point 5 and continuing my discussion of Plan B approaches as I began addressing in that context. After that I will turn to and discuss Point 6, both as an important source of relevant issues in its own right and to illustrate how the types of issues and approaches that I have been discussing in this series can and do fit together in real life.

To put what is to come here and what will follow this installment in clearer context, I begin by repeating this topics and issues list as a whole, with parenthetical references as to where I have already discussed its first five points:

1. Changes in tasks assigned, and resources that would at least nominally be available for them: timeline allowances and work hour requirements definitely included there (see Part 25 and Part 26),
2. Salary and overall compensation changes (see Part 27),
3. Overall longer-term workplace and job responsibility changes and constraints box issues, as change might challenge or enable your reaching your more personally individualized goals there (see Part 28),
4. Promotions and lateral moves (see Part 29),
5. Dealing with difficult people (see Part 30),
6. And negotiating possible downsizings and business-wide events that might lead to them. I add this example last on this list because navigating this type of challenge as effectively as possible calls for skills in dealing with all of the other issues on this list and more, and with real emphasis on Plan B preparation and planning, and on its execution too, as touched upon in Part 23 and again in Part 30.

And with that orienting and series-connecting text in place I turn to further consider Plan B approaches, starting with a point of detail that might seem obvious:

• Standard and routine tasks, processes and work flows, as carried out by the people expected to do them, rarely call for negotiations of any sort, except insofar as it might prove necessary to argue a case against sudden disruptive change. But that exception cannot be expected, and certainly not very often; most of us never in fact find ourselves having to negotiate that type of scenario, and certainly given the day-to-day momentum of simply pursuing and doing business as usual.
• So it can essentially be taken as a given that when negotiations of some sort are needed as to the what, how and who of work, at least one critically involved stakeholder in an involved part of the business sees need for change and for trying a more non-standard approach, or for reaching agreement on new goals or benchmarks that would be used to gauge and performance-track outcomes and results achieved.
• So as soon as a sufficiently compelling need arises so as to make negotiations per se tenable, or even necessary enough to pursue them, the people involved are already facing what might be considered at least something of a Plan B situation: a shift to the less known and the less comfortably familiar that comes of breaking away from normal routines at all. And when I write of Plan B approaches in this series and in this blog as a whole, I am primarily if not exclusively writing of situations where both the standard and routine, and the more obvious alternatives to it, would all fall by the wayside as not adequately meeting perceived needs.

I briefly outlined an alternative approach that might at least in principle be attempted to avoid a Plan B requirement, and certainly as just specified there, where negotiating an acceptable alternative to whatever would be the default cannot be made to work. And that is a key defining feature of Plan B approaches as more stringently defined here and in my earlier writings to this blog.

I would start to more fully flesh out what I am discussing here as Plan B options, by picking up on and continuing discussion of a tactic that I raised later on in Part 30, and only made note of there for its potential risks:

• The negotiating tactic of selecting, where possible, who you actually have to and get to negotiate with, and certainly when attempts to work with the more obvious first choices for this, as based on their job titles and positions at the business, cannot be made to work.

If you do attempt to work your way around one or more people who are legitimate stakeholders in whatever matters that you would see need to negotiate over, and if they come to see you as having bypassed them because you would not like what they would have to say on that, then you run a significant risk of burning bridges that you might have found useful to have intact, later on. And you will have probably created animosity and of a type that can have radiating impact on your overall reputation there at that employing business, and certainly insofar as you would seek to be viewed as a supportively involved and connected team player.

Circumstances are important there, both as far as the ongoing actions and decisions and reputations of the people who you would not want to get involved in this are concerned, and as far as the people who you would turn to as alternatives in this type of negotiating context are concerned. As a perhaps obvious example, if the person with a gatekeeper, decision-making title and position who you would want to avoid having to negotiate with has a terrible reputation for their shortsightedness and their lack of professional capability, and you seek out alternatives who are well respected for getting the right things done, then a lot less harm is likely to arise than if you seek to shift who you would negotiate with in the opposite direction to this.

But regardless of that type of consideration, assume that you and the people who you would prefer to negotiate with and those who you wish to avoid in this are all going to be around at that business, longer-term. And one way or the other you will have to deal with all of these other stakeholders and with their friends there and more, longer-term too.

I note the likely need for what amounts to bridge mending when negotiating around a difficult stakeholder and certainly in this type of longer-term context. And I point out in this context that as soon as I begin taking and proposing a longer timeframe approach to job performance as I do here, I am actually discussing careers and a longer-term career perspective as well.

• Plan B approaches: Plan B strategies and tactics, and related negotiating for the long-term, always bring career considerations into your planning and into your follow-through of it. And that is true, and it can be particularly true, when you find yourself more mentally oriented towards the here-and-now and when you are in the midst of job-level navigating, where that more immediate perspective and its imperatives might be more overtly pressing and attention demanding.

And with that last detail added to this posting’s narrative, I turn to the above repeated Point 6 of the to-address list that I have been working my way through here:

• Negotiating possible downsizings and business-wide events that might lead to them, with all of the issues and complications that this type of situation brings with it.

I am going to at least begin to explicitly discuss that complex of issues in my next installment to this series, simply repeating for now, that this represents a type of challenge, and a type of opportunity, that brings essentially everything that I have been discussing here up to now into active consideration again. Meanwhile, you can find this and related material at Page 4 to my Guide to Effective Job Search and Career Development, and also see its Page 1, Page 2 and Page 3. And you can also find this series at Social Networking and Business 2 and also see its Page 1 for related material.

Innovation, disruptive innovation and market volatility 47: innovative business development and the tools that drive it 17

Posted in business and convergent technologies, macroeconomics by Timothy Platt on June 5, 2019

This is my 47th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-46.)

I have been discussing innovation discovery, and the realization of value from it through applied development, through most of this series. And I have sought to take that line of discussion at least somewhat out of the abstract since Part 43, through an at least selective discussion and analysis of a specific case in point example of how this can and does take place:

• The development of a new synthetic polymer-based outdoor paint type as an innovation example, as developed by one organization (a research lab at a university), that would be purchased or licensed by a second organization for profitable development: a large paint manufacturer.

I focused for the most part on the innovation acquiring business that is participating in this, from Part 43 through Part 45, and turned to more specifically consider the innovation creating organization and its functioning in Part 46. And at the end of that installment and with this and subsequent entries to this series in mind, I said that I would continue from there by:

• Completing at least for purposes of this series, my discussion of this university research lab and outside for-profit manufacturer scenario.
• And I added that I will then step back to at least briefly consider this basic two organization model in more general terms, where for example, the innovating organization there might in fact be another for-profit business too – including one that is larger than the acquiring business and that is in effect unloading patents that do not fit into their own needs planning.
• I will also specifically raise and challenge an assumption that I just built into Part 46 and its narrative, regarding the value of scale in the innovation acquiring business in their being able to successfully compete in this type of innovation as product market.

And I begin addressing this topics list with the first of those bullet points and with my real world, but nevertheless very specialized university research lab-based example. And I do so by noting a point of detail in what I have offered here up to now, that anyone who has worked in a university-based research lab has probably noted, and whether that has meant their working there as a graduate student or post-doc or as a lead investigator faculty member. Up to here, I have discussed both the innovation acquiring, and the innovation providing organizations in these technology transfer transactions as if they were both simple monolithic entities. Nothing could be further from the truth, and the often competing dynamics that play out within these organizations are crucially important as a matter of practice, to everything that I would write of here.

I begin this next phase of this discussion with the university side to that, and with the question of how grant money that was competitively won from governmental and other funding sources is actually allocated. For a basic reference on this, see Understanding Cost Allocation and Indirect Cost Rates.

Research scientists who run laboratories at universities as faculty members there, write and submit grant proposals as to what they would do if funded. And they support their grant funding requests for this, by outlining the history of the research project that they would carry out, and both to illustrate how their work would fit into ongoing research and discovery efforts in their field as a whole and to prove the importance of the specific research problems that they seek funding to work on, as they fit into that larger context. As part of that, they argue the importance of what they seek to find or validate, and they seek to justify their doing this work in particular and their receiving funding support for it, based on their already extant research efforts and the already published record of their prior and ongoing there-relevant research as can be found in peer reviewed journals.

They do the work needed to successfully argue the case for their receiving new grant funding for this research and they carry out the voluminous and time consuming work needed to document that in grant applications. And they are generally the ones who have to find the funds needed to actually apply for this too (e.g. with filing fees where they apply and grant application related office expenses.) Then the universities that they work for, demand and receive a percentage off of the top of the overall funds actually received from this, that would go towards what are called indirect costs (and related administrative costs, etc., though many funding agencies that will pay these types of expenses under one name will not do so under another, so labels are important here.)

My above-cited reference link points to a web page that focuses in its working example on a non-research grant in aid funding request, and on how monies received there would be allocated. But it does offer basic definitions of some of the key terms involved, which tend to be similar regardless of what such outside-sourced grant funding would be applied to, and certainly where payment to the institution as a whole is permitted under the range of labels offered.

And with that noted as to nomenclatural detail, the question of how funds received would be allocated can set up some interesting situations, as for example where a university that a productive research lab is a part of, might in general require a larger percentage of the overall funds received for meeting its indirect costs, than the funding agency offering those monies would allow. For a university sourced reference to this and to put those funding requirements in a numerically scaled perspective, see Indirect Costs Explanation as can be found as of this writing on the website of Northern Michigan University. Their approach and their particular fee scale here are in fact representative of what is found in most research supportive colleges and universities, and certainly in the United States. And they, to be specific but still fairly representative here, apply an indirect cost rate of 36.5% as their basic standard.

The Bill and Melinda Gates Foundation, to cite a grant funding source that objects to that level of indirect costs expenditures, limits permitted indirect cost rates to 10% – a difference that can be hard to reconcile and certainly as a matter of anything like “rounding off.” And that leads to an interesting challenge. No university would willingly turn away outside grant money and certainly from a prestigious source. But if they agree to accept such funds under terms that significantly undercut their usual indirect costs funding guidelines, do they run the risk of facing challenge from other funding sources that might have accepted their rates in the past but that no longer see them as acceptable? Exceptions there and particularly when they are large-discrepancy exceptions, can challenge the legitimacy of higher indirect cost rates in place and in the eyes of other potential funding agencies too.
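To put those competing rates in plain numerical terms, the following is a minimal sketch, with hypothetical dollar figures, and with the simplifying assumption that the indirect cost rate is applied on top of direct costs (real rate agreements typically apply the rate to a "modified total direct cost" base with various exclusions). The function name and the numbers are mine, for illustration only.

```python
def grant_request(direct_costs: float, indirect_rate: float) -> dict:
    """Split a grant budget into direct costs, the institution's indirect
    cost share, and the total amount that would be requested."""
    indirect = direct_costs * indirect_rate
    return {
        "direct": direct_costs,
        "indirect": indirect,
        "total": direct_costs + indirect,
    }

# A lab budgeting $100,000 in direct costs under a 36.5% standard rate,
# as in the Northern Michigan University schedule cited above:
standard = grant_request(100_000, 0.365)   # indirect share: ~$36,500

# The same budget under a funder that caps indirect costs at 10%,
# as with the Gates Foundation policy noted above:
capped = grant_request(100_000, 0.10)      # indirect share: ~$10,000

# The gap that the university would have to absorb, or negotiate away:
shortfall = standard["indirect"] - capped["indirect"]   # ~$26,500
```

A roughly $26,500 gap on a single $100,000 budget is why a one-off exception can matter well beyond the individual grant: other funders can point to it when questioning the standard rate.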

• Funding agencies support research and have strong incentives to see as many pennies on the dollar of what they send out actually and directly going towards the funding of that research. Excessive, or perceived excessive, loss of granted funds to more general institutional support very specifically challenges that.

Universities that have and use the type of innovation development office that I wrote of in Part 46 for managing the sale or licensing of innovation developed on-campus to outside businesses and other organizations, generally fund them from monies gained from research grants in aid received, as payment made to them in support of allowed indirect expenses. And this makes sense as they are university-wide research lab and research program-supportive facilities. But indirect expenses also cover utilities and janitorial services and even what amounts to rent of the lab space used – among other expenses.

To round out this example here, I add that one of the most important parts of any grant application is its budget documentation, in which it spells out as precisely as possible what monies received will be expended upon. This includes equipment and supplies and related expenses that would directly go towards fulfilling a grant application’s goals, but it also includes salaries for postdoctoral fellows who might work at that lab, and it usually includes at least part of the salary of the lead investigator faculty member who runs the lab too, as well as the salaries of any technicians employed there. And I freely admit that I wrote the above with at least something of a bias towards the research lab side of this dynamic, at least in part because I also find the one third or more cut taken by the universities involved for their own use, to be excessive. And this sentiment is reinforced by the simple fact that very little of the monies coming into such a university as a result of innovation sales or licensing agreements actually goes back to the specific labs that came up with those innovations in the first place, and certainly as earmarked shares of funds so received.

• Bottom line: even this brief and admittedly very simplified accounting of the funding dynamics of this example, as take place within a research supportive university and between that institution and its research labs and its lead investigators, should be enough to indicate that these are not simple monolithic institutions and that they are not free of internal conflict over funding and its allocation.

Innovation acquiring businesses are at least as complex, and certainly as different stakeholders and stakeholder groups view the cost-benefits dynamics of these agreements differently. And that just begins with the questions and issues of what lines on their overall budget would pay for this innovation acquisition, in competition with what other funding needs would be supported there, and what budget lines (and functional areas of that business) would receive the income benefits of any positive returns on these investments that are received.

• Neither of these institutions can realistically be considered to be simple monoliths in nature, or be thought of as if everyone involved in these agreements and possible agreements were always going to be in complete and automatic agreement as to their terms.
• And these complex dynamics as take place at least behind the scenes for both sides to any such technology transfer negotiations, shape the terms of the agreements discussed and entered into, and help determine who even gets to see those negotiating tables in the first place.

I am going to continue this discussion, as outlined towards the top of this posting, by considering a wider range of organizational types and business models here, and for both the innovation source and the innovation acquisition sides to these transfer agreements. And as part of that, I will at least begin to discuss the third to-address bullet pointed topic that I listed there: organizational scale as it enters into this complex of issues. Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

It’s not who you network connect with, it’s who you actually know and who knows you

Posted in social networking and business by Timothy Platt on June 4, 2019

I am in part at least, offering this note as a brief follow-up to a recent posting that I wrote to this blog on social networking strategy, where I focused on better practices for achieving specific goals from your online efforts (see It’s Not Just Who You Network Connect With, It’s How You Network With Them.) And as its title indicates it is objectives and priorities and purpose oriented.

But to round out this opening remark to this posting, I based that posting at least to a significant degree on a particular type of “who to network with” analysis, as developed around the particular individual networking approaches and strategies followed by others who you might try networking with (as outlined by type in a still-earlier posting here: Social Network Taxonomy and Social Networking Strategy.)

• If you want your social networking strategies and the practices that you use in pursuing them to work for you, you need to know how the people who you would network with, think and act there too. You need to know and understand what they would and would not do and what they would and would not favorably respond to when facing the prospects of initiating or continuing an active social networking relationship. You need to be able to mesh the What and How of your networking efforts with the basic strategies that they follow.
• Crucially importantly here, you should reach out to connect with others in ways that fit into their social networking comfort zones, as mapped out by their networking strategies and as demonstrated by any visible networking activities that you can see them carrying out online through the social networking sites that you would reach out to them through. This is certainly important if you intend to achieve your basic goals from your efforts there.

And I in turn offered that posting as an analytically reasoned line of argument in favor of open networking, as a means of finding the right people and being able to actively connect with them as needed. My focus there was on finding people who can help you find and open doors, and the more such doors the better. So if you peel back the layers to this briefly-stated progression of postings, and if you include others that also fit into this same pattern that can also be found at Social Networking and Business 1 and its Page 2 continuation, most of that is in fact grounded in the issues of who to network with, and in open networking per se.

To be clear here, I still see positive meaning and value in what I have successively offered in those and related postings. I have explicitly, directly gained from pursuing the strategies and tactics laid out there.

• I have in fact found people who I have needed to meet professionally through them, with unusual skills and experience that I have needed to tap into, who I would never have learned of let alone met, absent networking help from people who follow the types of open networking strategies that I cite in my above referenced social networking taxonomy posting.
• And I have found that the networking reach that I have been able to develop through LinkedIn, to cite a particularly valuable resource here, has been an invaluable source of insight when doing my preparatory homework too, when for example working with and pitching for opportunities to work with businesses as a consultant. I have literally at times found myself walking into a room to meet with startup founders (to cite one source of possible examples here) where I have been able to uncover information and insight that would be of crucial importance to them concerning what they were doing and seeking to do, but that those founders did not know about. To take that out of the abstract, by way of real world example, there are times when I have learned of and about expected backers who a new venture’s founders were negotiating with, when they were seeking out angel investor support to jumpstart their effort. There, in those instances, I have been able to figure out who they were turning to for that type of support from my networking and from study of the personal profiles that I have been able to see and assemble insight from. I have been able to walk into those meetings knowing that they were in fact considering that type of move. And I arrived there knowing things about those specific potential backers and their professional backgrounds that the people I was meeting with did not know but would benefit from knowing.

No, that type of value creation from online social networking tools and their use was not and still is not a common situation, either for me or for any other consultants who I know. But it and similar instances can and do happen, and at least eventually, for anyone who effectively and systematically networks and who approaches that as a source of learning opportunity as well as a source of introductions opportunities. (And yes, it can take real thought as to how or even whether to reveal that type of knowledge, as simply throwing it on the table can easily kill a possible consulting opportunity for how off-putting that might make it. Never, ever come across as not respecting the people who you would work with, even if you know up-front that they have not done their necessary due diligence – their homework, and even when gaps in what they know and should know would have significant adverse consequences if not addressed and remediated. So share selectively so as to enable conversations and follow-through, and save any further details for later as areas that can be worked on, on the job there.)

So open business oriented social networking can work; it can offer really positive value, and value that can be shared at that, in creating mutually beneficial opportunities. But having acknowledged that, I also find myself looking at the accumulation of social networking “contacts” that cumulatively pile up through processes such as Facebook friending too; I find myself looking at and thinking about that form of open networking too. And to pick up on, and continue a thought expressed in one of my above-cited earlier postings, I find myself thinking …

It’s not just who you network connect with, it’s how you network with them … but it is also really about who you network with too

… and in ways that cannot always simply and automatically be contained or defined by the types of reasoning and strategy that I lay out in postings such as my above cited taxonomy note.

So I find myself writing this as a thought piece on what arguably might be considered the challenge and the trap of Facebook friending, to focus here on that social networking venue, and on networking by the numbers … and even when doing so for explicit strategic and tactical reasons on your own part.

Let’s consider Facebook friending, and let’s consider posting to the walls of your contacts on a site such as Facebook, and the consequences and impact of their so posting to yours, as a direct manifestation of how friending works in practice. And let’s at least start with what can rapidly become all but overwhelming Facebook “wall” clutter. Personal home page clutter: “wall” clutter in Facebook terms, can become overwhelming, both on your own home page on a social networking site and, just as certainly, on the pages of essentially all of the people who you might need to connect with, as those pages are flooded by incessantly steady streams of stuff, as shared by and from others and often simply to share and be seen. And this point of concern certainly applies when that flood of content includes vast amounts of marketing and advertising material that is for the most part both unsolicited and unwanted by any real individual human site member. And yes, I express this in that way because sites such as Facebook, and Facebook in particular, appear to have more robo-members: artifactual fake account members set up for trolling and other message manipulation purposes, than real accounts – and they post and post and post and share and share and share too.

• Developing and maintaining a clear, effectively working focus on what you would network for and how and why, in the face of this deluge of distraction can quickly become impossible,
• And regardless of your intended social networking approach and strategy,
• And certainly when this same type and level of content flood is hitting anyone and everyone who you might want to reach out to, and at least as much as it does your own social networking site home page.

And if you need to reach out to, or find people through, really widely open networkers of the type that I cite and discuss in my above-noted social networking taxonomy paper, and even if they actively work to reduce the clutter on their own social networking site pages and show a proportionately lower level of this type of deluge per contact than usual, the increased number of contacts that they make can still effectively kill any business oriented social networking strategy for you, and for your contacts too, and for them as well. Clutter, like background static, can kill any intended meaningful signal, and essentially every single time, certainly when and as it comes to overwhelm for its noise-to-signal intensity.

Can a site such as Facebook offer business networking value? Yes, if you are an advertiser there, and if you can cost-effectively purchase access to a sufficient volume of site members who fit into effectively targeted market demographics that would constitute a viable market for you. And if you do not have reason to feel concerned about a possible downside from coming across as spamming a perhaps large percentage of the people who you do wall post to. But it does not and even fundamentally cannot offer a correspondingly positive value to you if your goal is to actually network there, initiating and cultivating a wide, effective reach of genuine two way conversations out of that effort. And this leaves out essentially all of the types of targeted networking that would go into promoting and advancing your jobs and careers efforts, as successively discussed in this blog (see my Guide to Effective Job Search and Career Development and its Page 2, Page 3 and Page 4 continuations.)

It also leaves out a lot of the types of online social networking that you might seek to pursue if you work as an outside consultant with a small or stand-alone shop, to cite one possible career path where effective online social networking can offer essential value. And it can effectively eliminate at least significant possible sources of value for small business participants too, unless, that is, they follow a strictly opt-in networking and posting approach with their customers, and do not come across as seeking to buy support or positive reviews in the process.

A high volume and scale of business can compensate for these potential downsides for high-volume businesses and advertisers, bringing value from Facebook wall sharing to them, and even as part of an overly open (e.g. non-opt-in) networking strategy. But even there, returns on marketing and advertising investment are not guaranteed, and they will probably not actually come to match what a Facebook would claim as possible there. And other business social networking participant types cannot always realize anything like corresponding levels of positive value from this.

And with that offered, I return to reframe and complete my work-in-progress conclusion note for this posting, as already offered here in two iterations: once as a posting title and again in bullet-pointed italics:

It’s not just who you network connect with, it’s how you network with them … but it is also really about who you network with too, where that can be fundamentally determined by where you network online, and by that venue’s business model and practices.

The title of this posting is “it’s not who you network connect with, it’s who you actually know and who knows you.” And ultimately, if you do not or cannot get this right, you cannot know the people you friend or otherwise seek to network with, and they cannot get to know you. And I finish this posting by acknowledging that Facebook is currently updating its user interfaces, and at least some of its business practice details, with an at least stated goal of addressing at least some of the challenges that I have raised here. But its basic business model is still centered on monetizing and selling access to its member users’ data and access to their eyeballs. So I do not expect the basic issues that I raise here to change, and certainly not any time soon.

You can find this posting and related material at Social Networking and Business and its Page 2 continuation.

Planning for and building the right business model 101 – 43: goals and benchmarks and effective development and communication of them 23

Posted in startups, strategy and planning by Timothy Platt on June 2, 2019

This is my 43rd posting to a series that addresses the issues of planning for and developing the right business model, and both for initial small business needs and for scalability and capacity to evolve from there (see Business Strategy and Operations – 3 and its Page 4 and Page 5 continuations, postings 499 and loosely following for Parts 1-42.) I also include this series in my Startups and Early Stage Businesses directory and its Page 2 continuation.

I have been discussing three specific possible early stage growth scenarios that a new business’ founders might pursue for their venture, in recent installments to this series, which I repeat here for smoother continuity of narrative as I continue addressing them:

1. A new venture that has at least preliminarily proven itself as viable and as a source of profitability can go public through an initial public offering (IPO) and the sale of stock shares. (See Part 33 and Part 34.)
2. A new venture can transition from what at least begins as an organic growth and development model (as in exit strategy 1, above) to one in which it seeks out and acquires larger, individually sourced outside capital investment resources, and particularly from venture capitalists. (See Part 35.)
3. And a new venture, and certainly one that is built around a growth-oriented business model, might build its first bricks and mortar site, in effect, as a prototype effort that it would refine with a goal of replication through, for example, a franchise system. (See Part 36 through Part 39.)

And more recently here, I have been analyzing and discussing these business development possibilities, for how they would address a set of three specific business performance requirements that are important for long-term success:

A. Fine tuning their products and/or services offered,
B. Their business operations and how they are prioritized and carried out, and certainly in the context of this Point A, and
C. Their branding and how it would be both centrally defined and locally expressed through all of this.

I at least preliminarily finished discussing the first of those due diligence issues, the above Point A, in Part 42, for all three of the above-listed business development scenarios, briefly touching on Points B and C while doing so, for how the three functionally interconnect. My goal here is to more fully and specifically address Point B and its issues as they play out for the three business models under consideration here. And I begin by noting a point of comparison:

• The selection and fine tuning of what a business would bring to market as its defining source of revenue generating value might be influenced by outside forces, with that including market demands and pressures from competitors. But ultimately, the most compelling drivers for this come from within the business itself: from its founders and owners and from its ongoing leadership, as codified by them in its ongoing business model. And standardization of marketable, sellable production tends to be held as essential. So for example, if a business bakes rolls, all of its product should be of the same size and shape, at least within very narrow preset limits, and all should be made according to the same recipe for any given roll type offered, and with tight quality control over ingredients used. And all should be cooked the same way and to the same doneness, and all should be packaged, and be easy to package quickly for shipment and sale, in the same way too (bringing Point C into this narrative here too.)
• Higher level, internal-to-the-business decision making plays a very significant role in the What and How of business operations as actually carried out too. And that can mean standardized work performance patterns that significantly mirror those just noted for what the business would offer to its outside world. But when you look beyond the officially expected as detailed there, to see how business processes are actually carried out, you can often find them determined by a much wider range of in-house participants, with that larger stakeholder group also including mid and lower level managers and even non-managerial, hands-on employees who actually carry out much of the work so specified. And this at-least capacity for variation can be necessary and even essential if a business is to be flexible enough to accommodate change and the unexpected, and certainly as it might arise more locally and even in day-to-day operations. Setting aside the questions and issues of when variation here is a positive and when it is a negative, it does happen, and in ways and to degrees that would not be allowed for, or even make sense for, the products shipped out the doors of that same business.
• And outside considerations can and do have very significant impact on the operational What and How of a business too, and with a higher level of functional significance than might be expected in a marketable product context, and for entire functional areas of a business. Consider Finances and the outside-mandated requirements of generally accepted accounting principles (GAAP) as a perhaps best known example there, though outside regulatory forces can and in fact do reach into most if not all aspects of business operations, and for many types of businesses and for essentially all industries.

But for purposes of this discussion, let’s set aside outside-mandated, standardized demands and their pressures (as just exemplified by adherence to GAAP) that all businesses would face, as they leave competitors on what at least in principle is still a level competitive playing field. (Here, I also set aside biased regulatory requirements that might, for example, explicitly favor larger, already established businesses in an industry or sector, while thwarting possible new entrants with new competing products or services that their more established peers would be less able to directly compete with.)

All three of the basic business scenarios that I have been discussing here since Part 33 are built around a premise of long-term stability and business strength, and all are built around a basic assumption of overall completeness in what they would carry out and be prepared to carry out operationally, in order to ensure that happening. So for example, I did not include here a fourth business development scenario, in which a business founder or founding team might build a new business venture with an explicit goal of selling it to some larger business entity through a mergers and acquisitions process, where:

• Some of the functional areas that they would assemble in it,
• And at least proof-of-principle development of all of its key marketable and sellable products or services might have to be very robust,
• But where other, more supportive capabilities might be left more vestigial, as they would be provided by an acquiring entity that would be less inclined to have to pay for what would, to it, be unnecessary duplications. (Think here in terms of founders who hold ownership of a key next-step advancement patent, or a set of them, and who seek to develop and prove the value of that holding in order to maximize its value to a buyer and its profitability to themselves.)

And with that tripartite point, I have offered a simplified and even simplistic outline of what has become an increasingly complex, options-rich range of business development possibilities. And I at least acknowledge the legitimacy of that view of matters by posing some Point B oriented questions that the strategic decision makers of a new and forming, or more established and growing business might very well find themselves facing, which I phrase here as being asked of you, the reader:

• What are the core functionalities in this business that you would have to keep in-house, and develop and expand there as your business as a whole scales up (assuming once again that you would retain ownership of this enterprise and continue to lead it)?
• And what more supportive and ancillary functionalities and services might best be outsourced to third party provider specialists, where doing so would be more cost-effective and not carry additional new risk management issues? (And when and how might you do that and under what terms?)
• And focusing on functionalities and services that really should be maintained in-house, and particularly in distributed business systems, which of them should be managed from a home office or other more central facility and which of them should be managed and effectively owned, more locally?

At least aspects of what have traditionally been seen as Information Technology in a business are now routinely outsourced, with that including server farms and enterprise-wide cloud storage and access management solutions, and the vast majority of web site and online presence support, or at least its more technical hardware and software underpinnings. This can and often does include call center operations too, and certainly for off-hour coverage when 24/7 customer assistance is a desired goal. And outsourcing can become an attractive option for at least some areas of Human Resources and Personnel management as well.

I have in fact pushed this in-house versus outsourced dynamic to a lean and outsourced extreme in one of my earlier series: Virtualizing and Outsourcing Infrastructure, as can be found at Business Strategy and Operations as its postings 127 and loosely following (for its Parts 1-10.) More specific in-house versus outsource decisions become important in the types of contexts that I raise here, because they have stakeholder-responsibility and stakeholder-autonomy implications that can be shaped by a determination of precisely what type of business model and business development plan is in place. And to take that out of the abstract, consider the above-repeated business development Scenario 3, and its at least potential areas for home office/parent company versus franchise facility/franchisee conflict. What would, and in fact should, best be carried out and controlled system-wide from a parent company controlled and managed facility, whether that means in-house maintained and operated, or centrally contracted and managed-outsourced? And what should best be done by, and best be considered a responsibility of, the individual franchisees in place and their local business outlets in these larger systems?

• Both overall cost-effectiveness and consistency, and market-facing standardization and the branding that serves as a public face to all of this, would enter into any realistic answer to those types of questions, and certainly as best-for-now solutions to them are to be sought out and implemented. And with that, I tie this back to the above stated performance Points A and C.

I am going to continue this narrative in a next series installment with an explicit focus on those market-facing issues, and with particular attention to Point C and its implicit questions and issues. And in anticipation of that discussion to come, I note here that we live and work in contexts that have increasingly come to expect and even demand social and environmental responsibility from businesses, and good corporate citizenship from them. So actually addressing the demands of Point C has, of necessity, to include active consideration of Point B issues as well as Point A ones.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And you can find this and related material at my Startups and Early Stage Businesses directory too and at its Page 2 continuation.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 18: some thoughts concerning how Xi and Trump approach and seek to create lasting legacies to themselves 6

Posted in macroeconomics, social networking and business by Timothy Platt on May 31, 2019

This is my 18th installment in a progression of comparative postings about Donald Trump’s and Xi Jinping’s approaches to leadership per se. And it is my 12th installment in that on Trump and his rise to power in the United States, and on Xi and his in China, as they have both turned to authoritarian approaches and tools in their efforts to succeed there.

I focused on Donald Trump and his legacy oriented narrative in Part 14 and Part 15 of this, and have continued from there to correspondingly consider Xi Jinping and his legacy oriented actions and ambitions too:

• In Part 16 with its focus on the mythos and realities of China’s Qing Dynasty during its Golden Age, as a source of visionary legacy defining possibilities,
• And in what has followed, leading up to the reign of Mao Zedong as China’s first communist god emperor, with that narrative thread pointed to at the end of Part 16, and with an initial, more detailed discussion of it continuing on through Part 17.

I began that second narrative thread in 1830’s China as the Qing Dynasty began what became a slow but seemingly inexorable decline that led to its end in 1912 with the abdication of its last emperor: Puyi. And I focused in Part 17 on challenges and responses to them that China and its leadership faced through at least the first decades of that period, that can for the most part be seen as endemically Chinese and as arising from within their nation and their system of governance.

That, I explicitly note here, includes my Part 17 discussion of a source of challenge that in all fairness at least originated outside of the country, and outside of anyone’s control there, whether individually (as for example through the actions or decisions of an emperor) or collectively (as for example through the actions or decisions of a state bureaucracy that would, or would not, function in accordance with the dictates of a more central authority in creating a commonly held, unified response to larger scale societal challenges faced.) That source of challenge consisted of the twin stressors of climate change and environmental degradation, as they adversely impacted upon agricultural productivity and the basic food supply that China’s many millions would turn to. The adverse climate changes that I wrote of came from outside of China, or at least from outside of any possible direct human control there, even as they took explicit shape there for how they specifically affected that nation and its peoples. But their government’s failure to effectively respond to this seemingly ever-expanding challenge, and in a way that might have at least limited its negative impact, was in fact endemic to the nation and its leadership. That failure of effective systematic response was human created and sustained.

And a great many of the environmental challenges faced, and certainly in China’s agriculturally most important lands, were in effect home grown too and even more so than any climate level changes were. But that only tells one half of this story; I focused in Part 17 on endemically Chinese pieces of the puzzle of what happened to end the Qing Golden Age and bring China as a whole into decline, and I have continued addressing that side of this here too, at least up to now in this posting. But China’s history and certainly since the Qing Dynasty cannot be understood, absent an at least equally complete narrative of and understanding of their relationship with the world around them. And I begin addressing that set of issues with some demographics and with what for purposes of this series and its narrative flow, can best be seen as old and even ancient history.

The Ming Dynasty ruled China from 1368 to 1644, and it was the first ethnically Chinese led dynasty to rule that nation in centuries. And it also proved itself to be the last ethnically Chinese dynasty to rule there, at least if “ethnically Chinese” is construed to mean Han Chinese; the Qing Dynasty that followed it as China’s last hereditary dynasty was led by ethnically distinct non-Han outsiders as well. But this is not the type of outside influence that I would primarily write of here when raising and addressing the issues of foreign influence and impact upon and within China.

Still, officially, according to the Beijing government of Xi Jinping, China currently contains within it 56 separate and distinct ethnic groups as of their Fifth National Population Census of 2000, with Han Chinese accounting for approximately 91.59% of the entire population and with the remaining 55 ethnic groups accounting for the remaining 8.41% (and also see this official government release on China’s 2010 national population census.)

Yes, there are grounds for debate there, where a variety of smaller ethnically distinguishable populations are not afforded separate recognition in those demographics surveys and their accompanying official analyses. And that lack of official recognition means a lack of legal protection of those ethnically distinctive groups and their peoples, as such. But even so, China includes within it a range of ethnic diversity that it does officially recognize and that it does offer officially protected status to, for their unique cultural identities. And the non-Han peoples that emperors of earlier dynasties sprang from, that have in their days ruled over China, have for the most part been assimilated as recognized minority groups in what is now the official 56. And they can be and are seen as belonging to a larger single, overall Chinese citizenry. China’s current government certainly sees matters that way, as is recurringly indicated by their efforts to retain and control and mainstream any and all ethnic diversity within their country, and certainly where that might be seen as representing separation in self identity that might become a push towards some form of independence.

And with that all noted, I raise three crucially important points:

• For all of the official acceptance and inclusion of the official census in China, with its 56 culturally and ethnically distinct recognized groups, the Han Chinese are still considered, in a very fundamental sense, to be the only “true Chinese,” and in a way that has never been afforded to members of the other 55 groups. Those others have always been seen as being different, as other, and even when members of those groups have gained hereditary dynastic leadership over the country as a whole.
• And even as the Chinese government and the Chinese Communist Party that leads it recognize a controlled measure of diversity in their nation and its overall citizenry, they still consider as a matter of paramount importance that all Chinese citizens and of whatever ethnicity or cultural persuasion must be Chinese first and foremost, and that they must believe and act accordingly and with their primary loyalties aimed towards China’s one government and one Party.
• But at the same time, Han primacy of place as representing the true Chinese people, places very real practical day-to-day and ongoing constraints on what members of the other 55 minority groups can achieve, and both as measured by Party membership and opportunity to join, and by opportunity to advance up the Party’s ranks if allowed in as card carrying members. And these de facto restrictions have impact on status and opportunity in general, and throughout Chinese society. For a particularly striking example of how this plays out in practice, and certainly as of this writing, consider the restricted status and the restrictions on opportunity faced in China by their Uyghur minority today.

I am not addressing the issues of ethnic diversity here as a primary source of us-versus-them foreign impact on China. But I offer this discussion thread here in specific preparation for delving into that complex of history and ideas. And I do so because it would be impossible to fully understand that, let alone address it, absent a clearer understanding of what “us” means in China, with at least an outline awareness of something of the historically grounded nuances that enter into that determination. Are Han and Chinese synonymous? No, but there are contexts where they become close to that, even as Party and government call upon all Chinese nationals to be Chinese, and effectively entirely so and regardless of ethnicity or local cultural self-identity.

I will come back to reconsider the complex of issues that I have raised and at least briefly touched upon in this posting, later on in this series and its overall narrative, and certainly in the context of Xi’s within-China legacy building ambitions and actions. But for what is to more immediately follow now, I am going to focus on what might be considered true outsiders, some of whom as national and culturally distinct groups are and will remain outsiders and foreign nationals (e.g. European and American trade partners and their governments) and some of which, at least for my earlier historical references to come here, were eventually brought in and assimilated – but with nothing like that possible during the times under discussion. And I begin addressing that by turning at least closer to the beginning in China’s early history.

China has faced challenges from outside peoples and foreign cultures that go back at least as far as the construction of the first sections of fortifications that were eventually incorporated into their Great Wall (their 萬里長城), sections that were themselves initially built starting as far back as the 7th century BCE. (Construction of the Great Wall of China itself is generally dated as having been started by the historically acknowledged first true emperor of China: Qin Shi Huang, who reigned as emperor from 221 to 210 BCE, expansively building out from those earlier, more locally limited protective efforts.) I say “… at least as far back” here because foreign attacks and incursions, and even outright invasions, were a scourge to the people of what is now China for a long time before the building of those early walls and their supporting fortifications.

Stepping back from this China-focused narrative for a brief orienting note: I have written in this blog of Russia’s long history of invasion and threat of invasion and from many directions. See in that regard, my posting Rethinking National Security in a Post-2016 US Presidential Election Context: conflict and cyber-conflict in an age of social media 13, where I lay a foundation for discussing Russia’s current foreign policy and some of the essentially axiomatic assumptions that help to shape it from that nation’s past. Foreign invasion and threat of it have held powerful influence there for a great many centuries now. China is not unique in having faced foreign invasion and threat of it, any more than Russia is, or any of a wide range of other nations and peoples that I could cite here. But this history and this type of history is and has been an important source of influence in shaping China for its ongoing impact and persistence, and certainly over the years that I write of here, from the 1830’s on where threat and possibility became an ongoing reality.

I am going to continue this narrative in a next series installment where I will look at China’s international trade and other relations starting in the Qing Golden Age, and at how they spiraled out of Chinese control, for their side of all of that, as the Qing Dynasty began to fail from its center outward and from its periphery inward. I will of course continue that narrative thread with an at least brief and selective discussion of the first Republic of China, as it formally existed from 1912 until 1949 and its final overthrow at the hands of Mao Zedong’s communist forces. And I will equally selectively discuss the Mao years of his People’s Republic of China, as he developed and envisioned it as a response to what had come before, and that in turn helped shape Xi Jinping into who he is today, with his legacy goals and ambitions and with the axiomatic assumptions that he brings to all of that.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, and at Ubiquitous Computing and Communications – everywhere all the time and Social Networking and Business 2.

Building a business for resilience 35 – open systems, closed systems and selectively porous ones 27

Posted in strategy and planning by Timothy Platt on May 30, 2019

This is my 35th installment to a series on building flexibility and resiliency into a business in its routine day-to-day decisions and follow-through, so it can more adaptively anticipate and respond to an ongoing low-level but with time, significant flow of change and its cumulative consequences, that every business faces in its normal course of operation (see Business Strategy and Operations – 3 and its Page 4 and Page 5 continuations, postings 542 and loosely following for Parts 1-34.)

I began working my way through a brief to-address topics list in Part 32 of this, that I repeat here for smoother continuity of narrative as I continue discussing its points:

1. Even the most agile and responsive and effectively inclusive communications capabilities can only go so far. Effective communications, and with the right people involved in them, have to lead to active effectively prioritized action if they are to matter, and with feedback monitoring and resulting reviews included.
2. Both reactive and proactive approaches to change and to effectively addressing it, enter in there and need to be explicitly addressed in rounding out any meaningful response to the above Point 1.
3. And I will at least begin to discuss corporate learning, and the development and maintenance of effectively ongoing experience bases at a business, and particularly in a large and diverse business context where this can become a real challenge.
4. In anticipation of that, I note here that this is not so much about what at least someone at the business knows, as it is about pooling and combining empirically based factual details of that sort, to assemble a more comprehensively valuable and applicable knowledge base.
5. And more than just that, this has to be about bringing the fruits of that effort to work for the business as a whole and for its employees, by making its essential details accessible and actively so for those who need them, and when they do.

I have offered at least preliminary responses to the first two of those points since then, leading up to this series installment and a more detailed discussion of the above Point 3.

I in fact began addressing that in Part 34 and recommend reviewing it as an explicit orienting foundation for what is to follow from here on in this narrative. As I noted there, Point 3 is the farthest-reaching, most complex of the five topics points that I am seeking to successively address here, and as at least partial proof of that, Points 4 and 5 can be seen as specific aspects of it that have been spun off from it for more focused attention.

I begin, or rather continue addressing this complex of issues by at least briefly turning back to Point 2 again, where communications, and the data and processed knowledge that they convey, are turned into action and activity, either reactively or proactively or, as is perhaps most common, as some combination thereof. And I begin addressing that by adding one of the villains of this type of narrative into it: business systems friction, with its at least largely unconsidered, background static-like limiting restrictions on both the development of organized actionable information in a business, and on its effective communication to the stakeholders who need it, and when they do.

I said in Part 34 that reactive and proactive per se both become more important, and I add more meaningful, as points of distinction when the people involved, or who should be involved, are dealing with the non-standard, the non-routine, and have to find more creative solutions to the problems they face than can be encompassed in their usual day-to-day task level approaches.

• As a caveat to that, I have to add here that one of the first victims of business systems friction is anything like a before-the-fact understanding, on the part of stakeholders, that they are in fact facing the unexpectedly different; that understanding can become lost. Reactive response often starts from what has suddenly been found to be unexpected, as rote-followed standard operating practices suddenly break down, in significant ways and with what are proving to be significant consequences.

Proactive as such arises when at least one key stakeholder spots something unusual or unexpected, and early on and in a way that highlights its novelty to them. And it actually takes place when they can and do start informing others who would also have to know of this, so they can begin addressing this unexpected but real new circumstance and from early on too, rather than have to play reactive catch-up for their part of the overall task and process performance flow that they would be responsible for in all of this.

I have on occasion written here in this blog of reactive and proactive occurring in a blend, and I add, in what can be a confusing blend of them. All relevant stakeholders are not always brought into these now very necessary conversations. And even when such a stakeholder is included, that is no guarantee that they will actually choose to deviate from the “tried and true,” even when pursuit of that might have already proven problematical for others involved in these overall cause and effect cascades.

Point 1 as listed above addressed follow-through and action, but it also focused on communications and on what is communicated. Point 2, as started for discussion in Part 34 of this series and as continued here, is all about what would, or would not, be done with that knowledge, assuming that it can even be made available in a timely manner. And localized breakdowns there can contribute to the pursuit of mixed reactive-plus-proactive responses, as much as conservative insistence on following routine practices does, in the face of new and emergent issues and challenges. And Point 3, and by extension Points 4 and 5 as well, all deal with the basic issues of raw data, processed knowledge and its organized assembly and communication again, though I will also reconsider usage and follow-through issues when discussing Point 5, as begun here when considering Point 2 of the above list. And with that orienting note added, I turn here to address Point 3 again. And I do so by delving into a fundamentally important issue, and one that is often overlooked for its prevalence and significance when discussing the types of technology-based approaches to information and knowledge management in a business that I touched on here in Part 34: version 2.0 intranets and related capabilities for bringing people together in needs-focused ways.

• The issue that I refer to here is that of nuanced, experience based judgment and the simple fact that all processed knowledge in a business cannot simply be typed into a database and in a form that can always, automatically be used and to full benefit by anyone tapping into it as a shared resource.

Let’s consider a judgment call example here, to illustrate the point that I am trying to make with that assertion. And I offer it as a realistic if stylized feedback-framed verbal response. “The problem that you’re telling me about sounds familiar even if it isn’t exactly common here. And it sounds like you are going to need X to handle it (where X is a resource that is “owned” by a manager on a different part of the table of organization.) And you might very well need some specific help in both accessing and using it. Watch out for A. He might or might not be able to help you access X but he doesn’t know as much as he thinks he does and certainly when doing anything here that falls outside of his more day-to-day routine. B, on the other hand, is a lot more widely experienced and he can think outside of his day-to-day box. He’s not as easy to work with; he is not all that outgoing and he tends to keep pretty focused on his immediate tasks at hand. But if you could get him to help you, and if his boss C will let him take the time to do that, you are going to have a lot better chances of success here than if you let A try and take over on this for you. And he will try and take over.”

It is essentially guaranteed that any advice: any insight of this type, focusing on interpersonal issues and on individual strengths and weaknesses, is going to come with an at least strongly implied “… but don’t tell any of them that I told you this. I have to work with A and B and their manager too!” So this might be crucially important information for successfully carrying out an important task, and a high priority novel one. But how would anyone put this type of judgment-call insight as to who is best and who is worst to work with, into a database? So I will focus in what follows on what might be deemed more operational insight and knowledge – data and processed knowledge that does not carry this type of interpersonal judgmental quality, that might be so codified, organized and shared. And with this caveat offered, I will begin addressing that complex of issues in my next series installment, turning back to my Point 3 notes of Part 34 as an organizing framework, or at least as the start of one for doing so. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory.

Reconsidering the varying faces of infrastructure and their sometimes competing imperatives 7: the New Deal and infrastructure development as recovery 1

Posted in business and convergent technologies, strategy and planning, UN-GAID by Timothy Platt on May 27, 2019

This is my 8th installment to a series on infrastructure as work on it, and as possible work on it are variously prioritized and carried through upon or set aside for future consideration (see United Nations Global Alliance for ICT and Development (UN-GAID), postings 46 and following for Parts 1-6 with its supplemental posting Part 4.5.)

I have already briefly discussed four infrastructure development case study examples in this narrative. And my goal for here is to at least begin a similarly brief and selective discussion of a fifth such case study, large scale infrastructure development (or redevelopment) initiative, as drawn from at least relatively recent history:

• US president Franklin Delano Roosevelt’s New Deal and related efforts to help bring his country out of a then actively unfolding Great Depression.

Then after discussing that, I will turn to consider a sixth such example, with an at least brief and selective accounting of how:

• A newly formed Soviet Union sought to move from being a backward agrarian society, or rather a disparate ethnically diverse collection of them that had all existed under a single monarchical rule, to become a single more monolithic modern industrial state.

And I will of necessity find myself referring back to two other infrastructure development examples that I have already cited and explicitly discussed here, continuing them for purposes of comparison to these two: the Marshall and Molotov Plans as can be found in Part 4 and Part 5 of this series respectively.

My overall goal for this series as a whole has been to successively explore a progression of such current and historic large scale infrastructure initiatives, with a goal of distilling out of them, a set of guiding principles that might offer planning and execution value when moving forward on other such programs. I continue developing this narrative as a foundation-building outline of real world experience at infrastructure development with that goal in mind, as we all face large-scale and even globally spanning challenges that only begin with global warming and its already visible impact: challenges that history will all but certainly come to see as defining a large part of this 21st century.

I write a lot in this blog about the new and unexpected and about the disruptively new as that would call for entirely new understandings of challenges and opportunities faced, and of best ways to address them. There were a variety of issues that led to the Great Depression that people in elected office and that people with training in economics and related fields thought they understood, from their apparent similarity to at least seemingly parallel events from their past. And they were correct on some of those points, even as they were hopelessly wrong about others and with great consequence for how they sought to correct for them.

I begin this posting’s main line of discussion here by citing two such factors: one effectively more understood at least in principle if not in practice, and the other much less so and certainly when effective action could have had a positive impact:

• Pre-Great Depression bank holding companies and their acceptance of their own stock shares as preferred collateral when making loans, creating vast liquidity and reserves gaps in their systems in times of stress (and also see Banking Panics of 1930-31.)
• And tariff barriers with their effect of killing overseas markets that American industry depended upon for its very survival, and particularly at a time when local and national markets were drying up for lack of available liquidity. See in particular the Hawley–Smoot Tariff Act of 1930 for an orienting discussion as to how these business challenging and economy breaking barriers were erected.

Both of these developments happened: the first as a leading cause for what became the Great Depression for its bad banking practices consequences, and the second as an ill-considered response to a deepening recession, that made it the Great Depression for its duration and for its depths of severity.

I just identified the first of those two contributing factors: bad banking practices, as having been understood in principle if not in practice, and the second of them: tariff barrier protectionism as being less fully understood of the two. But in all fairness, faulty assumptions and fundamental misunderstandings contributed to both, and both for their occurring and for contributing to their consequences. And as I will briefly note in what follows here, these and other related failures in understanding and action fed upon each other and over a course of overlapping timeframes. And those toxic synergies made the Great Depression into the systemic collapse that it turned into. But let’s start with the banking system and its challenges.

Even the leadership of the bank holding companies that failed during the Great Depression knew and understood that they should not make unsecured loans and certainly not as a matter of routine practice. They required collateral on the part of businesses and other entities that would take out loans from them, as assurance that if those loans were in danger of defaulting, they could recover at least significant amounts of their invested capital. Their mistake, or at least the fundamental one that I would cite here: the point where their understanding of this type of due diligence as a matter of principle, broke with their understanding of it in practice, came from what in retrospect can only be called their hubris. They saw themselves as absolutely secure and stable, and as a result saw themselves as organizations as being immune from market-based stress or volatility. So when their customers saw the economy that they had invested in begin to collapse and when those customers started going to their banks that they had put their savings in, and in increasing numbers, those banks were unprepared. And as their customers started lining up at their doors to take their money out, more and more of them panicked and joined the lines until their banks’ cash holdings were so depleted that they could no longer function.

I cited in my above bullet point how these holding companies had accepted shares in their own business as even preferred collateral for the loans they made. As their own systems began to fail, one member bank at a time, they found themselves as having in effect made vast numbers of loans without requiring any real collateral at all – or at least any that could still hold outside-validated value. So a bank in such a holding company might fail with others in that system finding themselves in distress but still remaining recoverable, at least in principle. But the house of cards nature of how they had all managed their businesses, in-house, as their own collateral valuation standard meant that those banks folded too. Their customers knew they had nothing real backing the loans they had entered into and they knew that the banks they had entrusted their savings in were now unreliable for that. And there was no Federal Deposit Insurance Corporation (FDIC) or similar outside supportive agency in place, at least yet, that might have quelled the panic and stopped the cascades of failures that quickly brought down entire bank holding companies and all of their member banks.

The dynamics that led to these banking system collapses were understood in principle and in the abstract, even if no one in those bank holding companies seemed capable of turning that abstract understanding into prudent due diligence and risk management practice. The second of the pair of examples that I cite here was, and I have to add still is a lot less well understood and certainly as the current presidential administration in the United States is playing with the fire of trade wars and tariff barriers even as I write this.

Ultimately, there is no such thing as a national economy. Nations operate in larger contexts and they have financially for as long as there has been international trade, and with the roots to that going back to before the dawn of recorded history. And ultimately, economies are liquid reserve and cash flow determined, and with trade shaping and driving all of that. Stop trade and you stop the flow of money and you challenge and even kill the economies involved. And that holds when considering the overall and even global economy as a whole, or when considering the portions thereof that are based in some particular country or region, but that depend for their existence on the ongoing functioning health of the larger economy that they are part of.

As just noted, even today there are people in positions of real power and authority who do not understand this. And a lot fewer seem to have understood this in 1930 when the Hawley–Smoot Tariff Act was put into law and into practice, and the American economy and other national and regional components of the overall global economy began to really collapse.

I at least alluded to timing overlap and toxic synergies in these systematic challenges in the above text, and explicitly make note of that point of detail here. The stock market of the Roaring 20’s was impulse driven for the most part and with seemingly everyone investing in it looking for quick riches. And investing in it was comparable to walking a tightrope without a safety net. But the structural instabilities and the lack of anything like regulatory oversight to limit if not prevent bad business and investment practices that characterized this early stock market, only constituted one of several systematic sources of failure, that all came to collapse together, with a pattern-setting start to that developing over a period of about one year starting in October, 1929 and continuing on to the Fall of 1930.

• To explicitly connect two points of detail just touched upon here, stocks and other investment instruments that might be counted as representing saved and invested wealth, were not necessarily considered to be reliable sources of collateral to individual banks or to their parent holding companies, for their volatility and their potential for it. But these banking institutions thought that they at least knew and could trust their own paper: their own traded stock shares as reliable collateral when making loans. So many if not most came to preferentially keep this “in-house,” asking for proof of ownership of their own stock and using it to back the loans signed for through them.
• My point here is that all of the factors and considerations that I make note of here, and a lot more that also contributed to the Great Depression, interacted and reinforced each other for their toxic potential and for their eventual consequences.

The American economy and other national and regional economies in general, all took a real beating in late 1929, and with the US stock market collapse only serving as one measure of that. But these markets were actually beginning what would have seemed a long slow path back to stability and recovery. Recessions end and more recent ones of note in particular, had generally begun to significantly turn towards recovery within about 15 months from their visibly impactful starts. As one admittedly limited and skewed measure in support of that claim for what might have happened here too, consider the collapse of the New York City based stock market itself as tracked by an already relied upon and trusted Dow Jones Industrial Average as a measure of stock market performance and of public confidence in that. The stock market crash began on Thursday, October 24, 1929 with nervous investors trading a single day record 12.9 million shares and with many more trying to sell than to buy. The overall market valuation as measured by the Dow Jones average began falling precipitously.

The weekend that followed did not have a positive impact, in giving investors time to reconsider and settle their nerves. Monday, October 28: Black Monday came and by the end of that trading day, the overall Dow Jones average had fallen nearly 13%. And the stock market was effectively in freefall as investors panicked, losing their faith in the value of their investments, and any sense of safety in the security of their life savings insofar as they were invested in stock shares. (For further background information on this see for example, this piece on the Wall Street Crash of 1929.) But even the stock market was beginning to recover by March 13, 1930 as the Hawley–Smoot Tariff Act was first put into effect, as investors began buying stock shares again, looking for undervalued ones and real bargains to be gained from them. Then international trade effectively died as nation after nation began raising their own retaliatory and presumed self-protective trade barriers in response to what was going on around them and from their erstwhile trading partners. And that, to my thinking is when the actual Great Depression itself actually began. That is when what might have been just another significant recession became The Great Depression.

That overall systemic economic collapse did not take place all at once; it took a number of months for the real impact of this decision and action on the part of the US Congress to be realized. So for example, US based banks began to be stressed from panicked customer withdrawals and from larger numbers of their customers no longer being able to pay back loans, in late 1929. But many of the larger bank holding companies that failed from this onslaught of challenges, did so in the Fall of 1930 and over the next two years as the strangling of international trade really took hold with so many of their business customers – so many employers facing bankruptcy from loss of business and incoming revenue. And they continued to fail for years to come and at an incredibly impactful rate.

My goal for this posting has been to outline something of the challenge that Franklin Delano Roosevelt faced when first taking office as the 32nd president of the United States. I will continue its narrative thread in a next installment to this series where I will among other things, quantify the bank failures and their timeline to put what I am writing here into clearer perspective. And I will then discuss Roosevelt’s New Deal as a massive recovery effort, and one that had within it a massive infrastructure redevelopment effort too. Then after completing that narrative thread, I will continue in this series as a whole, as briefly noted above. But in anticipation of the next installment to this to come, I step back from the details to reframe this discussion in a second, and here-crucially important way. What Roosevelt faced, underlying and infusing all of the toxic details of policy and practice and on so many fronts, was a complete failure of an underlying world view as to how businesses and economies run, and of how and why they fail when they do that too. Roosevelt faced the problem of a broken puzzle with its pieces having to be reconnected, but at least as importantly he faced a problem of a broken puzzle where its business as usual assumptions and understandings could no longer be made to apply. He had to find a way to fundamentally reshape and redefine the overall image in that puzzle too. And that is the challenge that I will write of in my next installment to this series.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. I also include this in Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory. And I include this in my United Nations Global Alliance for ICT and Development (UN-GAID) directory too for its relevance there.

Don’t invest in ideas, invest in people with ideas 44 – the issues and challenges of communications in a business 11

Posted in HR and personnel, strategy and planning by Timothy Platt on May 24, 2019

This is my 44th installment in a series on cultivating and supporting innovation and its potential in a business, by cultivating and supporting the creative and innovative potential and the innovative drive of your employees and managers, and throughout your organization (see HR and Personnel – 2, postings 215 and loosely following for Parts 1-43.)

I have been at least relatively systematically working my way through a series of background issues since Part 39, with a goal of using that foundation-forming narrative as a basis for addressing two topic points and their issues, that I have held forth as organizing goals for this overall narrative. And my primary goal for this installment is to at least begin discussing those topic points directly now. That means I will at least begin to:

1. Offer an at least brief analysis of the risk management-based information access-determination process, or rather flow of such processes, as would arise and play out in a mid-range risk level context, where I sketched out and used a simplified risk management scale system in Part 39 for didactic purposes, that I will continue to make use of here and in what follows.
2. Then continue on from there to discuss how this type of system (or rather a more complete and functionally effective alternative to it as developed around a more nuanced and complete risk assessment metric than I pursue here), can and in fact must be dynamically maintained for how the business would address both their normative and predictably expected, and their more novel potential information sharing contexts as they might arise too. I note here in anticipation of that, that when innovation is involved and particularly when disruptively novel innovation is, novel information sharing contexts have to be considered the norm in that. And that significantly shapes how all of the issues encompassed in these two numbered points would be understood and addressed.
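To make the Point 1 flow a bit more concrete, the following is a deliberately minimal sketch of a risk-tier-based information access determination. I stress that the tier names, the vetting flag and the decision labels here are all illustrative assumptions of my own; they stand in for, and do not reproduce, the simplified risk management scale system of Part 39.

```python
# Illustrative only: tier names, thresholds and decision labels are assumed
# for didactic purposes, not drawn from the Part 39 scale system itself.

def access_decision(risk_tier: str, requester_vetted: bool) -> str:
    """Map an information item's assessed risk tier to a sharing decision."""
    if risk_tier == "low":
        return "share freely"             # routine, openly shareable knowledge
    if risk_tier == "high":
        return "deny; escalate to owner"  # tightly held, need-to-know only
    # Mid-range risk: the judgment-call zone that this series focuses on.
    return "share if vetted" if requester_vetted else "refer for review"

print(access_decision("mid", True))   # mid-range, vetted requester
print(access_decision("mid", False))  # mid-range, unvetted requester
```

The point of the sketch is that low and high risk items admit near-automatic rules, while mid-range items are the ones that force a context-dependent determination, which is why Point 1 singles them out.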

I concluded Part 43 of this series with some general comments regarding new employee selection, and working with people already on-staff at a business, so as to bring in the right creatively innovative people and support them for what they can bring to the business. Then after (presumably) finishing that discussion thread, at least for purposes of that posting, I said that I would turn to discuss the above-repeated topics points and essentially from the start of this one. But I have decided since writing that, to at least selectively expand upon the complex of personnel related issues that I was addressing in Part 43 first, to further complete the foundation that I will begin addressing the above Point 1 from. This, I add, is particularly important and relevant here given the overall thrust of this series as a whole, with its personnel-oriented focus.

I begin this posting-starting digression by explicitly noting that when I refer to creative hands-on and managerial staff and the innovation that they can bring to a business, I am not just referring to innovation as it can arise in marketable products or services; I am writing of innovation in general as it can more widely hold potential for enriching a business, with that including business process innovation, innovation in how a business reaches out to and connects with its markets, or with possible supply chain partners or other business-to-business collaborators, or in any other context where such creativity might arise – or be thwarted. And I am writing of innovation as a product of individual effort, and even when it requires coordinated effort of several or even many to bring an innovative insight to meaningful fruition.

If you are a business owner or a more senior executive there, or a lower or midlevel manager there for that matter, who finds yourself having to hire a new employee and at whatever level in the organization, what should you look for that would go beyond your already current and ongoing routine and normal? And what should you do to identify, support and encourage such potential in the people who you already have working with you on payroll? Going beyond that, and in acknowledgement of the constraints that businesses both face and create for themselves from bringing in people part-time and short-term, and as part of a gig economy approach to hiring, where and how should you look for innovative potential and other sources of value that would make it more worthwhile for all involved, to actively bring people onboard as full time in-house employees, who prove themselves as good candidates for that from their performance in more transient positions there?

Some qualities and habits come immediately to mind that would offer worth from essentially anyone who a business would seek to bring in and keep, and actively support for their abilities to create new sources of value for an employing business. And this would just start with recognition of a driving curiosity that would bring an employee or a prospective employee to look beyond their own particular day-to-day routine to see at least something of what might be possible beyond that. And it would also include a willingness and ability to connect the dots, seeing how what might at first seem to be unrelated needs and resources might be brought together in unexpected ways. Communications skills enter in here too, as do what might perhaps best be considered marketing skills, for sharing and arguing the case for possible innovative insights that have been realized.

There are, of course, a number of other traits that could be added to that list as offering at least supportive value to what I have just outlined here. But for purposes of this posting and the above-repeated Point 1 that I will at least begin addressing here, these are the key characteristics and qualifications that I would start the main line of discussion of this posting from, here.

At the risk of repeating myself on some key points that I have already discussed in this blog, and even at least touched upon in this series too, I offer the following as context in which possible innovation and possible innovators might be addressed:

• The more standardized and the more routine a business process, or a product and its manufacturing are, or any other area of business activity that might be considered here, the less new information is actually going to be required to carry it out. And I even include operational possibilities here such as customized manufacturing, where customization per se is usually still tightly constrained and where options are in most cases routinized with customers selecting from suites of already-prepared-for options.
• The more standardized and the more routine here, the less new information, and at least as importantly the fewer the number of types of information have to be explicitly shared in order to carry it out. And the second half of that bullet point even holds true when considering big-data intensive operations, such as Customer Fulfillment Center operations where employees have to be able to access large amounts of data concerning the specific customers they are dealing with, in order to set up and complete sales or other transactions with them. They might see a diversity of specific data through this, but it is essentially certain that they will primarily if not exclusively see this coming up through specifically formatted and populated database tables that they have specific vetted access to, and with the same data fields tapped into for this every time and without exceptions. So even there, they are only seeing and working with a limited set of types of data for this.

Innovation, and looking beyond the standard and routine, calls for wider vision and understanding if it is to bring relevant value to a business or to the people who work there as they carry out their workplace responsibilities. So when I initially wrote and offered the above Point 1, I was setting up what can become a fundamental conflict that can play out in a business. And it is one in which the default decision making option is one of simply pursuing standard and routine, and the cost of that can easily become a loss of creative opportunity – and of overall opportunity that in the long run might have proven essential to the business as a whole for its ongoing success.

I explicitly framed Point 1 in terms of a simplified risk management tool that I have offered in recent installments to this series in a few progressively more refined iterations, for use as a didactic tool. And I continue this discussion from that point where I will focus on mid-range risk scenarios as discussed in the context of that tool.

Why mid-range risk issues and scenarios? I have already touched upon that question and its answers when initially offering the above cited risk management tool that would be used for managing this area of consideration, or at least for organizing its management. But I will address that complex of issues again here, this time expanding on what I have already offered on this topic, embedding my earlier response to it in what follows:

• Risk management is all about limiting the possible negative impact of uncertainty and the unexpected.
• In a standard business-as-usual context, and when standard and routine processes and flows of them are carried out in the performance of required tasks, this uncertainty fits essentially entirely into the category of known unknowns. You might not know when an adverse event will specifically arise but most of the time, and with only rare exceptions, you will start out knowing basically what types of such events can and with time will occur.
• Under these circumstances, the primary difference between low risk, mid-range risk and high risk events would be a calculated product of progressively higher likelihood of an adverse event occurring, and progressively higher negative consequence if one does, or some combination of these considerations.
• In practice, this is going to actually mean a calculated product that is weighted towards progressively more adverse outcomes, per incident as the risk assessed increases. Quite simply, a business process that fails too often is going to be replaced. A marketable product that does so is going to be recalled and the business will stop producing or selling it, at least until it can be redesigned or until its production line quality control can be improved.
• But even rare events can call for such change if their negative consequences rise to a sufficiently high level. Consider the recall of playpens and other items for infants and toddlers when a child dies as a result of a product failure.
• The key here, summarizing across the preceding bullet points, is that when risk is essentially entirely driven by known and familiar unknowns, it becomes possible to in effect establish actuarial table approaches to dealing with it. And it becomes possible to comparatively set up tables of low risk to high risk activities and their outputs that can be supported or avoided too, as a matter of business routine. And the mid-range risks and the activities that they arise from, that I cited in Point 1, do not hold any particular significance beyond that of how they fit into larger, overall business-wide risk determination schemes.
• Now add in the additional uncertainties of innovation and of the disruptively novel and new. This is when unknown unknowns enter this narrative. And bottom line, that makes this source of uncertainty and the actual uncertainty that arises from it fundamentally qualitatively different from the known unknowns and their risks that I have just been writing of here.
• Lowest and highest risk possibilities are in most cases going to remain the same whether or not unknown unknowns are added to the mix. But when you add the possibility or even the likelihood of unknown unknowns into this and significantly so, they add a level of impact to any overall risk determination, and a maximized level of overall risk for any possible events that would fit into a mid-range risk category. And they specifically add whole new layers of uncertainty as to what is likely to arise as an adverse event or outcome of that categorical level too.
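The logic of the above bullet points can also be sketched numerically. This is a minimal illustration only: the 1-to-5 likelihood and impact scales, the tier cutoffs, and the "provisional" flag for innovation contexts are all my own assumptions, not the more nuanced metric this series has been discussing.

```python
# Assumed scales: likelihood and impact each 1 (rare/minor) to 5 (frequent/severe).

def risk_score(likelihood: int, impact: int) -> int:
    """Known-unknown risk as a simple likelihood x impact product."""
    return likelihood * impact

def risk_tier(score: int) -> str:
    """Assumed cutoffs: scores up to 6 are low, up to 15 mid, above that high."""
    if score <= 6:
        return "low"
    if score <= 15:
        return "mid"
    return "high"

def tier_with_unknown_unknowns(likelihood: int, impact: int,
                               innovation: bool = False) -> str:
    """Unknown unknowns leave the extremes largely as-is, but they make any
    mid-range estimate provisional, since the event types themselves may be new."""
    tier = risk_tier(risk_score(likelihood, impact))
    return tier + " (provisional)" if innovation and tier == "mid" else tier

print(tier_with_unknown_unknowns(3, 3))                   # routine context
print(tier_with_unknown_unknowns(3, 3, innovation=True))  # innovative context
```

The design choice worth noting is that the innovation flag does not change the arithmetic; it changes the confidence that can be placed in a mid-range result, which is the qualitative difference between known and unknown unknowns argued above.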

I am going to continue this discussion in a next series installment where I will specifically turn to consider information sharing, and the innovative context, as the underlying logic of the above set of bullet points plays out. And as a key part of that, I will explain the immediately preceding bullet point as just offered here regarding mid-level risk events and their possibility too. Then I will turn to consider Point 2, in light of this still ongoing discussion. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. Also see HR and Personnel and HR and Personnel – 2.
