Platt Perspective on Business and Technology

Hands-off management, micromanagement and in-between – some thoughts on what they mean in practice 1

One of the most difficult issues that managers face – essentially all managers, regardless of their industry or their titles or scope of responsibility – can be found in simply knowing when to actively supervise and manage, and when to step back. Most managers spend essentially their entire careers and work lives working in the context of their own specific areas of hands-on experience and training, whether that means working in technical and related areas such as Information Technology or Finance, or in soft people-skills areas such as Marketing and Communications or Personnel. They, as such, have training and experience that would at least offer a foundation for addressing challenges and opportunities in the functional areas that they are responsible for, even when facing what to them personally are the new and unexpected. The challenges, and the at least potential opportunities, that I write of here are, however, essentially pure management in nature. And they are of a type that is not generally addressed all that effectively in standard MBA and related programs, with their all but laser-focused subject area orientations and specializations. These issues do not, after all, clearly fit into any particular arena of business-defined functional area expertise or responsibility.

• When should a manager step back and even knowingly allow at least more minor mistakes, delays and related learning curve inefficiencies?
• And when should they step in and more directly intervene, and even if that means their in-effect taking over from a hands-on employee or a lower level manager who reports to them? When does this become micromanagement?
• When does hands-off mean giving others a chance to make mistakes and learn and grow professionally from them, and when does it mean leaving them hanging and without the support that they actually need, and that they might even actively want?
• When does more actively hands-on mean actively helping and when does it more primarily become an otherwise avoidable challenge to those so “helped,” and of a form that undercuts those subordinates and limits their ability to do better on their own the next time?

I have in effect already at least partly addressed those questions from how I phrased them, when I raised the possibility of at least more minor mistakes, delays and related learning curve inefficiencies, and by implication the possibility of more impactfully significant challenges that would require more immediate and effective response and resolution too.

• If a new, more junior manager is slower than might be desired at first, when using a new-to-them administrative tool and its online screens, and when they are still figuring out where everything is in it on their own, it can be better to wait for them to ask questions if they hit a wall in that, instead of automatically, in effect, taking over. They might take a little longer at first in effectively completing the tasks that they would use this tool for. But the investment that you make as a more senior manager, in letting them learn this new tool as a matter of ingrained hands-on experience and at their own pace, can really pay off for you and for your business later on, and certainly if this means their learning their next new software tool that much faster, and if they learn this one better from having learned it by doing, too.
• If, on the other hand, that new more junior manager is on the brink of making a mistake that would create serious problems for a major business client, that would probably call for a more immediate and direct intervention.

But that at-least-categorical-level context, in which a step-in or step-back decision would be made, represents only one of several possible arenas where the questions that I raise here, as to how to better manage, actually arise. What are the work performance issues involved that a step-in or step-back decision would be made about? But just as importantly, who are the people involved in this, and what are the most productive ways for working with them, certainly when everything is not moving ahead like clockwork for them?

• And it is important to note in this context, that addressing the who side of that, can and generally does call for more individualized management approaches and more flexible ones at that, than a focus on business tasks and goals would call for, and certainly as a general rule.
• Business tasks and goals, and certainly as organized and called for from a big picture perspective, are laid out in business plans in place, or at least in effectively drafted ones. They are formally understood for what they would accomplish and how, at least for an organized and efficient business and for its ongoing business systems.
• But few if any businesses have anything like formal guidelines in place for working more effectively with others, depending on their personalities and on what specific ways of completing tasks work best for them, or for addressing how they would work when facing special-to-them needs: short term and time-limited or ongoing. Only a few special exception circumstances, such as parental need guidelines and disabilities accommodations, stand out as exceptions there.

Actually addressing the issues raised here as a senior manager, means thinking through the tasks and goals involved and the priorities that they carry, while also thinking and acting with a matching awareness of the other people involved in carrying them out – as well as maintaining an active awareness of other involved parties, including third party stakeholders who need to have the tasks involved, completed correctly and on time. And at least as importantly, this also means better understanding ourselves as the senior managers in charge in this too.

• Management is about organizing and coordinating what has to be done, to get it done and as smoothly and effectively, and cost-effectively as possible. But it is also about working with and enabling the people involved in carrying this work out. Managers are people, who work with, and in this context supervise other people.

Personality and management style, as shaped by it, enter this narrative here, and a need to be able and willing to work with others in ways that they can be positively receptive to, and in ways that can help bring out their best. This means finding the right balance between challenge to perform, and the opportunity for professional growth that the right types and amounts of such challenge can foster, while giving others both the opportunity and the tools needed to get their work done, even as they learn from trying and doing.

I have seen way too many managers who do not allow for any error or delay (from others). And that lack of flexibility and yes – lack of adaptability, makes it all the easier to fall into one or the other of the two chasms of problematical management that I have been discussing here: hands-off that can and will leave subordinates twisting in the wind, or its overly involved counterpart of longer-term performance-thwarting micromanagement. And this brings me to the final point that I would raise here in this brief note: the final point of challenge that not finding the right balance between hands-off and hands-on can bring.

• Whether a senior manager leaves no room for errors or delays and even when a subordinate is learning, or when they are trying to navigate the unexpected or unusual,
• Or whether they make the mistake of stepping in too often and on problems and issues that do not genuinely call for their direct intervention,
• They make the path that they themselves would follow as a more senior manager in charge, that much more difficult too. The more they do this, and certainly if they vacillate between the two, the poorer their own work performance can be from their failure to focus on and expend effort, and time and other resources where their effort is really needed, and where it will have been expected in their own performance reviews.

And the types of problems that I write of here can radiate down the lines of a table of organization, from more senior managers on down. At its worst, poor managerial decision making of the type that I write of here can come to shape and damage entire corporate cultures and businesses as a whole, undermining morale throughout in the process.

I offer this posting with a goal of explicitly raising and outlining a type of management problem. And I will return to this topic area in future postings, with a goal of offering some thoughts on how to better address it.

In anticipation of my next posting on this, I note here that I have made a number of assumptions in this one that are true for many involved participants and across a range of real-world scenarios, but without their being universally true, or even close to that. As an example of that, I have assumed that all of the people involved in the scenarios that I have touched upon here, are good employees who can do their jobs effectively or even exceptionally well, even if they do face at least occasional learning curve slow-downs in that. And I have assumed that such learning would be more autodidactic in nature. But not all employees are as effective as others and not all show the same levels of potential for developing into the good or even great there. And some need and really benefit from more formal training and particularly on more complex training issues.

The devil, it is said, is in the details and that definitely applies for the issues and at least potential problems that I write of here. And the detail-of-necessity nature of the issues that I raise here, explains at least in part why this is not necessarily a topic area that is addressed as effectively as might be needed in at least most MBA and related degree programs. The details that arise here are all experience based, if they are to be fully learned and understood. My goal here is to offer tools that might help to shorten this type of learning curve. And I will continue this effort in a next installment to what will become a short series.

Meanwhile, you can find this and related postings and series at Page 4 to my Guide to Effective Job Search and Career Development, with this put into its addendum section (and also see its Page 1, Page 2 and Page 3.) And you can also find this at Social Networking and Business 2 (and also see its Page 1), and at HR and Personnel – 2 (and see its Page 1.)

Building a startup for what you want it to become 38: moving past the initial startup phase 24

Posted in startups by Timothy Platt on June 17, 2019

This is my 38th installment to a series on building a business that can become an effective and even a leading participant in its industry and its business sector, and for its targeted marketplaces (see Startups and Early Stage Businesses and its Page 2 continuation, postings 186 and loosely following for Parts 1-37.)

I often refer to a business’ financials and its cash availability and flow as representing an equivalent to its life’s blood – vitally essential to its life and with any real interruptions in it quickly leading to business failure and even outright business death. And loss there, as traditionally documented in red ink, simply illustrates the general validity of this understanding as it has more generally been held by others too. Information and as both raw business and marketplace data and as processed actionable knowledge, can also be considered vitally essential to any business and on an equally ongoing and pressingly impactful basis. So if a business’ cash flow and related financials can be seen as being analogous to its blood supply, think of its information flow and availability, and access to accurate timely information at that, as being comparable to the air that that business would breathe if it were a living organism in a biological sense.

I have been addressing a set of risk management and related due diligence issues in recent installments to this series, that all directly involve the challenges of information development, use, storage and sharing, where business intelligence per se has become an increasingly valuable and sought-after marketable commodity in its own right. And the points of observation and conclusion that I have been raising and addressing here can only become more valid and more consequentially important:

• And both for their ranges of applicability within specific organizations
• And across all industries and business types that they would be included in,
• And for how their information management practices impact on the markets and the people who enter into them that those businesses ultimately all do business with.

I focused in the immediately preceding installment to this narrative progression: Part 37, on an increasingly pressing challenge that all information requiring businesses will come to face if they have not done so already.

• Legal requirements and restrictions as well as business ethics concerns, demand that personally identifiable and other sensitive information regarding customers and employees among others, that might cause harm to them if made openly publicly available, must be protected.
• And I stress the importance of the ethical side to this imperative here, as a failure to safeguard sensitive, potentially risk-creating personal information can create marketing and image challenges for an inattentive business that can cost it financially and much longer-term than any regulatory agency-demanded monetary fine could. If a business comes to be seen in its target markets as being unreliable there and unsafe to do business with for its failures to safeguard information such as credit card numbers, potential customers who know of that failing will at the very least think twice before doing business with them and giving them those credit card numbers. And sales transactions with them are likely to slow down or stop as a consequence.
• Information, as noted above can be thought of as the air that a business breathes. And a failure to safeguard it, and a public awareness of that failure can become a noose around that business’ neck. But at the same time, businesses that seek to remain competitive find themselves in races to acquire and more effectively make use of seemingly ever-increasing volumes of this air: this all important information and both for more immediate transactional purposes as for example when developing effective customer relations and at point of sale events, and for their overall business planning and its execution.
• And that led me to a direct consideration of the challenge that I discussed in at least broad outline in Part 37 where the bigger and the more effectively, actionably organized and processed, big data becomes, the more of a mirage any attempt to anonymize individually sourced raw data that is included in it becomes.

I repeat and stress the above, both to allow for smoother continuity of narrative in this series and to make this posting more meaningful as a stand-alone narrative. And I also repeat, and expand on, what I have already written of in this blog on this matter, because the challenge that I am raising and at least briefly addressing here is one of the most important ones that businesses are increasingly going to face, and head-on, as this 21st century advances.

• Businesses increasingly face a conundrum here, from the conflicting needs they face to simultaneously wring as much possible descriptive and predictive value from the information that they hold as they possibly can,
• While at the same time limiting what can be inferred from it in understanding the people this data comes from, so as to explicitly protect their personal privacy and certainly as effective use of this information requires its sharing among wider ranges of potential stakeholders.

I offered a necessary part of any realistic resolution of this conundrum, when I noted the importance of actually checking to see how the progressively more inclusive big data systems in place in a business, might actually compromise any data anonymization and related risk minimization efforts that are also in place, through intentional effort made to “break” that anonymization as a risk management exercise. And that is certainly important for any business that initially develops those information resources, and certainly if any of their in-house developed and maintained data might go out of its doors, and either as marketable, profit generating commodities or as transactional data shared with supply chain or other partner businesses when carrying out specific sales transaction and related activities. But it also applies to acquiring businesses and certainly where they might intentionally or inadvertently further share this, and where their big data accumulation, aggregation and processing might further degrade any still-effective data source anonymization that was still in place from its various individual, more original sources.
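The intentional "break the anonymization" exercise described above often comes down to a linkage attack: joining a redacted data set to separately available public data on quasi-identifiers that were never treated as sensitive on their own. The following sketch illustrates that mechanism with entirely hypothetical records; the field names and data are stand-ins, not any particular business's schema.

```python
# Illustrative sketch (hypothetical data): how a risk management exercise
# might "break" anonymization by linking redacted records back to named
# individuals via quasi-identifiers (zip code, birth year, gender).

# "Anonymized" records: direct identifiers removed, quasi-identifiers kept.
anonymized = [
    {"zip": "10012", "birth_year": 1984, "gender": "F", "purchase": "insulin"},
    {"zip": "10012", "birth_year": 1990, "gender": "M", "purchase": "books"},
]

# Separately available public data (e.g. a voter roll or social profile).
public = [
    {"name": "A. Smith", "zip": "10012", "birth_year": 1984, "gender": "F"},
    {"name": "B. Jones", "zip": "10013", "birth_year": 1990, "gender": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "birth_year", "gender")):
    """Join the two data sets on quasi-identifier keys; a unique match
    re-attaches a name to a supposedly anonymous record."""
    matches = []
    for a in anon_rows:
        hits = [p for p in public_rows if all(p[k] == a[k] for k in keys)]
        if len(hits) == 1:  # a unique combination means re-identification
            matches.append({"name": hits[0]["name"], **a})
    return matches

print(reidentify(anonymized, public))
# The first record's zip/birth-year/gender combination is unique, so a
# sensitive purchase becomes attached to a name.
```

The point of running this kind of test in-house is that every additional data set aggregated in makes unique quasi-identifier combinations more likely, which is why anonymization that held up at the sourcing business can fail at an acquiring one.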

I said at the end of that posting that I would turn here to more fully consider the three basic participant classes that enter into all of this, in light of the issues that I raise here:

• Data sourcing and providing businesses (which might or might not actually be data aggregating, developing and selling businesses as determined by their business models),
• Data acquiring and using businesses, and
• The original sources of all of this data with that ultimately coming to a large degree from individual consumers and customers.

I in fact begin this next step analysis here with businesses that explicitly gather in, aggregate, develop and sell individually sourced data as at least a key part of their business models, as explicitly cited in my anticipatory note as offered at the end of Part 37. And I do so at least in part with a goal of explaining why the white hat hacker approach to testing and validating any data anonymization system in place in a business, as touched upon in Part 37, cannot succeed, and certainly if it is employed as a stand-alone solution to this problem and not simply as one brick in a larger and more inclusive edifice. And I begin this with what has become the publicly visible poster child if you will, for how not to behave as a business as far as personally sourced information is concerned: Facebook.

I begin that line of discussion by at least attempting to expose and perhaps explode what has become something of an overly simplistic and even toxic myth. When you look to the laws in place regarding personally identifiable, sensitive, risk-carrying information, and when you follow the ongoing public discussions as they more commonly address this challenge, you essentially always see the same small set of data types showing up. And I of course refer to social security numbers and related government-systems sourced personal identifiers, credit card numbers and the generally three-digit security codes that appear in conjunction with them, full names and addresses and phone numbers, etc., and precise healthcare and health status information, to add at least one general information category to this list. These data types and categories are important and they do in fact represent genuine sources of risk and exposure vulnerability, both for identity theft and for direct monetary value theft and for other immediately impactful risk-creating reasons. But it is a mistake to focus essentially entirely on this smaller set of possible high value targets for use and possible misuse. Ultimately the real risk can come from the cumulative amassing of vast and even seemingly open-ended amounts of individually sourced information that is in and of itself not sensitive and compromising, but that holds a potential for collectively causing harm.

I only touch on one aspect of that possible and progressively more likely exposure problem in my here-continuing discussion of data anonymization through selective redaction, as begun in Part 37. And I only point beyond that to one small part of how such big data can be used for harmful purpose when I go beyond credit card number exposure and the like as more commonly considered, to make note of what Cambridge Analytica did in its efforts to subvert elections in the United States and elsewhere, beginning in 2013 (and also see Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens.)

Facebook and its executive leadership have been called out on this, and more specifically for how their business practices in organizing, commoditizing and selling access to their members’ data made a Cambridge Analytica scandal both possible and even inevitable. But that is still only the now-visible tip of a much larger iceberg.

I am going to continue this discussion in a next series installment where I will, among other things discuss how Facebook incentivizes large, and even vast numbers of small businesses to use their platform for any online connectivity with their customers that they might enter into, in effect forcing those business’ customers to join Facebook if they need to online connect with these small business members. I will also discuss how Facebook sells information, and the impact of this on businesses that buy rights to it, and that buy advertising space on Facebook member pages that they target as members of specific market audiences based on this data. This will, among other things mean my specifically addressing the opportunities and challenges that startups and other newer businesses face as they make their due diligence, participate or not decisions here. And I will, of course, discuss the impact of all of this on individual Facebook members as they share more and more and more of their information through the site and with all of that going into Facebook’s marketable and sellable databases.

Facebook is currently, as of this writing, rolling out a new site design that is supposedly more privacy oriented and protective and that would be freed from at least a significant amount of the paying business sourced and other “friend”-deluge that floods most individual Facebook users’ pages now, drowning out any shared content that they might wish to see from people who they actually know. I will discuss their new website design roll-out too, where bottom line, data and member data in particular is still going to be Facebook’s most valuable marketable product and the most important source of revenue that they have too, and with its accumulation and sale still held as a central feature of their still ongoing business model from before.

Meanwhile, you can find this and related material at my Startups and Early Stage Businesses directory and at its Page 2 continuation.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 7

This is my 7th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-6.) And this is also my third posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Part 4, Part 5 and Part 6.)

I focused in Part 6 of this narrative on a briefly stated succession of development possibilities that all relate to how an overall next generation internet will take shape, one that is largely and even primarily driven, at least for the proportion of functional activity carried out in it, by artificial intelligence agents and devices: increasingly an internet of things, and of smart artifactual agents among them. And I began that with a continuation of a line of discussion that I began in earlier installments to this series, centering on four possible development scenarios as initially offered by David Rose in his book:

• Rose, D. (2014) Enchanted Objects: design, human desire and the internet of things. Scribner.

I added something of a fifth such scenario, or rather a caveat-based acknowledgment of the unexpected in how this type of overall development will take shape, in Part 6. And I ended that posting with a somewhat cryptic anticipatory note as to what I would offer here in continuation of its line of discussion, which I repeat now for smoother continuity of narrative:

• I am going to continue this discussion in a next series installment, where I will at least selectively examine some of the core issues that I have been addressing up to here in greater detail, and how their realized implementations might be shaped into our day-to-day reality. And in anticipation of that line of discussion to come, I will do so from a perspective of considering how essentially all of the functionally significant elements to any such system and at all levels of organizational resolution that would arise in it, are rapidly coevolving and taking form, and both in their own immediately connected-in contexts and in any realistic larger overall rapidly emerging connections-defined context too. And this will of necessity bring me back to reconsider some of the first issues that I raised in this series too.

The core issues that I would continue addressing here as follow-through from that installment, fall into two categories. I am going to start this posting by adding another scenario to the set that I began presenting here, as initially set forth by Rose with his first four. And I will use that new scenario to make note of and explicitly consider an unstated assumption that was built into all of the other artificial intelligence proliferation and interconnection scenarios that I have offered here so far. And then, and with that next step alternative in mind, I will reconsider some of the more general issues that I raised in Part 6, further developing them too.

I begin all of this with a systems development scenario that I would refer to as the piecewise distributed model.

• The piecewise distributed model for how artificial intelligence might arise as a significant factor in the overall connectiverse that I wrote of in Part 6 is based on current understanding of how human intelligence arises in the brain as an emergent property, or rather set of them, from the combined and coordinated activity of simpler components that individually do not display anything like intelligence per se, and certainly not artificial general intelligence.

It is all about how neural systems-based intelligence arises from lower level, unintelligent components in the brain and how that might be mimicked, or recapitulated if you will, through structurally and functionally analogous systems and their interconnections, in artifactual systems. And I begin to more fully characterize this possibility by more explicitly considering scale, and to be more precise, the scale of range of reach for the simpler components that might be brought into such higher level functioning totalities. And I begin that with a simple if perhaps somewhat odd sounding question:

• What is the effective functional radius of the human brain given the processing complexities and the numbers and distributions of nodes in the brain that are brought into play in carrying out a “higher level” brain activity, the speed of neural signal transmission in that brain as a parametric value in calculations here, and an at least order of magnitude assumption as to the timing latency to conscious awareness of a solution arrived at for a brain activity task at hand, from its initiation to its conclusion?

And with that as a baseline, I will consider the online and connected alternative that a piecewise distributed model artificial general intelligence, or even just a higher level but still somewhat specialized artificial intelligence would have to function within.

Let’s begin this side by side comparative analysis with consideration of what might be considered a normative adult human brain, and with a readily and replicably arrived at benchmark number: myelinated neurons as found in the brain send signals at a rate of approximately 120 meters per second, where one meter is equal to approximately three and a quarter feet in distance. And for simplicity’s sake I will simply benchmark the latency from the starting point of a cognitively complex task to its consciously perceived completion at one tenth of a second. This would yield an effective functional radius of that brain at 12 meters or 40 feet, or less – assuming as a functionally simplest extreme case for that outer range value that the only activity required to carry out this task was the simple uninterrupted transmission of a neural impulse signal along a myelinated neuron for some minimal period of time to achieve “task completion.”

An actual human brain is of course a lot more compact than that, and a lot more structurally complex too, with specialized functional nodes and complex arrays of parallel processor-organized, structurally and functionally duplicated elements in them. And that structural and functional complexity, and the timing needed to access stored information from memory and add new information back into it again as part of that task activity, slows actual processing down. An average adult human brain is some 15 centimeters long, or six inches front to back, so using that as an outside-value metric, and a radius as based on it of some three inches (roughly 7.5 centimeters), the structural and functional complexities in the brain that would be called upon to carry out that tenth of a second task would effectively reduce its effective functional radius some 160-fold from the speedy transmission-only outer value that I began this brief analysis with.

Think of that as a speed and efficiency tradeoff reduction imposed on the human brain by its basic structural and functional architecture and by the nature and functional parameters of its component parts, on the overall possible maximum rate of activity, at least for tasks performed that would fit the overall scale and complexity of my tenth of a second benchmark example. Now let’s consider the more artifactual overall example of computer and network technology as would enter into my above-cited piecewise distributed model scenario, or in fact into essentially any network distributed alternative to it. And I begin that by noting that the speed of light in a vacuum is approximately 300 million meters per second, and that electrical signals can propagate along a pure copper wire at up to approximately 99% of that value (it is the signal that travels at anything like that speed, not the individual electrons themselves).

I will assume for purposes of this discussion that photons in wireless networked and fiber optic connected aspects of such a system, and the electrical signals that convey information through more strictly electronic components of these systems, all travel on average at roughly that same round number maximum speed, as any discrepancy from it in what is actually achieved would be immaterial for purposes of this discussion, given my rounding off and other approximations as resorted to here. Then, using the same one tenth of a second task timing parameter of my above-offered brain functioning analysis, an outer limit transmission-only value for this system and its physical dimensions would suggest a maximum radius of some 30,000 kilometers, encompassing all of the Earth and all of near-Earth orbit space and more. And even if that benchmark were sped up a thousand-fold, to one tenth of a millisecond as might better fit an electronic computer context, the corresponding radius would still be some 30 kilometers. There, in counterpart to my simplest case neural signal transmission processing as a means of carrying out the above brain task, I assume that its artificial intelligence counterpart might be completed simply by the transmission of a single pulse of electrons or photons and without any processing step delays required.
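The back-of-envelope arithmetic behind these radius figures can be made explicit. This is a minimal sketch under the stated assumptions (radius equals signal speed times task latency, transmission-only as the simplest extreme case); the constants and latency values come from the discussion above.

```python
# Effective functional radius estimates: brain vs. artifactual network.
# Assumption (from the text): radius = signal speed * task completion latency,
# treating uninterrupted transmission as the simplest extreme case.

NEURAL_SPEED_M_S = 120.0   # myelinated neuron conduction, ~120 m/s
LIGHT_SPEED_M_S = 3.0e8    # speed of light in a vacuum, rounded

def functional_radius(signal_speed_m_s, task_latency_s):
    """Outer-limit radius reachable by a single uninterrupted signal
    within the task's completion window."""
    return signal_speed_m_s * task_latency_s

# Brain: one tenth of a second from task start to conscious completion.
brain_outer = functional_radius(NEURAL_SPEED_M_S, 0.1)   # 12.0 m

# The actual brain fits in roughly a 7.5 cm (three inch) radius, so its
# structure and processing impose roughly a 160-fold reduction on that
# transmission-only outer value.
reduction = brain_outer / 0.075                           # 160.0

# Network: at the same 0.1 s latency the transmission-only radius spans
# well beyond the Earth; even sped up 1000-fold to a 0.1 ms benchmark it
# still covers a metropolitan-scale region.
net_outer_slow = functional_radius(LIGHT_SPEED_M_S, 0.1)  # 3.0e7 m = 30,000 km
net_outer_fast = functional_radius(LIGHT_SPEED_M_S, 1e-4) # 3.0e4 m = 30 km

print(f"brain outer radius: {brain_outer} m (~{reduction:.0f}x reduction)")
print(f"network radius @ 0.1 s:  {net_outer_slow / 1000:.0f} km")
print(f"network radius @ 0.1 ms: {net_outer_fast / 1000:.0f} km")
```

The contrast in these two outer-limit numbers, 7.5 centimeters of usable radius for the brain against tens of kilometers or more for a network, is what makes the planet-spanning piecewise distributed model at least dimensionally plausible.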

Individual neurons can fire up to some 200 times per second, depending on the type of function carried out. And an average neuron in the brain connects to on the order of 1,000 other neurons through complex dendritic branching and the synaptic connections it leads to, with some neurons connecting to as many as 10,000 others and more. I assume that artificial networks can grow to that level of interconnection and beyond, and with levels of nodal connectivity brought into any potentially emergent artificial intelligence activity that might arise in such a system, that match and exceed those of the brain for complexity there too. That, at least, is likely to prove true for any of what with time would become the all but myriad organizing and managing nodes that would arise in at least functionally defined areas of this overall system, and that would explicitly take on middle and higher level SCADA-like command and control roles there.

Processing step delays of that sort would slow the effective task completion rate achievable, and reduce the maximum physical size of the connected network space involved here too, though probably not as severely as observed in the brain. Even today's low cost, readily available laptop computers can carry out on the order of a billion operations per second, and that number continues to grow as Moore's law continues to hold. So if we assume "slow" and lower priority tasks as well as more normatively faster ones for the artificial intelligence network systems that I write of here, it is hard to imagine realistic restrictions that would effectively limit such systems to volumes of space smaller than the Earth as a whole, and certainly when of-necessity higher speed functions and activities could be carried out by much more local subsystems, closer to where their outputs would be needed.
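That locality point can be made concrete with a small illustrative calculation of my own framing, not the posting's: at the quoted billion operations per second, light itself covers only a fraction of a meter per operation cycle, so tightly coupled high speed work cannot span large distances.

```python
# Illustrative calculation: how far a signal can travel during one
# operation cycle, at the laptop-class throughput quoted above.
C = 3.0e8          # signal speed, meters per second (rounded)
OPS_PER_SEC = 1e9  # ~1 billion operations per second

cycle_time_s = 1.0 / OPS_PER_SEC          # one nanosecond per operation
print(C * cycle_time_s)                   # -> 0.3 meters per cycle
```

At that scale, any computation that must exchange signals every few cycles is confined to a space measured in centimeters to meters, which is exactly why higher speed functions would have to run in local subsystems.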

And to increase the expected efficiencies of these systems, brain and artificial network alike, effectively re-expanding their functional radii again, I repeat and invoke a term and a design approach that I used in passing above: parallel processing. That, together with the inclusion of specialized subtask-performing nodes, is where breaking up a complex task into smaller, faster-to-complete subtasks, whose individual outputs can be combined into a completed overall solution or resolution, can speed up overall task completion by orders of magnitude for many types of tasks, allowing more of them to be carried out within any given nominally expected benchmark time for expected "single" task completions. This of course also allows for faster completion of larger tasks within that type of performance measuring timeframe window too.
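The payoff and the limits of that subtask-splitting approach can be sketched with Amdahl's law, a standard formulation that I bring in here for illustration only; the parallelizable fraction and node counts below are hypothetical values, not figures from the discussion above.

```python
# Amdahl's-law sketch of parallel speedup: a task is split across n
# subtask-performing nodes, where fraction p of the work can actually
# be parallelized and the rest must remain serial.
def speedup(p: float, n: int) -> float:
    """Overall speedup from running fraction p of a task on n nodes."""
    return 1.0 / ((1.0 - p) + p / n)


# With 95% of the work parallelizable (a hypothetical figure):
print(round(speedup(0.95, 10), 2))    # 10 nodes  -> 6.9
print(round(speedup(0.95, 1000), 2))  # 1000 nodes -> 19.63
```

Note the design lesson: past a point, adding nodes yields diminishing returns, because the residual serial fraction of the task comes to dominate total completion time.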

• What I have done here, at least in significant part, is to lay out an overall maximum connected systems reach that could be applied to the completion of tasks at hand, in either a human brain or an artificial intelligence-including network. And the limitation on accessible volume of space there correspondingly sets an outer limit on the maximum number of functionally connected nodes that might be available, given that they all of necessity have space filling volumes that are greater than zero.
• When you factor in the average maximum processing speed of any information processing nodes or elements included there, this in turn sets an overall maximum, outer limit value to the number of processing steps that could be applied in such a system, to complete a task of any given time-requiring duration, within such a physical volume of activity.
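Those two bullet points can be combined into a single, admittedly crude outer-bound calculation. Every number below is a placeholder assumption of mine for illustration, not a measured or claimed value:

```python
# Hypothetical outer bound on total processing steps available within a
# task window, given a finite reachable volume, a minimum per-node
# volume, and a per-node processing speed. All inputs are placeholders.
import math

radius_m = 3.0e7           # ~30,000 km maximum connected reach
node_volume_m3 = 1.0e-3    # assume each node occupies at least one liter
ops_per_node_per_s = 1e9   # per-node processing speed
task_window_s = 0.1        # benchmark task completion window

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3   # reachable sphere
max_nodes = volume_m3 / node_volume_m3              # space-filling limit
max_steps = max_nodes * ops_per_node_per_s * task_window_s

print(f"{max_nodes:.2e} nodes, {max_steps:.2e} processing steps max")
```

The absolute numbers matter far less than the structure of the bound: finite reach caps node count, and node count times per-node speed times the task window caps the total work that can possibly be brought to bear on a single task.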

What are the general principles, beyond that set of observations, that I would return to here, given this sixth scenario? I begin addressing that question by noting a basic assumption that is built into the first five scenarios as offered in this series, and certainly into the first four of them: that artificial intelligence per se resides as a significant whole in specific individual nodes. I fully expect that this will prove true in a wide range of realized contexts, as that possibility is already becoming a part of our basic reality now, with the emergence and proliferation of artificial specialized intelligence agents. But as this posting's sixth scenario points out, that assumption is not the only one that might be realized. And in fact it will probably only account for part of what will come to be seen as artificial intelligence as it arises in these overall systems.

The second additional assumption that I would note here is that of scale and complexity, and how fundamentally different types of implementation solutions might arise, and might even be possible, strictly because of how they can be made to work with overall physical systems limitations such as the fixed and finite speed of light.

Looking beyond my simplified examples as outlined here, brain-based and artificial alike: what is the maximum effective radius of a wired AI network that would, as a distributed system, come to display true artificial general intelligence? How big a space would have to be tapped into for its included nodes to match a presumed benchmark human brain performance at the threshold of cognitive awareness and functionality? And how big a volume of functionally connected nodal elements could be brought to bear for this? Those are open questions, as are their corresponding scale parameter questions for "natural" general intelligence per se. I would end this posting by simply noting that disruptively novel technologies and technology implementations that significantly advance the development of artificial intelligence per se, and of artificial general intelligence in particular, are likely to improve both the quality and functionality of the individual nodes involved, regardless of which overall development scenarios are followed, and their capacity to synergistically network together.

I am going to continue this discussion in a next series installment where I will step back from considering specific implementation option scenarios, to consider overall artificial intelligence systems as a whole. I began addressing that higher level perspective and its issues here, when using the scenario offered in this posting to discuss overall available resource limitations that might be brought to bear on a networked task, within given time to completion restrictions. But that is only one way to parameterize this type of challenge, and in ways that might become technologically locked in and limited from that, or allowed to remain more open to novelty – at least in principle.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Addendum note: The above presumptive end note added at the formal conclusion of this posting aside, I actually conclude this installment with a brief update to one of the evolutionary development-oriented examples that I in effect began this series with. I wrote in Part 2 of this series, of a biological evolution example of what can be considered an early technology lock-in, or rather a naturally occurring analog of one: of an ancient biochemical pathway that is found in all cellular life on this planet: the pentose shunt.

I add a still more ancient biological systems lock-in example here that in fact had its origins in the very start of life itself as we know it, on this planet. And for purposes of this example, it does not even matter whether the earliest genetic material employed in the earliest life forms was DNA or RNA in nature for how it stored and transmitted genetic information from generation to generation and for how it used such information in its life functions within individual organisms. This is an example that would effectively predate that overall nucleic acid distinction as it involves the basic, original determination of precisely which basic building blocks would go into the construction and information carrying capabilities of either of them.

All living organisms on Earth, with a few viral exceptions, employ DNA as their basic archival genetic material, and use RNA as an intermediary in accessing and making use of the information stored there. Those exceptional viruses use RNA for their own archival genetic information storage, relying on the DNA replicating and RNA fabricating machinery of the host cells they live in to reproduce. And the genetic information included in these systems, certainly at a DNA level, is all encoded in patterns of subunits called nucleotides that are linearly laid out in the DNA design. Life on Earth uses combinations of four possible nucleotide bases for this coding and decoding: adenine (A), thymine (T), guanine (G) and cytosine (C). And it was presumed, at least initially, that the specific chemistry of these four possibilities made them somehow uniquely suited to this task.

More recently it has been found that there are other possibilities that can be synthesized and inserted into DNA-like molecules, with the same basic structure and chemistry, that can also carry and convey this type of genetic information stably and reliably (see for example: Hachimoji DNA and RNA: a genetic system with eight building blocks).

And it is already clear that this indicates only a small subset of the information coding possibilities that might have arisen as alternatives, before the A/T/G/C genetic coding became locked in, in practice, in life on Earth.
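One way to quantify that coding capacity point, as a simple illustration of my own and assuming equally likely letters: each position in a four letter genetic alphabet carries two bits of information, while each position in an eight letter hachimoji alphabet carries three.

```python
# Information capacity per position in a genetic code, assuming an
# alphabet of equally likely letters: log2(alphabet size) bits.
from math import log2

print(log2(4))  # standard A/T/G/C alphabet -> 2.0 bits per position
print(log2(8))  # eight-letter hachimoji alphabet -> 3.0 bits per position
```

The same sequence length can thus, in principle, carry half again as much information under the expanded alphabet, underscoring how much coding possibility space the historical lock-in left unexplored.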

If I could draw one relevant conclusion from this still unfolding story to share here, it is this: if you want to find technology lock-ins, or their naturally occurring counterparts, look to your most closely and automatically held developmental assumptions, and certainly those that you cannot rigorously justify from first principles. Then question the scope of relevance and generality of your first principles themselves, for the hidden assumptions that they carry within them.

Some thoughts concerning a general theory of business 29: a second round discussion of general theories as such, 4

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on June 11, 2019

This is my 29th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and its Page 2 continuation, Section IX for Parts 26-28.)

I began this series in its Parts 1-8 with an orienting discussion of general theories per se, offering an initial analysis of compendium model theories and of axiomatically grounded general theories as a conceptual starting point for what would follow. I then turned, in Parts 9-25, to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, one based on interpersonal interactions. Then I began a second round, next step discussion of general theories in Parts 26-28 of this, building upon that initial discussion, this time focusing on axiomatic systems and on axioms per se and the presumptions that they are built upon.

More specifically, I have used the last three postings to that progression to at least begin a more detailed analysis of axioms as assumed and assumable statements of underlying fact, and of general bodies of theory that are grounded in them, dividing those theories categorically into two basic types:

• Entirely abstract axiomatic bodies of theory that are grounded entirely upon sets of a priori presumed and selected axioms. These theories are entirely encompassed by sets of presumed fundamental truths: sets of axiomatic assumptions, as combined with complex assemblies of theorems and related consequential statements (lemmas, etc) that can be derived from them, as based upon their own collective internal logic. Think of these as axiomatically closed bodies of theory.
• And theory specifying systems that are axiomatically grounded as above, with at least some a priori assumptions built into them, but that are also at least as significantly grounded in outside-sourced information too, such as empirically measured findings as would be brought in as observational or experimental data. Think of these as axiomatically open bodies of theory.

I focused on issues of completeness and consistency in these types of theory grounding systems in Part 28, and briefly outlined there how the first of those two categorical types of theory cannot be proven both fully complete and fully consistent, if it can be expressed in an enumerable form rich enough to include the axiomatic underpinnings of arithmetic: the most basic of all areas of mathematics, as formally axiomatically laid out by Whitehead and Russell in their seminal work, Principia Mathematica.

I also raised and left open the possibility that the outside validation provided in axiomatically open bodies of theory, as identified above, might afford alternative mechanisms for de facto validation of completeness, or at least consistency in them, where Kurt Gödel’s findings as briefly discussed in Part 28, would preclude such determination of completeness and consistency for any arithmetically enumerable axiomatically closed bodies of theory.

That point of conjecture began a discussion of the first of a set of three basic, and I have to add essential topics points that would have to be addressed in establishing any attempted-comprehensive bodies of theory: the dual challenges of scope and applicability of completeness and consistency per se as organizing goals, and certainly as they might be considered in the contexts of more general theories. And that has left these two here-repeated follow-up points for consideration:

• How would new axioms be added into an already developing body of theory, and how and when would old ones be reframed, generalized, limited for their expected validity and made into special case rules as a result, or be entirely discarded as organizing principles there per se.
• Then after addressing that set of issues I said that I will turn to consider issues of scope expansion for the set of axioms assumed in a given theory-based system, and with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

My goal for this series installment is to at least begin to address the first of those two points and its issues, adding to my already ongoing discussion of completeness and consistency in complex axiomatic theories while doing so. And I begin by more directly and explicitly considering the nature of outside-sourced, a priori empirically or otherwise determined observations and the data that they would generate, that would be processed into knowledge through logic-based axiomatic reasoning.

Here, and to explicitly note what might be an obvious point of observation on the part of readers, I would as a matter of consistency represent the proven lemmas and theorems of a closed body of theory such as a body of mathematical theory, as proven and validated knowledge as based on that theory. And I correspondingly represent open question still-unproven or unrefuted theoretical conjectures as they arise and are proposed in those bodies of theory, as potentially validatable knowledge in those systems. And having noted that point of assumption (presumption?), I turn to consider open systems as for example would be found in theories of science or of business, in what follows.

• Assigned values and explicitly defined parameters, as arise in closed systems such as mathematical theories with their defined variables and other constructs, can be assumed to represent absolutely accurate input data. And that, at least as a matter of meta-analysis, even applies when such data is explicitly offered and processed through axiomatic mechanisms as being approximate in nature and variable in range; approximate and variable are themselves explicitly defined, or at least definable in such systems applications, formally and specifically providing precise constraints on the data that they would organize, even then.
• But it can be taken as an essentially immutable axiomatic principle: one that cannot be violated in practice, that outside sourced data that would feed into and support an axiomatically open body of theory, is always going to be approximate for how it is measured and recorded for inclusion and use there, and even when that data can be formally defined and measured without any possible subjective influence – when it can be identified and defined and measured in as completely objective a manner as possible and free of any bias that might arise depending on who observes and measures it.

Can an axiomatically open body of theory somehow be provably complete or even just consistent for that matter, due to the balancing and validating inclusion of outside frame of reference-creating data such as experientially derived empirical observations? That question can be seen as raising an interesting at least-potential conundrum and certainly if a basic axiom of the physical sciences that I cited and made note of in Part 28 is (axiomatically) assumed true:

• Empirically grounded reality is consistent across time and space.

That, at least in principle, raises what amounts to an immovable object versus an irresistible force type of challenge. But as soon as the data that is actually measured, as based on this empirically grounded reality, takes on what amounts to a built-in and unavoidable error factor, I would argue that any possible outside-validated completeness or consistency becomes moot, at the very least, for any axiomatically open system of theory that might be contemplated or pursued here.

• This means that when I write of selecting, framing and characterizing and using axioms and potential axioms in such a system, I write of bodies of theory that are of necessity always going to be works in progress: incomplete and potentially inconsistent and certainly as new types of incoming data are discovered and brought into them, and as better and more accurate ways to measure the data that is included are used.

Let me take that point of conjecture out of the abstract by citing a specific source of examples, drawn from physical theories as solidly established as any we have. I begin with Newtonian physics. It was developed at a time when experimental observation was limited, both in the range of phenomena observed and in the levels of accuracy attainable for what was observed and measured. It was therefore impossible to empirically record the types of deviation from expected sightings that would call for new and more inclusive theories, with new and altered underlying axiomatic assumptions, as subsequently arrived at in the special theory of relativity as found and developed by Einstein and others. Newtonian physics neither calls for nor accommodates anything like the axiomatic assumptions of the special theory of relativity: holding, for example, that the speed of light is constant in all frames of reference. More accurate measurements, taken over wider ranges of experimental examination of observable phenomena, forced change to the basic underlying axiomatic assumptions of Newton (e.g. his laws of motion). And further expansion of the range of phenomena studied, and of the level of accuracy with which data is collected from all of this, might very well lead to the validation and acceptance of still more widely inclusive basic physical theories, with whatever changes in their foundational axiomatic presumptions that would entail. (Discussion of alternative string theory models of reality, among other possibilities, comes to mind here, where experimental and observational limitations of the types that I write of are such as to preclude any real culling and validation, toward arriving at a best possible descriptive and predictive model theory.)

At this point I would note that I tossed a very important set of issues into the above text in passing, and without further comment, leaving it hanging over all that has followed it up to here: the issues of subjectivity.

Data that is developed and tested for how it might validate or disprove proposed physical theory might be presumed to be objective, as a matter of principle. Or alternatively, and as a matter of practice, it might be presumed possible to obtain such data that is arbitrarily close to being fully free of systematic bias, as based on who is observing and what they think about the meaning of the data collected. And the requirement that experimental findings be independently replicated by different researchers in different labs and with different equipment, certainly where findings are groundbreaking and unexpected, serves to support that axiomatic assumption as being basically reliable. But it is not as easy, or as conclusively presumable, to assume that type of objectivity for general theories that of necessity have to include within them individual human understanding and reasoning, with all of the additional and largely unstated axiomatic presumptions that this brings with it, as exemplified by a general theory of business.

That simply adds whole new layers of reason to any argument against presumable completeness or consistency in such a theory and its axiomatic foundations. And once again, this leaves us with the issues of such theories always being works in progress, subject to expansion and to change in general.

And this brings me specifically and directly to the above-stated topics point that I would address here in this brief note of a posting: the determination of which possible axioms to include and build from in these systems. And that, finally, brings me to the issues and approaches that are raised in a reference work that I have been citing in anticipation of this discussion thread for a while now in this series, and an approach to the foundation of mathematics and its metamathematical theories that this and similar works seek to clarify if not codify:

• Stillwell, J. (2018) Reverse Mathematics: Proofs from the Inside Out. Princeton University Press.

I am going to more fully and specifically address that reference and its basic underlying conceptual principles in a next series installment. But in anticipation of doing so, I end this posting with a basic organizing point of reference that I will build from there:

• The more traditional approach to the development and elaboration of mathematical theory, going back at least as far as the birth of Euclidean geometry, was one of developing a set of axioms that would be presumed as if absolute truths, and then deriving emergent lemmas and theorems from them.
• Reverse mathematics is so named because it literally reverses that, starting with theorems to be proven and then asking what minimal sets of axioms would be needed in order to prove them.

My goal for the next installment to this series is to at least begin to consider both axiomatically closed and axiomatically open theory systems in light of these two alternative metatheory approaches. And in anticipation of that narrative line to come, this will mean reconsidering compendium models and how they might arise as the need for new axiomatic frameworks of understanding arises, and as established ones become challenged.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.

Dissent, disagreement, compromise and consensus 31 – the jobs and careers context 30

This is my 31st installment to a series on negotiating in a professional context, starting with the more individually focused side of that as found in jobs and careers, and going from there to consider the workplace and its business-supportive negotiations (see Guide to Effective Job Search and Career Development – 3 and its Page 4 continuation, postings 484 and following for Parts 1-30.)

I have been working my way through a to-address topics list since Part 25 that addresses a succession of workplace challenges and opportunities that can and often do arise when working for a business for any significant period of time. And my goal for this posting is to continue that process, completing my discussion, at least for purposes of this series, of Point 5 and continuing my discussion of Plan B approaches as I began addressing in that context. After that I will turn to and discuss Point 6 and both as an important source of relevant issues in its own right and to illustrate how the types of issues and approaches that I have been discussing in this series can and do fit together in real life.

To put what is to come here and what will follow this installment in clearer context, I begin by repeating this topics and issues list as a whole, with parenthetical references as to where I have already discussed its first five points:

1. Changes in tasks assigned, and resources that would at least nominally be available for them: timeline allowances and work hour requirements definitely included there (see Part 25 and Part 26),
2. Salary and overall compensation changes (see Part 27),
3. Overall longer-term workplace and job responsibility changes, and constraints box issues, as change might challenge or enable your reaching your more personally individualized goals there (see Part 28),
4. Promotions and lateral moves (see Part 29),
5. Dealing with difficult people (see Part 30),
6. And negotiating possible downsizings and business-wide events that might lead to them. I add this example last on this list because navigating this type of challenge as effectively as possible, calls for skills in dealing with all of the other issues on this list and more, and with real emphasis on Plan B preparation and planning, and on its execution too as touched upon in Part 23 and again in Part 30.

And with that orienting and series-connecting text in place I turn to further consider Plan B approaches, starting with a point of detail that might seem obvious:

• Standard and routine tasks, processes and work flows, as carried out by the people expected to do them, rarely call for negotiations of any sort, except insofar as it might prove necessary to argue a case against sudden disruptive change. But that exception is itself uncommon; most of us never in fact find ourselves having to negotiate that type of scenario, certainly given the day-to-day momentum of simply pursuing business as usual.
• So it can essentially be taken as a given that when negotiations of some sort are needed as to the what, how and who of work, at least one critically involved stakeholder in an involved part of the business sees need for change: for trying a more non-standard approach, or for reaching agreement on new goals or benchmarks that would be used to gauge and track the performance outcomes and results achieved.
• So as soon as a sufficiently compelling need arises to make negotiations per se tenable, or even necessary enough to pursue, the people involved are already facing what might be considered at least something of a Plan B situation: a shift to the less known and less comfortably familiar that comes of breaking away from normal routines at all. And when I write of Plan B approaches in this series and in this blog as a whole, I am primarily if not exclusively writing of situations where both the standard and routine, and the more obvious alternatives to it, would all fall by the wayside as not adequately meeting perceived needs.

I briefly outlined above an alternative approach that might, at least in principle, be attempted to avoid a Plan B requirement. But where negotiating an acceptable alternative to whatever would be the default cannot be made to work, a Plan B becomes necessary; that is a key defining feature of Plan B approaches as more stringently defined here and in my earlier writings to this blog.

I would start to more fully flesh out what I am discussing here as Plan B options, by picking up on and continuing discussion of a tactic that I raised later on in Part 30, and only made note of there for its potential risks:

• The negotiating tactic of selecting, where possible, who you actually have to and get to negotiate with, and certainly when attempts to work with the more obvious first choices for this, as based on their job titles and positions at the business, could not be made to work.

If you do attempt to work your way around one or more people who are legitimate stakeholders in whatever matters you would see need to negotiate over, and if they come to see you as having bypassed them because you would not like what they would have to say, then you run a significant risk of burning bridges that you might later find useful to have intact. And you will probably have created animosity of a type that can have radiating impact on your overall reputation at that employing business, certainly insofar as you would seek to be viewed as a supportively involved and connected team player.

Circumstances are important there, both as far as the ongoing actions, decisions and reputations of the people who you would not want to get involved in this are concerned, and as regards the people who you would turn to as alternatives in this type of negotiating context. As a perhaps obvious example: if the person with a gatekeeper, decision making title and position who you would want to avoid negotiating with has a terrible reputation for short sightedness and lack of professional capability, and you seek out alternatives who are well respected for getting the right things done, then a lot less harm is likely to arise than if you shift who you would negotiate with in the opposite direction.

But regardless of that type of consideration, assume that you and the people who you would prefer to negotiate with and those who you wish to avoid in this are all going to be around at that business, longer-term. And one way or the other you will have to deal with all of these other stakeholders and with their friends there and more, longer-term too.

I note the likely need for what amounts to bridge mending when negotiating around a difficult stakeholder and certainly in this type of longer-term context. And I point out in this context that as soon as I begin taking and proposing a longer timeframe approach to job performance as I do here, I am actually discussing careers and a longer-term career perspective as well.

• Plan B approaches: Plan B strategies and tactics and related negotiating for the long-term, always bring career considerations into your planning and into your follow-through of it. And that is even true, and it can even be particularly true when you find yourself more mentally oriented towards the here-and-now and when you are in the midst of job-level navigating, where that more immediate perspective and its imperatives might be more overtly pressing and attention demanding.

And with that last detail added to this posting’s narrative, I turn to the above repeated Point 6 of the to-address list that I have been working my way through here:

• Negotiating possible downsizings and business-wide events that might lead to them, with all of the issues and complications that this type of situation brings with it.

I am going to at least begin to explicitly discuss that complex of issues in my next installment to this series, simply repeating for now that this represents a type of challenge, and a type of opportunity, that brings essentially everything that I have been discussing here up to now into active consideration again. Meanwhile, you can find this and related material at Page 4 to my Guide to Effective Job Search and Career Development, and also see its Page 1, Page 2 and Page 3. And you can also find this series at Social Networking and Business 2 and also see its Page 1 for related material.

Innovation, disruptive innovation and market volatility 47: innovative business development and the tools that drive it 17

Posted in business and convergent technologies, macroeconomics by Timothy Platt on June 5, 2019

This is my 47th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business and its Page 2 continuation, postings 173 and loosely following for its Parts 1-46.)

I have been discussing innovation discovery, and the realization of value from it through applied development, through most of this series and as one of the primary topics considered here. And I have sought to take that line of discussion at least somewhat out of the abstract since Part 43, through an at least selective discussion and analysis of a specific case in point example of how this can and does take place:

• The development of a new synthetic polymer-based outdoor paint type as an innovation example, as developed by one organization (a research lab at a university), that would be purchased or licensed by a second organization for profitable development: a large paint manufacturer.

I focused for the most part on the innovation acquiring business that is participating in this, from Part 43 through Part 45, and turned to more specifically consider the innovation creating organization and its functioning in Part 46. And at the end of that installment and with this and subsequent entries to this series in mind, I said that I would continue from there by:

• Completing at least for purposes of this series, my discussion of this university research lab and outside for-profit manufacturer scenario.
• And I added that I will then step back to at least briefly consider this basic two organization model in more general terms, where for example, the innovating organization there might in fact be another for-profit business too – including one that is larger than the acquiring business and that is in effect unloading patents that do not fit into their own needs planning.
• I will also specifically raise and challenge an assumption that I just built into Part 46 and its narrative, regarding the value of scale in the innovation acquiring business in their being able to successfully compete in this type of innovation-as-product market.

And I begin addressing this topics list with the first of those bullet points and with my real-world, but nevertheless very specialized university research lab-based example. And I do so by noting a point of detail in what I have offered here up to now, that anyone who has worked in a university-based research lab has probably noted, whether as a graduate student, a post-doc or a lead investigator faculty member. Up to here, I have discussed both the innovation acquiring and the innovation providing organizations in these technology transfer transactions as if they were both simple monolithic entities. Nothing could be further from the truth, and the often competing dynamics that play out within these organizations are crucially important, as a matter of practice, to everything that I would write of here.

I begin this next phase of this discussion with the university side to that, and with the question of how grant money that was competitively won from governmental and other funding sources is actually allocated. For a basic reference on this, see Understanding Cost Allocation and Indirect Cost Rates.

Research scientists who run laboratories at universities as faculty members there, write and submit grant proposals as to what they would do if funded. And they support their grant funding requests for this, by outlining the history of the research project that they would carry out, and both to illustrate how their work would fit into ongoing research and discovery efforts in their field as a whole and to prove the importance of the specific research problems that they seek funding to work on, as they fit into that larger context. As part of that, they argue the importance of what they seek to find or validate, and they seek to justify their doing this work in particular and their receiving funding support for it, based on their already extant research efforts and the already published record of their prior and ongoing there-relevant research as can be found in peer reviewed journals.

They do the work needed to successfully argue the case for their receiving new grant funding for this research, and they carry out the voluminous and time consuming work needed to document that in grant applications. And they are generally the ones who have to find the funds needed to actually apply for this too (e.g. filing fees where they apply, and grant application-related office expenses). Then the universities that they work for demand and receive a percentage off the top of the overall funds actually received from this, that would go towards what are called indirect costs (and related administrative costs, etc., though many funding agencies that will pay these types of expenses under one name will not do so under another, so labels are important here).

My above-cited reference link points to a web page that focuses in its working example on a non-research grant-in-aid funding request, and on how monies received there would be allocated. But it does offer basic definitions of some of the key terms involved, which tend to be similar regardless of what such outside-sourced grant funding would be applied to, and certainly where payment to the institution as a whole is permitted under the range of labels offered.

And with that noted as to nomenclatural detail, the question of how funds received would be allocated can set up some interesting situations, as for example where a university that a productive research lab is part of might in general require a larger percentage of the overall funds received for meeting its indirect costs, than the funding agency offering those monies would allow. For a university-sourced reference to this, and to put those funding requirements in a numerically scaled perspective, see Indirect Costs Explanation as can be found as of this writing on the website of Northern Michigan University. Their approach and their particular fee scale here are in fact representative of what is found at most research supportive colleges and universities, and certainly in the United States. And they, to be specific but still fairly representative here, apply an indirect cost rate of 36.5% as their basic standard.

The Bill &amp; Melinda Gates Foundation, to cite a grant funding source that objects to that level of indirect cost expenditure, limits permitted indirect cost rates to 10% – a difference that can be hard to reconcile, and certainly as a matter of anything like “rounding off.” And that leads to an interesting challenge. No university would willingly turn away outside grant money, and certainly not from a prestigious source. But if they agree to accept such funds under terms that significantly undercut their usual indirect costs funding guidelines, do they run the risk of facing challenge from other funding sources that might have accepted their rates in the past but that no longer see them as acceptable? Exceptions there, and particularly when they are large-discrepancy exceptions, can challenge the legitimacy of higher indirect cost rates in place, in the eyes of other potential funding agencies too.

• Funding agencies support research, and have strong incentives to see as many pennies on the dollar of what they send out actually directly going towards the funding of that research. Excessive, or perceived as excessive, loss of granted funds to more general institutional support very specifically challenges that.
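The size of that rate gap is easier to see with a little arithmetic. What follows is a minimal Python sketch, not drawn from this posting itself, that assumes one common convention: the indirect rate is charged as a percentage of the direct costs funded, so for a fixed total award the direct portion works out to total / (1 + rate). (Some funders instead cap indirect costs as a share of the total award, and the exact convention varies by agency; the dollar figure used here is purely hypothetical.)

```python
def split_award(total_award: float, indirect_rate: float) -> tuple:
    """Split a fixed grant award into direct and indirect portions.

    Assumes the indirect rate is charged as a percentage of direct
    costs (one common convention; some funders apply it differently).
    """
    direct = total_award / (1.0 + indirect_rate)
    indirect = total_award - direct
    return round(direct, 2), round(indirect, 2)

# Compare a 36.5% university standard rate with a 10% funder cap,
# for a hypothetical $500,000 award.
for rate in (0.365, 0.10):
    direct, indirect = split_award(500_000, rate)
    print(f"rate {rate:>5.1%}: direct ${direct:>10,.2f}, indirect ${indirect:>10,.2f}")
```

Under this assumed convention, roughly 27 cents of every award dollar goes to institutional overhead at a 36.5% rate, versus about 9 cents at 10% – which is part of why a single large-discrepancy exception can be so visible to other funders.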

Universities that have and use the type of innovation development office that I wrote of in Part 46 for managing the sale or licensing of innovation developed on-campus to outside businesses and other organizations, generally fund them from monies gained from research grants-in-aid received, as payment made to them in support of allowed indirect expenses. And this makes sense, as they are university-wide research lab and research program-supportive facilities. But indirect expenses also cover utilities and janitorial services and even what amounts to rent for the lab space used – among other expenses.

To round out this example here, I add that one of the most important parts of any grant application is its budget documentation, in which it spells out as precisely as possible what monies received will be expended upon. This includes equipment and supplies and related expenses that would directly go towards fulfilling a grant application’s goals, but it also includes salaries for postdoctoral fellows who might work at that lab, and it usually includes at least part of the salary of the lead investigator faculty member who runs the lab too, as well as the salaries of any technicians employed there. And I freely admit that I wrote the above with at least something of a bias towards the research lab side of this dynamic, and at least in part because I also find the one third or more cut taken by the universities involved for their use, to be excessive. And this sentiment is reinforced by the simple fact that very little of the monies coming into such a university as a result of innovation sales or licensing agreements actually goes back to the specific labs that came up with those innovations in the first place, and certainly as earmarked shares of funds so received.

• Bottom line: even this brief and admittedly very simplified accounting of the funding dynamics of this example, as it plays out within a research supportive university and between that institution and its research labs and its lead investigators, should be enough to indicate that these are not simple monolithic institutions, and that they are not free of internal conflict over funding and its allocation.

Innovation acquiring businesses are at least as complex and certainly as different stakeholders and stakeholder groups view the cost-benefits dynamics of these agreements differently. And that just begins with the questions and issues of what lines on their overall budget would pay for this innovation acquisition and in competition with what other funding needs that would be supported there, and what budget lines (and functional areas of that business) would receive the income benefits of any positive returns on these investments that are received.

• Neither of these institutions can realistically be considered to be a simple monolith in nature, or be thought of as if everyone involved in these agreements and possible agreements were always going to be in complete and automatic agreement as to their terms.
• And these complex dynamics as take place at least behind the scenes for both sides to any such technology transfer negotiations, shape the terms of the agreements discussed and entered into, and help determine who even gets to see those negotiating tables in the first place.

I am going to continue this discussion, as outlined towards the top of this posting by considering a wider range of organizational types and business models here, and for both the innovation source and the innovation acquisition sides to these transfer agreements. And as part of that, I will at least begin to discuss the third to-address bullet pointed topic that I listed there, and organizational scale as it enters into this complex of issues. Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And also see Ubiquitous Computing and Communications – everywhere all the time 3 and that directory’s Page 1 and Page 2.

It’s not who you network connect with, it’s who you actually know and who knows you

Posted in social networking and business by Timothy Platt on June 4, 2019

I am in part at least, offering this note as a brief follow-up to a recent posting that I wrote to this blog on social networking strategy, where I focused on better practices for achieving specific goals from your online efforts (see It’s Not Just Who You Network Connect With, It’s How You Network With Them.) And as its title indicates it is objectives and priorities and purpose oriented.

But to round out this opening remark to this posting, I based that posting at least to a significant degree on a particular type of “who to network with” analysis, as developed around the particular individual networking approaches and strategies followed by others who you might try networking with (as outlined by type in a still-earlier posting here: Social Network Taxonomy and Social Networking Strategy.)

• If you want your social networking strategies and the practices that you use in pursuing them to work for you, you need to know how the people who you would network with, think and act there too. You need to know and understand what they would and would not do and what they would and would not favorably respond to when facing the prospects of initiating or continuing an active social networking relationship. You need to be able to mesh the What and How of your networking efforts with the basic strategies that they follow.
• Crucially importantly here, you should reach out to connect with others in ways that fit into their social networking comfort zones, as mapped out by their networking strategies and as demonstrated by any visible networking activities that you can see them carrying out online through the social networking sites that you would reach out to them through. This is certainly important if you intend to achieve your basic goals from your efforts there.

And I in turn offered that posting as an analytically reasoned line of argument in favor of open networking, as a means of finding the right people and being able to actively connect with them as needed. My focus there was on finding people who can help you find and open doors, and the more such doors the better. So if you peel back the layers to this briefly-stated progression of postings, and if you include others that also fit into this same pattern that can also be found at Social Networking and Business 1 and its Page 2 continuation, most of that is in fact grounded in the issues of who to network with, and in open networking per se.

To be clear here, I still see positive meaning and value in what I have successively offered in those and related postings. I have explicitly, directly gained from pursuing the strategies and tactics laid out there.

• I have in fact found people who I have needed to meet professionally through them, with unusual skills and experience that I have needed to tap into, who I would never have learned of let alone met, absent networking help from people who follow the types of open networking strategies that I cite in my above referenced social networking taxonomy posting.
• And I have found the networking reach that I have been able to develop through LinkedIn, to cite a particularly valuable resource here, to be an invaluable source of insight when doing my preparatory homework too, when for example working with and pitching for opportunities to work with businesses as a consultant. I have literally at times found myself walking into a room to meet with startup founders (to cite one source of possible examples here) where I have been able to uncover information and insight that would be of crucial importance to them concerning what they were doing and seeking to do, but that those founders did not know about. To take that out of the abstract, by way of real world example, there are times when I have learned of and about expected backers who a new venture’s founders were negotiating with, when they were seeking out angel investor support to jumpstart their effort. There, in those instances, I have been able to figure out who they were turning to for that type of support from my networking and from study of the personal profiles that I have been able to see and assemble insight from. I have been able to walk into those meetings knowing that they were in fact considering that type of move. And I arrived there knowing things about those specific potential backers and their professional backgrounds that the people I was meeting with did not know but would benefit from knowing.

No, that type of value creation from online social networking tools and their use was not and still is not a common situation, either for me or for any other consultants who I know. But it and similar instances can and do happen, and at least eventually, for anyone who effectively and systematically networks and who approaches that as a source of learning opportunity as well as a source of introduction opportunities. (And yes, it can take real thought as to how or even whether to reveal that type of knowledge, as simply throwing it on the table can easily kill a possible consulting opportunity for how off-putting that might be. Never, ever come across as not respecting the people who you would work with, even if you know up-front that they have not done their necessary due diligence – their homework – and even when gaps in what they know and should know would have significant adverse consequences if not addressed and remediated. So share selectively so as to enable conversations and follow-through, and save any further details for later, as areas that can be worked on, on the job there.)

So open business oriented social networking can work; it can offer really positive value, and value that can be shared at that, in creating mutually beneficial opportunities. But having acknowledged that, I also find myself looking at the accumulation of social networking “contacts” that piles up through processes such as Facebook friending, and thinking about that form of open networking too. And to pick up on, and continue a thought expressed in one of my above-cited earlier postings, I find myself thinking …

It’s not just who you network connect with, it’s how you network with them … but it is also really about who you network with too

… and in ways that cannot always simply and automatically be contained or defined by the types of reasoning and strategy that I lay out in postings such as my above cited taxonomy note.

So I find myself writing this as a thought piece on what arguably might be considered the challenge and the trap of Facebook friending, to focus here on that social networking venue, and on networking by the numbers … and even when doing so for explicit strategic and tactical reasons on your own part.

Let’s consider Facebook friending, and let’s consider posting to the walls of your contacts on a site such as Facebook, and the consequences and impact of their so posting to yours, as a direct manifestation of how friending works in practice. And let’s at least start with what can rapidly become all but overwhelming Facebook “wall” clutter. Personal home page clutter (“wall” clutter in Facebook terms) can become overwhelming, both on your own home page on a social networking site, and as essentially all of the social networking pages of people who you might need to connect with become flooded by incessantly steady streams of content, shared by and from others and often simply to share and be seen. And this point of concern certainly applies when that flood of content includes vast amounts of marketing and advertising material that is for the most part both unsolicited and unwanted by any real individual human site member. And yes, I express this in that way because sites such as Facebook, and Facebook in particular, appear to have more robo-members (artifactual fake account members, set up for trolling and other message manipulation purposes) than real accounts – and they post and post and post, and share and share and share too.

• Developing and maintaining a clear, effectively working focus on what you would network for and how and why, in the face of this deluge of distraction can quickly become impossible,
• And regardless of your intended social networking approach and strategy,
• And certainly when this same type and level of content flood is hitting anyone and everyone who you might want to reach out to, and at least as much as it does your own social networking site home page.

And if you need to reach out to, or find people through really widely open networkers of the type that I cite and discuss in my above-noted social networking taxonomy paper, then even if those networkers actively work to reduce the clutter on their own social networking site pages, and show a proportionately lower level of this type of deluge per contact than usual, the increased number of contacts that they make can still effectively kill any business oriented social networking strategy for you, and for your contacts too, and for them as well. Clutter, like background static, can kill any intended meaningful signal, and essentially every single time, certainly when and as its noise comes to overwhelm that signal in intensity.

Can a site such as Facebook offer business networking value? Yes, if you are an advertiser there, and if you can cost-effectively purchase access to a sufficient volume of site members who fit into effectively targeted market demographics that would constitute a viable market for you. And if you do not have reason to feel concerned about a possible downside from coming across as spamming a perhaps large percentage of the people who you do wall post to. But it does not and even fundamentally cannot offer a correspondingly positive value to you if your goal is to actually network there, initiating and cultivating a wide, effective reach of genuine two way conversations out of that effort. And this leaves out essentially all of the types of targeted networking that would go into promoting and advancing your jobs and careers efforts as successively discussed in this blog (see my Guide to Effective Job Search and Career Development and its Page 2, Page 3 and Page 4 continuations.)

It also leaves out a lot of the types of online social networking that you might seek to pursue if you work as an outside consultant, with a small or stand-alone shop, to cite one possible career path where effective online social networking can offer essential value. And it can effectively eliminate at least significant possible sources of value for small business participants too, unless, that is, they follow a strictly opt-in networking and posting approach with their customers, and do not come across as seeking to buy support or positive reviews.

A high volume and scale of business can compensate for these potential downsides for high volume businesses and advertisers, bringing value to them from Facebook wall sharing, and even as part of an overly open (e.g. non-opt-in here) networking strategy. But even there, returns on marketing and advertising investment are not guaranteed, and they will probably not actually come to match what a Facebook would claim as possible there. And other business social networking participant types cannot always realize anything like corresponding levels of positive value from this.

And with that offered, I return to reframe and complete my work in progress conclusion note for this posting, as already offered here in two iterations: once as a posting title and again in bullet pointed italics:

It’s not just who you network connect with, it’s how you network with them … but it is also really about who you network with too, where that can be fundamentally determined by where you network online, and by that venue’s business model and practices.

The title of this posting is “it’s not who you network connect with, it’s who you actually know and who knows you.” And ultimately, if you do not or cannot get this right, you cannot know the people you friend, or otherwise seek to network with and they cannot get to know you. And I finish this posting by acknowledging that Facebook is currently updating their user interfaces and at least some of their business practice details with an at least stated goal of addressing at least some of the challenges that I have raised here. But their basic business model is still centered on monetizing and selling access to their member users’ data and access to their eyeballs. So I do not expect the basic issues that I raise here to change, and certainly not any time soon.

You can find this posting and related material at Social Networking and Business and its Page 2 continuation.

Planning for and building the right business model 101 – 43: goals and benchmarks and effective development and communication of them 23

Posted in startups, strategy and planning by Timothy Platt on June 2, 2019

This is my 43rd posting to a series that addresses the issues of planning for and developing the right business model, and both for initial small business needs and for scalability and capacity to evolve from there (see Business Strategy and Operations – 3 and its Page 4 and Page 5 continuations, postings 499 and loosely following for Parts 1-42.) I also include this series in my Startups and Early Stage Businesses directory and its Page 2 continuation.

I have been discussing three specific possible early stage growth scenarios that a new business’ founders might pursue for their venture, in recent installments to this series, which I repeat here for smoother continuity of narrative as I continue addressing them:

1. A new venture that has at least preliminarily proven itself as viable and as a source of profitability can go public through an initial public offering (IPO) and the sale of stock shares. (See Part 33 and Part 34.)
2. A new venture can transition from pursuing what at least begins as if following an organic growth and development model (as would most likely at least initially be followed in exit strategy 1, above too) but with a goal of switching to one in which they seek out and acquire larger individually sourced outside capital investment resources, and particularly from venture capitalists. (See Part 35.)
3. And a new venture, and certainly one that is built around a growth-oriented business model, might build its first bricks and mortar site, in effect as a prototype effort that it would refine with a goal of replication through, for example a franchise system. (See Part 36 through Part 39.)

And more recently here, I have been analyzing and discussing these business development possibilities, for how they would address a set of three specific business performance requirements that are important for long-term success:

A. Fine tuning their products and/or services offered,
B. Their business operations and how they are prioritized and carried out, and certainly in the context of this Point A, and
C. Their branding and how it would be both centrally defined and locally expressed through all of this.

I at least preliminarily finished discussing the first of those due diligence issues: the above Point A, in Part 42, for all three of the above listed business development scenarios, briefly touching on Points B and C while doing so, for how the three functionally interconnect. My goal here is to more fully and specifically address Point B and its issues as they play out for the three business models under consideration here. And I begin by noting a point of comparison:

• The selection and fine tuning of what a business would bring to market as its defining source of revenue generating value might be influenced by outside forces, with that including market demands and pressures from their competitors. But ultimately, the most compelling drivers for this come from within the business itself, from its founders and owners and from its ongoing leadership, as codified by them in its ongoing business model. And standardization of marketable, sellable production tends to be held as essential. So for example, if a business bakes rolls, all of its product should be of the same size and shape, at least within very narrow preset limits, and all should be made according to the same recipe for any given roll type offered, with tight quality control over ingredients used. And all should be cooked the same way and to the same doneness, and all should be packaged, and be easy to package quickly for shipment and sale, in the same way too (bringing Point B into this narrative here too.)
• Higher level, internal to the business decision making plays a very significant role in the What and How of their business operations as actually carried out too. And that can mean standardized work performance patterns that significantly mirror those just noted for what the business would offer to its outside world. But when you look beyond the officially expected as detailed there, to see how business processes are actually carried out, you can often find that determined by a much wider range of in-house participants, with that larger stakeholder group also including mid and lower level managers and even non-managerial hands-on employees who actually carry out much of the work so specified. And this at least potential capacity for variation can be necessary and even essential if a business is to be flexible enough to accommodate change and the unexpected, and certainly as it might more locally arise, even in day-to-day operations. Setting aside the questions and issues of when variation here is a positive and when it is a negative, it does happen, and in ways and to degrees that would not be allowed for, or even make sense for products shipped out the doors, and certainly within a single business and within a single one of its production runs.
• And outside considerations can and do have very significant impact on the operational What and How of a business too, and with a higher level of functional significance than might be expected in a marketable product context, and for entire functional areas of a business. Consider Finances and the outside mandated requirements of generally accepted accounting principles (GAAP) as a perhaps best known example there, though outside regulatory forces can and in fact do reach into most if not all aspects of business operations, and for many types of businesses and for essentially all industries.

But for purposes of this discussion, let’s set aside outside-mandated, standardized demands and their pressures (as just exemplified here by adherence to GAAP) that all businesses would face, as leaving competitors on what at least in principle would still be a level playing field competitively. (Here, I set aside biased regulatory requirements that might for example explicitly favor larger already established businesses in an industry or sector, while thwarting possible new entrants there with new competing products or services that their more established peers would be less able to directly compete with.)

All three of the basic business scenarios that I have been discussing here since Part 33 are built around a premise of long-term stability and business strength, and all are built around a basic assumption of overall completeness in what they would carry out and be prepared to carry out operationally, in order to ensure that happening. So for example, I did not include here a fourth business development scenario, in which a business founder or founding team might build a new business venture with an explicit goal of selling it to some larger business entity through a mergers and acquisitions process, where:

• Some of the functional areas that they would assemble in it,
• And at least proof of principle development of all of its key marketable and sellable products or services might have to be very robust
• But where other more supportive capabilities might be left more vestigial as they would be provided by an acquiring entity that would be less inclined to have to pay for what to it, would be unnecessary duplications. (Think here in terms of founders who have ownership to a key next-step advancement patent or set of them, that seek to develop and prove the value of that holding in order to maximize its value to a buyer and its profitability to themselves.)

I offer this build-to-sell scenario in the context of at least briefly mapping out something of a fourth case in point business development alternative. But the core details included in my tripartite description of it also offer a simplified and even simplistic outline of what has become an increasingly complex, options-rich range of business development possibilities, and for a wide and growing range of business types: the three that I have primarily been addressing here included. And I at least acknowledge the legitimacy of that view of matters by posing some Point B oriented questions that the strategic decision makers of a new and forming, or more established and growing business might very well find themselves facing, which I phrase here as being asked of you, the reader:

• What are the core functionalities in this business that you would have to keep in-house, and develop and expand there as your business as a whole scales up (assuming once again that you would retain ownership of this enterprise and continue to lead it)?
• And what more supportive and ancillary functionalities and services might best be outsourced to third party provider specialists, where doing so would be more cost-effective and not carry additional new risk management issues? (And when and how might you do that and under what terms?)
• And focusing on functionalities and services that really should be maintained in-house, and particularly in distributed business systems, which of them should be managed from a home office or other more central facility and which of them should be managed and effectively owned, more locally?

At least aspects of what have traditionally been seen as Information Technology in a business are now routinely outsourced with that including server farms and enterprise-wide cloud storage and access management solutions, and the vast majority of web site and online presence support, or at least their more technical hardware and software underpinnings. And this type of outsourcing can and often does include call center operations and certainly for off-hours coverage when 24/7 customer assistance is a desired goal. And outsourcing can become an attractive option for at least areas of Human Resources and Personnel management too, to add a third increasingly common example of this here (see my series When and If It Might Make Sense to Outsource Human Resources, as can be found at HR and Personnel as postings 134 and following.)

I have in fact pushed this in-house versus outsourced dynamic to a lean and outsourced extreme in one of my earlier series: Virtualizing and Outsourcing Infrastructure, as can be found at Business Strategy and Operations as its postings 127 and loosely following (for its Parts 1-10.) More specific in-house versus outsource decisions become important in the types of context that I raise here, because they have stakeholder-responsibility and stakeholder-autonomy implications that can be shaped by a determination of precisely what type of business model and business development plan is in place. And to take that out of the abstract, consider the above-repeated business development Scenario 3, and at least potential areas for home office/parent company versus franchise facility/franchisee conflicts. What would and in fact should best be carried out and controlled system-wide from a parent company controlled and managed facility, and whether that means in-house maintained and operated or centrally contracted and managed-outsourced? And what should best be done by, and best be considered a responsibility of the individual franchisees in place and their local business outlets in these larger systems?

• Both overall cost-effectiveness and consistency, and market-facing standardization and the branding that serves as a public face to all of this, would enter into any realistic answer to those types of questions, and certainly as best for now solutions to them are to be sought out and implemented. And with that, I tie this back to the above stated performance Points A and C.

I am going to continue this narrative in a next series installment with an explicit focus on those market-facing issues, and with particular attention to Point C and its implicit questions and issues. And in anticipation of that discussion to come, I note here that we live and work in contexts that have increasingly come to expect and even demand social and environmental responsibility from businesses, and good corporate citizenship from them. So actually addressing the demands of Point C of necessity has to include active consideration of Point B issues as well as Point A ones.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And you can find this and related material at my Startups and Early Stage Businesses directory too and at its Page 2 continuation.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 18: some thoughts concerning how Xi and Trump approach and seek to create lasting legacies to themselves 6

Posted in macroeconomics, social networking and business by Timothy Platt on May 31, 2019

This is my 18th installment in a progression of comparative postings about Donald Trump’s and Xi Jinping’s approaches to leadership per se. And it is my 12th installment in that progression to focus on Trump and his rise to power in the United States, and on Xi and his in China, as they have both turned to authoritarian approaches and tools in their efforts to succeed there.

I focused on Donald Trump and his legacy oriented narrative in Part 14 and Part 15 of this, and have continued from there to correspondingly consider Xi Jinping and his legacy oriented actions and ambitions too:

• In Part 16 with its focus on the mythos and realities of China’s Qing Dynasty during its Golden Age, as a source of visionary legacy defining possibilities,
• And in what has followed, leading up to the reign of Mao Zedong as China’s first communist god emperor, with that narrative thread pointed to at the end of Part 16, and with an initial, more detailed discussion of it continuing on through Part 17.

I began that second narrative thread in 1830’s China as the Qing Dynasty began what became a slow but seemingly inexorable decline that led to its end in 1912 with the abdication of its last emperor: Puyi. And I focused in Part 17 on challenges and responses to them that China and its leadership faced through at least the first decades of that period, that can for the most part be seen as endemically Chinese and as arising from within their nation and their system of governance.

That, I explicitly note here, includes my Part 17 discussion of a source of challenge that in all fairness at least originated outside of the country and outside of anyone’s control there, either individually (as for example through the actions or decisions of an emperor) or collectively (as for example through the actions or decisions of a state bureaucracy that would, or would not function in accordance with the dictates of a more central authority in creating a commonly held, unified response to larger scale societal challenges faced.) That source of challenge consisted of the twin stressors of climate change and of environmental degradation as they adversely impacted upon agricultural productivity and the basic food supply that China’s many millions would turn to. The adverse climate changes that I wrote of came from outside of China, or at least from outside of any possible direct human control there, even as they took explicit shape there for how they specifically affected that nation and its peoples. But their government’s failure to effectively respond to this seemingly ever-expanding challenge, and in a way that might have at least limited its negative impact, was in fact endemic to the nation and its leadership. That failure of effective systematic response was human created and sustained.

And a great many of the environmental challenges faced, and certainly in China’s agriculturally most important lands, were in effect home grown too and even more so than any climate level changes were. But that only tells one half of this story; I focused in Part 17 on endemically Chinese pieces of the puzzle of what happened to end the Qing Golden Age and bring China as a whole into decline, and I have continued addressing that side of this here too, at least up to now in this posting. But China’s history and certainly since the Qing Dynasty cannot be understood, absent an at least equally complete narrative of and understanding of their relationship with the world around them. And I begin addressing that set of issues with some demographics and with what for purposes of this series and its narrative flow, can best be seen as old and even ancient history.

The Ming Dynasty ruled China from 1368 to 1644, and it was the first ethnically Chinese led dynasty to rule that nation in centuries. And it also proved itself to be the last ethnically Chinese dynasty to rule there, at least if “ethnically Chinese” is construed to mean Han Chinese; the Qing Dynasty that followed it as China’s last hereditary dynasty was led by ethnically distinct non-Han outsiders as well. But this is not the type of outside influence that I would primarily write of here when raising and addressing the issues of foreign influence and impact upon and within China.

Still, officially, according to the Beijing government of Xi Jinping, China currently contains within it 56 separate and distinct ethnic groups as of their Fifth National Population Census of 2000, with Han Chinese accounting for approximately 91.59% of the entire population and with the remaining 55 ethnic groups accounting for the remaining 8.41% (and also see this official government release on China’s 2010 national population census.)

Yes, there are grounds for debate there, where a variety of smaller ethnically distinguishable populations are not afforded separate recognition in those demographic surveys and their accompanying official analyses. And that lack of official recognition means a lack of legal protection of those ethnically distinctive groups and their peoples, as such. But even so, China includes within it a range of ethnic diversity that it does officially recognize and that it does offer officially protected status to, for their unique cultural identities. And the non-Han peoples that emperors of earlier dynasties sprang from, that have in their days ruled over China, have for the most part been assimilated as recognized minority groups in what is now the official 56. And they can be and are seen as belonging to a larger single, overall Chinese citizenry. China’s current government certainly sees matters that way, as is recurringly indicated by their efforts to retain and control and mainstream any and all ethnic diversity within their country, and certainly where that might be seen as representing separation in self identity that might become a push towards some form of independence.

And with that all noted, I raise three crucially important points:

• For all of the official acceptance and inclusion of the official census in China with its 56 culturally and ethnically distinct recognized groups, the Han Chinese are still considered in a very fundamental sense to be the only “true Chinese,” a status that members of the other 55 have never been afforded. They have always been seen as different and other, even when members of those groups have gained hereditary dynastic leadership over the country as a whole.
• And even as the Chinese government and the Chinese Communist Party that leads it recognize a controlled measure of diversity in their nation and its overall citizenry, they still consider as a matter of paramount importance that all Chinese citizens and of whatever ethnicity or cultural persuasion must be Chinese first and foremost, and that they must believe and act accordingly and with their primary loyalties aimed towards China’s one government and one Party.
• But at the same time, Han primacy of place as representing the true Chinese people, places very real practical day-to-day and ongoing constraints on what members of the other 55 minority groups can achieve, and both as measured by Party membership and opportunity to join, and by opportunity to advance up the Party’s ranks if allowed in as card carrying members. And these de facto restrictions have impact on status and opportunity in general, and throughout Chinese society. For a particularly striking example of how this plays out in practice, and certainly as of this writing, consider the restricted status and the restrictions on opportunity faced in China by their Uyghur minority today.

I am not addressing the issues of ethnic diversity here as a primary source of us-versus-them foreign impact on China. But I offer this discussion thread here in specific preparation for delving into that complex of history and ideas. And I do so because it would be impossible to fully understand that, let alone address it, absent a clearer understanding of what “us” means in China, with at least an outline awareness of something of the historically grounded nuances that enter into that determination. Are Han and Chinese synonymous? No, but there are contexts where they come close to that, even as Party and government call upon all Chinese nationals to be Chinese, and effectively entirely so and regardless of ethnicity or local cultural self-identity.

I will come back to reconsider the complex of issues that I have raised and at least briefly touched upon in this posting, later on in this series and its overall narrative, and certainly in the context of Xi’s within-China legacy building ambitions and actions. But for what is to more immediately follow now, I am going to focus on what might be considered true outsiders, some of whom as national and culturally distinct groups are and will remain outsiders and foreign nationals (e.g. European and American trade partners and their governments) and some of which, at least for my earlier historical references to come here, were eventually brought in and assimilated – but with nothing like that possible during the times under discussion. And I begin addressing that by turning at least closer to the beginning in China’s early history.

China has faced challenges from outside peoples and foreign cultures that go back at least as far as the construction of the first sections of fortifications that were eventually incorporated into their Great Wall (their 萬里長城), that were themselves initially built starting as far back as the 7th century BCE. (Construction of the Great Wall of China itself is generally dated as having been started by the historically acknowledged first true emperor of China: Qin Shi Huang (reigned as emperor 221–210 BCE), expansively building out from those earlier more locally limited protective efforts.) I say “… at least as far back” here because foreign attacks and incursions and even outright invasions, were a scourge to the people of then China for a long time before the building of those early walls and their supporting fortifications.

Stepping back from this China-focused narrative for a brief orienting note: I have written in this blog of Russia’s long history of invasion and threat of invasion and from many directions. See in that regard, my posting Rethinking National Security in a Post-2016 US Presidential Election Context: conflict and cyber-conflict in an age of social media 13, where I lay a foundation for discussing Russia’s current foreign policy and some of the essentially axiomatic assumptions that help to shape it from that nation’s past. Foreign invasion and threat of it have held powerful influence there for a great many centuries now. China is not unique in having faced foreign invasion and threat of it, any more than Russia is, or any of a wide range of other nations and peoples that I could cite here. But this history and this type of history is and has been an important source of influence in shaping China for its ongoing impact and persistence, and certainly over the years that I write of here, from the 1830’s on where threat and possibility became an ongoing reality.

I am going to continue this narrative in a next series installment where I will look at China’s international trade and other relations starting in the Qing Golden Age, and how they spiraled out of Chinese control for their side of all of that as the Qing Dynasty began to fail from its center outward and from its periphery inward. I will of course continue that narrative thread with an at least brief and selective discussion of the first Republic of China, as it formally existed from 1912 until 1949, when it was finally overthrown at the hands of Mao Zedong’s communist forces. And I will equally selectively discuss the Mao years of his People’s Republic of China, as he developed and envisioned it as a response to what had come before, and that in turn helped shape Xi Jinping into who he is today, with his legacy goals and ambitions and with the axiomatic assumptions that he brings to all of that.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, and at Ubiquitous Computing and Communications – everywhere all the time and Social Networking and Business 2.

Building a business for resilience 35 – open systems, closed systems and selectively porous ones 27

Posted in strategy and planning by Timothy Platt on May 30, 2019

This is my 35th installment to a series on building flexibility and resiliency into a business in its routine day-to-day decisions and follow-through, so it can more adaptively anticipate and respond to an ongoing low-level but with time, significant flow of change and its cumulative consequences, that every business faces in its normal course of operation (see Business Strategy and Operations – 3 and its Page 4 and Page 5 continuations, postings 542 and loosely following for Parts 1-34.)

I began working my way through a brief to-address topics list in Part 32 of this, that I repeat here for smoother continuity of narrative as I continue discussing its points:

1. Even the most agile and responsive and effectively inclusive communications capabilities can only go so far. Effective communications, and with the right people involved in them, have to lead to active effectively prioritized action if they are to matter, and with feedback monitoring and resulting reviews included.
2. Both reactive and proactive approaches to change and to effectively addressing it, enter in there and need to be explicitly addressed in rounding out any meaningful response to the above Point 1.
3. And I will at least begin to discuss corporate learning, and the development and maintenance of effectively ongoing experience bases at a business, and particularly in a large and diverse business context where this can become a real challenge.
4. In anticipation of that, I note here that this is not so much about what at least someone at the business knows, as it is about pooling and combining empirically based factual details of that sort, to assemble a more comprehensively valuable and applicable knowledge base.
5. And more than just that, this has to be about bringing the fruits of that effort to work for the business as a whole and for its employees, by making its essential details accessible and actively so for those who need them, and when they do.
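Points 4 and 5 of that list can be illustrated in miniature. The following is a purely hypothetical sketch, not a description of any real system: it shows individually held, empirically grounded details being pooled into one shared, tag-indexed store (Point 4), and then surfaced for whoever needs them, when they need them (Point 5). All names here (`KnowledgeBase`, `contribute`, `lookup`, the example entries) are my own illustrative inventions.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    author: str           # who contributed this piece of experience
    tags: frozenset       # how the entry is indexed for later retrieval
    detail: str           # the codified, operational knowledge itself

class KnowledgeBase:
    def __init__(self):
        self._entries: list[Entry] = []

    def contribute(self, author: str, tags: set, detail: str) -> None:
        """Point 4: pool what individual employees separately know."""
        self._entries.append(Entry(author, frozenset(tags), detail))

    def lookup(self, *tags: str) -> list[str]:
        """Point 5: surface pooled knowledge for whoever needs it,
        returning every entry carrying all of the requested tags."""
        wanted = set(tags)
        return [e.detail for e in self._entries if wanted <= e.tags]

# Two hypothetical contributions from different teams:
kb = KnowledgeBase()
kb.contribute("ops-team", {"server", "failover"},
              "Secondary site takes about 4 minutes to pick up load.")
kb.contribute("support", {"server", "patching"},
              "Patch windows must avoid end-of-quarter reporting.")

print(kb.lookup("server", "failover"))
```

The design choice worth noting is the tag intersection in `lookup`: a broad query ("server") returns everything pooled on that topic, while a narrower one ("server" plus "failover") returns only the entry relevant to the task at hand, which is the accessibility that Point 5 calls for.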

I have offered at least preliminary responses to the first two of those points since then, leading up to this series installment and a more detailed discussion of the above Point 3.

I in fact began addressing that in Part 34 and recommend reviewing it as offering an explicit orienting foundation for what is to follow from here on in this narrative. As I noted there, Point 3 is the farthest-reaching, most complex of the five topics points that I am seeking to successively address here, and as at least partial proof of that, Points 4 and 5 can be seen as specific aspects of it that have been spun off from it for more focused attention.

I begin, or rather continue addressing this complex of issues by at least briefly turning back to Point 2 again, where communications, and the data and processed knowledge that they convey, are turned into action and activity, and either reactively or proactively, or what is perhaps most common, as some combination of the two. And I begin addressing that by adding one of the villains of this type of narrative into it: business systems friction, with its at least largely unconsidered, background static-like limiting restrictions on both the development of organized actionable information in a business, and on its effective communication to the stakeholders who need it, and when they do.

I said in Part 34 that reactive and proactive per se both become more important, and I add more meaningful as points of distinction, when the people involved – or who should be involved – are dealing with the non-standard, the non-routine, and have to find more creative solutions to the problems they face than can be encompassed in their usual day-to-day task level approaches.

• As a caveat to that, I have to add here that one of the first victims of business systems friction is anything like a before-the-fact understanding that stakeholders are in fact facing the unexpectedly different. Reactive responses often begin only when standard operating practices, as rote-followed, suddenly break down in unexpected ways and with what prove to be significant consequences.

Proactive as such arises when at least one key stakeholder spots something unusual or unexpected, and early on and in a way that highlights its novelty to them. And it actually takes place when they can and do start informing others who would also have to know of this, so they can begin addressing this unexpected but real new circumstance and from early on too, rather than having to play reactive catch-up for their part of the overall task and process performance flow that they would be responsible for in all of this.

I have on occasion written here in this blog of reactive and proactive occurring in a blend, and I add in what can be a confusing blend of them. All relevant stakeholders are not always brought into these now very necessary conversations. And even when such a stakeholder is included, that is no guarantee that they will actually choose to deviate from “tried and true,” even as pursuit of that might have already proven problematic for others involved in these overall cause and effect cascades.

Point 1 as listed above addressed follow-through and action, but it also focused on communications and on what is communicated. Point 2, as started for discussion in Part 34 of this series and as continued here, is all about what would, or would not be done with that knowledge, assuming that it can even be made available in a timely manner. And localized breakdowns there can contribute to the pursuit of mixed reactive plus proactive, as much as conservative insistence on following routine practices does, in the face of new and emergent issues and challenges. And Point 3, and by extension Points 4 and 5 as well, all deal with the basic issues of raw data, processed knowledge and its organized assembly and communication again, though I will also reconsider usage and follow-through issues when discussing Point 5, as begun here when considering Point 2 of the above list. And with that orienting note added, I turn here to address Point 3 again. And I do so by delving into a fundamentally important issue, and one that is often overlooked for its prevalence and significance when discussing the types of technology-based approaches to information and knowledge management in a business that I touched on here in Part 34: version 2.0 intranets and related capabilities for bringing people together in needs-focused ways.

• The issue that I refer to here is that of nuanced, experience-based judgment, and the simple fact that not all processed knowledge in a business can simply be typed into a database in a form that can always and automatically be used, and to full benefit, by anyone tapping into it as a shared resource.

Let’s consider a judgment call example here, to illustrate the point that I am trying to make with that assertion. And I offer it as a realistic if stylized feedback-framed verbal response. “The problem that you’re telling me about sounds familiar even if it isn’t exactly common here. And it sounds like you are going to need X to handle it” (where X is a resource that is “owned” by a manager on a different part of the table of organization.) “And you might very well need some specific help in both accessing and using it. Watch out for A. He might or might not be able to help you access X but he doesn’t know as much as he thinks he does, and certainly when doing anything here that falls outside of his more day-to-day routine. B, on the other hand, is a lot more widely experienced and he can think outside of his day-to-day box. He’s not as easy to work with; he is not all that outgoing and he tends to keep pretty focused on his immediate tasks at hand. But if you could get him to help you, and if his boss C will let him take the time to do that, you are going to have a lot better chance of success here than if you let A try and take over on this for you. And he will try and take over.”

It is essentially guaranteed that any advice: any insight of this type, focusing on interpersonal issues and on individual strengths and weaknesses, is going to come with an at least strongly implied “… but don’t tell any of them that I told you this. I have to work with A and B and their manager too!” So this might be crucially important information for successfully carrying out an important task, and a high priority novel one. But how would anyone put this type of judgment-call insight as to who is best and who is worst to work with, into a database? So I will focus in what follows on what might be deemed more operational insight and knowledge – data and processed knowledge that does not carry this type of interpersonal judgmental quality, that might be so codified, organized and shared. And with this caveat offered, I will begin addressing that complex of issues in my next series installment, turning back to my Point 3 notes of Part 34 as an organizing framework, or at least as the start of one for doing so. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory.
