Platt Perspective on Business and Technology

Reconsidering Information Systems Infrastructure 5

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on July 13, 2018

This is the 5th posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2, postings 374 and loosely following for Parts 1-4. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

This is a series, as just stated, that is intent on addressing how artificial intelligence will inform, and in time come to fundamentally reshape, information management systems: both more localized computer-based hardware and software systems, and the larger networked contexts in which they interconnect, the global internet included. And up to here in this series I have been focusing on the core term that enters into that goal: what artificial intelligence is, with a particular focus on the possibilities and potential of the holy grail of AI research and development: the development of what can arguably be deemed to be true artificial general intelligence.

• You cannot even begin to address how artificial intelligence, and artificial general intelligence agents in particular would impact upon our already all but omnipresent networked information and communications systems, until you at least offer a broad brushstroke understanding as to what you mean by and include within the AI rubric, and outline at least minimally what such actively participating capabilities and agencies would involve and particularly as we move beyond simple, single function-limited artificial intelligence agents.

So I continue here with my admittedly preliminary, first step discussion as to what artificial general intelligence is, moving past the simply conceptual framing of a traditionally stated Turing test to include at least a starting point discussion of how this might be operationally defined too.

I have been developing my line of argument, analysis and discussion here around the possibilities inherent in more complex, hierarchically structured systems, as constructed out of what are individually just single task, specialized limited-intelligence artificial agent building blocks. And I acknowledge here a point that is probably already obvious to many if not most readers: I am modeling my approach to artificial general intelligence, at least to the level of analogy, on how the human brain is designed for its basic architecture, and on how a mature brain develops ontogenetically out of simpler, more specialized components and sub-systems of specialized and single function units, that both individually and collectively show developmental plasticity that is experience based.

Think bird wings versus insect wings there, where different-in-detail building block elements come together to achieve essentially the same overall functional results, while doing so by means of distinctly different functional elements. But while those differently evolved flight solutions differ in structure and form details, they still bear points of notable similarity, because they seek to address the same functional need in the context of the same basic physical constraints, if for no other reason. Or if you prefer, consider bird wings and the wings found on airplanes, where bird wings combine both airfoil capability for developing lift and forward propulsive capability in the same mechanism, while fixed wing aircraft separate airflow and its resulting lift capability from engine-driven propulsion.

And this brings me to the note that I offered at the end of Part 4, in anticipation of this posting and its discussion. I offered a slightly reformulated version of the Turing test in that installment, in which I referred to an artificial intelligence-based agent under review as a black box entity, where a human observer and tester of it can only know what input they provide and what output the entity they are in conversation with is offering in response. But what does this mean, when looking past the simply conceptual, and when peering into the black box of such an AI system? I stated at the end of Part 4 that I would at least begin to address that question here, by discussing two fundamentally distinct sets of issues that I would argue enter into any valid answer to it:

• The concept of emergent properties as they might be more operationally defined, and
• The concept of awareness as a process of information processing (e.g. with pre- or non-directly empirical simulation and conceptual model building that would be carried out prior to any actualized physical testing or empirical validation of approaches and solutions considered.)

And as part of that, I added that I will address the issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might be in-effect hardwired to address, as for example in a Turing test context, and the question of what would more properly be developed as hardware, software, and I add firmware here. I will also discuss error correction as a dual problem, with half of it at least conceptually arising and carried out within the simpler agents that arise within an overall intelligent hierarchically structured system, and half of it carried out at a higher level within such a system, and as a function of properties and capabilities that are emergent to that larger system.

I begin addressing the first of those topic points, and I add the rest of the above-stated to-address issues, by specifically delving into one two-word phrase offered there that all else here hinges upon: “emergent properties.”

• What is an emergent property? Property, as that word is used here, refers to functionalities: mechanisms that directly cause or prevent, or directly modify an outcome step in a process or flow of them. So I use this term here in a strictly functional, operational manner.
• What makes a specific property as so defined and considered here, to be emergent?
• Before you can convincingly cite the action of a putative emergent property as a significant factor in a complex system, it has to be able to meet two specific criteria:
• You have to be able to represent it as the direct and specific causal consequence of an empirically testable and verifiable process, and
• This process should not arise at simpler organizational levels within the overall system under consideration, than the level at which you claim it to hold significant or defining impact.

Let me explain that with an admittedly gedanken experiment-level “working” example. Consider a three-tiered, hierarchically structured system built out of what could best be individually considered simple, specialized, single task artificial intelligence agents. And suppose you observe a type of measurable, functionally significant intermediate-step outcome arising within that system as you track its processing, both overall and in terms of how this system functions internally (within its black box). Is this outcome the product of an emergent property in this system? The answer to that would likely be yes, if the following criteria can be demonstrably met, here assuming for purposes of this example that this outcome appears in the second, middle level up in the network architecture hierarchy here:

• The outcome in question does not and cannot arise from any of the lowest, first level agent components in place in this system.
• The outcome only arises at the second level of this hierarchy if one or more of the agents operating at that level receive input from a correct combination of agents that function at the lowest, first level, and with appropriate input from one or more agents at the second level too, as they communicate together as part of their feedback control capabilities.
• And this outcome, while replicable and reliably so given the right inputs, is not simply a function of one of the second level agents. It is not a non-emergent property of the second level and its agents.
• And here, this emergent property would most likely become functionally important at the third, highest level of this hierarchical stack, where its functional presence would serve as meaningfully distinct input to agents operating there.

Think of emergent properties here as capabilities that add to the universe of possible functional output states, and the conditions that would lead to them, beyond what might be predictably expected when reviewing the overall functional range of this system by looking at its element agents and their expectable outcomes as if they were more independently functioning elements of a simple aggregation.
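To make that gedanken example a bit more concrete, here is a minimal and purely illustrative Python sketch of the type of three-tiered agent stack just described. All of the function names and toy threshold values in it are hypothetical; the only point being made is that the middle tier can produce an outcome that none of its first-level agents produces on its own, and that only arises from a particular combination of their outputs plus peer input at that same level.

```python
# A minimal, purely illustrative sketch of a three-tiered agent hierarchy.
# Names and threshold values are hypothetical; they only illustrate the idea
# that a functionally new ("emergent") outcome can appear at the middle tier.

# Level 1: simple, single-task agents acting directly on raw input.
def edge_agent_a(signal: float) -> float:
    return signal * 2.0            # a fixed, single-purpose transformation

def edge_agent_b(signal: float) -> float:
    return max(signal - 1.0, 0.0)  # another fixed, single-purpose transformation

# Level 2: an agent that only "fires" for a particular combination of
# level-1 outputs plus peer input from another level-2 agent.
def mid_agent(a_out: float, b_out: float, peer_hint: float) -> bool:
    return (a_out > 4.0) and (b_out > 1.0) and (peer_hint > 0.5)

def mid_peer(a_out: float) -> float:
    return 1.0 if a_out > 2.0 else 0.0

# Level 3: treats the mid-level outcome as a distinct input of its own.
def top_agent(mid_outcome: bool) -> str:
    return "escalate" if mid_outcome else "continue"

if __name__ == "__main__":
    raw = 3.0
    a, b = edge_agent_a(raw), edge_agent_b(raw)
    outcome = mid_agent(a, b, mid_peer(a))
    print(top_agent(outcome))  # the combined outcome exists at no single lower level
```

The point of the sketch is only structural: the outcome produced by mid_agent is not a property of edge_agent_a or edge_agent_b taken alone, which is the sense of “emergent” being developed here.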

To express that at least semi-mathematically, consider an agent A, as being functionally defined in terms of what it does and can do as defined by its output states, coupled with the set of input states that it can act upon in achieving those outputs. As such it could be representable as a mapping function:

• F_A: {the set of all possible input values} → {the set of all possible output values}_A

where F_A can be said to map specific input values that this process can identify and respond to, as a one-to-one isomorphism in the simplest case, to specific counterpart output values (outcomes) that it can produce. This formulation addresses the functioning of agents that can and would consistently carry out one precisely characterizable response to any given separately identifiable input that they can respond to. And when the complete ensemble of inputs and responses, as found across the complete set of elemental simple function agents in a hierarchical array system of this type, is considered at this simplest organizational level, where no separate agents in the system duplicate the actions of any others (as backups for example), their collective signal recognition and reaction system – their collective function space (or universe, if considered in set theory terms) of inputs recognized and input-specific actions taken – would also fit a one-to-one isomorphism, input to output mapping pattern.

That simplest case assumes a complete absence of emergent process and response activity. And one measure of the level of emergent process activity in such a system would be found in the level of deviation from that more basic one-to-one input signal to output response pattern, as observed when studying this system as a whole.

The more extra activity is observed that cannot be accounted for in this type of systems analysis, the more emergent property activity must be taking place – and by inference, the more emergent properties there must be, or the more centrally important those that are present must be to the system.

Note that I also exclude outcome convergence in this simplest case model, at least for single simple agents included in these larger systems, where several or even many inputs processed by at least one of the simple agents in such a system would lead to the same output from it. That, however, is largely a matter of how specific inputs are specified. Consider, for example, an agent that is set up algorithmically to group any input signal X that falls within some value range (e.g. greater than or equal to 5) as functionally identical for processing and output considerations. If a value for X of 5, or 6.5 or 10 is registered, all else remaining the same as far as pertinent input is concerned, the same output would be expected – and for purposes of this discussion, such input value deviation for X would be evaluated as moot and the above type of input to output isomorphism would still be deemed valid.
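A minimal sketch, in the same hypothetical spirit, of the mapping formulation and of the input-binning caveat just noted: an agent represented as an explicit lookup from recognized inputs to outputs, and a second agent that deliberately groups a range of input values (here, anything greater than or equal to 5) into one functionally identical output. The names and values are illustrative only.

```python
# Agent A as an explicit input-to-output mapping (the simplest, one-to-one case).
F_A = {
    "signal_1": "response_1",
    "signal_2": "response_2",
    "signal_3": "response_3",
}

def agent_a(input_value: str) -> str:
    # One precisely characterizable response per recognized input.
    return F_A[input_value]

# A simple agent that bins a numeric input: any X >= 5 is treated as
# functionally identical, so many raw inputs converge on one output
# without that counting as emergent behavior in this discussion.
def binning_agent(x: float) -> str:
    return "high_range_response" if x >= 5 else "low_range_response"

if __name__ == "__main__":
    print(agent_a("signal_2"))                                      # response_2
    print(binning_agent(5), binning_agent(6.5), binning_agent(10))  # all the same output
```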

I am going to continue this narrative in a next series installment, where I will conclude my discussion of emergent properties per se, at least for purposes of this phase of this series. And then I will move on to consider the second major to-address bullet point as offered above:

• The concept of awareness as a process of information processing (e.g. with pre- or non-directly empirical simulation and conceptual model building that would be carried out prior to any actualized physical testing or empirical validation of approaches and solutions considered.)

And I will also, of course discuss the issues that I listed above, immediately after first offering this bullet point:

• The issues of specific knowledge-based expert systems in this, and of granularity in the scope and complexity of what a system might be in-effect hardwired to address, as for example in a Turing test context, and the question of what would more properly be developed as hardware, software, and I add firmware here. I will also discuss error correction as a dual problem, with half of it at least conceptually arising and carried out within the simpler agents that arise within an overall intelligent hierarchically structured system, and half of it carried out at a higher level within such a system, and as a function of properties and capabilities that are emergent to that larger system.

And my overall goal in all of this, is to use this developing narrative as a foundation point for addressing how such systems will enter into and fundamentally reshape our computer and network based information management and communications systems.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.


Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 2

This is my second posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Part 1.)

My primary goal in Part 1 was to at least briefly lay out the organizing details of a technology development dynamic that is both ubiquitously present, certainly in rapidly changing technologies, and all but ubiquitously overlooked, even by technologists.

• One half of this dynamic represents what can be seen as the basic underlying vision of futurologists, and certainly when they essentially axiomatically assume a clear and even inevitable technology advancement forward. I add that this same basic assumption can be found in the thoughts and writings of many of the more cautionary and even pessimistic of that breed too. The basic point that both of those schools of thought tend to build from is that whether the emergence and establishment of ongoing New is for the good or bad or for some combination in between, the only real limits to what can be achieved through that progression are going to be set by the ultimate physical limits imposed upon us by nature, as steered towards by some combination of chance and more intentional planning. And chance and planning in that, primarily just shape the details of what is achieved, and do not in and of themselves impose limits on the technological progression itself.
• The second half of this dynamic, as briefly outlined in Part 1, can be represented at least in part by the phenomenon of technology development lock-in, where mostly small, early stage development decisions in how new technologies are first implemented can cumulatively become locked in and standardized as default norms.

I cited a simple but irksome example of this second point and its issues in Part 1, that happens to be one of the banes of existence for professional musicians such as Jaron Lanier, who chafe at how its limitations challenge any real attempt to digitally record and represent live music with all of its nuances. I refer here to the Musical Instrument Digital Interface (MIDI) coding protocol for digitally representing musical notes, that was first developed and introduced as an easy-for-then way to deal with and resolve what was at the time a more peripheral and minor-seeming challenge: the detail-level digital encoding of single notes of music as a data type, where the technologists working on this problem were more concerned with developing the overall software packages that MIDI would be used in. The problem was that this encoding scheme did not allow for all that much flexibility or nuance on the part of the performer in how they shaped those musical notes, leaving the resulting music recordings more crudely, stereotypically formed than the original had been, and lacking in the true character of the music to be recorded.
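As a concrete, if simplified, illustration of where that loss of nuance comes from: a standard MIDI note-on message encodes pitch and velocity (loudness) as 7-bit integers, so a continuously varying performance parameter gets collapsed into one of only 128 steps. The sketch below is hypothetical code, not any particular MIDI library, but the quantization it shows is the real constraint.

```python
# Illustration of the coarseness built into MIDI note encoding.
# A note-on message carries pitch and velocity as 7-bit values (0-127),
# so any continuous expressive gradation gets rounded to one of 128 steps.

def to_midi_velocity(loudness: float) -> int:
    """Map a normalized loudness in [0.0, 1.0] to a 7-bit MIDI velocity."""
    return max(0, min(127, round(loudness * 127)))

if __name__ == "__main__":
    # Two audibly different dynamic levels can collapse to the same value.
    print(to_midi_velocity(0.500))  # 64
    print(to_midi_velocity(0.503))  # 64 as well: the nuance is gone
```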

One of the hallmarks of technological lock-ins is that they arise when no one is looking, and usually as quick, or at least easier, solutions to what at the time seem to be peripheral problems. But with time they become so entrenched, and in so many ways, as the once-early technology they were first devised for grows, that they become de facto standards, and in ways that make them very hard to move beyond or even just significantly change. And such entrenched “solutions,” to cite a second defining detail of this technology development constraint, are never very scalable. The chafing constraints that they create make them lock-ins because of this, and certainly as the technologies that they are embedded in, in effect outgrow them. The way that they become entrenched in the more developed technologies that form around them leaves them rigidly inflexible in their cores.

Human technology is new in the universe, so I decided while writing Part 1 to turn to consider biological evolution and at least one example drawn from that, to illustrate how lock-in can develop and be sustained over long periods of time: here on the order of over one billion years, and in a manner that has impacted upon essentially all multi-cellular organisms that have arisen and lived on this planet, as well as all single cell organisms that follow the eukaryotic cell pattern that is found in all multi-cellular organisms. My point in raising and at least briefly discussing this example is to illustrate the universality of the basic principle that I discuss here.

The example that I cited at the end of Part 1 that I turn to here, is a basic building block piece of the standard core metabolic pathway system that is found in life on Earth: the pentose shunt, or pentose phosphate pathway as it is also called.

• What is the pentose shunt? It is a short metabolic pathway that plays a role in producing pentoses, or 5-carbon sugars. Pentose sugars are in turn used in the synthesis of nucleotides: basic building blocks of the DNA and RNA that carry and convey our genetic information. So this pathway, while short and simple in organizational structure, is centrally important. And any mutations in any of the genes that code for any of the enzyme proteins that participate in this pathway are 100% fatal, every time and essentially immediately so.
• Think of this as one of biochemistry’s MIDIs. If a change were made in the MIDI protocol that prevented a complete set of digitized notes from being expressed, any software incorporating that change would fail to work, and would constitute a source of fatal errors as far as any users are concerned. Any change – any mutation – in the pentose shunt that limited its ability to produce the necessary range of metabolic products that it is tasked with producing, would be fatal too.

Does this description of the pentose shunt suggest that it is the best of all possible tools for producing those necessary building block ingredients of life? No, it does not, any more than any current centrality of need for MIDI and its particular standard in music software as that has been developed, indicates that MIDI must be the best of all possible solutions for the technology challenge that it addresses. All you can say and in both cases is that life for one, and music software for the other, have evolved and adapted around these early, early designs and their capabilities and limitations, as they have become locked-in and standardized for use.

Turning back to biology as a source of inspiration in this, and to the anatomy of the human body with its design trade-offs and compromises, and with its MIDI-like design details, I cite a book that many would find of interest: and particularly if they have conditions such as arthritis or allergies, or know anyone who does, or if they are simply curious as to why we are built the way we are:

• Lents, N.H. (2018) Human Errors: a panorama of our glitches, from pointless bones to broken genes. Houghton Mifflin Harcourt Publishing Co.

This book discusses a relatively lengthy though still far from complete listing of what can be considered poor design-based ailments and weaknesses: poor design features that arose early, and that have become locked-in for all of us. So it discusses how poor our wrist and knee designs are from a biomechanical perspective, how humans catch colds some 200 times more often than our dogs do from how our upper respiratory tract is designed and from immune system limitations that we have built into us, and more. Why for example do people have a vermiform appendix still as an evolutionary hold-over, when the risk of and consequences of acute appendicitis so outweigh any purported benefits from our still having one? Remember that surgery is still a very, very recent innovation, so until very recently acute appendicitis and a resulting ruptured appendix was all but certain to lead to fatal consequences. And this book for all of its narrative detail just touches upon a few primarily anthropocentric examples of a much larger list of them that could be raised there, all of which serve as biological evolutionary systems examples of design lock-in as discussed here.

Looking at this same basic phenomenon more broadly, why do cetaceans (whales, dolphins, etc), for example, all develop olfactory lobes in their brains during embryonic development, just to reabsorb them before birth? None of these animals have, or need a sense of smell from birth on but they all evolved from land animal ancestors who had that sense and needed it. See for example: Early Development of the Olfactory and Terminalis Systems in Baleen Whales for a reference to that point of observation.

I am going to continue this narrative in a next series installment, where I will introduce and briefly discuss a point of biological evolutionary understanding that I would argue, is crucially important in understanding the development of technology in general, and of more specific capabilities such as artificial intelligence in particular: the concepts of fitness landscapes as a way to visualize systems of natural selection, and adaptive peaks as they arise in these landscapes. In anticipation of that line of discussion to come, I add that I began to at least briefly make note of the relationships between steady evolutionary change and disruptively new-direction change, and the occurrence and stability of lock-ins in Part 1. I will return to that set of issues and discuss it more fully in light of adaptive peaks and the contexts that they arise in. Then after developing this foundational narrative for purposes of this series, I will turn very specifically to consider artificial intelligence and its development – which I admit here to be a goal that I have been building to in this progression of postings.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Some thoughts concerning a general theory of business 24: considering first steps toward developing a general theory of business 16

This is my 24th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-23.)

I have been discussing, since Part 20 of this, a brief set of what can be seen as hiring process exceptions that can categorically arise in businesses, and that when they do arise, impact upon employees and potential employees as well as upon management. And my goal in that developing narrative has been to use these real-world business process-based, interpersonal interactions as grounding points for discussing more general issues that would help illuminate and develop a more general theory of business as a whole.

I began discussing two such hiring situations in Part 23, that I repeat here as I continue to address them, as renumbered here from the original, more complete list. Please note that both of these exception scenarios are offered in contrast to a more normative hiring context scenario that they would prove to be an explicit exception to, with their normative counterpart offered first:

1. More routine positions, managerial or not – versus – special skills and experience new hires, hands-on or managerial. (Here, the emphasis in this second possibility is in finding and bringing in people with rare and unusual skills and experience sets that are in high demand, and at levels of need that exceed any realistic pool of available candidates holding them.)
2. And job candidates and new hires and employees who reached out to the business, applying for work there as discussed up to here in this narrative, doing so on their own initiative – versus – professionals who might not even be explicitly looking for new job opportunities who the business itself has reached out to, to at least attempt to bring them in-house as special hires and as special for all that would follow.

I started out comparing and contrasting these two hiring exception scenarios in Part 23, and then began to consider them from a participant-oriented game theory-based strategy perspective there, building that line of discussion from the points of similarity and of difference that I had just noted for them. Or to be more specific here, I began to so analyze the first of those two hiring scenarios in that installment in that manner. My goal for this posting is to take that same approach as a tool for examining and understanding the second of those scenarios too, and with further points of comparison between the two added in while doing so. And I begin this by at least briefly repeating, and then expanding on a basic point that I made in Part 23 when considering the above renumbered (and somewhat rephrased) Scenario 1, that holds pivotal importance in any theory of business as a whole and across wide ranges of context as they would arise in them:

• The phenomenon of competing alternative strategies, and how real world business contexts can come to require reconciling and coordinately following more than one such strategic approach at the same time – or at least finding a workable and mutually acceptable hybrid combination of them.
• It is obvious that different participants – different players, to couch this in game theory terms – can and often do hold to differing and even overtly competing strategies and goals as they interact and seek to influence the interactive processes that arise between them and the goals reached from that.
• When I raise the issues of competing strategies here, I am focusing on competing alternatives that can arise and play out within the individual participants involved there, as for example when they individually have to simultaneously find and promote negotiated approaches that would work for them on both a short and a long-term basis, or in accordance with essentially any other dichotomous (at least) parameter that would hold importance to them, while pressing them with significantly differing alternative best paths forward.

As noted in Part 23, potential new hires who would fit into a Scenario 1 pattern as offered above, and both from their own perspective and from that of a hiring company, generally have specific currently must-have skills and experience sets that that hiring business feels compelled to add to their staff capabilities, and as quickly and early as possible. This type of scenario is most likely going to arise for businesses that operate in very fast paced and rapidly changing, technology-driven business arenas that are continually racing to achieve an ever-changing goal: top position in a very competitive industry. As such, this scenario is usually all about businesses seeking a new and cutting edge technology advantage over their competition, and certainly while the defining edge sought in winning this new and emerging skills set-driven race would still hold first mover advantage for them in capturing emerging new markets. And that dynamic leads to both short-term and longer-term consequences, and a need for both short term and long term strategy, from both the would-be employee and the would-be hiring business perspective, and with game theory-defined strategic understandings to match on both sides of this too.

A job candidate seeking out this type of hiring opportunity has to be able to leverage any possible advantage that they might be able to offer from their holding a still rare, high demand skills and experience set, while those special capabilities still hold this type of defining value for them. So they need to be able to negotiate towards a hiring decision from their side of the table that would leverage their being able to achieve their goals, and help them gain the best possible terms of employment and compensation levels, commensurate with the current (but perhaps soon to fade) special value of what they have to offer now, and with a short-term strategic approach pursued in doing this. But at the same time, if they want to stay employed at that business longer term, instead of only pursuing shorter-term gigs as an ongoing career path, they need to develop a relationship with the hiring manager who will be their supervisor and direct boss there, and with this business, that is not going to chafe and create resentment there too. This, of course, holds for the terms of employment and the details and levels agreed to in the overall compensation package.

I offer that last point with my own direct experience in mind, where I once found myself taking a consulting assignment that could in principle have lasted longer than it did – but I negotiated terms from too much of a short-term perspective and not from a longer-term one. So that business agreed to bring me in to work with them, but at a pay rate that they came to see as too out of range from what they paid others at the same level in their organization to be long-term sustainable. That realization on the part of this hiring business, I add, colored my entire work experience there, even as I successively achieved the goals that I was initially brought in to work towards. And that brings me to the hiring manager and business side of this. They seek to meet the short term strategy requirements that they face in being able to bring in necessary and even essential skills and experience, but in ways that are going to be longer-term sustainable too – assuming, that is, that they are not simply hiring short-term and intentionally so as their basic strategy.

Now let’s consider these same types of issues from a Scenario 2 perspective, where a business has decided to seek the services of some specific individual as a new hire, who they reach out to and attempt to convince to work for them, and regardless of their current work and employment status. These efforts are not generally directed towards addressing short-term needs, and the people they would bring in usually have skills and experience sets that they would want to retain longer-term. So their shorter term and here-and-now strategies and tactics for this would revolve around their seeking to catch the interest of such a potential hire, and in ways that would bring them in through their doors. Their longer-term strategy here would align with that, and function as a continuation of it, with a goal of finding a mutually agreeable overall, terms of employment and compensation package that both sides of these negotiations could live with moving forward.

• Both the potential new hire and the potentially hiring business in this seek to reach an agreement that would best serve their particular needs, and for both of these hiring scenarios. Short term, and certainly when only considering that timeframe, this would likely mean both of these two sides pursuing more of a win-lose strategy approach, with goals that would likely turn out to be somewhat close to diametrically opposed.
• But both of the types of scenarios under consideration here, and the above-stated Scenario 2 in particular, are essentially never short-term only, for either side of the negotiating table. So it is usually in the best interest of all parties to seek out more of a win-win solution here and once again, most certainly where Scenario 2 applies.
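To put that win-lose versus win-win distinction into minimal game theory terms, consider a toy payoff table. The numbers and move names below are entirely hypothetical; the point is only that once the longer term is included, the mutually accommodating cell can produce more joint value than the outcomes a purely short-term, one-sided push would yield.

```python
# Toy payoff table for a hiring negotiation, in (candidate, employer) terms.
# The numbers are hypothetical and only illustrate the short-term win-lose
# versus longer-term win-win contrast discussed above.
payoffs = {
    ("press_hard", "press_hard"): (1, 1),   # talks stall or sour quickly
    ("press_hard", "accommodate"): (5, 2),  # short-term candidate "win"
    ("accommodate", "press_hard"): (2, 5),  # short-term employer "win"
    ("accommodate", "accommodate"): (4, 4), # the longer-term win-win cell
}

def joint_value(candidate_move: str, employer_move: str) -> int:
    c, e = payoffs[(candidate_move, employer_move)]
    return c + e

if __name__ == "__main__":
    for moves, cell in payoffs.items():
        print(moves, cell, "joint =", joint_value(*moves))
```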

This leads me to the final crucially important point that I would address here in this posting: business systems friction and the fact that neither side to the negotiations that are under consideration here is going to know enough of the information that is held on the other side of the table to be able to make an optimally best-for-them decision when crafting the offers that they would propose. Neither side, for example, is certain to know if their counterparts on the other side of the table are negotiating with others too, and even if they do know that they are unlikely to know the crucial details they would have to compete with there. And neither side is going to know the outer parameters as to what the other side would deem acceptable, and either in detail for specific points or in overall balance where significant trade-offs might be possible.

How conservative in their thought and actions are the people involved in these negotiations? And how much would they seek to press the limits of what might be possible and achievable for their side, on the assumption that they could probably concede ground if needed when making adjusted offers and still keep these negotiations in play? Personalities involved, and basic business and negotiating styles pursued here can become very important, and both in shaping any dual or alternative negotiating tactics and approaches pursued, and in identifying and understanding the thinking on the other side of the table. (Look to the corporate culture in place in the hiring business, and to the corporate cultures that a potential hire has succeeded in and even thrived in, as sources of guidance as they negotiate possible next career moves that they might accept.)

• The points that I have been making here, and certainly in the last several paragraphs, while framed in terms of a hire-or-not negotiations, hold much wider importance in understanding the dynamics of business decision making and the agreements and disagreements that can arise in them, and both when dealing with outside stakeholders and when negotiating strictly in-house and across what can become highly competitive boundaries there.

I am going to more fully explore and discuss that last bullet point in my next series installment. And then I am going to turn to and consider the last hiring scenario from my original list, as first offered in Part 20 and as noted above: nepotism, as a specific case in point example of how hiring process exceptions can take more toxic forms. I will consider intentionally, overtly family owned and run businesses in that context, that simply seek to keep their business in their family. And I will also discuss more overtly problematical examples of how this type of scenario can play out too. Then after completing that line of discussion, at least for purposes of this series, I will step back from consideration of theories of business and special case contexts that they apply to, as an overall special categorical form of general theory, to delve into a set of what have become essential foundation elements for that discussion, with further consideration of general theories per se. I began this series in its Parts 1-8 by offering a start to an approach to thinking about and understanding general theories as such. I will add some further basic building blocks to that foundation after completing my business theory discussion here, up through a point where a new hire first successfully joins a business as an in-house employee, hands-on or managerial. Then I will turn back to further consider general theories of business per se, on the basis of that now-enlarged general theory discussion.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory, as topics section VI there, where I offer related material regarding theory-based systems. And I also include this individual participant oriented subseries of this overall theory of business series in Page 3 of my Guide to Effective Job Search and Career Development, as a sequence of supplemental postings there.

Reconsidering Information Systems Infrastructure 4

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on May 15, 2018

This is the 4th posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificially intelligent agents, will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2, postings 374 and loosely following for Parts 1-3. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I began discussing what I have referred to as a possible “modest proposal” task for artificial intelligence-based systems in Part 1 and Part 2 of this series, as drawn from the pharmaceutical industry and representing what can be considered one of their holy grail goals: developing new possible drugs that would start out having a significant likelihood of therapeutic effectiveness based on their chemistry and a chemistry-level knowledge of the biological processes to be affected. And I have been using that challenge as a performance benchmark case-in-point example of how real world tasks might arise as problems that would be resolved by artificial intelligence, that go beyond the boundaries of what can be carried out using single, specialized artificial intelligence agents, at least in their current here-and-now state of development.

Then I further developed and explored that real world test-case challenge, and what would go into effectively addressing it, in Part 3, where I began delving in at least preliminary-step detail into the possibility of what might be achieved from developing and deploying what could become multiple hierarchical-level, complex problem solving arrays of what are individually only distinctly separate, specialized, single task artificial agents, where some of them would carry out single specialized types of command and control functions, coordinating the activities of lower level agents that report to them and that more directly work on solving specific aspects of the initial problem at hand.

Up to here, I have simply sidestepped the issue of whether such a system of individually simple, single algorithm agents, with its collective feedback and self-directing control systems, could in some sense automatically qualify as having achieved some form of general intelligence: a level and type of reasoning capacity that would at least categorically match what is assumed by the term “human intelligence,” even if differing from that in detail. I categorically state here that that type of presumptive conclusion need not in any way be valid or justified. And I cite, by way of well known example for justifying that claim, the feedback and automated control functionalities that have been built into complex steam powered systems going back to the 19th century, for regulating furnace temperatures, systems pressures and the like, and even in tremendously complex systems. No one would deny that well designed systems of that sort do in fact manage and control their basic operational parameters for keeping them functioning, and both effectively and safely. But it would be difficult to convincingly argue that a steam power plant and associated equipment in an old steam powered ship, for example, were in some sense intelligent and generally so. There, specialized and limited, even in overall aggregated form, is still specialized and limited, and even if at the level of managing larger and more complex problems than any of its single component-level parts could address.

With that noted, I add here that I concluded Part 3 of this series by stating that I have been attempting to offer a set of building block elements that would in all likelihood have to go into creating what would become an overall artificial intelligence system in general, and an arguably genuinely intelligent information management and communications network and system of such networks as a whole, in particular. Think of my discussion in this series up to here as focusing on “necessary but not sufficient” issues and systems resources.

I am going to turn to at least briefly discuss the questions and issues of infrastructure architecture in this, and how that would arise and how it would be managed and controlled, in what follows. (Think self-learning and self-evolving systems there, where a probably complex structured hierarchical system of agents would over time, optimize itself and effectively rewire itself in that process.)

And my goal there will be to at least offer some thoughts as to what might go into the “sufficient” side of intelligent and generally intelligent systems. And as part of that, I will more fully consider at least some basic-outline requirements and parameters for functionally defining an artificial general intelligence system per se, and in what would qualify as more operational terms (and not just in more vaguely stated conceptual terms as are more usually considered for this.)

Let’s start addressing all of that with those “higher” level but still simple, single algorithm agents that function higher up in a networked hierarchy of them, and that act upon and manage the activities of “lower” level agents, and how they in fact connect with them and manage them, and the question of what that means. In that, let’s at least start with the type of multi-agent system that I have been discussing in the context of my above noted modest proposal example, and build from there. And I begin that by raising what might be considered a swear word in this type of narrative for how it can be expansively and misleadingly used: “emergent properties.” And let me begin addressing that, by in-effect reframing Turing’s hypothesis in at least somewhat operationalized terms:

• A system of simple, pre-intelligent components does not become intelligent as a whole because some subset of its component building block elements does. It becomes intelligent because it reaches a point in its overall development where examination of that system as a whole, and as a black box system, indicates that it is no longer possible to distinguish between it and its informational performance output, and that of a benchmark presumed-general intelligence and its output that it would be compared to (e.g. in Turing’s terms, a live and conscious person who has been deemed to be of at least average intelligence, when tested against a machine.)
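One way to read that reformulation in operational terms is as a blinded, black-box comparison harness: a judge sees only prompt and response pairs, never which source produced them, and intelligence is attributed when identification falls to roughly chance level. The sketch below is a hypothetical outline of such a harness, offered only as an illustration of the black box framing, not as a claim about how any real evaluation is run.

```python
import random

# A hypothetical, minimal black-box comparison harness in the spirit of the
# reformulated test above: the judge sees only input/output behavior and has
# to say which respondent is the artificial one.

def human_respondent(prompt: str) -> str:
    return f"(a person's reply to: {prompt})"    # stand-in only

def candidate_system(prompt: str) -> str:
    return f"(the system's reply to: {prompt})"  # stand-in only

def run_blinded_trial(prompt: str, judge) -> bool:
    """Return True if the judge correctly identifies the artificial system."""
    respondents = [("human", human_respondent), ("system", candidate_system)]
    random.shuffle(respondents)                     # hide which is which
    replies = [(label, fn(prompt)) for label, fn in respondents]
    guess = judge(prompt, [r for _, r in replies])  # judge sees replies only
    actual_index = [label for label, _ in replies].index("system")
    return guess == actual_index

if __name__ == "__main__":
    naive_judge = lambda prompt, replies: random.randrange(len(replies))
    trials = [run_blinded_trial("describe your morning", naive_judge) for _ in range(1000)]
    # Identification at roughly chance level would mean the two black boxes are
    # operationally indistinguishable to this judge.
    print(sum(trials) / len(trials))
```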

And with a simple conceptual wave of the hands, an artificial construct becomes at least arguably intelligent. But what does this mean, when looking past the simply conceptual, and when peering into the black box of such a system? I am going to at least begin to address that question starting in a next series installment, by discussing two fundamentally distinct sets of issues that I would argue enter into any valid answer to it:

• The concept of emergent properties as they might be more operationally defined, and
• The concept of awareness, as a process of information processing level (e.g. pre- or non-directly empirical) simulation and model building.

And as part of that, I will address the issues of specific knowledge based expert systems, and of granularity in the scope and complexity of what a system might be in-effect hardwired to address, as for example in a Turing test context.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 1

Usually I know when setting out to write a posting, precisely where I would put it in this blog, and that includes both prior decision as to what directories it might go into, and decision as to what series if any that I might write it to. And if I am about to write a stand-alone posting that would not explicitly go into a series, new or already established, I generally know that too, and once again with that including where I would put it at a directories level. And I generally know if a given posting is going into an organized series, and even if as a first installment there, or if it is going to be offered as a single stand-alone entry.

This posting is to a significant degree, an exception to all of that. More specifically, I have been thinking about the issues that I would raise here, and have been considering placing this in a specific ongoing series: Reconsidering Information Systems Infrastructure as can be found at Ubiquitous Computing and Communications – everywhere all the time 2 (as its postings 374 and loosely following.) But at the same time I have felt real ambivalence as to whether I should do that or offer this as a separate line of discussion in its own right. And I began writing this while still considering whether to write this as a single posting or as the start to a short series.

I decided to start this posting with this more behind the scenes, editorial decision making commentary, as this topic and its presentation serves to highlight something of what goes on as I organize and develop this larger overall effort. And I end that orienting note to turn to the topic that I would write of here, with one final thought. While I do develop a number of the more central areas of consideration for this blog, as longer series of postings, I have also offered some of my more significantly important organizing, foundational ideas and approaches in single postings or as very brief series. As a case in point example that I have referred back to many, many times in longer series, I cite my two posting series: Management and Strategy by Prototype (as can be found at Business Strategy and Operations as postings 124 and 126.) I fully expect this line of discussion to take on a similar role in what follows in this blog.

I begin this posting itself by pointing out an essential dynamic, and to be more specific here, an essential contradiction that is implicit in its title. Moore’s Law, as initially posited in 1965 by Gordon Moore, then at Fairchild Semiconductor and later a cofounder of Intel, proposed that there was a developmental curve-defining trend in place, according to which the number of transistors in integrated circuits was doubling approximately every year and a fraction – but without corresponding price increases. And Moore went out on a limb by his reckoning and predicted that this pattern would persist for another ten years or so – roughly speaking, up to around 1975. And now it is 2018, and what began as a short-term prediction has become enshrined in the thinking of many as if it were an all-but law of nature, from how it still persists in holding true. And those who work at developing next generation integrated circuits are still saying the same things about its demise: that Moore’s law will run its course and end as ultimate physical limitations are finally reached … in another few next technology generations and perhaps in ten years or so. This “law” is in fact eventually going to run its course, ending what has been a now multi-decade long golden age of chip development. But even acknowledging that hazy end date limitation, it also represents an open ended vision and yes, an open ended expectation of what is essentially unencumbered disruptively new growth and development that is unburdened by any limitations of the past, or present for that matter.
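As a rough back-of-envelope sketch of why that observation was so striking, assume a doubling period of roughly 18 months (one common reading of “a year and a fraction”; the exact period is an assumption here, not Moore’s precise 1965 figure). Over the ten-year horizon Moore first allowed himself, that compounds to roughly a hundredfold increase in transistor counts, and carried forward over five decades it compounds to many billions-fold.

```python
# Back-of-envelope compounding under an assumed 18-month doubling period.
# The doubling period is an illustrative assumption, not Moore's exact figure.

def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    print(round(growth_factor(10)))   # ~100x over the ten years Moore first predicted
    print(f"{growth_factor(53):.2e}") # ~1965 to ~2018, if the trend had held throughout
```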

Technology lock-in does not deny the existence of or the impact of a Moore’s Law but it does force a reconsideration as to what this still ongoing phenomenon means. And I begin addressing this half of the dynamic that I write of here, by at least briefly stating what lock-in is here.

As technologies take shape, decisions are made as to precisely how they would be developed and implemented, and many of these choices made are in fact small and at least seemingly inconsequential in nature – at least when they are first arrived at. But these at-the-time seemingly insignificant design and implementation decisions can and often do become enshrined in those technologies as they develop and take off, and as such take on lives of their own. That certainly holds true when they, usually by unconsidered default, become all but ubiquitous for their application and for the range of contexts that they are applied to as those technologies that they are embedded in, mature and spread. Think of this as the development and elaboration of what effectively amount to unconsidered standards for further development, that are arrived at, often as here-and-now decisions and without consideration of scalability or other longer-term possibilities.

To cite a specific example of this, Jaron Lanier is a professional musician as well as a technologist and a founder of virtual reality technologies. So the Musical Instrument Digital Interface (MIDI) coding protocol for digitally representing musical notes, with all of its limitations in representing music as actually performed live, is his personal bête noire, or at least one of them. See his book:

• Lanier, J. (2011) You Are Not a Gadget: a manifesto. Vintage Books,

for one of his ongoing discussion threads regarding that particular set-in-stone decision and its challenges.

My point here is that while open and seemingly open ended growth patterns, as found in examples such as Moore’s Law, take place, and while software application counterparts to it, such as the explosive development of new database technology and the internet, arise and become ubiquitous, they are all burdened with their own versions of “let’s just go with MIDI because we already have it and that would be easy” decisions, and their sometimes entirely unexpected long-term consequences. And there are thousands of these locked-in decisions, in every widespread technology (and not just in information technology systems per se).

The dynamic that I write of here arises as change and disruptive change take place, with so many defining and even limiting constraints put in place in their implementations and from their beginnings: quick and seemingly easy and simple decisions that these new overall technologies would then be built and elaborated and scaled up around. And to be explicitly clear here, I refer in this to what become functionally defining and even limiting constraints that were backed into more than proactively thought through.

I just cited a more cautionary-note reference to this complex of issues, and one side to how we might think about and understand it, with Lanier’s above-cited book. Let me balance that with a second book reference that sets aside the possibilities or the limitations of lock-in to presume an evergreen, always newly forming future that is not burdened by that form of challenge:

• Kaku, M. (2018) The Future of Humanity. Doubleday.

Michio Kaku writes of a gloriously open-ended human future in which new technologies arise and develop without any such man-made limitations: only with the fundamental limitations of the laws of nature to set any functionally defining constraints. Where do I stand in all of this? I am neither an avowed optimist nor a pessimist there, and to clarify that I point out that:

• Yes, lock-in happens, and it will continue to happen. But one of the defining qualities of truly disruptive innovation is that it can in fact start fresh, sweeping away the old lock-ins of the technologies that it would replace – to develop its own that will in turn disappear at least in part as they are eventually supplanted too.
• In this, think of evolutionary change in technology as an ongoing effort to achieve greater effectiveness and efficiency with all of the current, basic constraints held within it, remaining intact there.
• And think of disruptive new technology as break away development that can shed at least a significant measure of the constraints and assumptions that have proven to at least no longer be scalable and effectively so. But even there, at least some of the old lock-ins are still likely to persist. And this next revolutionary step will most likely bring its own suite of new lock-ins with it too.

Humanity’s technology is still new and young, so I am going to continue this narrative in a next posting to what will be this brief series, with an ancient example as drawn from biology, and from the history of life at its most basic, biochemically speaking: the pentose shunt, or pentose phosphate pathway as it is also called. I will then proceed from there to consider the basic dynamic that I raise and make note of in this series, and this source of at least potential innovative development conflict, as it plays out in a software and an artificial intelligence development context, as that is currently taking shape and as decisions (backed into or not) that would constitute tomorrow’s lock-ins are made.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Some thoughts concerning a general theory of business 23: considering first steps toward developing a general theory of business 15

This is my 23rd installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-22.)

I have been discussing a series of what can perhaps best be considered exception scenarios, that would arise in the hiring process in a business, in this series since its Part 20, alternating between discussion of these specific business process issues, and more general theory of business considerations that I have been exploring by way of these special case contexts. For smoother continuity of narrative, I repeat my four hiring scenario list, with a goal of addressing its third entry here:

1. More routine hire, hands-on non-managerial employees, and I add more routine and entry level and middle managers – versus – the most senior managers and executives when they are brought in, and certainly from the outside.
2. More routine positions, managerial or not – versus – special skills and experience new hires and employees, hands-on or managerial.
3. Job candidates and new hires and employees who reached out to the business, applying as discussed up to here in this narrative on their own initiative – versus – those who the business has reached out to, to at least attempt to bring them in-house as special hires and as special for all that would follow.
4. And to round out this list, I will add one more entry here, doing so by citing one specific and specifically freighted word: nepotism. Its more normative alternative should be obvious.

And I begin addressing Scenario 3 by pointing out the similarities that can arise, and the overlap that can occur, between this and Scenario 2. Both involve a business coming to realize that it needs to hire one or more very rare, high demand special-case new employees, at whatever level they would work at on the table of organization. This makes these hiring processes seller’s-market oriented, with advantage held by any who can convincingly present themselves as fulfilling the wish-list requirements of the hiring business. Both involve situations where, quite arguably, more possible employers would wish to hire these types of people than there are actual job candidates – and certainly candidates who are looking or willing to look for new work opportunities elsewhere. But even with all of that held in common, these are two distinct and separate special exception hiring scenarios.

• First of all, a really proactive, entrepreneurial professional who has skills and experience that are coming into high demand and need, and at levels the market cannot meet, can reach out to hiring managers and potential hiring managers at businesses that they would like to work at, and basically make sales pitches directed towards starting a conversation. Their goal in that would be to discuss the possibilities of what they could offer, that would specifically bring benefit to that business and to the people there who they get to meet with.
• The business in question, and at least one of its hiring managers, have to have thought all of this out first for a Scenario 3 as offered above to apply; a potential job candidate and new hire can reach out to inform and to provoke that type of thinking process, making an initial effort in order to explore their possibilities and see what they can develop. In Scenario 2, they can easily be the more proactive participants in this. In Scenario 3, it is the potential new hire who would be reactive, and the potentially hiring business that would be more proactive in setting this type of process in motion.
• And to cite one other at least potentially significantly differentiating detail here, Scenario 2 tends to apply more for finding and securing special here-and-now hires, and with a goal of keeping the business cutting edge and competitive from that in some rapidly changing, generally technical functional area. What is hot enough in the jobs market to qualify for Scenario 2’s preferential treatment today, is probably going to cool down enough and in a relatively short period of time, to fit more smoothly and realistically into that company’s routine candidate selection and new hire processes and procedures, and from the early job description preparation and initial candidate screening and filtering process onward. And this can happen very quickly, making Scenario 2 into more of a narrow window of opportunity phenomenon.
• Scenario 3 candidates on the other hand, and the people that a business would want to convince to become candidates, might fit that pattern. But this scenario is also where businesses reach out to special possible hires who would offer long-term defining value too, such as marketing or sales professionals with a well established golden-touch track record, or senior executives who have proven track records of stellar excellence as visionary leaders and managers. I write here of having more persistent soft-skills excellence, versus simply having a more state-of-the-art based, ephemeral technical skills edge.

With that offered as a starting point for discussing what Scenario 3 actually is, let’s consider it and I add reconsider Scenario 2 again, from a game theory perspective. And I begin addressing that, by picking up on the last sentence of the immediately preceding bullet point description, and the basic message that I seek to convey through it, and with timing considerations.

• Any specifically short-term, time limited Scenario 2 advantage that a prospective job candidate might hold in a hiring process there, would of necessity significantly shape the strategy that they would pursue, and I add the strategy that the hiring business would pursue too, when meeting and negotiating with them. This biases all that would transpire on both sides of the hiring negotiations table there, both in terms of short timeframes and in terms of the strategic and game theory considerations that would support them.
• But a Scenario 2 candidate who is hired, is in most cases going to want to continue on at that job for longer than just the perhaps brief span in which their special skills that brought them there, still retain their special edge. I am not suggesting that they would want to finish their overall career paths with this employer: only that they would want to have a say in how long they remain there, and on what they can develop and take with them from that experience, as and when they do move on. This gives them positive incentive to think and plan in terms of longer-term career strategy too, and according to a game theory approach that would promote and advance their interests along that timeframe too. And this might in fact be at odds with a strictly short-term interest and short-term planning strategy and game theory approach that they might take if only thinking in terms of getting hired in the first place.
• And a Scenario 2 hiring business would see compelling need to pursue an at least short-term compatible hiring strategy and game theory approach at first, when negotiating to bring in such a new hire. But as an ongoing organization, it would also have to consider and take on a dual approach there too, building from day one in the hiring process for longer term viability in any hiring agreements reached. (See the brief sketch that follows this list.)
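To make that dual timeframe point a bit more concrete, here is a minimal, purely illustrative sketch in Python of how discounting future value shapes the comparison between a short-term maximizing negotiating play and a longer-term oriented one. The payoff numbers and the discount rate are invented for illustration only; they do not come from any real hiring negotiation.

```python
# Illustrative only: hypothetical payoffs and discount rate, not drawn from real data.

def discounted_total(per_period_payoffs, discount_rate=0.10):
    """Sum a stream of per-period payoffs, discounting later periods more heavily."""
    return sum(p / (1.0 + discount_rate) ** t for t, p in enumerate(per_period_payoffs))

# A hard short-term bargain: a large year-one gain, followed by friction and early exit.
short_term_play = [150, 60, 0, 0, 0]

# A more balanced agreement: a smaller initial gain, but a sustainable multi-year relationship.
long_term_play = [100, 90, 85, 80, 75]

print("Short-term strategy value:", round(discounted_total(short_term_play), 1))
print("Long-term strategy value: ", round(discounted_total(long_term_play), 1))
# With these (hypothetical) numbers the longer-horizon strategy dominates,
# which is the tension the bullet points above describe.
```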

And with this, I raise the issues of dual and competing strategies and their game theory implementations, and the need to reconcile and coordinate between them, to find what for a participant would be their best, more timeframe-independent path forward. I will continue this discussion of Scenario 3 (and of Scenario 2 as well) in my next series installment, and will then move on to Scenario 4, which I offer here in this series as one of several potentially toxic hiring scenarios. And after completing that line of discussion, at least for purposes of this series, I will step back from consideration of general theories of business as a special categorical case, to delve into a set of what have become essential foundation elements for that discussion, with further consideration of general theories per se. And looking ahead, I will then turn back to the more specific context of theories of business again, where I will begin using this newly added, more-general foundational material in its more specific context. My goal there is to follow the discussion of business hiring processes and their exceptions that I have been pursuing up to here, with one that focuses on the new hire probationary period and its dynamics. And I will use that as a source of special case examples, in order to develop and present more general theory of business considerations.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory, as topics section VI there, where I offer related material regarding theory-based systems. And I also include this individual participant oriented subseries of this overall theory of business series in Page 3 of my Guide to Effective Job Search and Career Development, as a sequence of supplemental postings there.

Reconsidering Information Systems Infrastructure 3

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on April 1, 2018

This is the third posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Part 1 and Part 2 of this series, and also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have at least briefly touched upon three conceptual and functionally operational dichotomies in this series up to here:

• Artificial specialized, or single task intelligence and its implementation, which is a rapidly advancing reality already, versus artificial general intelligence that is still a vaguely understood possibility, and certainly as for how it might be arrived at,
• Simple tasks, which I define as tasks that could be carried out algorithmically by a single specialized artificial intelligence powered agent as that form of AI is defined above, versus general tasks that could not be,
• And in the context of a specific “modest proposal” task goal that I have been developing here in a pharmaceutical industry, new drug development context, I have posited two types of drug discovery problems that I identified as Type 1 and Type 2. Type 1 problems would, at least in principle, be amenable to management and resolution by simple, single process-oriented (e.g. single task type) algorithms and by single agents that carry them out, while Type 2 problems would require more complex AI systems: systems that might include several or even many such problem solving agents, each with their own separate guiding AI-driven algorithm, and with each of these functional elements directly working on and seeking to solve their own specific part of the overall larger problem at hand. And these agents might be functionally managed in a coordinated manner by other still-specialized, organizationally higher level single algorithm agents that carry command and control responsibilities. The lower level agents that they would work with and manage would work on parts of the overall problem presented to this system for resolution, and the agents at this next level up, organizationally, would work on “solving” the problem of enabling their coordinated functioning as a more effective collective effort: managing output-to-input sharing between “their” lower level agents, for example. And they would flag and preferentially share with other agents in this system such data from those lower level agents as would offer more optimized lower level task solutions. That would function as a filtering process, restricting and channeling the next-step action paths taken by those lower level agents, to keep them collectively focused on what has come to seem the most fruitful overall path forward in solving the original Type 2 problem in place. I stress here that however complex this type of agent network might become, and however many organizational levels of action and management it might come to include, all agent nodes in it would in and of themselves fit the basic pattern of the artificial specialized, or single task intelligence-driven agent. (See the sketch that immediately follows this list.)
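As a purely illustrative sketch of the hierarchical coordination pattern just described, consider the following Python toy model. The agent names and scoring logic are hypothetical stand-ins invented for this example; they are not drawn from any real AI framework or drug discovery system, and each "agent" here is just a simple function standing in for a genuinely specialized single task algorithm.

```python
import random

# Hypothetical toy model of the hierarchy described above: each "specialist" is a
# single-task agent that refines a candidate partial solution and reports a score;
# the "coordinator" is itself a single-task agent whose only task is routing the
# best outputs back to the specialists as their next inputs (a filtering role).

class SpecialistAgent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # how strongly this agent tends to improve a candidate

    def refine(self, candidate):
        # One incremental, task-specific improvement step (stand-in for a real algorithm).
        return candidate + random.uniform(0, self.skill)

class CoordinatorAgent:
    """Single-task agent whose task is coordination: filter and re-share the best work."""
    def __init__(self, specialists):
        self.specialists = specialists

    def run(self, seed_candidate=0.0, rounds=5):
        best = seed_candidate
        for _ in range(rounds):
            proposals = [agent.refine(best) for agent in self.specialists]
            best = max(proposals)  # keep only the most promising partial solution and re-share it
        return best                # final best candidate after all coordination rounds

team = [SpecialistAgent("binding-site-scorer", 1.0),
        SpecialistAgent("toxicity-screener", 0.6),
        SpecialistAgent("synthesis-cost-estimator", 0.8)]
print("Best overall candidate score:", round(CoordinatorAgent(team).run(), 2))
```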

I also finished Part 2 of this series with an example of what in the film industry is sometimes called a cliffhanger. I have intimated in Parts 1 and 2 that one possible path to the development of a true artificial general intelligence might at least include the development of systems of simpler, more single function agents, and functionally specialized subsystems of them, that fit the type of hierarchically networked structure that I just touched upon above, with suitable feedback systems put in place to enable sharable learning across those entire systems and at all relevant organizational levels. I will further discuss this area of consideration and its network infrastructure implications a bit later in this series, simply noting for now that I have still just touched upon this with what amounts to a teaser preview up to here.

And for purposes of this discussion, I have raised the issues of the second and third dichotomous distinctions of this series as repeated above, with that done in Part 2. But I have not effectively connected the more general second and the more problem category-specific third dichotomy sets together yet, at least in any directly practical sense. I began so connecting them here with my reframed and expanded-for-details third bullet point as just offered above. And I begin addressing a second aspect to that here that I have not even touched upon yet, by raising a point that should be pellucidly obvious.

• Any realistic, practical sorting-as-to-type distinction between simple and general tasks is going to be, and will remain fluid and open, and both for what any given point in time’s current technology can effectively address, and for how that technology is perceived and understood.
• And we can expect that much of what would now and in our current stage of technology development, seem to be more general than simple here and of necessity so, will become simple in nature and manageable by even just single task specialized AI agents and in ways not possible now and not even imagined yet.
• Ongoing innovation, and the steady flow of disruptively novel innovations in particular, will all but certainly drive that change. Though even just the accumulated weight of smaller evolutionary changes will significantly contribute to that reframing too, and to creating the paradigm shifts that we will progressively organize all of this into as we see apparent tipping points reached in what our technologies have become capable of.

What I have been doing in this series up to here has been to offer a set of building block elements that would go into creating what will become an overall artificial intelligence information management and communications network, and a system of such networks as a whole. I am going to turn to at least briefly discuss the questions and issues of infrastructure in this: how it would arise and how it would be managed and controlled, starting in my next series installment. And as part of that, I will more fully consider at least some basic outline requirements and parameters for functionally defining an artificial general intelligence system.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Some thoughts concerning a general theory of business 22: considering first steps toward developing a general theory of business 14

This is my 22nd installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-21.)

I began addressing a set of four example scenarios as to how someone might be hired into a business, outside of the more standard framework of processes and understandings that Human Resources would develop and follow as its routine business practice, in Part 20 of this series, which I repeat here for smoother continuity of narrative:

1. More routine hire, hands-on non-managerial employees, and I add more routine and entry level and middle managers – versus – the most senior managers and executives when they are brought in, and certainly from the outside.
2. More routine positions, managerial or not – versus – special skills and experience new hires and employees, hands-on or managerial.
3. Job candidates and new hires and employees who reached out to the business, applying as discussed up to here in this narrative on their own initiative – versus – those who the business has reached out to, to at least attempt to bring them in-house as special hires and as special for all that would follow.
4. And to round out this list, I will add one more entry here, doing so by citing one specific and specifically freighted word: nepotism. Its more normative alternative should be obvious.

And since then I have addressed the first of these more novel-case scenarios, at least for purposes of this narrative, with that line of discussion leading me to the above-repeated hiring Scenario 2. My goal for this posting is to address that scenario. And as done in my now completed discussion of Scenario 1, I do so in the context of alternating between the specifics of the scenario at hand and more general theory of business considerations.

Focusing here on the more general line of discussion pursued in this series up to here: in Part 21 I analytically characterized trust as it enters into the types of decision making processes that arise in situations such as the above four scenarios, splitting off three categorical varieties of it that differ from each other, for the most part, according to how fully and directly a process or transaction participant can know what they would need to know, in order to make a best-for-them, or a best-for-their-business decision.

My goal here is to delve into at least some of the more general issues that arise when discussing Scenario 2, and use that specific business circumstance as a means for further developing my more general business theory narrative, taking an explicitly game theory approach to that here. Think of my Part 21 discussion of trust and its information based foundations, as a foundational element to that, and for what is to come here.

First, let’s consider Scenario 2 itself. And I begin by noting that this arises when a business seeks to address a specific strategically significant challenge by finding and securing the hire of one or more specific individuals, who are at least relatively uniquely capable of addressing this new or emerging need. This type of scenario is most likely to arise in the context of addressing new or expanding gaps that have been found in the skills and capabilities sets that a business can apply towards fulfilling its core business goals, at least from its current staff and management. This means making more effective the core capabilities that directly generate its incoming revenue streams and its profitability. But this can also arise if specific individuals can be found who, for their special skills and experience, could make a necessary cost center more competitively cost-effective too, freeing up resources for more profit-center use. Either way, a Scenario 2 situation as outlined above would arise when a business has to find and bring in specialized and difficult to find and secure new hires, with an overall goal of becoming or remaining as competitively effective as possible from that.

And in either case, this means finding and securing specific people:

• Where more generically available, routinely effective job candidates would not be able to offer the types of new value required in a hire here,
• And where wider candidate selective choice considerations, and the compensation benchmarking that such hiring patterns would provide, could not be used to help define or limit what the business would have to offer in overall compensation here.
• Together, the above two points rule out the possibility of meaningful, standardized marketplace norms for hiring or for compensation offered for these Scenario 2 candidates, increasing the bargaining power of the most desired job candidates here, relative to that held by a hiring business.

And as soon as the managers directly involved in these hiring decisions begin to realize that their more routine personnel policies for managing compensation levels cannot apply here, due to uniqueness of circumstance and scale of need, this second scenario becomes inevitable in one form or other.

And with my earlier discussion of Scenario 1 as noted above, and this discussion of Scenario 2 noted, I turn back to the issues of trust and how it can take different forms, depending on the availability of necessary information. And I move forward to at least begin to reframe this in more explicitly game theory terms.

I have written in this blog on several occasions, about win-win and win-lose scenarios. And as part of that, I have noted that win-lose can come to predominate as an approach taken, as a consequence of at least one of several possible factors applying to the contexts in which these games play out, as perceived and understood by the participants involved. And I begin this phase of this narrative by at least briefly and selectively listing a few of those possible strategy triggers here, that I will then explicitly consider, and certainly for Scenarios 1 and 2.

Win-lose strategies are defensive in nature, and arise when, among other possibilities:

A. One or more participants in a business transactions system see the overall pool of rewards available from participating in it, as smaller than the overall pool of value that those participants would collectively have to pay in to it, in order to participate and with a chance of gaining some share of those rewards. This is the “pie to be divided is too small” scenario, where win-lose arises as participants seek to secure what they see as their fair share (or more) and even at the expense of others not even coming close to that.
B. One or more of those participants see that while they might in effect be paid back, if with a delay, should they play cooperatively, it is uncertain, or even unlikely, that the business transaction system game that they are in will be sustainable long enough for that to happen. This is the “cut your losses and play for as quick a return on investment as possible and regardless of impact upon others” scenario.
C. And with my discussion of trust in its varying forms, as outlined in Part 21, I add in business systems friction and the types of communications and information availability challenges that engender it. The more incomplete and unreliable the timely availability of necessary information when choosing and pursuing a game strategy as a strategic option here, the more attractive and the more specifically risk reducing a win-lose strategy can appear to become. Limitations in information availability, for quality and/or timeliness, and the friction that this creates and drives, degrade the types and levels of trust that can be sustained, and that degrades any possible cooperation and any possible win-win alternative. (See the short sketch that follows this list.)
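Rationale B can be restated in simple expected-value terms: the less likely a cooperative relationship is to continue, the less the promised later payback is worth. The following minimal Python sketch uses invented, prisoner's-dilemma-style payoff numbers, assumed only for illustration, to show how a falling continuation probability tips the balance toward the win-lose play.

```python
# Hypothetical payoffs in a repeated, prisoner's-dilemma-style interaction:
# mutual cooperation pays 3 per round; a one-sided defection grabs 5 once,
# after which the relationship collapses to a fallback of 1 per round.
COOPERATE_PAYOFF, DEFECT_GRAB, FALLBACK_PAYOFF = 3.0, 5.0, 1.0

def expected_value(first_round, later_rounds, p_continue, horizon=50):
    """Expected payoff when each further round happens only with probability p_continue."""
    total, survive = first_round, p_continue
    for _ in range(horizon):
        total += survive * later_rounds
        survive *= p_continue
    return total

for p in (0.9, 0.6, 0.2):
    coop = expected_value(COOPERATE_PAYOFF, COOPERATE_PAYOFF, p)
    defect = expected_value(DEFECT_GRAB, FALLBACK_PAYOFF, p)
    print(f"p(game continues)={p}: cooperate={coop:.1f}  defect={defect:.1f}")
# The less likely the game is to last (low p), the more attractive the win-lose play becomes,
# which is exactly the dynamic Rationale B describes.
```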

And to add one more detail to this brief narrative on why people would be drawn towards pursuing a win-lose, or zero-sum strategy, I stress that the strongest driver of all (as briefly touched upon above, in a Rationale A context) is a sense of inequality and of not being offered a fair chance at receiving compensation in proportion to value and commitment invested. That point of observation in fact underlies all that I have said here regarding win-lose and its root causes. With that noted, let’s reconsider first Scenario 1 as repeated above, and then Scenario 2, with this general business theory note in mind.

Employees at businesses do not in general see themselves as directly competing with the owners of those businesses or with their more senior executives, and certainly not in big business and corporate contexts. This means Rationale A of the above list is unlikely to apply with much force for Scenario 1, unless employees as a whole come to see the leadership of the business that they work for as, in effect, looting them of their fair due by looting the business as a whole: denying, for example, adequate cost of living wage increases to their employees while claiming the business cannot afford them, while significantly increasing their own paychecks or other direct personal benefits out of the resulting “savings.”

But employees who see their work as being important to their employing business, and who take pride in doing their work very well, do not necessarily see this same type of categorical separation, as noted in the above paragraph, as existing between themselves and others who at least nominally work at the same general levels as them on the table of organization, simply because those other employees, or at least some of them, happen to know and use some currently “must have” new set of technical skills that they themselves do not need or use, such as knowledge of some new specialized computer language or tool set. The phrase “as understood and accepted by …” as a contextual qualifier, becomes crucially important here (with this paragraph addressing Rationale A and Scenario 2, as both are offered above.)

I mentioned three rationales in my above list, as to why people working at a business or with an organization might come to pursue a more win-lose, zero-sum personal strategy there. Then I focused on the first of them: the limited pie that cannot suffice to fully meet all participants’ reasonable claims. It is possible for two or more of the types of sample rationales touched upon in that list to co-occur, and I add that different participants and participant groups can in fact see and be driven by different ones, even if all involved basically follow the same general strategy at least most of the time (e.g. win-win, or in this case win-lose.) But that makes any analysis of the type I am offering here more complex. And this type of complexity of necessity means all involved face at least significant levels of Rationale C from my above list: friction stemming from faulty and incomplete information as to why others make the decisions that they do – information that would be needed for making their own decisions more effectively.

I am going to complete this line of discussion in my next series installment, and will then turn to Scenarios 3 and 4 to complete this phase of this series and its overall discussion. Then I will step back from general theories of business as a special categorical case, and delve into a set of what have become essential foundation elements for that with further consideration of general theories per se. And looking ahead, I will then turn back to the more specific context of theories of business again, where I will begin using this newly added, more-general foundational material in its more specific context.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory, as topics section VI there, where I offer related material regarding theory-based systems. And I also include this individual participant oriented subseries of this overall theory of business series in Page 3 of my Guide to Effective Job Search and Career Development, as a sequence of supplemental postings there.

Reconsidering Information Systems Infrastructure 2

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on February 22, 2018

This is the second posting to a series that I am developing here, with a goal of analyzing and discussing how artificial intelligence, and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Part 1 of this. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

Think of those two background postings as offering a starter perspective on a conceptual problem I would address in this brief series. And with their evolving starting point perspective on this overall conceptual problem noted, I began looking at and considering some of the more specific implementation level details of it in Part 1 to this series, where I began discussing artificial intelligent agents and to be more specific here, ones that are capable of learning and self-improving: ones that are capable of self-directed automated evolution in what they are and in how, and how well they perform their tasks.

There are a number of ways to parse artificial intelligent systems and the intelligent agents that comprise them. But one that is particularly pertinent here arises from distinguishing between artificial specialized or single task intelligence, and artificial general intelligence: a capability that if realized would indicate an artificial agent that is capable of general abstract reasoning coupled with a general wide ranging capability for carrying out a diversity of different types of tasks, including new ones that it has never encountered.

As of now and for the foreseeable future, artificial general intelligence is a distant goal, and one that seems to recede into the future by one more year with every passing year. But the development of progressively more and more capable single task specific artificial intelligence has proceeded at a very rapid pace. And some of the more advanced of those systems, have even at times come to be seen as if showing at least glimmers of more general intelligence, even if that is more mirage than anything else – yet.

I find myself thinking back as I write that last sentence, to how IBM’s Deep Blue computer beat the then world’s leading human chess champion, Garry Kasparov, in their six-game rematch in 1997 (see also Deep Blue versus Garry Kasparov). This marked the first time that an artificial agent: a computer running an algorithm, was able to defeat the top rated human chess player of the time in a match played under standard tournament conditions, and as such it was seen as a watershed moment in the development of artificial intelligence per se. Until then, highest level chess play was widely considered by many to be a human-only capability.

And looking back at those games and how this automated system played them, this international grandmaster in chess saw what he understood to be at least glimmers of real intelligence and creativity in his machine opponent. But I set that perception aside and assume here that at least as of now, the only artificial intelligence-based agents that need to be considered here are essentially entirely one task-only in capability and range. For purposes of this series, or at least this part of it, the possibility of artificial general intelligence does not matter. The implications and the emerging consequences of even just highly limited and specialized one task-type agents have already proven to be profound. And we have only seen the beginning of what that will come to mean and for its impact on all of us and on all that we do. So I at least begin this series with a focus on that arena of development and action.

I began Part 1 of this series by picking up on a very simple example of a self-learning and evolving algorithm-based agent, that becomes more and more adept at carrying out its task as it works at performing it. Its task itself is very simple at least from a human performance perspective, consisting of placing each of a set of objects into the right containers, and regardless of the presence of alternative containers that it could put them into and regardless of how all of these containers are positioned relative to each other. Think of this as a simple case in point, proof of principle example of how a self-adjusting and self-learning/self-changing algorithm can in fact evolve under its own action towards greater and greater efficiency and accuracy in carrying out its task.
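As a minimal sketch of how such a self-learning placement agent could work in principle, consider the following Python toy model. It is not the actual system described above; the object types, containers and learning-rate value are all invented for illustration, and the learning rule used is a simple incremental score update of the kind commonly used in basic reinforcement learning.

```python
import random

# Toy version of the object-into-the-right-container task: the agent does not know
# which container each object type belongs in, and learns from simple success feedback.
OBJECT_TYPES = ["red", "green", "blue"]
CONTAINERS = [0, 1, 2]
CORRECT = {"red": 0, "green": 1, "blue": 2}   # hidden from the agent; used only as feedback

scores = {(o, c): 0.0 for o in OBJECT_TYPES for c in CONTAINERS}

def choose_container(obj, explore=0.1):
    if random.random() < explore:                            # occasionally try something new
        return random.choice(CONTAINERS)
    return max(CONTAINERS, key=lambda c: scores[(obj, c)])   # otherwise use what has worked

for trial in range(2000):
    obj = random.choice(OBJECT_TYPES)
    container = choose_container(obj)
    reward = 1.0 if CORRECT[obj] == container else 0.0
    scores[(obj, container)] += 0.1 * (reward - scores[(obj, container)])  # incremental update

# After training, the learned preferences should match the hidden correct placements.
print({obj: max(CONTAINERS, key=lambda c: scores[(obj, c)]) for obj in OBJECT_TYPES})
```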

Then I began discussing as a perhaps “modest proposal,” a next step self-learning enabled artificial intelligence based project: a much more ambitious task goal, that in fact addresses a holy grail problem for the pharmaceutical industry and for new drug development, that I only partly tongue in cheek referred to as a more complex and demanding variation on the same basic target container identification and object placement task of the above already-achieved example:

• Identifying possible drugs that would selectively bind to the correct types of chemical reaction sites on the right target molecules that are specifically associated with specific diseases or other pathological states, that would have to be acted upon to achieve therapeutic benefit.
• Note that this can mean blocking biologically significant activity there, activating or enhancing it, or otherwise modulating or modifying it in ways that would likely lead to the achievement of therapeutic value.

The goal of this task is simply to put the right objects: the right prospective drug molecules, in the right containers: the right chemical binding sites on the right target molecules, to express this in terms of the above cited proof of principle agent and its example.

I offered this challenge in Part 1, in what is referred to as a rational drug design context, based upon a detailed and effectively usable understanding of the disease processes in question and of their normal biological counterparts, at a deep molecular level. And I freely admit that while I did note how this type of task would be solved through an algorithmic self-learning process, starting with simpler systems, I did sketch out the basic problem itself in what can only be considered a dauntingly intimidating manner. I actually did that for a reason and quite intentionally; my goal there was to posit a problem to be solved from the perspective of square one in doing so, when you know basically what you seek to achieve overall, but when you do not know any of the intermediate steps yet that you will face as you have yet to really start addressing any of this.

My goal here is to at least briefly discuss a development process for managing and resolving such complex tasks, using this one as a working example. And I begin doing that, by drawing a distinction that parallels and complements the more standard and common one of parsing artificial intelligence into specific-task-only, and general forms. That type of distinction applies to and at least categorically describes what does task work: what categorical type of agent is involved. The distinction I would offer here is one of categorically parsing the overall pool of tasks to be carried out, into:

• Simple tasks: tasks that can, essentially by definition, be encompassed in a single, progressively more efficient and compact algorithm, and
• General tasks: tasks that can and do become more open ended, both for the range of possible actions that could be taken, from which an effective task performance would have to be selected, and because they cannot simply be encompassed in a single algorithm that could be optimized over time in the manner of a simple task, as defined here.

The object placement task of Part 1 of this series, and of my second background posting as noted above, is clearly a simple task as that term is offered here. And it can demonstrably be carried out by one single task agent, acting on its own. The potential drug development task as touched upon in Part 1 and here in this posting, is a general task. But that does not mean it can only be carried out to effective completion by a general intelligence: human or artificial. What that means is that it is a task that can in principle be carried out at a specialist-only single task intelligence agent level, and more specifically by a coordinated team of such agents, if the overall problem to be resolved can be broken down into a series of specific-task, algorithm organized components: simple tasks. Note that carrying out this now-suite of subtasks requires partitioning an overall complex task into a set of simpler tasks that individual single task agents could perform. And it requires the development and implementation of command and control organizing, single task agents whose single task would be to coordinate the functioning of other (lower level) agents, feeding output from some lower level agents to others as their input, and with feedback and collectively enabled learning built into this system to help evolve and improve all agent level elements of these systems.
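The decomposition idea described in the preceding paragraph can be sketched, again purely illustratively, as a short pipeline of single task functions with one coordinating function whose only job is to wire outputs to inputs. The subtask names below borrow the drug-search framing but are hypothetical stand-ins; no real chemistry or screening logic is involved.

```python
# Hypothetical decomposition of one "general" drug-search task into a pipeline of
# "simple" single-task steps, with a command-and-control step that only routes
# outputs to inputs. Each step is a stand-in for a real single-task AI agent.

def generate_candidates(target_site):
    # Simple task 1: propose raw candidate identifiers for a named binding site.
    return [f"{target_site}-candidate-{i}" for i in range(5)]

def score_binding(candidates):
    # Simple task 2: assign each candidate an invented, deterministic "binding score".
    return {c: (i % 4) + 1 for i, c in enumerate(candidates)}

def filter_best(scored, keep=2):
    # Simple task 3: keep only the most promising candidates.
    return sorted(scored, key=scored.get, reverse=True)[:keep]

def command_and_control(target_site):
    # The coordinating "agent": its single task is routing one step's output
    # into the next step's input, nothing more.
    return filter_best(score_binding(generate_candidates(target_site)))

print(command_and_control("tyrosine-kinase-site"))
```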

• That means assembling artificial intelligence agents into at least two organizational levels, both (all) of which would in fact consist of simpler single task agents but that collectively can accomplish overall tasks that no such simple agent could perform on its own; this is where emergent properties and emergent capabilities enter this narrative.

I will discuss the issues of artificial general intelligence in light of this conceptual model in a later posting to this series, but I set that aside for now, simply noting it as a possibility worth keeping in mind.

With that offered, let’s reconsider the poster child example of a successful rational drug development initiative that I made brief note of in Part 1: the development of a new drug by researchers at Novartis International AG, for treating chronic myelogenous leukemia (CML). And I begin addressing that by pointing out a crucial detail from Part 1 that I have simply noted in passing, but that I have not discussed yet for its crucial relevance to this series and its discussion. Rational drug development as a new drug discovery and development approach is based upon a tremendously detailed understanding of the underlying biology of the disease processes under consideration. And in this case that means these researchers started out knowing that CML, and I add a group of other, otherwise seemingly unrelated types of cancer, have a very specific molecular level Achilles’ heel. CML, to focus on the specific cancer type that they did their initial research on here, consists of pathologically transformed cells that come to require an active supply of functional tyrosine kinase molecules that can bind phosphate groups to specific types of protein that they specifically produce and need, in order to activate them and give them functionality. Normal cells that these transformed cells arise from do not require this type of tyrosine kinase activity in the same way.

So these researchers knew a precise chemical target to try to bind to with their test drug possibilities, for stopping these cancer cells. And they had access to an extensive literature on, and research findings on, kinases and on tyrosine kinases in particular, and that told them precisely what parts of those molecules their test drugs would have to bind to, in this case to block their functioning. CML cancer cells require actively functioning tyrosine kinase to even survive. A drug that would effectively remove that tyrosine kinase activity from them would be expected to stop those cancer cells cold. And that is precisely what their test drug Imatinib (Gleevec) did and does.

I wrote the goal specifications of this type of drug discovery approach in Part 1, as an activity that might be carried out by artificial intelligence-based agents, assuming that the researchers who set these agents loose on a problem do so starting out with a much more modest foundational understanding of the underlying disease in question itself, and certainly at a molecular level. Here, in the case of CML, they started out with a clear road map and with a great deal of relevant information that could go into shaping the initial starting form for any search algorithm that might be pursued. This tremendously limited the search range that would have to be considered in finding possible drugs for testing.

Think of this as a Type 1 drug discovery and development problem. Now consider a disease that is known and understood for its symptoms and severity and for its mortality and morbidity demographics. And assume that along with knowing its overt overall symptoms, physicians also know a significant amount about the laboratory test findings that would be associated with it, such as blood chemistry abnormalities that might consequentially arise from it. But little if anything is known of its underlying biology and certainly at a chemical and a molecular level, beyond what might be inferred from higher organizational level tests and observations – for its impact on specific organs and organ systems and on the complete person. This is where much or even all of the research and discovery requirements listed in Part 1 for addressing this challenge in its most general terms, enters this narrative and as a matter of necessity. And I refer to these drug discovery problems as Type 2 challenges.
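A small, back-of-the-envelope sketch can show why this Type 1 versus Type 2 distinction matters so much in search terms. The numbers below are purely illustrative assumptions, not real screening-library or target counts.

```python
# Purely illustrative numbers: how prior biological knowledge (the Type 1 case)
# shrinks the search that an AI-driven screening system would have to perform.
candidate_molecules = 10_000_000      # hypothetical screening library size
target_sites_unknown_biology = 5_000  # plausible sites to consider if the disease mechanism is unknown
target_sites_known_biology = 1        # e.g. a single, well characterized kinase binding site

type_2_search = candidate_molecules * target_sites_unknown_biology
type_1_search = candidate_molecules * target_sites_known_biology

print(f"Type 2 (little prior knowledge): {type_2_search:,} pairings to evaluate")
print(f"Type 1 (clear molecular target): {type_1_search:,} pairings to evaluate")
```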

I am going to continue this narrative in a next series installment where I will delve into at least some of the conceptual and organizational details of addressing Type 1 and Type 2 problems here. And in anticipation of that, I note that I will delve into the issues of a priori knowledge based expert systems, versus more entirely self-learning and self-discovery based ones. And as briefly touched upon in Part 1, I will also discuss stepwise development of these agents from simple-problem systems exposure and mastery to more complex and perhaps real world problem exposure and mastery. And I will do so in terms of specific and general tasks as identified above and in terms of how groups of individually single task agents can be coordinately applied to address larger and more complex tasks. And yes, after developing a conceptual system and approach from these examples in order to develop a tool set for further use, I will turn to consider information systems and their architectures per se, using them in that process.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Some thoughts concerning a general theory of business 21: considering first steps toward developing a general theory of business 13

This is my 21st installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-20.)

I began Part 20 of this by offering a set of four case in point examples as to how someone might be hired into a business, outside of the more standard framework of processes and understandings that Human Resources would develop as its basic default candidate selection, vetting and onboarding system. Then, and with an initial discussion of the first of those four alternative scenarios in mind, I briefly touched upon a set of more general principles that would help to conceptually organize and explain how business systems function as networks of interacting individuals and as networks of more closely aligned groups of them, and in contexts of the type raised by those scenarios among others. (N.B. I will return in detail to the issues of groups and their decision making processes and their consequences later in this series, when considering emergent properties that arise at higher organizational levels, where convergence and divergence of opinion, and social and authority based hierarchies enter this narrative, among other considerations. I only considered what might be viewed as simplest baseline scenarios in Part 20, as far as this set of issues is concerned, assuming that any groups involved would start out as collections of individuals of like mind on all pertinent matters.)

I plan on reversing the order pursued there in this installment, and will begin with further discussion of more general principles. And then I will delve into the specifics again, as outlined at the end of Part 20 as an anticipatory closing note. And I will complete my discussion of that first scenario as the specific, application-of-theory half of this posting. But for smoother continuity of narrative, I begin this posting’s narrative by repeating those four scenarios as a group, as I will develop and present my more general organizing comments of this series installment with all of them in mind:

1. More routine hire, hands-on non-managerial employees, and I add more routine and entry level and middle managers – versus – the most senior managers and executives when they are brought in, and certainly from the outside.
2. More routine positions, managerial or not – versus – special skills and experience new hires and employees, hands-on or managerial.
3. Job candidates and new hires and employees who reached out to the business, applying as discussed up to here in this narrative on their own initiative – versus – those who the business has reached out to, to at least attempt to bring them in-house as special hires and as special for all that would follow.
4. And to round out this list, I will add one more entry here, doing so by citing one specific and specifically freighted word: nepotism. Its more normative alternative should be obvious.

With that list noted as a source of specific case in point contexts that any overarching theory would have to accommodate and account for, I built my more general discussion portion of Part 20 around a conceptual model of how businesses are functionally organized:

• A business can, among other things, be viewed as a dynamic system of overlapping and interconnected stakeholder networks, some assembled on the spot for specific purposes just to dissolve as their sources of impetus for forming are resolved, and some enduring long term.
• And long-term ones can even become effectively enshrined in the business model and in strategic and operational systems, where membership can become title and position based to allow for smoother member turnover as people come and go in a business’ workforce, and as people there change jobs within the organization. Much of this is in fact laid out, at least for likelihood of arising, in the table of organization itself, but even stable and seemingly permanent functional networks of this type can systematically cut across the table of organization too, and even intentionally so.
• Ultimately and according to this understanding, business systems and their functioning can be viewed as representing complex and evolving networks of interpersonal interactions, and interpersonal commitments. Think of the Scenario 1 hiring process and new employee onboarding process example that I have been focusing on here, as a case in point example of this much more widespread general set of phenomena.

My goal here is to at least begin to reconsider this model in more explicitly game theory terms, and with a focus there on how stable networks that are assembled from these interpersonal relationships, arise. And I begin that by noting that historical track records mean everything there. Stability in this is based on trust, and that is based on the at least presumably reliable foundation of prior and currently ongoing experience, and both with specific individuals and with the organizations that they work for.

• Let’s begin with the familiar. You know a particular individual and how well they do or do not carry out their work tasks, and how promptly or not they do this. You know how reliable or not, any collateral and supporting information would be, that they would have to provide as part of a task completion on their part – and also from past experience with them and from their track record from working with your own direct colleagues. And in this, I could be writing about a direct member of the work team that you yourself work for who reports to the same supervisor as you, or I could be referring to a more distant colleague who works in a different part of the same business as you, but who works there at least situationally as a fellow stakeholder with you in some larger overall task or work process. Or I could be writing of a professional who works for a completely different organization, who you find yourself dealing with through, for example, supply chain transactions.
• In any of these cases you might directly, individually know these people who you would work with. Or you might only know them to start with at least, by reputation and from knowledge of who they work for and from what you know of the reliability of their hiring and retention and employee training systems. Yes, that applies to new members of your own same-supervisor work team, as much as it does to the context of working with more distant task participants and stakeholders. Then, and focusing for the moment on new hires to your work team, your basis for initial trust is based on your sense of the reliability and thoroughness of the vetting and screening that new prospective hires go through there, to make sure that the right people are hired, and both for their professional skills and experience and for their qualities as people who others can readily work with.
• Think of this set of bullet points as having described individually based trust that is grounded in your own direct experience, and institutionally based trust that is based on your trust in organized vetting systems in place.
• New hires, arriving from outside of the organization as a whole, bring with them more, and I add more fundamental, unknowns in this, barring direct prior knowledge of them from earlier experience or from knowing them from outside of the hiring workplace. Then the questions of reliability and trust and of genuineness in what they claim, for example in their resumes and covering letters, can come to depend on how reliable their references might be viewed as being. I add a third category to this list with that: surrogate-based trust, where individually based trust as grounded in face to face meetings and interviews, and knowledge of the businesses that a job candidate has worked for and their hiring and retention practices, would be supplemented by insight from third party references who might themselves be largely unknown. Think of this as an example of trust in the presence of information-limited friction.
• To complicate matters, and certainly when hiring from the outside, businesses that a prospective new hire has worked for in the past are often reluctant to discuss or even admit that a now former employee of theirs might have left under a cloud, with problematical entries or worse in their personnel files there. Such disclosures can be, and often are, seen as creating legal liability in the event that a job candidate who is not hired by a prospective new employer might file suit claiming defamation of character by a former employer.

And this brings me directly to the issues of hiring from completely outside of a business, as opposed to promoting from within a business and from within the same local line of a table of organization at that, or hiring from a different functionally and perhaps geographically distinct part of a same, generally larger and more geographically dispersed company.

All three of these possibilities carry both positives and negatives. Outsiders can for example bring new ideas and insights and approaches, where insiders might start out with sets of career-based blinders in the form of same-as-everyone-else assumptions and presumptions. I mentioned in Part 20, how managers and executives in nonprofits tend to advance in their careers to higher level positions by moving between organizations, rather than by moving upward along a table of organization within some single employer. Partly that is because these organizations tend to keep their headcounts to a minimum so it is unlikely that an appropriate next step upward would even be available in-house where a career advancer is now. And that is the point that I raised there, in this context. But it is also true that nonprofits often explicitly seek out fresh blood and fresh, new ideas, and the greater breadth of experience of having worked, and successfully so at one or more other nonprofits already.

But this reaching outside also brings an increased measure of risk too – which, among other reasons, is why essentially all hiring businesses have an at least informal probationary period for new hires, as a means of validating that they actually work out before more fully committing to having them there as full time in-house members of the team.

Let me bring that back to Scenario 1 to conclude this posting, and to the issues of bringing in senior executives in particular. I write this thinking of a nonprofit that I worked for, that brought in an experienced Chief Strategy Officer from the outside as a means of bringing in new ideas and approaches for making more fundamental overall changes in the organization as a whole: strategic and operational changes that a significant amount of organizational growth had now made necessary as they had significantly ramped up the number of local chapter offices that they had to support nationally, and as they had instituted larger regional offices to help national run their overall operations in this, as a new intermediate organizational and supervisory layer. But they brought this individual in-house this way for a second reason too; their Chief Executive Officer of long-standing was beginning to prepare to step down and retire. And they wanted to bring in new blood there too, but at least as importantly they wanted and needed to find someone who really knew their systems and their corporate culture too, and who they could rely upon to be a good fit for both and from day one at that job. They brought in their new Chief Strategy Officer and kept them at that position for something over a full year, to groom them, and yes to validate them too, as a good fit next CEO. He worked out and was advanced to be their replacement CEO and has worked there in that capacity for a number of years now since then. Think of this as representing a hybrid strategy as far as balancing risks and benefits is concerned.

I am going to turn to Scenario 2 in my next series installment: special skills and experience, new hires and employees who might work hands-on or in a more managerial capacity. And I will also continue my discussion of more general principles there – applying this posting’s more general discussion to an explicitly game theory context. Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory, as topics section VI there, where I offer related material regarding theory-based systems. And I also include this individual participant oriented subseries of this overall theory of business series in Page 3 of my Guide to Effective Job Search and Career Development, as a sequence of supplemental postings there.
