Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 8

Posted in business and convergent technologies by Timothy Platt on January 22, 2020

This is my 8th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-7.)

I have been at least somewhat systematically discussing a series of eight historically grounded benchmark development steps in both the software that is deployed and used, and by extension the hardware that it is run on, since Part 2 of this series:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language-defining and a programming style-defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

And in the course of that still unfolding narrative, I have addressed the first five entries on this list as representing what amount to fundamentally mature technology examples. And I have been discussing the above Step 6 development in how software is coded since Part 6, citing it as an in-effect transition step. Language-oriented programming, in this regard, represents a developmental step in the evolution of software and computing that holds both more settled qualities and more disruptively new and novel potential, and certainly as of this writing.

As a consequence of that fact, I have offered both an initial, more legacy-facing bullet pointed description of Point 6’s language-oriented programming paradigm, and an updated Point 6 Prime version of it as well, that adds in the still emerging complexities of self-learning systems, which I repeat here as given in Part 7:

6.0 Prime: Language-oriented programming seeks to provide customized computer languages with specialized grammars and syntaxes that would make them more optimally efficient in representing and solving complex problems that current, more generically structured computer languages could not resolve as efficiently. In principle, a new, problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving it, or some class of such problems, might start out as a “factory standard” offering (n.b. as in Point 6 as originally offered in this series, where in that case, “factory standard” is what it would remain, barring human programming updates.) But there is no reason why a capability for self-learning and a within-implementation capacity for further ontological self-development and change could not be built into that.
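
To make that otherwise abstract description a little more concrete, here is a deliberately small and purely illustrative sketch in Python, not drawn from any actual product: a toy, problem-specific rule "language" with its own narrow grammar, compiled down to executable code. The rule syntax, the field names and the routing example are all hypothetical choices of mine; the only point is that a grammar matched to one class of problem can express that problem more directly than a general-purpose language would, which is the premise that language-oriented programming builds on.

import re

# Hypothetical mini-grammar: "route <queue> when <field> matches <pattern>"
RULE_PATTERN = re.compile(
    r"route\s+(?P<queue>\w+)\s+when\s+(?P<field>\w+)\s+matches\s+(?P<regex>\S+)"
)

def compile_rules(source: str):
    """Translate the toy rule language into executable Python closures."""
    rules = []
    for line in source.strip().splitlines():
        match = RULE_PATTERN.fullmatch(line.strip())
        if not match:
            raise SyntaxError(f"unrecognized rule: {line!r}")
        queue = match["queue"]
        field = match["field"]
        regex = re.compile(match["regex"])
        # Bind each rule's values as lambda defaults so it keeps its own copy.
        rules.append(lambda record, q=queue, f=field, r=regex:
                     q if r.search(str(record.get(f, ""))) else None)
    return rules

def route(record: dict, rules) -> str:
    """Return the queue named by the first matching rule, or a default."""
    for rule in rules:
        queue = rule(record)
        if queue:
            return queue
    return "default"

if __name__ == "__main__":
    rules = compile_rules("""
        route billing when subject matches invoice
        route support when subject matches error
    """)
    print(route({"subject": "invoice #42 overdue"}, rules))  # -> billing
    print(route({"subject": "weekly report"}, rules))        # -> default

A Point 6 Prime system would go one step further than this fixed toy: it would be able to revise its own grammar and its own compiled behavior over time, which is where the discussion that follows picks up.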

My point of focus in discussing that software development stage in Part 7 was on risk management issues, of a type that I see as likely to arise for software development companies that develop, market and sell such products to customers on a non-exclusive basis, for more generically faced information processing problems, where different customers would likely end up with very different ontologically developed software language products: some of which might be much more functionally effective than others. That is in fact at least potentially important, and certainly where different customer businesses that have paid essentially the same for what would ostensibly be the same software product actually end up with very different products, and with markedly different levels of performance achieved as a result. But that is only one possible liability issue that I could raise here as a consequence of self-learning software:

• The emergence of self-learning and ontologically developing and changing software can only lead to the functional death of one of software developers’ most used resources for managing emergent software bugs that might become visible in post-sales beta testing, or security issues that might come to light for it after sales and after customer installation and use: software patches.

When an initial software developer and provider can control the code that goes into its software and any given release or version of it, and stably so, they have a fixed starting point that any and all customers who have that software would consistently hold in their computers, for whatever identified version or build they have and are running. That gives these developers a fixed baseline that they can develop single, settled patches and updates to, with essentially single responses developed, tested and released that should apply identically across all copies of whatever software release they are intended to correct or update. The only exception to this uniform software release stability and consistency would be expected in the event that a copy of it on a customer’s computer were to become corrupted in some way; that would involve accidental change, arising for the most part in the customer’s information management systems and their use, and would fall outside of the responsibilities of the original software developer and provider. But as soon as that software begins to mutate on its own and by design: functionally and by paradigmatic intent, the essential stability needed for set and settled software patches evaporates.

Company A develops a great new piece of software and releases it – in this case a specialized new computer language that would address a class of widely faced business problems and needs, and that would not be exclusively sold to any one of many potential customer businesses. Then a significant security vulnerability is found in it as originally coded, six months after they began selling licensing rights to it, and effectively that same full six months after that initially standardized software package began to individuate on the computers and in the networked systems of each and every one of those buying customers. That vulnerability might or might not still reside in all of those differently self-evolving copies, but if enough have been sold it is essentially certain that some will still show it. But given the wildcard nature of self-learning, and at a software code level, a settled patch that would close that vulnerability in the initial version shipped might just break things where it would be applied … now.
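
To illustrate why that matters at a mechanical level, consider the following minimal sketch, in Python, with function names and code strings of my own devising rather than anything taken from an actual patching system. A vendor’s settled patch is implicitly built and tested against one known baseline; a careful installer verifies that baseline before applying the fix, and a copy that has rewritten itself since installation fails that check even if it still carries the original vulnerability.

import hashlib

def fingerprint(code: str) -> str:
    """Hash the deployed code so it can be compared to the tested baseline."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def apply_patch(deployed_code: str, baseline_hash: str, patched_code: str) -> str:
    """Apply the vendor's fix only if the deployed copy matches the tested baseline."""
    if fingerprint(deployed_code) != baseline_hash:
        # The copy has diverged (here, by hypothesis, through self-learning
        # driven change), so the vendor's single tested fix cannot be trusted.
        raise RuntimeError("deployed copy has diverged from the patch baseline")
    return patched_code

if __name__ == "__main__":
    factory_release = "def score(x): return x * 2  # original, vulnerable"
    vendor_fix = "def score(x): return max(0, x * 2)  # patched"
    baseline = fingerprint(factory_release)

    # A customer still running the factory release takes the patch cleanly.
    print(apply_patch(factory_release, baseline, vendor_fix))

    # A copy that has rewritten itself since installation cannot.
    mutated_copy = "def score(x): return x * 2.5  # locally self-modified"
    try:
        apply_patch(mutated_copy, baseline, vendor_fix)
    except RuntimeError as err:
        print("patch refused:", err)

And in a Point 6 Prime world, the divergence that makes that check fail would not be corruption or misuse; it would be the product working exactly as designed.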

• Self-learning can mean the same basic processing code with newly updating expert knowledge data support, but in this case, self-learning would also at the very least have to mean an emergence of processing code-level change too, and particularly where that leads to improved efficiencies as determined according to whatever goals-directed criteria that software has built into it – which at least in principle might be subject to self-learning updates too.

Note: this point has an assumption built into it that I will turn to and address in the next installment to this series. But with that simply acknowledged here for now, I continue on from it by noting that a whole new range of potential risk-creating or at least risk-enhancing possibilities arises when a software development Point 6, with its settled and even legacy grounding, is shifted to a more disruptively new and novel Point 6 Prime form. I cite self-learning, and particularly in the above bullet point’s more extreme form, as a working example there. And all of this has business financial and microeconomic implications.

I said at the end of Part 7 that I would turn here to address:

• The role of the data that would be run on this programming language and its code, both as it is developed by the offering business, and as it is used by specific purchasing businesses.

I briefly noted this complex of issues in passing in Part 6 and stated that I would delve into its issues and complications in some detail here in Part 8. And after rounding out that phase of this programming language-focused line of discussion, I added that I would step back to consider how at least some of the risk issues that I would discuss here might apply more widely to self-learning software as a sellable product too. I have at least somewhat reversed the order there, and will in fact turn to and focus on the data that would be applied to self-learning, ontologically self-developing software next, with this posting’s discussion held in mind while doing so.

And then, as already noted, I will continue on to discuss the above listed software development steps of Points 7 and 8 too: artificial intelligence programming and quantum computing. And looking beyond that, my goal in all of this is to (at least somewhat) step back from the specifics of these individual example development stages, to raise and discuss some of the general principles that they both individually and collectively suggest, as to the overall dynamics of software development and its economics.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

The challenge of strategy precluding tactics, and vice versa

Posted in reexamining the fundamentals by Timothy Platt on January 20, 2020

Strategy and tactics are often presented and discussed as if they represented the somehow-opposite poles of an either/or, starkly differentiated dichotomy. But for a variety of practical purposes and in a variety of contexts it can make more sense to see them for how they would fit together along more commonly shared continua, scaled along organizational-structural, and action-and-response based analytical dimensions.

I began addressing the second of those two approaches to thinking about strategy and tactics as business planning tool sets in a recent posting, that I decided to include in this blog as a supplemental entry to an ongoing series on general theories per se, and on general theories of business in particular; see Going from Tactical to Strategic and Vice Versa, as an Emergent Process (as can be found at Reexamining the Fundamentals 2 and its Section IX.) And my goal for this posting is to continue its basic discussion, but from a distinctly different perspective than the one that I pursued there. But that intended divergence from my earlier posting on this noted, I begin again with what are essentially the fundamentals that I built that posting from:

• The presumption of distinction and separation that can be drawn between strategy and tactics, that I just made note of at the beginning of this posting (and in my above-cited earlier one), is both very real and very commonly held.
• And it shapes how tactics and strategy are understood and how they are actually carried out.

To cite an example context where those points of observation most definitely hold, consider how often managers or leaders are judged to be taking a strategic (and therefore not a tactical) approach to understanding and organizing their efforts and those of their teams, or a more tactical (and therefore not a strategic) one. As a reality check there, ask yourself what you would read into and simply assume about the two people generically cited by first name here:

• Mary is a real strategist, always focusing on and grasping the big picture.
• Bob is a tactician and plans and acts accordingly.

But for any real Mary or Bob, they probably wear both of those hats at least occasionally and certainly as circumstances would demand that they do, if they ever actually effectively wear either of them.

This, up to here, simply continues my line of discussion of my earlier above-cited strategy and tactics posting. But now let’s add one more factor to this discussion: mental bandwidth and the range and diversity of information (along with accompanying metadata about it as to its likely accuracy, timeliness, etc) that a would-be strategist or tactician can keep in their mind and make immediate use of at any one time. Consider the consequences of a single individual being able to hold at most some limited maximum number of details and facts and ideas in general, in their mind at once, with all of that actively entering into a single at least relatively coherent understanding that they can plan and act from.

Think of this as my invoking a counterpart to the social networking-reach limitations of Dunbar’s number here, where in this case different individuals might be able to juggle more such data at once than others, or less, but where everyone has their own maximum capacity for how much information they can have in immediate here-and-now use at any given time. Their maximum such capacity might expand or contract depending for example on whether they are rested or exhausted, but they would always face some maximum capacity limitations there at any given time. And I presume here that this limitation as faced at any one time and for any given individual, remains the same whether they are pursuing a more tactical or a more strategic planning approach.

• An alternative way to think about strategy and tactics and of differentiating between them, is to map out the assortment of data details that they would each be based upon, for where they would be sourced and for how closely related they are to each other from that.

If you consider a tactics versus strategy understanding in that context, it can be argued that good strategists hold more widely scattered details that collectively cover a wider range of experience and impact, when so planning. And their selection and filtering process in choosing which data to plan from is grounded at least in part on what amounts to an axiomatic presumption of value in drawing it from as wide an organizational context as possible so as to avoid large-scale gaps in what is being addressed. And in a corresponding manner, and according to this approach to understanding strategy and tactics, good tacticians can and do bring a more immediate and localized context into clearer focus in their understanding and planning and with a goal of avoiding the trap of unaddressed and unacknowledged gaps at that level. But the same “content limit” for their respective understandings holds in both cases.

• According to this, micromanagement occurs when strategic intent is carried out with a tactical focus and with a tactician’s actively used data set, and with this done by people who should be offering strategic level insight and guidance.
• And its equally out of focus tactical counterpart would arise when a would-be tactician is distracted by details that are of lower priority in their immediate here-and-now, even if relevant to a larger perspective understanding – where their focusing on them means crowding out information and insight of more immediate here-and-now relevance and importance to what they should be planning for.

And with that, let’s reconsider the above cited Mary and Bob: strategist and tactician respectively. According to this, Mary more naturally casts a wider net, looking for potentially key details from a fuller organizational context. And Bob more naturally focuses on their more immediate here-and-now and on what is either in front of them or most likely to arrive there, and soon. And actually thinking and planning along a fuller range of the continua that more properly include both strategy and tactics as cited here and in my earlier, above-noted posting, means being able to switch between those types of data sets.

In most organizations, it is usually only a few of the people there, if any, who can effectively strategically plan and lead. And all too often it is only a few who can really effectively tactically plan and lead too. And it is one of the key roles of organized processes and systems of them in a business, that they help those who would plan and lead, either strategically or tactically, to do so more effectively and with better alignment to the business plan and its overall framework in place. Good operational systems make effective tactical and strategic management and leadership easier.

It is even rarer that an individual in a business or organization be able to effectively take either a strong and capable tactical or a strong and capable strategic approach and that they be able to smoothly transition from one to the other and back as circumstances and needs dictate. And ultimately, this scarcity probably dictates the strategy versus tactics dichotomy that I write of here, more than anything else does.

• This discussion up to here, of course, leaves out the issues of how those working data sets that strategists and tacticians use, would be arrived at and updated and maintained, as an ongoing understanding of the context that any such planning would have to take place in. I leave discussion of that to another posting.

Meanwhile, you can find this and related material at my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there, and with this posting specifically included as a supplemental addition to Section IX there.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 22

Posted in business and convergent technologies, strategy and planning by Timothy Platt on January 19, 2020

This is my 22nd installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape product design and development, manufacturing, marketing, distribution and the sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-21.)

I have been discussing the complex of issues and challenges that arise for innovation acceptance and diffusion, and of resistance to innovation and to New in general here too, since Part 16, focusing through that developing narrative on two basic paradigmatic models:

• The standard innovation acceptance diffusion curve that runs from pioneer and early adopters on to include eventual late and last adopters, and
• Patterns of global flattening and its pushback alternative: global wrinkling.

And then in Part 21 of this, I began to at least briefly discuss how the boundaries between these two models can and do blur and overlap, and certainly as so much of the basic acceptance or rejection implicit in them is now driven by the voices and pressures of social media, and of online reviews and evaluations.

Details, I have to add, are not always important or even considered there by most online participants, and certainly where one to five star valuation scales, with their up-front visibility, can in effect render any more detailed reviews moot, and with their evaluation reduced to a search for confirmation rather than a source of new and perhaps conflicting insight.

This becomes particularly important when negatively reviewing trolls and equally artifactual positive reviews are considered, that in effect game the “community based voice” that social media reviews ostensibly represent.

From a communications theory perspective, think of that as representing background static – noise in these systems, and with all of the signal degradation that noise would be expected to bring with it.
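
As a deliberately simple, purely numerical illustration of that signal degradation, and with the review counts and star values below invented for the purpose rather than drawn from any real product listing, consider how a small batch of coordinated artificial ratings can move the one headline number that most shoppers actually look at:

def star_average(ratings):
    return sum(ratings) / len(ratings)

# Suppose 40 genuine customers rate a product, centered around four stars.
genuine = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5] * 4

# A small coordinated group of negative "troll" votes...
trolled = genuine + [1] * 8
# ...or an equally artificial batch of positive ones.
astroturfed = genuine + [5] * 8

print(f"genuine signal:      {star_average(genuine):.2f} stars")
print(f"with 8 troll votes:  {star_average(trolled):.2f} stars")
print(f"with 8 fake 5-stars: {star_average(astroturfed):.2f} stars")

Eight fabricated votes against forty genuine ones are enough to shift the visible average by roughly half a star in either direction, and that shifted number is the signal most prospective buyers actually receive.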

What I am writing of here is informed choice, as might be based on valid and reliable data and insight. No one can realistically expect that, and certainly not for any area of discourse that can be considered controversial, and consequentially so, to any who might be inclined to skew the overall shared public message to their advantage.

Let’s consider this from a specifically business perspective and particularly where a business seeks to bring the innovatively new into production and to market, but in the face of headwind resistance. That resistance might be based at least in part on the underlying technology involved, on where and how those new products would be made, on where or how the raw materials that would go into them are sourced, or on any of a range of other production and distribution cycle issues. Or it might be based on the products themselves and how they might be used, both negatively and positively. The term “dual use” is often attached to products that can very specifically be used in a peaceful civilian context, but that can also be used, and directly so, for military purposes as well. But for purposes of this line of discussion, let’s generalize that designation. For purposes of this narrative, consider dual use as referring to both positive and negative usability potential as that would arise from the perspective of a beholder, where different people might see different boundaries there – if they see any such dual use potential at all, and where significantly impactful voices can sway others and even large numbers of them. See my Part 21 discussion of the Pareto principle in that regard and particularly where negative and positive, dual use capabilities can become fluid and malleable in meaning, and with all of the potential for opinion shaping influence that that can lead to.

And this brings me back to the fundamental question of what innovation actually is, and certainly in a noisy channel, controversial context. And I begin addressing that by citing two examples, both of which, unfortunately, are quite real:

1. The development of drought and disease resistant crops that can be grown with little if any fertilizer and without the use of insecticides or other pesticides, and
2. Russia’s Novichok (Новичо́к or newcomer) nerve agents.

Both quite arguably represent genuine innovations and even disruptively new ones. But reasonable people would probably view them very differently, and address them very differently in any social media driven, or other public communications.

It is both possible and easy to presume, and essentially axiomatically so, that innovation per se is basically values-neutral, and certainly as a general categorical consideration. The overall thrust of innovative change is for the most part considered a positive if its overall neutrality is considered and challenged at all, at least by those who are at all open to novelty and change, and certainly insofar as most innovation is developed and pursued with a goal of at least attempting to address specific publicly realized challenges and the opportunities that effectively resolving them might bring, and for at least specific demographic groups. Innovation that is realized, and certainly as marketable products, and the innovative process that leads to it, tend to focus on what New does and on what it could and can do, for meeting at least those perceived needs. And that perspective is in most cases valid; it is certainly understandable. But all of this just addresses innovation as a whole and even as an abstraction. Individual innovations are not, and probably cannot be considered in that way, and certainly not automatically. Individual innovations have equally particular and even at least relatively unique consequences. And they arise in equally specific contexts.

To add a third example to this list, where longer-term cumulative effects become critically important, consider:

3. Disposable single use plastic bags and other petrochemical plastics-based wrapping materials.

I stated at the end of Part 21 that I would further discuss public voices and their influence, and I then turned in this installment to reconsider innovations per se. I am going to continue that line of discussion in a next series installment, at least starting with my three here-stated examples. I will then reconsider the two innovation acceptance versus resistance models that I have been considering here, but in less abstract terms than I have up to here. And then, and on the basis of that narrative, I will reconsider individual and social group, and governmental and other organizational influence in both creating and shaping conversations about change. (And as part of that narrative, I will explore some assumptions that I built into my above-offered presumptions paragraph, as it appears between my first two innovation examples and example three.)

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Finding virtue in simplicity when complexity becomes problematical, and vice versa 20

Posted in social networking and business by Timothy Platt on January 16, 2020

This is my 20th installment to a series on simplicity and complexity in business communications, and on carrying out and evaluating the results of business processes, tasks and projects (see Social Networking and Business 2 and its Page 3 continuation, postings 257 and loosely following for Parts 1-19.)

As noted in Part 18 of this, I have been discussing trade-offs and related contingency issues in recent installments to this series, that arise when the managers and owners of a business seek to balance the sometimes contradictory needs and pressures of:

• Allowing and even actively supporting free and open communications in a business, in order to facilitate work done and in order to create greater organizational agility and flexibility there while doing so …
• While also maintaining effective risk management oversight of sensitive and confidential information.

And one of the core issues that I have raised and discussed in that context, both here and in other, topically related series in this blog, is friction, as that parametrically defines both sides of that two part dynamic and constrains its possible resolution. I focused on at least one key aspect of that line of discussion in Part 19 and then ended that posting with a statement that I offered as a provocation if nothing else, and that I would more fully analyze here, both for what it overtly says and for what is more implicitly assumed:

• “Put simplistically, a genuinely effective fully frictionless system would of necessity also be an organizationally optimized system with the right people communicating with the right people, and in accordance with effective, protective yet agile information management policy and practices. Such a business would not be too lean and sparse, so it would lack the gaps that that would cause. And it would not have functionality limiting (and friction-creating) barrier layers interposed in its operations and their execution either. Friction and its consequences challenge all of this and on all levels.”

Note that I just adjusted that assertion here by putting two key words offered in it, in italics for special emphasis. Everything in that repeated text in fact hinges on those two words: “genuinely effective.” And I would go further and argue that ultimately all business theory and all business practice, at least where that is grounded in a striving for excellence, is about operationally and strategically fleshing out those words in a practical, doable manner.

This series is in fact all about understanding and realizing in practice, the correct level of organizational structure and complexity that would be needed to support business policy and practices, that at least actively seek to reach and maintain a meaningful “genuinely effective” for any given business enterprise.

I stated at the end of Part 19 that I would turn here to discuss:

• Information and its management, and information-related risk management, and both for defining good and best practices and for carrying them out, and for monitoring them and improving upon them as change comes to demand reconsideration and adaptive business process and business strategy adjustments – proactive or reactive.
• And I added that I would discuss those higher level perspective issues here too, and as such.

I begin addressing all of this here, at what would constitute a standardized, business-specific higher level perspective that should sound comfortably familiar to essentially any business professional, and certainly if they have any strategic planning or execution responsibilities where they work: the basic business plan and the business model in place.

My above quote is pitched at too high a level, as a stand-alone assertion, to offer any real value in any particular business and marketplace context. Business plans and business models, as carefully developed, and hopefully as carefully maintained and updated too, serve to rein that type of assertion in so that it can offer value from the more specific real-world foci of understanding that they would give it.

• Ultimately, businesses are systems grounded in, and even competitively definable as process flows for information acquisition, processing and management, accumulation and use.
• I do not in any way deny or denigrate the products or services that they create and bring to market in this; that is what they do, where the first of these bullet points addresses how they do that, organizationally.

Effective business plans and the business models that they operationally and strategically lay out for development and execution, define “genuinely effective” as that would serve individual businesses as they seek to become uniquely competitive value creators. And as part of that, they define and functionally lay out what a meaningful and effective uniqueness would even be for a given business. Note that this does not necessarily mean entirely unique and disruptively innovatively so. This does not even explicitly require innovative newness at all. In fact all that “unique” calls for in this context is that a business or prospective business offer products, services or both that would hold sufficient marketplace value to a sufficiently large customer base, addressing a real perceived need that has gone unmet.

And with that noted, I turn back to my above-repeated quote and to a key word in it that I have repeatedly addressed in this blog: friction. I have written of that from a macroeconomic perspective as that word is more commonly used at that organizational level. And I have written of its more microeconomic counterpart which I refer to as business systems friction. More specifically, I have written at least briefly of its more negative side at both of those organizational levels and I have correspondingly written of its more positive sides at both of those levels too, where selectively limiting and controlling the flow of particular types of information for specific operationally and strategically defined purposes can hold positive value.

• Consider in that regard the at least attempted control of access to personally identifying information that in the wrong hands could be used to perpetrate identity theft and related problems, as both a macro and a more microeconomic challenge.
• And consider the general rubric of sensitive and confidential business intelligence per se and its attempted control.

Consider those two points as proof of principle examples of how friction per se is not always going to be problematical, and certainly insofar as “friction” is taken as a synonym for “any barrier or restriction to free and open access of any information, whatsoever, and to anyone and under any circumstances.”

I have been discussing both the risk reducing, benefits enhancing positive side, and the risk increasing, benefits reducing deleteriously negative side of that, up to here in this blog, primarily in terms of clear cut negative and positive examples, and often without explicitly linking the positives there to friction per se (as for example when discussing regulatory law as that relates to the protection of individuals and their personal information.) I am going to turn in the next installment to this series, to at least begin to map out and discuss what might be considered more gray areas, where information access and sharing, and its more limiting control, both carry mixed positive and negative potentials.

And in anticipation of that line of discussion to come, change, and rapid change in particular, and the ongoing emergence of the disruptively new in all of this, both create genuine areas of uncertainty, and of “grayness” there, and make a need for clarification there more pressingly important, even as they make that progressively more and more difficult too.

Meanwhile, you can find this and related material at Social Networking and Business and its Page 2 and Page 3 continuation pages. And also see my series: Communicating More Effectively as a Job and Career Skill Set, for its more generally applicable discussion of focused message best practices per se. I initially offered that with a specific case in point jobs and careers focus, but the approaches raised and discussed there are more generally applicable. You can find that series at Guide to Effective Job Search and Career Development – 3, as its postings 342-358.

Reconsidering Information Systems Infrastructure 13

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on January 13, 2020

This is the 13th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-12. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have been discussing artificial intelligence tasks and goals as divided into three loosely defined categories in this series. And I have been discussing artificial intelligence agents and their systems requirements in a goals and requirements-oriented manner that is consistent with that since Part 9, with those categorical types partitioned out from each other as follows:

• Fully specified systems goals and their tasks (e.g. chess with its fully specified rules defining a win and a loss, etc. for it),
• Open-ended systems goals and their tasks (e.g. natural conversational ability with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals and their tasks (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

And much if not most of that discussion has centered on the middle-ground category of partly specified goals and their tasks, and the agents that would carry them out. That gray area category, residing between tasks for tools and tasks for, arguably, people, serves as a source of transition testing and of development steps that would almost certainly have to be successfully met in order to develop systems that can in fact successfully carry out true open-ended tasks and achieve their goals.

And as part of that still unfolding narrative, and in a still-partly specified context, I began discussing antagonistic networks in Part 12, citing and considering them as possible ontological development resources within single agents that would promote both faster overall task-oriented systems improvement, and more effective learning and functioning there. Consider that as one possible approach that would go beyond simple random-change testing and any improvement that might be arrived at from it (as might for example arise in a biological evolutionary context where randomness enters into the determination of precisely which genetic mutations arise that would be selected upon for their survival value fitness.)
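
For readers who want a more concrete sense of what that kind of antagonistic pairing looks like in code, the following is a minimal sketch in Python using PyTorch, and an illustrative stand-in rather than anything from the systems discussed in this series: a generator network and a discriminator network that improve by competing, here on the toy task of imitating a simple one-dimensional data distribution. The architectures, the task and the hyperparameters are all my own assumed choices.

import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Ground truth" behavior the generator should learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: learn to tell real behavior from generated output.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce output the discriminator accepts as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"learned mean ~ {samples.mean().item():.2f}, std ~ {samples.std().item():.2f} (target: 4.0, 1.5)")

Neither network is told the answer outright; each only gets better because the other keeps pushing back, which is the within-agent, mutually driven improvement dynamic that the antagonistic network framing points to.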

I initially wrote Part 11 of this series with a goal of similarly considering open-ended tasks and goals in Part 12. Then I postponed that shift in focus, with a goal of starting that phase of this series here. I will do so, but before I turn this discussion in that direction, I want to at least briefly outline a second fundamentally distinct approach that would at least in principle help to reduce the uncertainties and at least apparent complexities of partly specified tasks and goals, just as effective use of antagonistic neural network subsystems would allow for ontological improvements in that direction. And this alternative is in fact the possibility that has been at least wistfully considered, more than any other, in a self-driving vehicle context.

I saw a science fiction movie recently in which all cars and trucks, buses and other wheeled motorized vehicles were self-driving, and with all such vehicles at least presumably, continuously communicating with and functioning in coordinated concert with all others – and particularly with other vehicles in immediate and close proximity, where driving decision mismatches might lead to immediate cause and effect problems. It was blithely stated in that movie that people could no longer drive because they could not drive well enough. But on further thought, the real problem there would not be in the limitations of any possible human driver atavists who might push their self-driving agent chauffeurs aside to take the wheel in their own hands. It is much more likely, as already touched upon in this series and in its self-driving example context, that human drivers would not be allowed on the road because the self-driving algorithms in use there were not good enough to be able to safely drive in the presence of the added uncertainty of drivers who were not part of and connected into their information sharing system, and who would not always follow their decision making processes.

• Artificial intelligence systems that only face less challenging circumstances and contexts in carrying out their tasks, do not need the nuanced data analytical capability and decision making complexity that they would need if they were required to function more in the wild.

For a very real-world, working example of this principle and how it is addressed in our everyday lives already, consider how we speak to our currently available generation of online verbally communicative assistants such as Alexa and Siri. When artificial intelligence systems and their agents do not already have context and performance needs simplifications built into them by default, we tend to add them in ourselves in order to help make them work, at least effectively enough to meet our needs.

So I approach the possibility of more open-ended systems and their tasks and goals with four puzzle pieces to at least consider:

• Ontological development that is both driven by and shaped by the mutually self-teaching and learning behavior of antagonistically positioned subsystems, and similar/alternative paradigmatic approaches,
• Scope and range of new data input that might come from the environment in general but that might also come from other intelligent agents (which might mean simple tool agents that carry out single fully specified tasks, gray area agents that carry out partly specified tasks, or genuinely artificial general intelligence agents: artificial or human, or some combination of all of these source options.)
• How people who would work with and make use of these systems, simplify or add complexity to the contexts that those agents would have to perform in, shifting tasks and goals actually required of them either more towards the simple and fully specified, or more towards the complex and open-ended.
• And I add here, the issues of how an open ended task to be worked upon and goal to be achieved for it, would be initially outlined and presented. Think in terms of the rules of the road antagonist in my two subsystem self-driving example of Part 12 here, where a great deal of any success that might be achieved in addressing any overtly open-ended systems goal will almost certainly depend on where a self-learning agent would begin addressing it from.

To be clear in both how I am framing this discussion of open-ended tasks, and of the agents that would carry them out, my goal here is to begin a discussion of basic parametric issues that would constrain and shape them in general. That means addressing general intelligence at a much more basic level than that of considering what specific types of resources should be brought to bear there – a level of consideration that would almost certainly prove to be inadequate as any specific artificial general intelligence agents are actually realized.

I have, as such, just cited antagonistic neural networks and agents constructed from them, that can self-evolve and ontologically develop from that type of start, as one possible approach. But I am not at least starting out with a focus on issues or questions such as:

• What specifically would be included and ontologically developed as components in a suite of adversarial neural networks, in an artificial general intelligence agent (… if that approach is even ultimately used there)?
• And what type of self-learning neural network would take overall command and control authority in reconciling and coordinating all of the activity arising from and within such a system (… here, assuming that neural networks as we currently understand them are to be used)?

I would argue that you can in fact successfully start at that solutions-detail level of conceptual planning when building artificial specialized intelligence agents that can only address single fully specified systems goals and their tasks – when you are designing and building tools per se. That approach is, in fact, required there. But it begins to break down, and quickly, when you start planning and developing for anything that would significantly fall into a gray area, partly-specified category as a task or goal to be achieved, or for agents that could carry them out. And it is likely going to prove to be a hallmark identifier of genuinely open-ended systems goals and their tasks, and of their agents too, that starting with a “with-what” focus cannot work at all for them. (I will discuss pre-adaptation, also called exaptation, as a source of at least apparent exceptions to this later in this series, but for now let’s consider the points made here in terms of fully thought through, end-goals oriented pre-planning, and tightly task-focused pre-planning there in particular.)

I am going to continue this discussion in a next series installment where I will at least begin to more fully examine the four puzzle pieces that I made note of here. Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Dissent, disagreement, compromise and consensus 43 – the jobs and careers context 42

This is my 43rd installment to a series on negotiating in a professional context, starting with the more individually focused side of that as found in jobs and careers, and going from there to consider the workplace and its business-supportive negotiations (see Guide to Effective Job Search and Career Development – 3 and its Page 4 continuation, postings 484 and following for Parts 1-42.)

I have been at least relatively systematically discussing a series of workplace contexts and situations here since Part 25 that call for negotiating skills and effort on the part of involved employees, whether hands-on non-managerial, or managerial. And I have been delving into the issues and complexities of a particularly challenging and all too commonly faced negotiating challenge since Part 32 that in effect encompasses within it, all of the earlier challenges discussed here and more:

• Negotiating possible downsizings and business-wide events that might lead to them, and how you might best manage your career when facing the prospects of getting caught up in that type of circumstance.

That still-ongoing line of discussion has called for a dozen and more consecutive series installments here because the specific why and how of any negotiations that would be entered into, in navigating such uncertainty, are crucially important to any success that might be achieved there. So I have been working my way through a series of six specific downsizing scenarios and their particular issues; my goal there has been to offer a wide enough diversity of perspective here, so as to offer at least some value to a reader if they find themselves confronted by some “none of the above” seventh downsizing scenario instead.

So far I have at least briefly addressed the first four of the scenarios that I would explicitly discuss here, and my goal for here is to turn to and delve into my initial list’s Scenario 5. But I begin doing so by repeating both that scenario and Scenario 4 as considered in Part 42, so I can refer back to the latter for purposes of comparison:

4. Downsizings, or at least a determination of who would be let go in them, are not always just about cutting down on staff to reduce redundancies and to bring the business into leaner and more effective focus for meeting its business performance needs. They can also be used as opportunities to cut out and remove people who have developed reputations as being difficult to work with, or for whatever reasons the managers they report to would see as sufficient justification. Downsizings can be and are used as a no-fault opportunity for removing staff who do not fit into the corporate culture or who have ruffled feathers higher up on the table of organization, even if they would otherwise more probably be retained and stay.
5. And to cite another scenario that can be more Who oriented, and certainly from the perspective of who is bringing it about, a new, more senior manager who wants to do some personal empire building within their new employer’s systems can use a downsizing and reorganization in their area of oversight responsibility to put their name on how things are done there. Consider this a confrontational career enhancement tactic on their part.

Crucially important here: the above repeated Scenario 4 and its downsizing selection process are largely influenced by, if not shaped by, the people who would be singled out for dismissal. Note that I am not assuming blame or fault of any type on their part. I am only assuming that for whatever reason they present themselves in such a way as to make them seem to be outsiders. And they do so in ways that at least one manager who is senior to them would see as problematical. But Scenario 5 is entirely grounded in the egos and ambitions of the people who would carry it out, and with the majority of that coming from whoever among them can best be considered a prime instigator there. The people who would be singled out for dismissal in a Scenario 5 context could be anyone.

Yes, ego and ambition, and bias and other considerations, coming out of the people who would carry it out, enter into Scenario 4 too. But that side to Scenario 5 is the only side of any real significance for it. And that simple fact is the single most important consideration to bring to the table when seeking to negotiate with such an individual.

Who, more specifically, brings about a Scenario 5 type of downsizing? This type of scenario can only take place when a manager who is driven by personal ambition is pushing for it.

• And they have to be well enough positioned in the business to be able to move this from wistful intention on their part into realized action. So they are most likely going to be middle managers at the very least. And if they are just middle managers, they can most probably only successfully push for this if they have special difficult-to-find or replace skills and experience that would prompt more senior managers and executives there to want to please them.
• But they are unlikely to be senior executives themselves there. They are unlikely to be C level officers, because a manager in that level of position would be empire building in a functional area that they already fully control, at least insofar as anyone in a large business organization can truly own their area of responsibility in a business.

Look for this scenario as coming from people who are still on their way up in their career path, as they see matters, and who have enough power and influence to be able to have real impact from that. And at the same time look for people who are more focused on their own careers and their own sense of self-worth and value, than they are on the business they work for or the people they work with.

• While it might be too anecdotal to consider this a more generally applicable principle, I add here that I have seen this scenario play out more as a stepping stone move than anything else, where the prime mover manager behind it is literally trying to develop resume bullet points that they can bring with them as they seek out bigger and better, and elsewhere if need be. I have seen career builders use this as they seek next step up opportunities, moving from business to business on the strength of the performance points they can amass in their resumes. And that is why I added the phrase “new, more senior manager” in the wording offered in my initial Scenario 5 bullet point descriptor.

And with that offered, I turn to the issues of negotiating in such a context, so as not to become an empire building castoff. And this is a scenario where preparation and planning are everything.

• People with this type and level of ambition are generally pretty open about that. They actively seek out opportunities to garner recognition as up and coming stars, from those above them on the table of organization. And they actively push to create opportunities to get that type of recognition from higher up if enough of them do not arise for them anyway, so as to meet their self-perceived needs.
• Use this fact both to identify these managers and to map out what their goals and ambitions are so you can approach them understanding them, and as thoroughly as possible.
• Think through and understand their resume-oriented performance and achievement, bullet point-shaped plans too, so you can approach them if and when you need to, with an equally detailed understanding of what they would do that would call for dismissals.
• As noted above, these individuals are would-be rising stars and most of the time their goal is to achieve C level executive status, if not Chief Executive Officer status and title. But focus here on what you can discern of their more immediate here-and-now goals and intentions. Plan and be prepared to negotiate and act with that in mind.
• This is key to any success that you might achieve here; you are most probably only going to succeed in this type of negotiating context if you can present yourself as a significant source of value to them for what they seek to do, and not just someone locked into what they would do away with.

What can you do that would make you a part of the solution that they seek to build and not just part of the problem that they see as standing in their way there?

As a crucially important note, the above discussion thread might suggest that any such downsizing is entirely contrived. But this basic scenario also applies to business settings where reasonable claims for needed change can be made too. In fact there are almost certainly going to be elements of real, arguably convincing need for a more genuinely business-supportive downsizing if an also-Scenario 5 downsizing effort is to take place and succeed (see Part 39 and Part 40 for their discussions of more needs-based downsizing scenarios, for comparative purposes here.) So plan and prepare for a Scenario 5 context, with an awareness of any more genuinely business needs-based staff reductions that might be argued for too. And be prepared to negotiate in those terms, as a Scenario 5 manager is never going to actually admit that they are empire building as I have been discussing that here; they are always going to justify and carry out their plans here on the basis of an argument of real business needs. (Yes, to put this somewhat cynically, consider the negotiations that you would enter into in this context as negotiating by euphemism, as the issues overtly discussed do not necessarily exactly match the actual reasons for these conversations being necessary.)

And as a final thought here, this is definitely a scenario where you need to think through what is and is not important to you in your life and in your jobs and careers planning. Let’s assume that you can convince this type of manager to keep you on as they set out to enhance their position at the business, and enhance their resume in the process. Do you really want to report to and work for this type of person and for what might be a significant period of time as they seek out their next career move? What other options or opportunities do you have? What other opportunities can you develop, if for example you get to stay on now, and if you use that as an opportunity to look for New for yourself, where you have a steady paycheck and benefits such as employee health insurance coverage while doing so? (See my series: Should I Stay or Should I Go? as can be found at Guide to Effective Job Search and Career Development – 3, postings 416 and following for a more detailed discussion of the issues raised here. I would particularly note my postings on working with difficult people as a starting point there.)

I am going to continue this narrative in a next series installment where I will discuss the sixth and final downsizing scenario that I will explicitly address in this series:

6. And as a final area of consideration here, consider the last-in, first-out approach as it can, by default, impact younger employees and more recent hires, regardless of what they do and can do that might be needed by the business. Businesses with a strong union presence often follow that approach, though they are not the only ones that do. But this type of retain or let-go determination can also be skills-based, or location-based if for example it is decided to close a more peripheral office that might not have been as much of a profit center as desired or expected. So even there, it might be possible to argue a case for being retained at a job.

My goal there will be to discuss standardized personnel processes, and career-oriented exceptions and exceptions handling as strategically considered options. And then as already noted I will turn to and discuss severance packages and their issues, as crucial negotiating considerations here.

Meanwhile, you can find this and related material at Page 4 to my Guide to Effective Job Search and Career Development, and also see its Page 1, Page 2 and Page 3. You can also find this and related postings at Social Networking and Business 3, and also see that directory’s Page 1 and Page 2.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 25: some thoughts concerning how Xi and Trump approach and seek to create lasting legacies to themselves 14

Posted in macroeconomics, social networking and business by Timothy Platt on January 8, 2020

This is my 26th installment in a progression of comparative postings about Donald Trump’s and Xi Jinping’s approaches to leadership, as they have both turned to authoritarianism and its tools in their efforts to succeed there. And the most recent 14 of those postings have focused on legacy building as both Trump and Xi seek to build for that. I continue developing that narrative here, with a goal of more explicitly discussing Trump’s and Xi’s approaches to control, and both as they variously seek to lead and shape their nations, and as they seek to build personal legacies out of that, and fame for themselves in doing so.

I initially intended to continue a discussion of Xi Jinping and his strategic planning, with a focus on his legacy building efforts, in Part 24 of this series. Then Donald Trump was impeached by the United States House of Representatives and the ideological lines that divide that nation politically became more starkly drawn than they have ever been, both in the narrower context of the Trump administration itself, and as self-proclaimed conservatives and ultra-conservatives, and liberals and progressives have fought for the soul of the nation.

I decided to postpone my next Xi posting to now but find myself continuing a Trump-centric narrative again here too as we all face the heightened risk that has been imposed upon all of us by what arguably can be seen as an act of madness on Donald Trump’s part: the assassination of Iran’s senior-most military officer.

Some might question my choice of that word there: assassination. But let’s put my use of it in perspective. If the supreme leader of Iran had ordered the death of the most senior officer in the United States military: the chairman of the Joint Chiefs of Staff, and that directive was carried out with a targeted and effective bombing, president Trump and his followers, and his political opponents in the United States and their followers too would virtually all call that an assassination. So I use an intentionally loaded word here for a reason; it is the exact same word that would be all but universally used in the United States if the direction of this action were reversed. And I will add that the United States Congress would in all likelihood be facing a vote on a declaration of war coming out of that too, and as an all but immediate response and call for action. So we should not be too surprised to hear the Iranians calling out for vengeance. We should not be all that surprised if they call the killing of their Major General Qasem Suleimani an act of war.

Why was this done? According to president Trump and his spokespersons, he ordered this killing to prevent an imminent attack by Suleimani-led forces on Americans and upon critically important American interests. But the details of that proclaimed imminent threat have not been forthcoming. And the confusion over the why of this action, coming out of America’s intelligence community as well as from other sources, raises questions as to whether that claimed threat was real or not.

I decided to write this posting when I first heard of this event, but held off on doing so until now because I was hoping to hear some clarification on the why of it first. Any such clarification is still to come, and for essentially everyone. And that leaves at least one other possibility as to why president Trump would so act, that silence in all of this allows to fester and grow.

That is the possibility that when Trump was briefed on the options available to him for dealing with the ongoing Iran versus United States conflict, as it has continued as what amounts to business as usual, he chose the most extreme option that he heard, and regardless of the risk that it created and regardless of a lack of specific reason for carrying it out – as a distraction that he could present to the world, from his impeachment and his impending trial.

According to that possible narrative, Trump thought that if he did this and the leadership of Iran backed down, with only minor and low level reactions carried out far from American soil, he would look strong and decisive. That could only strengthen his support in the face of the constitutional crisis that he is embroiled in. And if the Iranians responded in a more decisive way, and with a level of impact that could not be brushed aside, then they would have attacked the United States. And traditionally, Americans really have rallied around the flag when their country has been attacked in any way. And polling numbers in support of a sitting president have always gone up then too. In this case that popularity bump would happen going into his Senate trial.

I am not in fact claiming that this path to the why of it is what Donald Trump actually followed here. I point it out because this attack should have been seen up-front, and by all concerned, as deeply polarizing and damaging to any effort to retain a civic discourse in this country, where that is an essential prerequisite for a democracy to function and to endure. People who disagree still need to be able to talk together and to work together, towards achieving shared goals in response to shared needs. This type of military action can only be seen as militating against that, and certainly when even the basic why of it is shrouded in mystery, confusion and acrimony.

But this did happen. The real question now is one of what comes next, and the possibilities there and the more likely of them in particular are not all that reassuring:

• The first of them is already happening, with the Iranian government openly and loudly stating that it will no longer abide by any of the terms of the agreement that it entered into during the Obama administration to limit its nuclear technology development programs, so as to preclude its building an atomic bomb. Donald Trump unilaterally ended that agreement from the United States side, because it was an accomplishment reached by his Democratic Party predecessor in office. The Iranians still abided by some and even much of what had been agreed to, in spite of that abrogation of responsibility on the part of this American president. But that is now over. And without citing references or sources, I feel fairly confident in suggesting that Iran will be able to build an atomic bomb within about three weeks of when it has completed enriching at least one full critical mass of uranium-235 to weapons-grade purity. They all but certainly have everything else ready, or at the very least very close to finalized fabrication. This killing probably gave Iran the bomb. And with that, the balance of power in the Middle East, and any opportunity to meaningfully shape or even just influence it, and certainly from the United States, will end.
• And yes, Iranian forces have now attacked American forces in Iraq, and at a time when the Iraqi government is demanding that all American forces leave their country.
• This assassination was a tremendous holiday gift for the ISIS forces that America has been combating in Iraq and elsewhere, and certainly from how the Iraqi government has stated that it wants all American military presence out of their country (and away from ISIS and other terrorist organization strongholds there.) And as the gift that keeps on giving, ISIS also gains here from the killing of an Iranian general who was leading an anti-ISIS front from his nation too. (Do you remember when a true and avowed enemy of one of our most dire enemies could be at least marginally acceptable even if not our friend?)

All of that has already begun, and the prospect of an Iranian atomic bomb has been actively set in motion. And more of that will likely continue – there, well away from American soil. But I write this brief note while waiting to see if one other possibility comes to pass too: a cyber-attack against American interests. And there are grounds for that, going back at least as far as a 2010 American and Israeli-launched attack on Iran using what was then a cutting edge cyber-weapon: Stuxnet.

It does not matter if this killing is seen in the United States as an assassination or not. It does not matter if it is seen there as an act of war. And it does not matter if any considered cyber-attack that might be carried out by or at the behest of the Iranian government against the United States or its interests, would be considered there to be a retaliatory response to actions taken against Iran or as an initial and largely unprovoked attack. These possible understandings as presumed from an American perspective do not matter, at least as far as they would shape Iranian action. That will depend on what words they use and why, and on how pressured they see themselves to act, if they are to retain their sovereign independence and not present themselves to the world as weak and vulnerable and as an easy target. What happens next will depend on whether they genuinely see all of this in terms of “assassination” and “war.” And I sincerely hope that my more dire concerns here do not come to pass … in spite of the emerging realities that have brought me to hold them.

This now more dangerous world that we live in is a part of the Trump legacy too, regardless of what does or does not happen in the US Senate when it holds whatever form of trial it will enter into, regarding those Trump impeachment charges. This is part of his legacy, and it is an important part of it, and certainly if Iran does build and test detonate an atomic bomb of its own.

My hope is that the level of crazy coming out of the White House will tone down enough now, so that I can in fact return to my intended narrative flow in this series. But however that turns out, I find myself finishing this note with one final thought. I very clearly remember one of president Trump’s (apologist?) spokespersons declaring in front of open microphones and cameras that “president Trump thrives on chaos.” Personally, I am not sure how his putting himself into a position where he would be impeached and face possible forced removal from office could be seen as thriving. But let’s put that in the same box as his “mentally stable genius” claims and move on, counting all of that as political campaign talk. Unfortunately, his actual realized legacy is probably going to end up in that same box too. What else might end up there as well?

I will continue writing to this series. Meanwhile, you can find my Trump-related postings at Social Networking and Business 2 and its Page 3 continuation. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, at Ubiquitous Computing and Communications – everywhere all the time, and at Social Networking and Business 2 and its Page 3 continuation.

Building a startup for what you want it to become 41: moving past the initial startup phase 27

Posted in startups by Timothy Platt on January 7, 2020

This is my 41st installment to a series on building a business that can become an effective and even a leading participant in its industry and its business sector, and for its targeted marketplaces (see Startups and Early Stage Businesses and its Page 2 continuation, postings 186 and loosely following for Parts 1-40.)

A significant proportion of this series has revolved around the issues of business intelligence: of raw and processed information that is of organizational performance-enabling competitive value, and as it can serve as a basic and even fundamental business shaper and driver. And as a part of that overall narrative I have been focusing on the issues of third party business intelligence providers as sources of such information as a commoditizable product, and on the issues and challenges of startups and early stage businesses, and of small businesses in general as they seek to become and remain competitive, by working with larger and more powerful enterprises that are business intelligence developers and conduits of it, as a key part of their own overall business models.

I began discussing Facebook and its emerging role in small business and marketplace ecosystems, as an essential platform that businesses of all types increasingly connect with their own customers through (see Part 38 and following.) And to cite a point of detail relevant to this, that essentially any reader would see as increasingly familiar if they ever find need to contact such a business electronically (e.g. online, or alternatively by phone): an increasing proportion of them use Facebook to connect with their customers and with the larger marketplace as a whole. And many of them have come to use their Facebook presence as their only customer supportive channel of communications. And this can include outsourcing sales and customer fulfillment services to the Facebook platform, and certainly for customers who need assistance with a purchase already made, or in placing an order in the first place. And this increasingly includes customer feedback opportunities as well, and certainly if a customer seeks to offer a review or rating that would go directly to the business in question and not to an outside agency business such as Yelp.com.

I have addressed this topic area from a risk management and a cautionary-note perspective – not to argue against businesses setting up a Facebook presence and using it, but arguing a need for their owners and managers to understand what that operational and strategic decision actually involves, and for its full anticipatable range of pros and cons. And I continue pursuing that approach here, by repeating a question that I raised at the end of Part 40 but that I held off on addressing until here:

• What of legitimate smaller third party businesses that, for example, find that they have to market through a Facebook if they are to remain competitive and keep their doors open? (Nota bene, I posed that question in the context of just having briefly and selectively considered Cambridge Analytica and their Facebook data-based efforts, hence my “legitimate smaller businesses” phrasing.)

I begin addressing that question for its in-practice complexities, by parsing risk as addressed here into two broad categories:

• The judiciary system as that would be brought into play when alleged violations of personal privacy and related laws are raised and formally prosecuted, and
• The court of public opinion, and particularly as that plays out in a social media context and where messages: pro and con and on anything, can represent the legitimate views of real individuals, or be faked and in ways that are difficult at best to identify as such.

And I begin addressing this with a further consideration of regulatory law and judiciary enforcement of it as a source of risk and opportunity-shaping factors: a topic area that I have turned to on a variety of occasions in the past as I have written to this blog. My goal here is to build upon that already offered foundational discussion, and with that in mind I offer these references as relevant background material for what is to follow:

• Considering a Cost-Benefits Analysis of Economic Regulatory Rules (as can be found at Outsourcing and Globalization, as postings 23 and following),
• Making Regulation Work, (as can be found at Macroeconomics and Business, as postings 118 and following), and
• Regulatory Oversight, Prudent Business Practices and Risk Allocation (as can be found at Macroeconomics and Business, as postings 125 and following).

The first of those reference work series is listed in a directory with a globally spanning focus of attention, and for a reason. As soon as a business goes online in ways that would be visible to an open marketplace and a wider community, for whatever reason and with a goal of carrying out any customer-facing functions or services there, it ceases to be entirely local, or even just confined to its own nation, and it becomes global. Crucially important here, this means that if a “local” business seeks to sell online, it has to expect that at least some non-zero percentage of its sales will be to customers living and working in other countries – with their privacy laws, and their laws in place for safeguarding sensitive personally identifiable information. And this means that the business selling there needs to be able to show that it adheres to those now-involved, foreign-enacted and enforced laws as they address those issues, just as it has to address those types of law as they hold in its own home country.

And laws change, and court rulings and case law precedent can create new interpretations of existing laws that can change how they would have to be met if a business is to remain compliant with them. This much, I have already discussed, and in some detail, in the above-cited series and elsewhere in this blog too. But I would add one more detail to that brief summary here that is particularly relevant to this series, and certainly as I have been developing it up to here:

• Big data, and the progressively more revealing representational models that can be assembled from it by connecting the dots across vast numbers of data fields and types of them in descriptively and predictively capturing all of us, mean that simply blocking or even deleting protected types of sensitive data cannot prevent it from emerging from the assembly of supposedly safe data – as noted here in Part 40 (see the brief illustrative sketch that follows this list).
• And the growing inevitability of that happening, essentially regardless of any realistic effort to prevent it, means that even the possibility of safeguarding personal confidentiality, or privacy, is evaporating.
• So welcome to what has increasingly become our transparent goldfish bowl world. And this brings me to several questions that I will at least simply raise here as thought pieces:
• If we are living in an increasingly transparent world, where personal confidentiality is more tightly constrained and limited than ever before, and where privacy as we have traditionally known it is too … then what is to replace them?
• What should and can we safeguard, and how and from whom and under what circumstances? And what type of breach of those protections can and should we allow and from whom and for what reasons and under what circumstances?
• What I am writing of here is a rapidly emerging need to fundamentally rethink a series of issues and understandings that have for the most part remained axiomatically set, and from the days of our grandparents’ youth and from before then too: fundamental assumptions that we take so for granted that we do not tend to explicitly consider them, even as they are already beginning to fail us through technology-driven obsolescence.
• And this, of course, all raises still more uncertainty for any businesses that gather, process, store, or purchase access to and use from this flood of raw data and processed knowledge.
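
To make the first of those bullet points more concrete, here is a minimal, hypothetical sketch in Python of how a linkage, or re-identification, attack works. Every name, column and value here is invented purely for illustration; the point is simply that joining “anonymized” records with an ostensibly harmless public dataset on shared quasi-identifiers can re-attach identities to supposedly protected data:

```python
# A minimal, hypothetical sketch of a linkage (re-identification) attack.
# All data and column names here are invented for illustration only.
import pandas as pd

# "Anonymized" records released by a business: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth date, gender) left in place.
released = pd.DataFrame({
    "zip": ["10001", "10001", "94105"],
    "birth_date": ["1980-03-14", "1975-07-02", "1990-11-23"],
    "gender": ["F", "M", "F"],
    "purchase_history": ["medication A", "medication B", "medication C"],
})

# A separate, ostensibly harmless public dataset (for example a voter roll
# or a profile dump) that carries names alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["Alice Example", "Bob Sample"],
    "zip": ["10001", "94105"],
    "birth_date": ["1980-03-14", "1990-11-23"],
    "gender": ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to the "safe"
# released data: the sensitive purchase history is now personally identifying.
reidentified = released.merge(public, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "purchase_history"]])
```

The joining fields and datasets differ from case to case, but the mechanism is the same: the sensitive data never has to be released with names attached in order to become personally identifying once it is combined with other, supposedly safe data.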

And that point of detail brings me directly to the issues of the court of public opinion. And I will begin addressing that source of challenge in all of this by noting that while the legal side to confidentiality can change, with once-allowed and even required business practices becoming outmoded and in need of change, public opinion is constant change. And business practices and their outcomes do not need to violate the law, anywhere, for them to violate public expectations, and particularly where that is driven by online social media and related channels – and where so much of that has been gamed by online trolls and other public opinion manipulators.

This is a posting about uncertainty, and about living and working with it as a constant and unavoidable contextual presence. I am going to build from its narrative in a next series installment where I will at least begin to discuss strategic and operational approaches for better managing risk and opportunity in this type of rapidly evolving and emerging new context. And as part of that narrative, I expect to at least begin to offer some thoughts regarding my above listed but here-unaddressed questions.

Meanwhile, you can find this and related material at my Startups and Early Stage Businesses directory and at its Page 2 continuation.

Moore’s law, software design lock-in, and the constraints faced when evolving artificial intelligence 10

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on January 4, 2020

This is my 10th posting to a short series on the growth potential and constraints inherent in innovation, as realized as a practical matter (see Reexamining the Fundamentals 2, Section VIII for Parts 1-9.) And this is also my seventh posting to this series, to explicitly discuss emerging and still forming artificial intelligence technologies as they are and will be impacted upon by software lock-in and its imperatives, and by shared but more arbitrarily determined constraints such as Moore’s law (see Parts 4-9.)

I have been focusing in this series on the hardware that would serve as a platform for an artificial intelligence agent, since Part 4 of this series, with a goal of outlining at least some of the key constraining parameters that any such software implementation would have to be able to perform within. And as a key organizing assumption there, I have predicated this entire ongoing line of discussion in terms of integrated circuit technology (leaving out the possibilities of or the issues of quantum computing in the process.) And after at least briefly discussing a succession of such constraints and for both their artificial and natural brain counterpart systems, with comparisons drawn between them, I said at the end of Part 9 that I would explicitly turn to consider lock-in and Moore’s law in that context and as they apply to the issues raised in this series. I will pursue that line of discussion here, still holding to my initial integrated circuit assumption with its bits and bytes-formatted information flow (as opposed to quantum bit, or qubit formatted data.) And I do so by addressing the issues and challenges of Moore’s law from what some might consider a somewhat unexpected direction.

• Moore’s law is usually thought of as representing an ongoing exponential expansion (a doubling at regular intervals) of the circuit density, and corresponding hardware capability, in virtually all of our electronic devices, and without any significant accompanying, matching cost increases. This ongoing doubling has led to an all but miraculous increase in the capabilities of the information processing systems that we have all seemingly come to use and to rely upon throughout our daily lives. And in that, Moore’s law represents an exponentially growing expansion of capability and of opportunity, and a societally enriching positive. (A brief back-of-the-envelope sketch of that doubling arithmetic follows this list.)
• But just as importantly, and certainly from the perspective of the manufacturers of those integrated circuit chips, Moore’s law has become an imperative to find ways to develop, manufacture and bring to market, next step chips with next step doubled capability and on schedule and without significant per-chip cost increases, and to keep doing so,
• Even as this has meant finding progressively more sophisticated and expensive-to-implement work-arounds, in order to squeeze as much increased circuit density out of what would otherwise most probably already be considered essentially mature industrial manufacturing capabilities, in the face of fundamental physical law constraints and again and again and again ….
• Expressed this way, Moore’s law and lock-in begin to sound more and more compatible with each other and even more and more fundamentally interconnected. The pressures inherent to Moore’s law compel quick decisions and solutions and that adds extra pressures limiting anything in the way of disruptively new technology approaches, except insofar as they might be separately developed and verified, independently from this flow of development, and over several of its next step cycles. The genuinely disruptively new and novel takes additional time to develop and bring to marketable form and for the added uncertainties that it brings with it if nothing else. The already known and validated, and prepared for are easier, less expensive and less risky to build into those next development and manufacturing cycles, where they have already been so deployed.
• But the chip manufacturer-perceived and marketplace-demanded requirement of reaching that next Moore’s law step in chip improvement, every time and on schedule, compels a correspondingly rapid development and even commoditization of next-step disruptively new innovations anyway. As noted in earlier installments to this series, continued adherence to the demands of Moore’s law has already brought chip design, development and manufacturing to a point where quantum mechanical effects, and the positions and behavior of individual atoms and even of individual electrons in current flows, have risen to chip success-defining importance.
• And all of this means decisions being made, and design and development steps being taken that rapidly and even immediately become fundamentally locked in as they are built upon in an essentially immediately started next-round of next-generation chip development. Novel and new have to be added into this flow of change in order to keep it going, but they of necessity have to be added into an ever-expanding and ever more elaborate locked-in chip design and development framework, and with all of the assumed but unconsidered details and consequences that that entails.
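
To put rough numbers to the first bullet point above, here is a minimal back-of-the-envelope sketch in Python of the doubling arithmetic involved. The two-year doubling period and the starting transistor count are illustrative assumptions only, not claims about any particular product line:

```python
# A back-of-the-envelope sketch of the doubling arithmetic behind Moore's law.
# The starting count and the two-year doubling period are illustrative
# assumptions, not measured industry figures.
def transistors_after(years: float, start_count: float = 2_300.0,
                      doubling_period_years: float = 2.0) -> float:
    """Transistor count after `years` of uninterrupted regular doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# Five decades of doubling every two years is 25 doublings: a factor of
# 2**25, which is more than 33 million times the starting count.
for years in (10, 20, 50):
    print(f"after {years:2d} years: ~{transistors_after(years):,.0f} transistors")
```

Twenty-five doublings, roughly fifty years at that assumed pace, multiplies the starting count by a factor of more than 33 million, and that is the scale of improvement that chip manufacturers have been expected to keep delivering, on schedule and without matching per-chip cost increases.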

What I am writing of here amounts to an in-principle impasse. And ultimately, and both for computer science and its applications, and for artificial intelligence as a special categorical case there, this is an impasse that can only be resolved: that can only be worked around, by the emergence of the disruptively new and novel that moves beyond semiconductor physics and technology, and the types of electronic circuitry that are grounded in it.

Integrated circuit technology as is currently available, and the basic semiconductor technology that underlies it have proven themselves to be quite sufficient for developing artificial specialized intelligence agents that can best be considered “smart tools,” and even at least low-end gray area agents that at the very least might arguably be developed in ways that could lead them beyond non-sentient, non-sapient tool status. (See my series: Reconsidering Information Systems Infrastructure, as can be found at Reexamining the Fundamentals, as its Section I, for a more complete discussion of artificial special and general intelligence agents, and gray area agents that would be found in the capabilities gap between them.) But advancing artificial intelligence beyond that gray area point might very well call for the development of stable, larger scale quantum computing systems, and certainly if the intended goal being worked toward is to achieve true artificial general intelligence and agents that can claim to have achieved that – just as it took the development of electronic computers, and integrated circuit-based ones at that, to realize the dreams inherent in Charles Babbage’s plans for his gear-driven, mechanical computers in creating general purpose analytical engines: general purpose computers per se.

I am going to continue this line of discussion in a next series installment where I will consider artificial intelligence software and hardware, from the perspective of breaking away from lock-ins that are for the most part automatically assumed as built-in requirements in our current technologies, but that serve as innovation barriers as well as speed of development enablers.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3 and also see Page 1 and Page 2 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory as topics Section VIII. And also see its Page 1.

Donald Trump, Xi Jinping, and the contrasts of leadership in the 21st century 24: some thoughts concerning how Xi and Trump approach and seek to create lasting legacies to themselves 13

Posted in macroeconomics, social networking and business by Timothy Platt on January 3, 2020

This is my 25th installment in a progression of comparative postings about Donald Trump’s and Xi Jinping’s approaches to leadership, as they have both turned to authoritarianism and its tools in their efforts to succeed there. And the most recent 13 of those postings have focused on legacy building as both Trump and Xi seek to build for that. I continue developing that narrative here, with a goal of more explicitly discussing Trump’s and Xi’s approaches to control, and both as they variously seek to lead and shape their nations, and as they seek to build personal legacies out of that, and fame for themselves in doing so.

I initially intended to write this series installment as a continuation of an unfolding narrative that I have been offering regarding Xi Jinping, focusing on his vision and on its consequences as he seeks to realize it. And as a key part of that, I would continue an already begun historical narrative that I have pursued as I have sought to put Xi into a more meaningful, understandable perspective. Emerging news and events have forced a change and a refocusing here, with a continuation of my parallel discussion of president Trump and his story in this series installment instead.

Donald Trump, to put matters bluntly, is in meltdown. His ongoing flood of tweets, as he shares them with his supporters and with the world at large, has become more and more shrill and pressured as he lashes out in all directions through his ongoing flow of online rage and resentment messaging.

He has been impeached by the United States House of Representatives, fundamentally challenging his self-image – even when he sees that as coming from his enemies. His unconsidered bluff and bombast regarding North Korea, the Middle East and essentially everywhere and everything else, and certainly as far as his foreign policy is concerned, are all coming back to haunt, if not him, then his Republican Party and those who have tied their fortunes to his.

Members of his political party, as backed by ideologically aligned “news” sources such as Fox News and Breitbart may be able to control the message as far as his domestic policy and practices failures are concerned, and certainly for his followers. But foreign leaders and foreign holders of power and influence in general are a lot harder to control for that and particularly when they actively seek to showcase how little regard or respect they hold for this United States president.

Donald Trump, to put matters bluntly, is in meltdown. And that is true even as he himself is still incapable of seeing his house of cards collapse around him, even as it is doing precisely that. Senator McConnell, the ranking Republican member of the Senate, has all too publically declared himself to be a tool of the Trump administration and of Donald Trump personally. And he has done this, and continues to do so, by stating that it is the defendant in any Senate trial to come here: Donald Trump himself, and his legal team, who will decide how that trial can proceed in response to that bill of impeachment. With that, both McConnell and his fellow Republicans in the Senate, and president Trump himself, have made a farce and a failed charade of their oaths of office, and of their avowals that they would in any way serve, protect or defend the constitution of their country, as they all pledged to do.

And yes, with all of this, Donald Trump’s core followers still see him as something of a second coming of Jesus Christ himself: his politically ultra-right wing evangelical Christian followers most definitely included there. They, in fact are the ones making those “chosen one” proclamations.

Donald Trump, to put matters bluntly, is in meltdown. But then again, he has been for as long as he has held elected office. And given his track record of business failures and bankruptcies from before then, he has never actually been all that stable or secure in his position, going back to his beginnings as a young adult when his father repeatedly had to bail him out financially, if in no other ways.

I was initially planning on holding off on a next Trump-related posting here until after Nancy Pelosi, the speaker of the House, officially and formally turns the articles of impeachment that were voted on in that legislative body over to the Senate, and until after the Senate Republicans who would receive them have shown if they still see their president as being above the law: the US constitution included. And if they in fact continue with the farce that McConnell has so publically declared, and make their trial an automatic acquittal with no real consideration of any evidence or findings, they will for all intents and purposes have elevated Donald Trump to an autocratic, dictatorial rank. And they will have fundamentally challenged and betrayed their nation and all that it stands for in the process, starting with their oaths of office.

Consider it a matter of irony if you will, but an attempt to hold Trump accountable according to the laws of the United States and according to its constitution might prove to be the opening wedge that he needs if he is to actually achieve what is probably the most fondly held of all of his authoritarian dreams and goals: that finally, our system of laws and of governance in the United States might become so suborned and subjugated to his will that he, Donald J. Trump, can finally, fully be Trump, without any counterbalancing reasoning or any intervening voice of authority to limit him there: any presumed constitutionally mandated limitations to his power in office included.

So I find myself writing this next step to this overall series-long narrative concerning authoritarianism a la Trump and Xi, with a primary focus on Donald Trump and his narrative. And with that acknowledged, I turn to more directly consider the word that I said I would more fully explore here, as a primary focus of discussion in this posting: control.

I have been citing the word control in the course of writing installments to this series but have yet to actually discuss what that means, and certainly as these two would-be absolute rulers seek to create it for themselves. I turn here to at least begin to more directly address that core defining issue as it underlies authoritarianism, as it is envisioned and attempted in the hands of a would-be ruler: a striving for absolute control and what can become an overwhelming sense of need to achieve and maintain it, in order to prevent chaos as they would see it.

Donald Trump seeks this through an insistence of complete, unquestioning, unswerving loyalty to himself as an individual, as an absolute requirement of admission into his inner circle. And where this means others trusting him, he requires absolute trust in himself and in his judgment and understanding too. A loyal follower can never question or doubt anything that he says or does, and no matter how unconsidered or off-the-cuff his words or actions.

I have to add, when stating that repeatedly validated fact, that trust and trustworthiness as noted here do not flow in both directions in such a system of relationships. Trump, and others like him for this trait, insist that others support them with complete and unreserved loyalty and with complete trustworthiness in doing so. But a Donald Trump does not feel any need to reciprocate on any of that; Trump in particular, as an exemplar of this approach to leadership, feels no compunction about turning on and betraying those who support him, as soon as they cease to hold immediate here-and-now value to him personally. I cite by way of example the first elected member of the Republican Party with anything like national name recognition who actively endorsed him leading up to his nomination as that political party’s presidential candidate for the 2016 US national elections: New Jersey governor Chris Christie. He publically, wholeheartedly endorsed Trump as a presidential contender starting as early as February 26, 2016 with his first fully public statements to that effect. And he stood by his candidate, and loyally so, even as a now emerging presidential contender Donald Trump began to publically mock and humiliate him: his once only nationally known Republican public figure supporter. When Trump had achieved political backing and support that he saw as more valuable to himself than anything that Christie could offer, he discarded his once “good and close friend” and supporter as if he were so much trash.

Chris Christie, to be fair, was looking for something for himself there too. He supported Trump as a possible path forward when his own political career in New Jersey was disintegrating around him. (See for example this piece on one of his perhaps more widely known scandals from when he decided to take political vengeance against the mayor of Fort Lee, New Jersey for what he saw as disloyalty towards him: Chris Christie Knew About Bridge Lane Closings as They Happened, Prosecutors Say.)

Christie thought that if he backed Trump and showed real loyalty to him, he would be rewarded with a cabinet position in a Trump administration, or an ambassadorship if The Donald were elected. And he thought that his increased visibility as a true and committed voice in the Republican Party’s ascending ultra-right wing, would serve him in good stead even if Trump were to lose the 2016 election. None of that was possible, when his path forward depended on Donald Trump reciprocating in appreciative response to his support and his commitment.

Donald Trump has made a career out of that type of expedience-based fickleness as a recurring behavior pattern, from well before he first sought public office and on until now, and with no end in sight to that. What would have happened to Chris Christie if he had in fact been rewarded for his loyalty with a cabinet position in the Trump presidential administration? Look at the number of senior level appointees to his administration whom he has used as he has found value in them, only to discard and dismiss them as soon as they have stopped offering him the personal value that he sees as his due.

Trump was elected, and his administration has been dysfunctional from the day he won that election and from even before he was actually sworn into office. And nothing of his basic behavior pattern has changed through all of that, except for a shift in scale as he has become more grandiose than ever now, as the “leader of the free world.” And this represents his vision of control and it constitutes the core of his legacy building endeavors.

I am going to continue this narrative in a next series installment where I will turn to consider Xi and his story, as initially planned for in this posting. And in the process I will discuss the issues and challenges of truth and of propaganda, and from both a Xi and a Trump perspective. And from a Xi and China perspective, I will at least start a discussion of his efforts to achieve what he would see as a perfect surveillance state as the defining mechanism for his achieving his overall goals.

Meanwhile, you can find my Trump-related postings at Social Networking and Business 2 and its Page 3 continuation. And you can find my China writings as appear in this blog at Macroeconomics and Business and its Page 2 continuation, at Ubiquitous Computing and Communications – everywhere all the time, and at Social Networking and Business 2 and its Page 3 continuation.
