Platt Perspective on Business and Technology

Dissent, disagreement, compromise and consensus 33 – the jobs and careers context 32

This is my 33rd installment to a series on negotiating in a professional context, starting with the more individually focused side of that as found in jobs and careers, and going from there to consider the workplace and its business-supportive negotiations (see Guide to Effective Job Search and Career Development – 3 and its Page 4 continuation, postings 484 and following for Parts 1-32.)

Since Part 25 of this series, I have been successively addressing a set of workplace issues and challenges that can arise for essentially anyone who works sufficiently long-term with a given employer (see Part 32 for a full list of those points, with appended links to where I have discussed them.) The first five of those entries represent very specific, focused sources of possible challenge and opportunity, and the final, sixth entry offered there is a more general and wide-ranging one that can encompass all of the others that I have raised and discussed here:

6. Negotiating possible downsizings and business-wide events that might lead to them, and how you might best manage your career when facing the prospects of getting caught up in that.

I began preparing for a more detailed discussion of this last topics point in Part 32 by outlining, in at least a measure of detail, exactly what downsizings are, at least when they are considered beyond the simple fact that they are events where people, and even large numbers of them, can lose their jobs essentially all at once. As I stated in that posting and in the context of that discussion-organizing explanation:

• You cannot effectively negotiate absent an understanding of what you have to negotiate about, and what you can negotiate about. And knowing that calls for understanding the context and circumstances, and the goals and priorities of the people you would face on the other side of the table. And as a crucial part of that, this also includes knowing, as fully and clearly as possible, what options and possibilities they might and might not even be able to negotiate upon.

Getting caught up in a downsizing can seem like getting run over by a truck, with no way to step aside and avoid that happening. But this perception, while commonly held and understandable, is essentially always wrong, and usually in several or even many of its assumed details. My goal for this posting is to briefly discuss and explain that, and then at least begin to discuss the options and possibilities for effective negotiations that you might have, or might be able to create for yourself, when facing this type of challenge. And I begin addressing this set of issues with the basics: with points of readily visible fact that are in practice overlooked or pushed aside, and by precisely the people who most need to be aware of them, as fully and as early as possible:

• Downsizings essentially never arrive without advance warning; they come only after a relatively long series of warning signs has become general knowledge, and certainly throughout the workforce that would be affected.
• First of all, they often arise as what amount to Plan B or even Plan C or D options, turned to after other attempts to regain fiscal balance have failed. Everyone at a business probably knows if its markets have dried up and it can no longer bring in the revenue flows needed to maintain itself at the scale it has operated at. Everyone knows if the business they work for is no longer competitively up to date, and if its senior management is going to have to make fundamental changes in what the business does and how, if it is to remain viable as an ongoing enterprise. They know if they have legacy skills that are not going to fit long-term into their employer’s future, and if they have become pigeonholed there as only being able to perform that type of work. They can, and probably should, know if their employer is looking to outsource their area of expertise. Everyone there generally knows if their employer is facing a possible merger or acquisition, where staff rightsizing, to use a popular euphemism, is going to mean eliminating what will become redundant work positions and dismissing the employees who hold them. The basic challenges that lead to downsizings are virtually always out there and visible, and in at least enough detail to indicate that downsizings are at least possible.
• And secondly, downsizings are rarely once-and-done events. They take place in stages, with groups let go, then pauses, then next groups let go. And it is not at all uncommon for businesses facing a need to downsize to bring in outside specialists as business consultants to help manage all of this. So this can mean the employees there seeing colleagues disappear from their workplace in groups (most commonly on Fridays), while seeing new faces walking around, seeking information on what everyone does there.

And this brings me to the great unspoken: the issues and challenges of directly, objectively, openly facing these possibilities, when and as they become realities for an employer and for the people working there. Too many of us look away from the uncertainty and threat of all of this, as if our not seeing it and not considering its possible impact on us might make it all go away. You have to at least consider the possibility that you might be caught up in this type of tidal wave event too, if there is evidence of it happening or of its likelihood of happening. And it is never safe to simply assume that this cannot happen to you because you are a loyal and effective employee or manager there, with skills and experience that the business needs. You can never simply assume that this cannot happen to you because you consistently get excellent performance reviews, or because your colleagues and supervisor like you and value having you there. Firings target specific people for specific reasons; problem hands-on employees and managers are fired, and for specific cause. But good and even great employees and managers can and do get caught up in downsizings, as downsizings are never (at least in principle) carried out on a fault or deficiency determined basis. Good people: good employees and managers, are let go, and even despite their value to the business, to keep that business viable and competitive and to meet larger business needs. And that point of fact can serve as the basis for essentially all of the negotiating arguments that you could raise in support of your being retained by an employer facing this type of at least perceived need.

• The question, which I will explore in at least some depth in the installment to come here, is one of how you can best present yourself as an asset that your employer would want to keep on, coming out of the staff reductions and reorganizations of a downsizing. And that means negotiating in terms of what you can do that will offer value through this type of transition and as your employer moves past it. And that means negotiating in terms of the specific downsizing you face, and how and why it is taking place, and with as clear an understanding as possible of what this business seeks to achieve from it (and avoid from it.)

I am going to continue this discussion in a next series installment, where I will expand on that bullet point, discussing negotiation goals and priorities as they arise for you, depending on your job and career objectives and on the driving reasons for a possible or ongoing downsizing that you might be caught up in. And in anticipation of that, I will consider all of the downsizing-cause scenarios that I have noted, at least in passing, here and in Part 32.

Meanwhile, you can find this and related material at Page 4 to my Guide to Effective Job Search and Career Development, and also see its Page 1, Page 2 and Page 3. And you can also find this series at Social Networking and Business 2 and also see its Page 1 for related material.

Nonprofits as businesses: more effectively connecting mission and vision, strategy and operations 1

Posted in nonprofits by Timothy Platt on July 11, 2019

I have written over 2,600 postings to this blog as a whole as of this writing, but only 41 of them, up until here, have gone into my specifically nonprofit-oriented directory: Nonprofits and Social Networking. And looking back at this ongoing flow of new content, that strikes me as at least somewhat of an anomaly. I make note of that here, and in that way, because working with nonprofits has been so critically important to me in my work life and professional career path. I am drawn to working with businesses that have good people and that strive to fulfill positive missions and visions. And good nonprofits tend to develop around both.

Much of what I write in my Business Strategy and Operations directory (in its Page 1, Page 2, Page 3, Page 4 and Page 5 listings) applies equally fully to for-profit, not-for-profit and nonprofit businesses, and certainly insofar as I address all of them as enterprises that would more effectively follow business model approaches per se. But I have not added as much as I could have, and probably not as much as I should have, that would focus on nonprofits in particular in all of this. Then something I did not expect began to happen. All of a sudden I began seeing visitors to this blog clicking to view the admittedly sparse, explicitly nonprofit content that I have included here, with one posting in particular as an apparent emerging favorite, drawing readers from North and South America, Africa, Europe and Asia: Nonprofits and Blue Ocean Strategies.

I have been thinking a lot recently about adding a new series to this directory that would build from that and related postings already available here, as I think back over my nonprofit work experience and about how these organizations do and do not effectively succeed as businesses per se, and in addressing their missions and visions as a consequence. And this more recent readership interest in my blue ocean strategies posting in particular has given me the impetus to begin this new series now, rather than wait on it as I continue working my way through the ongoing collection of more business model-agnostic series that I am already developing and offering here. So with this posting, I increase my actively under-development list of posting series by one again.

And I begin this first posting proper, and this series as a whole, by stating the obvious: nonprofits, if coherently and functionally organized and viable at all as potential ongoing enterprises, are always mission and vision driven. They are set up and run with a long-term goal in mind that is defined by the terms of their particular mission and vision statements, and those statements form their hearts and souls. And it is almost a truism that a nonprofit mission and vision are going to be long-term and even fundamentally open-ended in nature, as end goals. They can in fact be fundamentally unachievable, at least as absolute goals, and more properly serve as aspirational direction pointers, in intent and in practice.

• “End world hunger.”

That overarching absolute goal might be the core message of a nonprofit’s mission and vision. But realistically that organization’s ongoing actionable goal is going to be to help feed the hungry who it can effectively reach out to and connect with, and in ways that at least ideally are self-sustaining for them.

• “Cure cancer” (as in all cancer.)

This may in fact be an attainable goal, some day, and even in the world’s least served communities and among its most marginalized people. But realistically this is a multi-generational goal, as are others like it, given their far-reaching and open-ended scope of intent and of proposed action. Any nonprofit that sets out to fully realize this lofty goal is going to, of necessity, plan and work towards fulfilling this seemingly endlessly large and endlessly complex puzzle step by step, part by part.

For-profit and not-for-profit businesses often have missions and visions too; in principle at least, virtually all of them do, at least as roughly considered points of intention. But these mission and vision statements tend to be very different in nature, certainly when for-profits are considered, and certainly when they are more formally thought through and written out. Consider the following:

• “Build and run the best shoe store on the planet: the very best shoe store possible.”

The How of curing all cancers or of definitively ending world hunger is still a work in progress, even for mapping out categorically what should be: what would absolutely need to be done in order to genuinely, completely fulfill them. For the world hunger example, how much of that effort (and at what stages of its fulfillment) should center on developing new crops that can grow in what are currently marginalized environments for agricultural production? And even just considering food production per se here, and setting aside the issues of wider food access and distribution among others, how much of this effort should go towards preserving the environment, through efforts to preserve or even improve soil quality and water supply, and with better buffering in place to protect against the extremes of drought or flood?

Ending world hunger is an aspirational mission goal. And any simple-to-state mission or vision statement like it is in most cases more likely to reflect overall long-term intent than any more here-and-now operationally specified goal, as could be explicitly encompassed in some timeframe-specified strategic plan or its more here-and-now execution. My shoe store example, on the other hand, is more readily applicable as an explicit operational goal; it is more of a briefly stated functional road map, or at least more of an explicit pointer to one. And in this, effective strategy and the operational and other business practices that are developed out of it can be seen as fleshing out that business’ (admittedly very ambitious) overall long-term goal.

Let’s consider that shoe store in a bit more detail, starting with a question that is simple to ask, at least, and that the above retail mission statement all but compels. What would the owners and employees of a shoe store need to do, and how would they best do that, in order to make their store really good and even great, and even an at least possible best of the best?

• Obviously, a great retail store needs great merchandise to sell. This in turn means good product design and selection, and high production quality. And it is not going to matter that this store sells the best merchandise when it has it in stock, if it cannot reliably keep the items its customers would want in stock. So good, great and best here call for effective supply chain systems and access to the best manufacturers through them, on an ongoing, reliable basis.
• Price is important here too, so developing and maintaining good contractual relationships with suppliers, for price points and other terms of sale issues enters into this.
• But having the right products, and even having them at the right prices, is still only part of this puzzle as compacted into that brief above-offered shoe store mission statement. The physical location and the store size and layout are important too. A potentially great store that few can actually get to, posing this in bricks-and-mortar storefront terms, is not going to succeed very well, even if it has everything else working for it. Location can be king here.
• And the storefront itself, once reached, has to be open and easy for a customer to navigate, comfortable to shop in, and as stress free for its customers as possible. I distinctly remember walking into a large sporting goods store once only to turn around and leave almost immediately, because the volume of the teen consumer-oriented music was so loud that it was literally physically painful! And looking around before leaving, I did not exactly see all that many actively engaged teen shoppers there anyway.
• And this brings me to the sales managers and the sales personnel and others who work with them there, whom customers would actually see and interact with, both in finding the products that would best meet their needs and in making purchases there. Training and support need to be great for them if that store is going to be great for their customers. And this brings me to the back-office support staff that this business would need and have too, and their training and support, and with effective communications systems available throughout the store and with all in-house stakeholders there connected in.

My goal here has not been to exhaustively outline the operational details or functioning of a retail store. It has been to point out how, unlike ending world hunger or curing all cancer, it is known, and in detail, how to build and run an effective retail business. So an effective mission statement for that type of enterprise can in fact be a fairly direct call to action, with a large and complex set of what are, at least in principle, achievable strategic and operational goals built into it.

Now, how can a nonprofit, with its more intrinsically open-ended, aspirational mission and vision statements, frame those statements, and itself as an organization, so as to make its goals more directly achievable? At the risk of offending those who aspire to mission and vision greatness, as offered in my cancer and hunger examples, how can the leaders, the employees, and the volunteers as well, who contribute so much to their nonprofits, make them more strategically clear-cut and operationally focused too, and in ways that are more effective in working towards fulfilling their missions and visions as a result – like that shoe store, and without sacrificing their overall mission and vision goals and aspirations?

My goal for this series is to at least shed some light on how that question can be better answered. Meanwhile, you can find this and related material at Nonprofits and Social Networking.

Business planning from the back of a napkin to a formal and detailed presentation 30

Posted in strategy and planning by Timothy Platt on July 8, 2019

This is my 30th posting to a series on tactical and strategic planning under real world constraints, and executing in the face of real world challenges that are caused by business systems friction and the systems turbulence that it creates (see Business Strategy and Operations – 3, Page 4 and their Page 5 continuation, postings 578 and loosely following for Parts 1-29.)

This is in large part a series about communications, shared understanding and the search for agreement, and follow-through – and repeat. It is about communication and its actionable context as an ongoing process. And as an emerging thread in that larger narrative, I have been successively discussing each of a set of three closely interrelated topics points here, since Part 19, that become important in any business and certainly when and as it faces change and a need to effectively navigate a path forward through it:

1. More systematically discuss how business operations would differ for businesses that follow one or the other of two distinctively different business models (see Part 19 through Part 21 for a selectively detailed outline and discussion of those businesses),
2. How the specific product offering decision-making processes that I have been making note of here would inform the business models pursued by both of these business types, and their overall strategies and operations and their views and understandings of change: linear and predictable, and disruptively transitional in nature (see Part 22 through Part 25 and for more of a “big picture” discussion of this also see Part 26 and Part 27.)
3. And I added that I would discuss how their market facing requirements and approaches as addressed here, would shape the dynamics of any agreement or disagreement among involved stakeholders as to where their business is now and where it should be going, and how (see Part 28 and Part 29.)

Crucially important here, for purposes of what is to come in this discussion, is the fact that I have focused since Part 19 essentially entirely upon the individual business and its organization and management, and on its more in-house communications and business execution. And that has been true even when I have explicitly raised the issues of markets and of a business’ connections with and dealings with participants there too, and when noting outside connections in general through that progression of postings. Even there, my real point of focus has been on the single business and its in-house stakeholders and on their roles in all of this, and I have treated that group of participants and their enterprise as if collectively forming the center of their own little universe.

My goal now is to step out beyond those constraints to more explicitly consider and discuss wider contexts, starting here at least with business-to-business collaborations as perhaps most commonly found in organized supply chain systems. And I begin addressing that topic area here by making what I would assert to be an arguably quite defensible axiomatic assumption:

• Effective business-to-business collaborations can only be sustainably developed and maintained if the businesses involved in them can and do each speak with a single, consistently organized overall voice, and certainly at any given time and when addressing any given mutually significant issue.

This does not mean that all communications should go through single individual spokesperson agents. And this does not mean that a business’ basic message or its context-specific elaborations cannot change with time or circumstance. What it does mean is that conflicting messages can and do sow chaos, and longer-term doubt as to how reliable a business partner is and can be, if it becomes known for conveying them.

Conflicting messages create uncertainty, and that creates or at the very least increases the overall level of risk as perceived by other businesses that need to know with certainty what is being offered or agreed to, and the actual terms of such agreements that they would enter into. The uncertainty that I write of here creates financial risk and liability exposure at the very least, and additional cost; here, risk can legitimately be seen as a categorical form of cost and of cost exposure. And like business systems friction, as is repeatedly discussed in this blog in a strategy and operations context, all of this deters and slows down business transaction flows but without creating any direct corresponding positive value in return.

• Conflicting messages make it more expensive to do business, and on several levels for a business that finds itself dealing with such an inconsistent supply chain partner, making them less competitively valuable or value creating as supply chain or related-system partners in the first place.

I have written up to here in this posting, and in more recent installments leading up to it (and certainly in Parts 28 and 29), in abstract terms of the value of consensus, or at least of reaching working agreement. Think of this posting and what is to follow in it as a starting effort to take that line of discussion out of the abstract, with a very real world example of how and where a failure to come together within a business can and will create problems, costs and a loss of potential competitive strength and position. And I begin exploring these issues, in what admittedly are still more general, abstractly stated points, by noting that:

• The communications challenges that I have been raising here only start to play out and take on practical meaning and significance in the executive suite, and even when a business is essentially entirely top-down organized and run. A more complete answer to the challenges that I would raise here depends on the actions and the messaging of the full range of non-managerial hands-on, and managerial, employees who would actually directly deal with supply chain partner businesses and their employees, and carry out specific transactions with them. And this leads me to some specific due diligence questions that would profitably be thought through and actionably addressed when developing and performance evaluating these business-to-business relationships and the actions of the businesses that enter into them.

Let’s start with the types of transactional communications that would have to be considered here, which I would categorically divide, as a matter of hands-on practice, into three at least relatively distinct types. And to be explicitly clear in what follows, organizing the questions that I would offer here as a planning and evaluation exercise: the goal here is to identify and characterize business activity as actually carried out, not just as intended on paper, to see what of that would fit into each of the following categorical types (see the brief illustrative sketch that follows this list):

1. Fully standardized recurring transaction types that would essentially always be form-templated for what types of information would be shared and how, and where precise detail would be specified for all of the basic steps of these business processes: Here, the questions that would arise concerning these transactions are essentially all going to be performance-based, for how well they actually work as set up and carried out through the employee-usable forms in place, as data entered is shared in them. Are these transactions and their implementing forms quick and easy to carry out, with minimal mistakes or need to correct and resubmit? Are they effectively designed, particularly for initial action attempted by the front-line employees who would routinely carry them out? Are they organized and carried out in ways that would make it easier to identify a need for exception handling escalation, and in ways that would facilitate exception handling hand-offs and execution if that proves to be needed? And are they flexible enough and adaptable enough, if change is needed in them, so as to limit the likelihood of need for more impactful exception handling later on, at least for whatever specific issues have led to a need for a specific exception handling response in some here-and-now? Are, in other words, next required exception or problem escalations more likely to be repeats of old problems, or are they more likely to arise mostly when more entirely new problems and challenges do?
2. Standardized transaction types that can be seen as variations on the above, but where escalation to a manager or supervisor would likely be required, and where additional documentation would be required as to the What and How and Why of what happened: Effective due diligence processes begin with the more normative standardized transactions and their ongoing execution as touched upon in the first of these points, to make sure that the processes followed are and remain effective and effectively connected to the business model and the business plan in place, and in the face of ongoing change, and ideally for both businesses involved in any given supply chain transaction here. The transactions that I write of in this second point are more unusual or even unique, calling for more customized transactional processes, and probably for more unusual types of information sharing. And more particular, individualized attention, offered on a case by case or at least a more recurring-type basis, would be paid to these transactions.
3. Essentially entirely non-standard transactions that would be of sufficient scale and impact, for their costs and profits potential or for their direct resource requirements, so as to call for managerial oversight and decision making, and possibly more senior level approval: Effective due diligence of these events would be carried out on an entirely case by case basis, with an understanding that an excess of the second type of transaction as noted here calls for redesign and updating of the forms that would be used, the training of the people who would carry out this work, or both, in order to shift them into the first of these three categories. And anything in the way of excessive numbers of these third category types of exceptions would indicate genuine gaps and disconnects in the basic transactional process systems in place, and of a level of significance that would call for fuller corrective measures. These types of impactfully non-standard events and the transactions they can call for can arise in any business and in any business-to-business collaboration. But if they keep coming up, that indicates what are most likely significant underlying problems in the businesses involved.
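To make those three categories a bit more concrete, here is a minimal, purely illustrative sketch, in Python, of how a business might tag incoming supply chain transactions by type and tally them for due diligence reporting. Every name and threshold in it, from the requires_custom_terms flag to the 50,000 escalation cutoff, is a hypothetical placeholder of my own devising, and not a description of any particular firm’s actual forms or approval policies.

from dataclasses import dataclass
from enum import Enum

class TransactionType(Enum):
    TYPE_1_STANDARD = 1     # fully form-templated, handled by front-line staff
    TYPE_2_ESCALATED = 2    # standard in form, but needing supervisor sign-off and documentation
    TYPE_3_NONSTANDARD = 3  # case by case, needing managerial or more senior review

@dataclass
class Transaction:
    amount: float                # hypothetical transaction value
    fits_standard_form: bool     # can it be completed on the standard template as-is?
    requires_custom_terms: bool  # does it need non-templated terms or information sharing?

def classify(txn: Transaction, escalation_amount: float = 50_000.0) -> TransactionType:
    """Roughly sort a transaction into one of the three categories discussed above."""
    if txn.requires_custom_terms or txn.amount >= escalation_amount:
        return TransactionType.TYPE_3_NONSTANDARD
    if not txn.fits_standard_form:
        return TransactionType.TYPE_2_ESCALATED
    return TransactionType.TYPE_1_STANDARD

def exception_report(txns: list) -> dict:
    """Tally transactions by category; a rising Type 2 or Type 3 count is itself a finding."""
    counts = {t: 0 for t in TransactionType}
    for txn in txns:
        counts[classify(txn)] += 1
    return counts

# Example: three routine orders, one off-template order, one large custom deal.
sample = [
    Transaction(1_200.0, True, False),
    Transaction(800.0, True, False),
    Transaction(2_500.0, True, False),
    Transaction(1_900.0, False, False),
    Transaction(120_000.0, False, True),
]
print(exception_report(sample))

In any real setting the classification rules would of course come out of the actual forms, thresholds and approval policies in place. But even a toy tally like this one makes the larger point of this list visible: a creeping rise in Type 2 and Type 3 counts is a reportable finding in its own right, and one that points back at the underlying process systems.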

As an aside here that might address a point that many readers would have noticed by now: I did not raise the issues of information security or its management as part of either the basic transactional processes under discussion here, or of how exception handling would be carried out. For purposes of this line of discussion, I simply fold that into the more generally stated framework that I have been developing here.

And with that perhaps-clarifying note added, and in at least selective summarization of a key point that I have been raising here, I observe that in a business-to-business collaborative context, transactions between organizations and their staff members and managers can break down from any of several possible directions. And this means that the more exceptions that arise of a Type 2 form as listed above, and certainly of a Type 3 form where they arise, the more important it becomes that effort be made to standardize, or re-standardize, how the two halves of these process-based transactions take place, so that efforts made on both sides of them fit together more smoothly and efficiently. And with that I directly bring this discussion back to an explicitly business-to-business collaboration focus.

Now let’s consider the people involved in these transactions, starting with what in most cases, and for most businesses and transactions, would be non-managerial employees who have, for example, specific accounts in their hands and who would primarily if not exclusively focus on carrying out their sides of the Type 1 transactions as discussed above.

• Who does this work and what are their training and supervision requirements for being able to do so?
• Who would they turn to, to report, address and resolve exceptions when and as they arise?
• And which of those (here, at least categorically anticipatable) exception types would be handled by those same hands-on employees, perhaps with just an acknowledgment of occurrence and a quick authorization to proceed? (There, as a possible case in point example, an exception might mean finding a non-standard but available logistics and delivery solution for shipping and delivering a particularly fragile or perishable product to an unusual location – non-standard but not wildly so.)
• Which and how many of the exceptions that do arise would call for at least a more substantial hand-off to a supervisor and a true process escalation in order for it to be completed? And who would be responsible for carrying out this next step up work: a manager, or a hands-on specialist or who? And how and where would that be decided?
• How would basic workflow numbers and their statistics at the very least, and exceptions as identified by type, be reported in as ongoing due diligence input data?
• And who would be responsible for that? Would they be afforded the time and opportunity to carry out this essentially documentation-oriented work as part of their regular duties, or would it simply be piled onto their required workload in ways that might make it easy for this not to be managed in as effective and timely a manner as would be wanted and even required?
• How would feedback from this due diligence information gathering and analysis be carried out and by whom?
• And how would the ongoing results of this at least ideally ongoing review and analysis process flow be reported back to the people they first arose for, so these findings could be made use of? And who would do this, and who would they share this information with, and how? Who, at least as a matter of level on the table of organization, would develop and institute change as a means to reduce exceptions, and certainly ones that call for escalation to management, or to middle or even senior level management? And what feedback would be gathered, and from whom, bringing what involved stakeholders into these remediative processes, so that any changes that are made are more likely to be the right ones and not just sources of new problems in and of themselves?

The three questions that I started this posting with (there still in a more within-business context), and their business collaboration, supply chain level counterparts, are mechanism and What oriented. And that remains a valid point even as I have, of necessity, delved into at least a few Who issues when addressing them. The due diligence questions that I raise here, as a continuation of that narrative, are more personnel and Who oriented, even as they in turn, of necessity, raise mechanism and What issues too. Think of all of this as fitting together into a single due diligence approach, and one that would connect into larger, more widely ongoing strategic and operational business review and management systems.

I am going to, as noted above, turn back to the two case study businesses that I began this series progression with, at least starting in my next series installment. And in anticipation of that narrative to come, I will reconsider both of those businesses in light of the issues and topics points and the business practice approaches that I have raised here since first offering them. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory.

Rethinking exit and entrance strategies 33: keeping an effective innovative focus while approaching and going through significant business transitions 23

Posted in strategy and planning by Timothy Platt on July 5, 2019

This is my 33rd installment to a series that offers a general discussion of business transitions, where an organization exits one developmental stage or period of relative strategic and operational stability, to enter a fundamentally different next one (see Business Strategy and Operations – 3 and its Page 4 and Page 5 continuations, postings 559 and loosely following for Parts 1-32.)

I have been working my way through a to-address list of closely interrelated topic points since Part 28 that address a set of issues fundamental to that overall business challenge. More specifically, I have discussed its Points 1-3 in a relatively significant level of detail and have offered at least preliminary, orienting comments on its Points 4 and 5 too. My goal here is to at least begin to more fully discuss those last two points, and I begin doing so by repeating this topics list as a whole, for smoother continuity of narrative and for greater clarity as to what, overall, I am addressing here:

1. Reserves as a cost because they represent assets and in fact liquid assets that cannot be turned to and used, except in what might be more emergency situations – and the need for larger reserves as risk increases: a situation that arises when facing the novelty and the unknowns as would be found in true transitions.
2. Changes in goals and scope of action, and in the level of detail of processes under consideration that have to be monitored, and how that overall form of course correction can be intentionally proactively sought out and developed, and how it can be reactively forced upon a business and its leadership.
3. And in a more strictly project context, or at least in more strictly project-oriented terms: consider scope creep and scope expansion in general, and its opposite with scope compression and simplification where details are dropped and goals reduced …
4. With and without organized, strategically aware planning and forethought to back such decisions.
5. I added that I would discuss these issues at least in part in terms of goals and priorities collisions, where more strictly cash flow and financial considerations, and risk and benefits considerations, and overall business goals can all come into conflict and even direct collision with each other. And my goal there is to at least begin to offer some approaches for both better understanding these scenarios and their dynamics, and better addressing and resolving them. And then after addressing all of that, at least for purposes of this series, I will proceed to reconsider exit and entrance strategies per se again, this time from the perspective of this developing narrative.

To be more specific as to what I have offered up to here in at least starting to address the above Points 4 and 5: I raised a complex topics point in this context, one that I have in fact explored on multiple occasions in this blog and from a variety of perspectives, all relating to the challenge of balancing consistency in what is done in a business and how, with flexibility as would be called for when addressing change, and particularly the more disruptively unexpected. And perhaps more to the point, at least for what I would offer here moving forward, I wrote in Part 32 of change in the basic intentions of the business itself, that might arise as the cumulative consequence of drift and that might be entered into in more definitive steps as well. And in this context, I offered the following question and accompanying comment:

• What is the overall goal of the business model in place? (Note: this is not a trivial question; answering it is not just an exercise in blowing the dust off of an old and even original business plan document from this business’ founding to repeat whatever was offered there. This is a question of what the business does now in practice and with what actual current goals and priorities.)

I said that I would address this question and its issues, among other topics, in this and upcoming series installments, and that I would do so in the context of an at least briefly and selectively sketched out working-business example. And I at least begin doing so here, by turning back to reconsider a business example that I have already offered in another series in this blog:

• ClarkBuilt Inc. is a small to medium size business by head count and cash flow that was initially built to develop and pursue a new business development path, built around its founders’ jointly arrived at “bold new innovative products” ideas. The Clark brothers, Bob and Henry, came up with a new way to make injection molds for plastics and similar materials that would make it cost-effective to use injection molding manufacturing processes with new types of materials. They have in fact launched their dream business to do that, and have developed a nice little niche market for their offerings, providing specialized-materials parts to other manufacturers.

I began discussing this business in Don’t Invest in Ideas, Invest in People with Ideas 41 – the issues and challenges of communications in a business 8 and in subsequent installments to that series, there focusing on how the founders and owners of that venture sought to maintain their initial vision, focus and goals. And I pick up on that case in point example again here, reconsidering it from a somewhat different perspective: one of change, and of what can with time become compelled response to it.

To start this narrative thread, let’s consider a phrase that I added into this posting and its initial orienting note, as to the nature of the types of change that I would address here: change “that might arise as the cumulative consequence of drift and that might be entered into in more definitive steps as well.”

I have written in my Invest in People postings about how the Clark brothers have actively worked to keep their business centered, and essentially entirely so, around their initial injection molding technology, only actively considering and supporting change in what they do, or in how they do it there, that would serve to keep their specific innovation as competitively close to cutting edge as possible. But this, of necessity, means allowance for drift in what they do, and particularly if they find, over a period of years, that essentially all of the key steps and processes that enter into using their innovation as an ongoing manufacturing process have been changed from what they did when first opening their doors for business.

Change happens, both by proactive intent and by reactive response. In this case, let’s assume that the materials that they manufacture their parts from are different from what they initially used, and for the vast majority of their productive output, with only a few legacy support-requiring customers still asking for their earliest products and even product types. They make their molds out of different materials in all cases now. They use a new centrifugal casting technology now that did not exist when they first started out, but that they added in, in order to stay current and relevant. They use different tools and types of them for final product smoothing and processing prior to packaging and shipping, after removing products from their production molds. And at least as importantly: aside from their few legacy clients, and for when they are explicitly filling legacy orders, ClarkBuilt is now manufacturing different types of products: a different assortment of replacement and customization parts than they did when starting out, and for business clients in different industries than they ever would have expected, certainly at first.

So ClarkBuilt is still in the injection molding business, but as a different business than they were at first, and with different types of clients than they began with too. And to round this out, their underlying business processes have significantly changed through all of this too, as their growth as a business has called for more than just a simple linear, same path forward expansion in scale for how they manage sales and even just their basic business operations. This has involved creating a few new, at least small but still significant services in-house that began more as single step processes. They have expanded these aspects of their business into small organized services and even into what amount to small departments. And this has meant outsourcing a few functional areas that they had initially carried out entirely in-house too, including on the production side, the preprocessing of some of their now-key building materials that go into their injection molds, where it is now more cost-effective to leave that to specialist partner-supplier businesses.

But let’s go back to the issues and challenges of their manufacturing processes and the business that they have built around them. And let’s add time and its complications into this step-by-step evolving story. The Clark brothers started out with a novel approach to injection molding manufacturing, and a disruptively new one at that. Let’s assume that they were able to secure patent protection for their core innovation, fitting into what might be considered the Goldilocks limits of scope: not so narrow that their patent could easily be circumvented by making what might in principle be relatively minor adjustments to what they have on record as theirs, but not staked out so broadly that any serious legal challenge to their patents would lead to at least parts of them being found overly broad and invalid. If their basic innovation really offers value: their particular approach to injection molding and basic variations on it, other competing businesses will find ways to match what they do. And if those competitors operate out of sites where labor costs are lower, and they are satisfied with producing and offering acceptable but less polished final products, and if they specialize in lower-end products and in quick production and delivery of them, they might significantly eat into what had been ClarkBuilt’s basic market and its market share there. So ClarkBuilt has two basic options:

• Their design teams are the best in their industry and they are known for that. Anything that comes out with a ClarkBuilt label on it is, for its basic design, going to be seen as offering premium quality (even if, on the low end and for more routine parts, this might not be seen as necessary by a wide range of at least potential-client purchasing businesses.) So they could focus on design and even outsource their actual manufacturing, and certainly of low-end products, to remain competitive there. Or they might even walk away from that end of the business, and
• Focus on high-end and specialty products and their design and production, and particularly where this involves use of more exotic materials.

What happens to ClarkBuilt Inc. if it turns into a design shop as its primary source of incoming revenue, trading on its name brand and its in-house design excellence, and perhaps its supply chain excellence too, for bringing all of the pieces of the manufacturing and distribution puzzle together for the types of products that they are involved in manufacturing? And this brings me back to a question that I offered here, towards the top of this posting:

• What is the overall goal of the business model in place? What, in fact, is the business model that is actually in place there now?

The answer to the second of those questions, and by extension the answer to the first of them as well, might be very different from what the Clark brothers started out with. But more importantly, realistically current answers to them might be different from what the Clark brothers still see in their business, and different from how they, at heart, still think about it and plan for it. I point out here that this is a series about exit and entrance strategies and business transitions. Game changing events can be backed into, or they can be entered into with open eyes and on the basis of careful planning. My goal in bringing up this case study example again has, at least in part, been one of laying a foundation for that type of discussion.

I am going to continue this posting’s discussion in a next series installment, where I will at least begin to address that complex of issues in the context of the above-repeated Points 4 and 5. Think, in that regard, of this posting as setting out to reframe them in terms of longer-term change, and ultimately in terms of the exit and entrance strategy issues that even just small, cumulative incremental change can come to compel – and change in general as it informs this series and its overall narrative as a whole. And as part of that discussion to come, I will raise and address at least a few more basic questions, adding them to ones that I have considered up to here in this and recent series installments, and offering them both as business analysis tools and as a means for further clarifying the basic business planning and execution approach that I am outlining here. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory.

Rethinking the dynamics of software development and its economics in businesses 5

Posted in business and convergent technologies by Timothy Platt on July 2, 2019

This is my 5th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-4.)

I have been working my way through a brief simplified history of computer programming in this series, as a foundation-building framework for exploring that complex of issues, starting in Part 2. And I repeat it here in its current form, as I have updated this list since then:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language defining and a programming style defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

I have in fact already offered at least preliminary orienting discussions in this series of the first six entries there, and of how they relate to each other, with each successive step in that progression simultaneously seeking to resolve challenges and issues that had arisen in prior steps, while opening up new possibilities in its own right.

I will in fact discuss steps seven and eight of that list as I proceed in this series too. But before I do that and in preparation for doing so, I will step back from this historical narrative to at least briefly start an overall discussion of the economics and efficiencies of software development as they have arisen and developed through the first six of those development steps.

• Topic Points 1-5 of the above list all represent mature technology steps at the very least, and Point 6 has deep historical roots, at least as a matter of long-considered principle. And while it is still to be more fully developed and implemented in a directly practical sense, at least current thinking about it would suggest that that will take a more step-by-step evolutionary route that is at least fundamentally consistent with what has come up to now, when and as it is brought into active ongoing use.
• Point 7: artificial intelligence programming has been undergoing a succession of dramatically disruptively novel changes and the scope and reach of that overall effort is certain to expand in the coming years. It has old and even relatively ancient roots and certainly by the standards and timeframes of electronic computing per se. But it is heading into a period of unpredictably disruptively new. And my discussion of this step in my above-listed progression will reflect that.
• And Point 8: quantum computing, is still, as of this writing, at most just in its early embryonic stage of actual realization as a practical, working source of new computer systems technologies and at both the software and even just the fundamental proof of principle hardware level. So its future is certain to be essentially entirely grounded in what as of this writing would be an emerging disruptively innovative flow of new and of change.

My goal for this installment is to at least briefly discuss something of the economics and efficiencies of software development as they have arisen and developed through the first six of those development steps, where they collectively can be seen as representing a largely historically grounded starting point and frame of reference for more fully considering the changes that will arise as artificial intelligence agents and their underlying technologies, and as quantum computing and its underlying technologies, come into fuller realization.

And I begin considering that historic, grounding framework and its economics and efficiencies, by setting aside what for purposes of this discussion would qualify as disruptively innovative cosmetics as they have arisen in its development progression. And yes, I am referring here to the steady flow of near-miraculous technological development that has taken place since the initial advent of the first electronic computers, that in a span of years that is essentially unparalleled in human history for its fast-paced brevity, has led from early vacuum tube computers to single transistor per chip computers to early integrated circuit technology to the chip technology of today that can routinely and inexpensively pack billions of transistor gates onto a single small integrated circuit, and with all but flawless manufacturing quality control perfection.

• What fundamental features or constraints reside both in the earliest ENIAC and similar vacuum tube computers, and even in their earlier electronic computer precursors, and in the most powerful supercomputers of today that can surpass petaflop performance speeds (meaning their being able to perform over one thousand million million floating point operations per second), that would lead to fundamental commonalities in the business models and the economics of how they are made?
• What fundamental features or constraints underlie at least most of the various and diverse computer languages and programming paradigms that have been developed for and used on these increasingly diverse and powerful machines, that would lead to fundamental commonalities in the business models and the economics of how they are used?

I would begin approaching questions of economics and efficiencies here, for these widely diverse systems, by offering an at least brief and admittedly selective answer to those questions – noting that I will explicitly refer back to what I offer here when considering artificial intelligence programming, and certainly its modern and still-developing manifestations, and when discussing quantum computing too. My response to this set of questions in this context will, in fact, serve as a baseline starting point for discussing new issues and challenges that Points 7 and 8 and their emerging technologies raise and will continue to raise.

Computer circuit design, and in fact overall computer design, has traditionally been largely fixed, at least within the design and construction of any given device or system, for computers developed according to the technologies and the assumptions of all of these first six steps. Circuit design and architecture, for example, have always been explicitly developed and built towards as fixed product development goals that would be finalized before any given hardware that employs them would be built and used. And even in the most actively mutable Point 6: language-oriented programming scenario per se as currently envisioned, a customized programming language, and any supportive operating system and other software that would be deployed and used with it, is essentially always going to have been finalized and settled for form and functionality prior to its use in addressing any given computational or other information processing tasks that it would be developed and used for.

I am, of course, discounting hardware customization here, which usually consists of swapping different-version, also predesigned and finalized modules into a standardized hardware framework. Yes, it has been possible to add in faster central processing unit chips out of a suite of different price and different capability offerings that would fit into some single, same name-branded computer design. And the same type and level of flexibility, and of purchaser and user choice, has allowed for standardized, step-wise increased amounts of RAM memory and cache memory, and hard drive and other forms of non-volatile storage. And considering this from a computer systems perspective, this has meant buyers and users having the option of incorporating in alternative peripherals, specialty chips and even entire add-on circuit boards for specialized functions such as improved graphics and more, and certainly since the advent of the personal computer. But these add-on and upgrade features and options only add expanded functionalities to what are essentially pre-established computer designs with, for them, settled overall architectures. The basic circuitry of these computers has never had the capability of ontological change based simply upon how it is used. And that change: a true capability for programming structure-level machine learning and adaptation, is going to become among the expected and even automatically assumed features of Point 7 and Point 8 systems.

My focus in this series is on the software side of this, even if it can be all but impossible to cleanly and clearly discuss that without simultaneously considering its hardware implementation context. So I stress here that computer languages, and the code that they convey in specific software instances, have been fundamentally set by the programmers who developed them, in ways and to degrees similar to whatever hardware lock-in is built in at least by the assembly floor: set a priori to their being loaded into any given hardware platform and executed there, and certainly prior to their being actively used – and even in the more dynamically mutable scenarios as envisioned in a Point 6 context.

This fundamental underlying standardization led to and sustained a series of fundamental assumptions and practices that have collectively gone a long way towards shaping these systems themselves, their underlying economics and their cost-efficiencies:

This has led to the development and implementation of a standardized, paradigmatic approach that runs from initial concept to product design and refinement, prototyping as appropriate, and alpha and beta testing, certainly in any realized software context and its implementations, and with every step of this following what have become well understood and expected cost and returns based financial models. I am not saying here that problems cannot or do not arise as specific New is and has been developed in this type of system. What I am saying here is that there is a business process and accounting-level microeconomic system that has arisen and that can be followed according to scalable, understandable risk management and due diligence terms. And a big part of that stability comes from the simple fact that when a business, hardware or software in focus, has developed a new product and brings it to market, it is bringing what amounts to a set and settled, finalized product to market, for which it can calculate all costs paid and returns expected to be received.
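
To make that settled cost and returns calculability a bit more concrete, here is a minimal, purely illustrative sketch of the kind of break-even arithmetic that a set and finalized product makes possible. The figures used are made-up assumptions offered only for illustration, and not drawn from any actual product or business:

```python
# A purely illustrative sketch of the settled cost-and-returns arithmetic
# described above for a finalized, set product. All figures are made-up
# assumptions; the point is only that with a fixed product, this calculation
# can be done up front and then managed against.

development_cost = 2_000_000.0   # one-time cost to design, build and test
unit_cost = 120.0                # cost to produce and ship each unit
unit_price = 200.0               # sale price per unit

contribution_per_unit = unit_price - unit_cost
break_even_units = development_cost / contribution_per_unit

print(f"Contribution per unit: ${contribution_per_unit:,.2f}")
print(f"Units to break even:   {break_even_units:,.0f}")
```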

The basic steps and performance benchmarks that arise in these business and economic models and process flows, and that are followed in developing these products, can and do vary in detail of course, and certainly when considering computer technologies as drawn from different steps in my first five points, above. And the complexity of those steps has gone up, and of necessity, as the computer systems under consideration have become more complex. But at least categorically, the basic types of business and supportive due diligence steps that I refer to here have become more settled, even in the face of the ongoing technological change they would manage.

But looking ahead for a moment, consider one step in that process flow, and from a software perspective. What happens to beta testing (as touched upon above) when any given computer system: any given artificial intelligence agent, can and most likely will continue to change and evolve on its own, starting the instant that it is first turned on and running, and with every one of a perhaps very large number of at least initially identical agents coming to individuate in its own potentially unique ontological development direction? How would this type of change impact upon the economic modeling: microeconomic or macroeconomic, that might have to be developed for this type of New?

I am going to continue this discussion in my next installment to this series, with at least a brief discussion of the balancing that has to be built for, when managing both in-house business model and financial management requirements for the companies that produce these technologies, and the pressures that they face if they are to be, and are to remain, effective when operating in demanding competitive markets. Then after that I will at least begin to discuss Point 7: artificial intelligence programming, with a goal of addressing the types of questions that I have begun raising here as to business process and its economics. And in anticipation of that, I will add more such questions to complement the basic one that I have just started that line of discussion with. Then I will turn to and similarly address the above Point 8: quantum computing and its complex of issues.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 19

Posted in business and convergent technologies, strategy and planning by Timothy Platt on June 29, 2019

This is my 19th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 342 and loosely following for Parts 1-18.)

I initially offered a set of to-address topics points in Part 16 that I have been discussing since then. And I repeat that list here as I continue doing so, noting in advance that I have in effect been simultaneously addressing its first three points up to here, due to their overlaps:

1. What does and does not qualify as a true innovation, and to whom in this overall set of contexts?
2. And where, at least in general terms, could this New be expected to engender resistance and push-back, and of a type that would not simply fit categorically into the initial resistance patterns expected from a more standard cross-demographic innovation acceptance diffusion curve and its acceptance and resistance patterns?
3. How in fact would explicit push-back against globalization per se even be identified, and certainly in any real case-in-point, detail-of-impact example, where a pattern of acceptance and resistance arising from that might concurrently appear in combination with the types and distributions of acceptance and resistance that would be expected from marketplace adherence to a more standard innovation acceptance diffusion curve? To clarify the need to address this issue here, and the complexities of actually doing so in any specific-instance case, I note that the more genuinely disruptively new an innovation is, the larger the percentage of potential marketplace participants who could be expected to hold off on accepting it, at least for significant periods of time, with their failure to buy and use it lasting throughout their latency-to-accept periods (see the brief quantitative sketch after this list). But that failure to buy in on the part of these involved demographics and their members does not in and of itself indicate anything as to their underlying motivation for doing so, longer term and as they become more individually comfortable with its particular form of New. Their marketplace activity, or rather their lack of it, would qualify more as noise in this system, and certainly when anything like a real-time analysis is attempted to determine underlying causal mechanisms in the market activity and marketplace behavior in play. As such, any meaningful analysis and understanding of the dynamics of the marketplace in this can become highly reactive and after the fact, and particularly for those truly disruptive innovations that would only be expected to appeal at first to just a small percentage of early and pioneer adopter marketplace participants.
4. This leads to a core question of who drives resistance to globalization and its open markets, and how. And I will address that in social networking terms.
5. And it leads to a second, equally important question here too: how would globalization resistance-based failure to buy in on innovation peak and then drop off if it were tracked along an innovation disruptiveness scale over time?
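
To put at least a rough quantitative frame around the innovation acceptance diffusion curve and the latency-to-accept issue raised in Point 3 above, here is a minimal sketch using the Bass diffusion model, one standard formalization of that curve. This is only an illustrative lens and not anything formally offered in this series; the parameter values are assumptions, and the point is simply that a lower innovation coefficient, a rough stand-in for a more disruptive and therefore more slowly embraced New, stretches out that latency period:

```python
# Minimal sketch of the Bass diffusion model. Parameter values are
# illustrative assumptions only: p is the "innovation" coefficient and q the
# "imitation" coefficient of the standard model.

import math

def bass_cumulative_adoption(t: float, p: float, q: float) -> float:
    """Fraction of the eventual adopter population that has adopted by time t."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

for label, p in (("more incremental innovation", 0.03),
                 ("more disruptive innovation", 0.003)):
    share = bass_cumulative_adoption(t=5, p=p, q=0.4)
    print(f"{label}: {share:.0%} of eventual adopters on board by year 5")
```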

My primary goal for this series installment is to focus on Points 3 and 4 of that list, but once again, given the overlaps implicit in this set of issues as a whole, I will also return to Point 1 again to add further to my discussion of that as well.

To more formally outline where this discussion is headed, I ended Part 18 with this anticipatory note as to what would follow, at least beginning here:

• I am going to continue this discussion in a next series installment where I will make use of an approach to social network taxonomy and social networking strategy that explicitly addresses the issues of who networks with and communicates with whom, and that also can be used to map out patterns of influence as well: important to both the basic innovation diffusion model and to understanding the forces and the dynamics of global flattening and wrinkling too. In anticipation of that discussion to come, that is where issues of agendas enter this narrative. Then after discussing that, I will explicitly turn to the above-repeated Point 3: a complex of issues that has been hanging over this entire discussion since I first offered the above topics list at the end of Part 16 of this series. And I will address a very closely related Point 4 and its issues too, as already briefly touched upon here.

I will in fact address all of that in what follows in this series. But to set the stage for that, I step back to add another layer of nuance if not outright complexity to the questions and possible answers of what innovation is in this context, and to whom. And I will very specifically use the points that I will make there, in what follows in addressing the issues of the above-added bullet point.

• As a first point that I raise here, a change might rise to the level of significance needed to be seen as an innovation because “at least someone might realistically be expected to see it as creating at least some new source or level of value or challenge, however small, at least by their standards and criteria” (with “…value or challenge” offered with their alternative valences because such change can be positively or negatively perceived.)
• But it is an oversimplifying mistake to only consider such changes individually, as if they only arose in a context-free vacuum. More specifically, a sufficient number of individually small changes: small and even more cosmetic-in-nature innovations, all arriving in a short period of time and all affecting the same individual or group, can have as great an impact upon them and their thinking as a single, stand-alone disruptively new innovation would have on them. And when those people are confronted with what they would come to see as an ongoing and even essentially ceaseless flood of New, even if that just arrives as an accumulating mass of individually small forms of new, they can come to feel all but overwhelmed by it. Context can in fact be essentially everything here.
• Timing and cumulative impact are important here, and disruptive is in the eye of the beholder.

Let’s consider those points, at least to start, for how they impact upon and even shape the standard innovation acceptance diffusion curve as it empirically arises when studying the emergence and spread of acceptance of New, starting with pioneer and early adopters and continuing on through late and last adopters.

• Pioneer and early adopters are both more tolerant of and accepting of the new and the disruptively new, and more tolerant of and accepting of a faster pace of their arrival.
• Or to put this slightly differently, late and last adopters can be as bothered by too rapid a pace of the new and of change as they would be bothered by pressure to adopt and use any particular new innovation more quickly than is comfortable for them, and even just any more minor new one (more minor as viewed by others.)
• Just considering earlier adopters again here, these details of acceptance or caution, or of acceptance and even outright rejection and resistance, stem from the fact that more new-tolerant and new-accepting individuals, and the demographics they represent, have a higher novelty threshold for even defining a change in their own thinking as actually being significant enough to qualify as being more than just cosmetic. And they have a similarly higher threshold level for qualifying a change that they do see as being a significant innovation, as being a disruptively new and novel one too.
• What is seen as smaller by the earlier adopters represented in an innovation acceptance diffusion curve is essentially certain to appear much larger to later adopters, and for whatever individual innovative changes, or combinations and flows of them, that might be considered.

And with that continuation of my Point 1 (and by extension, Point 2) discussions, I turn to consider how a flow of new innovations would impact upon a global flattening versus global wrinkling dynamic.

While most if not all of the basic points that I have just raised here in my standard innovation acceptance curve discussion apply here too, at least insofar as its details can be mapped to corresponding features there too, there is one very significant difference that emerges in the flattening versus wrinkling context:

• Push-back and resistance, as exemplified by late and last adopters in the standard acceptance curve pattern, is driven by questions such as “how would I use this?” or “why would I need this?”, as would arise at a more individual level. But resistance to acceptance as it arises in a wrinkling context is driven more by “what would adopting this New challenge and even destroy in my society and its culture?” It is more a response to perceived societal-level threat.

This is a challenge that is defined at, and that plays out at a higher, more societally based organizational level than would apply to a standard innovation acceptance curve context. And this brings me very specifically and directly to the heart of Point 4 of the above list and the question of who drives resistance to globalization and its open markets, and how. And I begin addressing that by noting a fundamentally important point of distinction:

• Both acceptance of change and resistance to it, in a global flattening and wrinkling context, can and do arise from two sometimes competing, sometimes aligned directions. They can arise from the bottom up and from the cumulative influence of local individuals, or they can arise from the top down.
• And to clarify what I mean there, local and bottom up, and (perhaps) more centralized in source and top down, can each mean either of two things too, as to the nature of the voice and the power of influence involved. This can mean societally shaped and society shaping political authority and message, coming from or going to voices of power and influence there. Or this can mean the power of social media and of social networking reach. And that is where I will cite and discuss social networking taxonomies and networking reach and networking strategies as a part of this discussion.

I am going to continue this discussion in a next series installment where I will focus explicitly on the issues and challenges of even mapping out and understanding global flattening and its reactive counterpoint: global wrinkling. And as a final thought for here that I offer in anticipation of that line of discussion to come, I at least briefly summarize a core point that I made earlier here, regarding innovation and responses to it per se:

• Change and innovation per se can be disruptive, and for both the perceived positives and the perceived negatives that they can bring with them. And when a sufficiently high percentage of an overall population primarily sees the positive there, or at worst the neutral, flattening is at least going to be more possible, and certainly as a “natural” path forward. But if a tipping point level of perceived negative impact arises, then the acceptance or resistance pressures that follow will favor wrinkling, and that will become a societally significant force and a significant part of the overall voice for those peoples too.

I will discuss the Who of this and both for who leads and for who follows in the next installment to this narrative. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations.

Dissent, disagreement, compromise and consensus 32 – the jobs and careers context 31

This is my 32nd installment to a series on negotiating in a professional context, starting with the more individually focused side of that as found in jobs and careers, and going from there to consider the workplace and its business-supportive negotiations (see Guide to Effective Job Search and Career Development – 3 and its Page 4 continuation, postings 484 and following for Parts 1-31.)

I have been successively addressing each of a set of workplace issues and challenges that can arise for essentially anyone who works sufficiently long-term with a given employer, that I repeat here in list form (with appended links to where I have discussed them) for smoother continuity of narrative:

1. Changes in tasks assigned, and resources that would at least nominally be available for them: timeline allowances and work hour requirements definitely included there (see Part 25 and Part 26),
2. Salary and overall compensation changes (see Part 27),
3. Overall longer-term workplace and job responsibility changes, and constraints box issues as change might challenge or enable your reaching your more personally individualized goals there (see Part 28),
4. Promotions and lateral moves (see Part 29),
5. And dealing with difficult people (see Part 30 and Part 31).

And while all of these issues can arise and can need to be addressed in combination with others on the list, they can also all be seen as separate and distinct jobs and careers issues that can call for largely separate negotiations to resolve. I have in fact discussed them separately up to here as more stand-alone topics. But I added one more issue: one more increasingly common challenge to this list that of necessity involves all of the above, simultaneously, and more. And that is:

6. Negotiating possible downsizings and business-wide events that might lead to them, and how you might best manage your career when facing the prospects of getting caught up in that.

I added this example of a negotiations-requiring workplace situation last on this list, because navigating this type of challenge as effectively as possible, calls for skills in dealing with all of the other issues on this list and more, and with real emphasis on Plan B preparation and planning, and on its execution too as touched upon in Part 23 and again in Parts 30 and 31. And my goal here is to at least begin a discussion as to how you might better approach this challenge or its possibility. And as a starting point that means more clearly stating what downsizings are, as cause and effect driven processes.

• You cannot effectively negotiate absent an understanding of what you have to, and can negotiate about. And knowing that calls for understanding the context and circumstance, and the goals and priorities of the people who you would face on the other side of the table. And as a crucial part of that, this also includes knowing as fully and clearly as possible, what options and possibilities they might and might not even be able to negotiate upon.

I begin this first step discussion for addressing the above Point 6 by acknowledging that I have personally been caught up in two downsizings so I write from direct experience here, and not simply from the perspective of abstract principles. And I have seen them play out when I was not an in-house employee or manager too. And that perhaps-relevant piece of my own workplace experience shared, I begin this posting’s main line of discussion by at least briefly outlining some of the details of the heart of this challenge itself: what downsizings are and what leads to them.

• In principle, this is simple and straightforward. Essentially any business that grows in scale beyond that of a single proprietor owner has at least some hands-on working, non-managerial employees. And as a business grows in scale it generally takes on managers who supervise them and coordinate their efforts towards the resolution of larger tasks than any single individual could carry out on their own. And next level up managers come onboard too if this trend towards growth continues. And payroll and benefits expenses can and often do rise in scale and significance to become among the largest ongoing expenses that most businesses face. So if a business has a setback in its incoming revenue and has to cut back on its expenses, staff and directly staff-related expenses are usually one of the first possible places considered when cutbacks in expenses paid and due are on the table.
• This can mean last in, first out and certainly in business contexts where seniority of employment has to be taken into account. Businesses with a strong union presence often follow that approach. But this type of retain or let-go determination can also be skills-based, or location based if for example it is decided to close a more peripheral office that might not have been as much of a profit center as desired or expected.
• Downsizings, while more usually driven by revenue and expense imbalances, can also be driven by pressures to phase out old systems and install new ones that might be better fits for the current business model in place. Think of staff reductions there, as they can arise when a business decides to outsource a functional area and its work, making it unnecessary to keep the people who have done that work in-house as ongoing employees. To take that out of the abstract with a specific example, there was a time when large numbers of businesses had their own in-house teams for developing and maintaining the more technical side of their websites and online presence. It is now much more common to outsource that type of specialized work to third party providers that only do this type of work and that can more cost-effectively provide these services. And that widespread change in organizational perspective and priorities led to a significant number of downsizings for people who had worked in-house in Information Technology and related departments, with those businesses shifting their in-house focus there essentially entirely to a more Marketing and Communications or other content-oriented focus.
• But to be blunt, and I will add a lot more candid than most senior managers are on this, downsizings are not just about cutting down on staff to reduce redundancies and to bring the business into leaner and more effective focus for meeting its business performance needs. Downsizings can also be used as opportunities to cut out and remove people who have developed reputations as being difficult to work with, or for whatever other reasons the managers they report to would see as sufficient justification. They are used as a no-fault opportunity for removing staff who do not fit into the corporate culture or who have ruffled feathers higher up on the table of organization, even if they would otherwise more probably be retained and stay.
• People can be and sometimes are fired with cause. But a business that pursues that path needs to be able to back up any such actions with fact and evidence-based reasons that it could offer to justify those dismissals. Otherwise it runs a risk of facing unlawful termination lawsuits, and with a distinct possibility of that happening if it operates in any of a great many legal jurisdictions.
• Downsizings, on the other hand, are entirely no-fault in nature, at least as formally defined. They can and do sweep up skilled workers who have proven their value to the organization and who have supportively fit into it and contributed to it. They can and do sweep up people the business would otherwise want to keep on-staff and long term. But downsizings can also be used, and are used, to get rid of people who do their jobs at performance levels that would militate against their being fired per se, but who at least someone in management would like to see leave anyway. All such a manager would need there is the cover of their business seeing a need to enter into an actual downsizing, for reorganizational purposes.
• The point that I have been leading up to in the past three bullet points of this list is simple in principle, even as it is complex and largely opaque in the details of any given actual downsizing event. People are let go for any and usually all of a complex mix of reasons: financial need on the part of the business, and with that the dismissal of good and desired employees; reduction in or elimination of functional areas in-house that could more cost-effectively be outsourced; and the “housecleaning” out of employees who, while effective at their jobs, do not fit there. And ultimately, all of these decisions are judgment calls on the part of the managers who are involved in carrying these actions out. I will come back to this point and its possibilities later in this series when I begin to discuss negotiations in this context. But to round out this bullet pointed list of downsizing-clarifying points, and to bring this point itself into clearer focus, consider the following scenario: the CEO of a business that has suddenly found itself in severe fiscal stress tells the C level heads of its functional arms on the table of organization that all of their departments and services are going to have to make reductions in scale, sharing the pain. No one service or functional area will simply take the hit there. So word goes down through middle and lower management that they have what amount to quotas to fill, and then they have to choose who is to be let go. If you work there and can see this coming, what can you do and how can you best present and represent yourself if you in fact want to stay working there? That is where your negotiations and your skills at that enter this narrative.

There are of course, more possible reasons and rationales for downsizings that I could have raised in my above list; my above-offered outline of what downsizings are is just a simplified cartoon representation of a more complex and nuanced process that is essentially always riven by pushback and challenge. Just consider my last bullet point and its “share the pain” example. Every senior manager and certainly every C level officer who is challenged to make their share of these cuts will want to argue the case for why their services should be spared, or at least allowed to make smaller cuts.

I will consider at least one more reason why downsizings happen at all as I continue this narrative, which I will identify here in anticipation of discussion to come. And it is one that I have seen play out firsthand, so I know from personal observation how real and how impactful it can be. A new, more senior manager who wants to do some personal empire building within their new employer’s systems can use a downsizing and reorganization in their area of oversight responsibility to put their name on how things are done there. Consider this a confrontational career enhancement tactic, and I will discuss it in that context. And consider this as an arena where a prepared, skilled employee or manager can negotiate their own circumstances with this type of empire builder too.

And with that noted, I have at least laid out the basic issues leading up to a downsizing here, and the basic issues of who gets swept up in them too. I will continue this discussion in the next installment of this series where I will begin addressing preparation and response options that hands-on employees and managers can use when facing these types of possibilities.

Meanwhile, you can find this and related material at Page 4 to my Guide to Effective Job Search and Career Development, and also see its Page 1, Page 2 and Page 3. And you can also find this series at Social Networking and Business 2 and also see its Page 1 for related material.

Finding virtue in simplicity when complexity becomes problematical, and vice versa 17

Posted in social networking and business by Timothy Platt on June 23, 2019

This is my 17th installment to a series on simplicity and complexity in business communications, and on carrying out and evaluating the results of business processes, tasks and projects (see Social Networking and Business 2, postings 257 and loosely following for Parts 1-16.)

I have been discussing trade-offs and related contingency issues in recent installments to this series, regarding:

• Allowing and even actively supporting free and open communications in a business, in order to facilitate work done and in order to create greater organizational agility and flexibility there while doing so …
• While also maintaining effective risk management oversight of sensitive and confidential information.

I have in fact turned to consider this particular due diligence and risk management, versus competitive efficiency balancing act in a number of contexts and from a number of perspectives in the course of developing this blog. To be more specific here, I addressed this dynamic in Part 16 of this series from an outside regulatory agency and legal mandate perspective, on the access and control side of that two-sided challenge.

I begin this next installment to that narrative progression by picking up on a key point of distinction that I have at least made note of in this series but that merits specific attention here. I often find myself writing of strategy and strategic approaches to thinking, planning and executing in a business. I begin here by raising a very basic question that, when more fully addressed, highlights aspects of that set of issues that can easily be overlooked. What, at least in this series’ type of context, is the difference between strategy and tactics?

I would argue that ultimately the most important differences between them are ones of scale. Tactical is essentially always, and by definition, here-and-now oriented, and local and immediate in both planning and execution. Strategic automatically means inclusion of a contextually significant combination of two wider-ranging types of considerations:

• Planning and execution along longer timeframes, and/or
• Planning and execution with an active and engaged, and inclusive awareness of the fuller range of interactive scope that decisions and actions in one part of a larger system might have elsewhere in that system.

Business systems are essentially always interconnected, even when the process flows under immediate tactical consideration in them cannot be readily connected to each other through the types of direct and immediate dependencies that might, for example, appear in a project work flow Gantt chart or related planning tool. Slowdowns and related challenges, to express this in terms of possible problematical consequences, can and do arise unexpectedly, and certainly when the wider perspective consideration of possible complications that would enter into due diligence-based strategic planning is not done.

And this leads to a fundamental truth and a fundamental problem: standard operating procedure rapidly becomes rote and routine, and any actual still-ongoing planning that enters into it tends to become tactical, and terse and even essentially pro forma tactical at that. This might not seem to matter, at least in general, and certainly when everything is proceeding as (essentially automatically) expected. But change and the unexpected, and from any impactfully significant source, can derail that. So how can you instill looking, as if with fresh eyes, into the routine and standard? How, in fact, can you do this as a matter of more usual practice, even if just as a sampling and reviewing process, and even when a business at least nominally should be prepared for the unexpected and even the disruptively so?

To take the issues that I raise here out of the abstract, I have seen emergency response systems that have come to take way too many things for granted, both for what they look for and for how they normatively operate, just to see unexpected and unprepared-for contingencies and occurrences bring them effectively to a halt, and precisely when they are suddenly most needed. And yes, I have some very specific instances of this in mind as I write this posting and this part of it, that still bother me for their consequences.

• Twenty-twenty hindsight and recriminations do not in fact help there. Better: how can this possibility for what amounts to a fundamental breakdown in capability, be more proactively managed and both to reduce the chances of this type of off-the-rails challenge happening, and to speed up an effective corrective response when the unexpected does happen?

Wider timeframe consideration here is, if anything, even more complex for its potential increases in reach and inclusion, both in planning and in targeted execution. This means looking out further in time per se, but it also includes consideration of process lifecycles and related consequentially recurring patterns, and separating them from trending, and from non-trending but non-recurring, patterns. And it means discerning these and related patterns out of the more random background static that arises in the ongoing output of essentially any recurring task as it is successively carried out. Tactical-only, with its here-and-now, more blindered focus, does not deal with anything like longer-term patterns, and in its pure form tends to function as if “current” were in a fundamental sense “always.”
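
As a minimal illustration of that pattern-versus-static separation, consider the following sketch. The data and the smoothing window are made-up assumptions, and a trailing average is only one of many possible tools here; the point is simply that a longer-timeframe, strategic reading can recover a pattern that a purely here-and-now, tactical reading of any single period would miss:

```python
# Illustrative only: a hypothetical monthly metric built from a slow trend,
# an annual cycle and random static, with a trailing average used to recover
# the longer-term pattern.

import math
import random

random.seed(1)

# 36 months of a hypothetical metric: trend + annual cycle + noise
series = [100 + 0.5 * m + 10 * math.sin(2 * math.pi * m / 12) + random.gauss(0, 4)
          for m in range(36)]

def trailing_average(values, window=12):
    """Trailing 12-month average: smooths out both the cycle and the static."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

smoothed = trailing_average(series)
print("latest single month reading:", round(series[-1], 1))
print("latest 12-month average:    ", round(smoothed[-1], 1))
```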

Strategic also means looking for possible change events that can have what at least amounts to retroactive impact, and certainly where prior decisions and actions have set up resource bases and process systems related to them, that would now have to be rebuilt and with data and other resources already in place affected accordingly. I focused in large part in Part 16 of this series on regulatory law. First of all, not all legal systems explicitly bar ex post facto laws. And second, even legal systems and jurisdictions that do explicitly bar them can still allow changes in interpretation of existing law as initially legislated and passed, through court decision and case law. And that can still create immediate if not effectively-retroactive pressures to change and correct systems in place.

There, remediation might not mean being fined for actions taken and business practices and processes followed prior to a definitive case law decision that would demand change. But a business might have to be prepared to change and remediate from a previously at least tacitly accepted and even presumed-required prior, to a now demanded new, and very quickly and even where this would mean updating or even fundamentally replacing complex deeply integrated-in systems – such as large parts of the database query and data access systems in place in a big-data accumulating, processing, and using business, and processes in place for making use of data from this critical resource.

I stated at the end of Part 16 that I would turn here to consider the issues of information management from a more strategic perspective, and I have begun doing so here with this more general discussion. I am going to switch directions in my next installment to this series, to more explicitly discuss information management strategy as a specific area of overall business strategy and planning per se. Meanwhile, you can find this and related material at Social Networking and Business and its Page 2 continuation. And also see my series: Communicating More Effectively as a Job and Career Skill Set, for its more generally applicable discussion of focused message best practices per se. I initially offered that with a specific case in point jobs and careers focus, but the approaches raised and discussed there are more generally applicable. You can find that series at Guide to Effective Job Search and Career Development – 3, as its postings 342-358.

Reconsidering Information Systems Infrastructure 10

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on June 20, 2019

This is the 10th posting to a series that I am developing, with a goal of analyzing and discussing how artificial intelligence and the emergence of artificial intelligent agents will transform the electronic and online-enabled information management systems that we have and use. See Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 374 and loosely following for Parts 1-9. And also see two benchmark postings that I initially wrote just over six years apart but that together provided much of the specific impetus for my writing this series: Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground and a late 2017 follow-up to that posting.

I have been discussing artificial intelligence agents from a variety of perspectives in this series, turning in Part 9 for example, to at least briefly begin a discussion of neural network and related systems architecture approaches to hardware and software development in that arena. And my goal in that has been to present a consistently, logically organized discussion of a very large and still largely amorphous complex of issues, that in their simplest case implementations are coming to be more fully understood, but that are still open and largely undefined when moving significantly beyond that.

We now have a fairly good idea as to what artificial specialized intelligence is, certainly when that can be encapsulated into rigorously defined starter algorithms with tightly constrained self-learning capabilities added in, that would primarily just help an agent to “random walk” its way towards greater efficiency in carrying out its specifically defined end-goal tasks. But in a fundamental sense, we are still in the position of standing as if at the edge of an abyss of yet-to-be-acquired knowledge and insight, when it comes to dealing with genuinely open-ended tasks such as natural conversation, and the development of artificial agents that can master them.

I begin this posting by reiterating a basic paradigmatic approach that I have offered in other information technology development contexts, and both in this blog and as a consultant, that explicitly applies here too.

• Start with the problem that you seek to solve, and not with the tools that you might use in accomplishing that.

Start with the problem itself that you seek to effectively solve or resolve, here an artificial intelligence problem: the information management and processing task that you seek to accomplish, and plan and design and develop from there. In a standard if perhaps at least somewhat complex-problem context, and as a simple case ideal, this means developing an algorithm that would encapsulate and solve a specific, clearly stated problem in detail, and then asking necessary questions as they arise at the software level and then the hardware level, to see what would be needed to carry that out. And ultimately that will mean selecting, designing and building at the hardware level for data storage and accessibility, and for the raw computational power requirements and related capabilities that would be needed for this work. And at the software level this would mean selecting programming languages and related information encoding resources that are capable of encoding the algorithm in place and that can manage its requisite data flows as it is carried out. And it means actually encoding all of the functionalities required in that algorithm, in those software tools, so as to actually perform the task that it specifies. (Here, I presume in how I state this, as a simplest case scenario, a problem that can in fact be algorithmically defined up-front and without any need for machine learning and algorithm adjustment as better and best solutions are iteratively found for the problem at hand. And I arbitrarily represent the work to be done there as fitting into what might in fact be a very large and complex “single overall task”, even if carrying it out might lead to very different outcomes depending on what decision points have to be included and addressed there, certainly at a software level. I will, of course, set aside these and several other similar more-simplistic assumptions as this overall narrative proceeds and as I consider the possibilities of more complex artificial intelligence challenges. But I offer this simplified developmental model approach here, as an initial starting point for that further discussion to come.)

• Stepping back to consider the design and development approach that I have just offered here, if just in a simplest application form, this basic task-first and hardware detail-last approach can be applied to essentially any task, problem or challenge that I might address here in this series. I present that point of judgment on my part as an axiomatic given, even when ontological and even evolutionary development, as self-organized and carried out by and within the artificial agents carrying out this work, is added into the basic design capabilities developed. There, How details might change but overall Towards What goals would not necessarily do so, unless the overall problem to be addressed is changed or replaced.

So I start with the basic problem-to-software-to-hardware progression that I began this line of discussion with, and continue building from there with it, though with a twist, and certainly for artificial intelligence oriented tasks that are of necessity going to be less settled up-front as to the precise algorithms that would ultimately be required. I step back from the more firmly stated a priori assumptions explicitly outlined above in my simpler case problem solving scenario – assumptions that I would continue to make and pursue as-is in more standard computational or data processing task-to-software-to-hardware systems analyses, and certainly where off the shelf resources would not suffice – to add another level of detail there.

• And more specifically here, I argue a case for building flexibility into these overall systems and with the particular requirements that that adds to the above development approach.
• And I argue a case for designing and developing and building overall systems – and explicitly conceived artificial intelligence agents in particular, with an awareness of a need for such flexibility in scale and in design from their initial task specifications step in this development process, and with more and more room for adjustment and systems growth added in, and for self-adjustment within these systems added in for each successive development step as carried out from there too.

I focused in Part 9 on hardware, and on neural network designs and their architecture, at least as might be viewed from a higher conceptual perspective. And I then began this posting by positing, in effect, that starting with the hardware and its considerations might be compared to looking through a telescope – but backwards. And I now say that a prospective awareness of increasing resource needs, with next systems-development steps, is essential. And that understanding needs to enter into any systems development effort as envisioned here, and from the dawn of any Day 1 in developing and building towards it. This flexibility and its requisite scope and scale change requirements, I add, cannot necessarily be anticipated in advance of their actually being needed, at any software or hardware level, and certainly not in any detail. So I write here of what might be called flexible flexibility: flexibility that itself can be adjusted and updated for type and scope as changing needs and new forms of need arise. So on the face of things, this sounds like I have now reversed course here and that I am arguing a case for hardware then software then problem as an orienting direction of focused consideration, or at the very least hardware plus software plus problem as a simultaneously addressed challenge. There is in fact an element of truth to that final assertion, but I am still primarily just adding flexibility, and the capacity to change directions of development as needed, into what is still basically the same settled paradigmatic approach. Ultimately, the underlying problem to be resolved has to take center stage and the lead here.

And with that all noted and for purposes of narrative continuity from earlier installments to this series if nothing else, I add that I ended Part 9 by raising a tripartite point of artificial intelligence task characterizing distinction, that I will at least begin to flesh out and discuss here:

• Fully specified systems goals (e.g. chess rules as touched upon in Part 8 for an at least somewhat complex example, but with fully specified rules defining a win and a loss, etc. for it.),
• Open-ended systems goals (e.g. natural conversational ability as more widely discussed in this series and certainly in its more recent installments with its lack of corresponding fully characterized performance end points or similar parameter-defined success constraints), and
• Partly specified systems goals (as in self-driving cars where they can be programmed with the legal rules of the road, but not with a correspondingly detailed algorithmically definable understanding of how real people in their vicinity actually drive and sometimes in spite of those rules: driving according to or contrary to the traffic laws in place.)

My goal here as noted in Part 9, is to at least lay a more detailed foundation for focusing on that third, gray area middle-ground task category in what follows, and I will do so. But to explain why I would focus on that and to put this step in this overall series narrative into clearer perspective, I will at least start with the first two, as benchmarking points of comparison. And I begin that with fully specified systems and with the very definite areas of information processing flexibility that they still can require – and with the artificial agent chess grand master problem.

• Chess is a rigorously definable game as considered at an algorithm level. All games as properly defined involve two players. All involve functionally identical sets of game pieces and both for numbers and types of pieces that those players would start out with. All chess games are played on a completely standardized game board with opposing side pieces positioned to start in a single standard accepted pattern. And opposing players take turns moving pieces on that playing board, with rules in place that would determine who is to make the first move, going first in any given game played.
• The chess pieces that are employed in this all have specific rules associated with them as to how they can be moved on a board, and for how pieces can be captured and removed by an opposing player. A player whose move places the opposing king under direct attack declares “check.” Winning by definition in chess means checkmating the opposing player’s king: placing it under attack in such a way that no legal move can remove that threat, at which point the winning player declares “checkmate.” And if a situation arises in which both players realize that a definitive formal win cannot be achieved in a finite number of moves from how the pieces that remain in play are laid out on the board, preventing either player from being able to checkmate their opponent’s king and win, a draw is called.
• I have simplified this description for a few of the rules possibilities that enter into this game when correctly played, omitting a variety of at least circumstantially important details. But bottom line, the basic How of playing chess is fully and readily amenable to being specified within a single highly precise algorithm that can be in place and in use a priori to the actual play of any given chess game.
• Similar algorithmically defined specificity could be offered in explaining a much simpler game: tic-tac-toe, with its simple and limited range of moves and move combinations. Chess rises to the level of complexity and the level of interest that would qualify it for consideration here because of the combinatorial explosion in the number of possible distinct games of chess that can be played, each carrying out an at least somewhat distinct combination of moves when compared with any other of the overall set. All games start out the same with all pieces identically positioned. After the first set of moves, with each player moving once, there are 400 distinct board setups possible, with 20 possible white piece moves and 20 possible black piece moves. After two rounds of moves there are 197,742 possible distinct games, and after three, that number expands out further to over 121 million (see the brief counting sketch after this list). This range of possibilities arises at the very beginning of any actual game, with the numbers of moves and of board layouts continuing to expand from there, and with the overall number of moves and move combinations growing to exceed and even vastly exceed the number of board position combinations possible, as differing move patterns can converge on the same realized board layouts. And this is where strategy and tactics enter chess, and in ways that would be meaningless for a game such as tic-tac-toe. And this is where the drive to develop progressively more effective chess playing algorithm-driven artificial agents enters this too, where those algorithms would just begin with the set rules of chess and extend out from there to include tactical and strategic chess playing capabilities as well – so agents employing them can play strongly competitive games and not just by-the-rules, “correct” games.
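
For readers who want to verify that combinatorial explosion for themselves, here is a minimal counting sketch. It assumes the availability of the python-chess library as one convenient legal move generator; any correct implementation of the rules would do. Note that exact published totals vary slightly depending on whether games that end early are counted, and on whether one counts move sequences or the resulting board positions, so the figures this produces may differ a little from those quoted above; the explosive growth is the point.

```python
# Count legal move sequences from the standard starting position, the
# classic "perft" calculation, using the python-chess library.

import chess

def count_move_sequences(board: chess.Board, depth: int) -> int:
    """Count the legal move sequences of the given length from this position."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += count_move_sequences(board, depth - 1)
        board.pop()
    return total

start = chess.Board()
for plies in (2, 4):  # one and two full rounds of moves
    print(plies // 2, "full round(s) of moves:", count_move_sequences(start, plies))
# Depth 2 yields 400 sequences; depth 4 yields close to two hundred thousand;
# depth 6 (three full rounds, not run here because plain Python is slow at
# this scale) climbs past one hundred million.
```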

So when I offer fully specified systems goals as a task category above, I assume as an implicit part of its definition that the problems that it would include all involve enough complexity so as to prove interesting, and that they be challenging to implement and certainly if best possible execution of the specific instance implementations involved in them (e.g. of the specific chess games played) is important. And with that noted I stress that for all of this complexity, the game itself is constrainable within a single and unequivocal rules-based algorithm, and even when effective strategically and tactically developed game play would be included.

That last point is going to prove important and certainly as a point of comparison when considering both open-ended systems goals and their so-defined tasks, and partly specified systems goals and their tasks. And with the above offered I turn to the second basic benchmark that I would address here: open-ended systems goals. And I will continue my discussion of natural conversation in that regard.

I begin with what might be considered simple, scale of needed activity-based complexity: the number of chess pieces on a board, and on one side of it in particular, when compared to the number of words as commonly used in wide-ranging, real-world natural conversation. Players start out with 16 chess pieces each, drawn from just six functionally distinct piece types. If you turn to resources such as the Oxford English Dictionary to benchmark English for its scale as a widely used language, it lists some 175,000 currently used words and another roughly 50,000 that are listed as obsolete but that are still at least occasionally used too. And this leaves out a great many specialized terms that would only arise when conversing about very specific and generally very technical issues. Assuming that an average person might in fact only actively use a fraction of this: let’s assume some 20,000 words on a more ongoing basis, that still adds tremendous new levels of complexity to any task that would involve manipulating and using them.

• Simple complexity of the type addressed there can perhaps best be seen as an extraneous complication here. The basic algorithm-level processing of a larger scale piece-in-play set, as found in active vocabularies, would not necessarily be fundamentally affected by that increase in scale, beyond a requirement for better and more actively engaged sorting and filtering and related software, as what would most probably be more ancillary support functions. And most of the additional workload that all of this would bring with it would be carried out by scaling up the hardware and related infrastructure that would carry out the conversational tasks involved, and certainly if a normal rate of conversational give and take is going to be required.
• Qualitatively distinctive, emergently new requirements for actually specifying and carrying out natural conversation would come from a very different direction, that I would refer to here as emergent complexity. And that arises in the fundamental nature of the goal to be achieved itself.

Let’s think about conversation and the actual real-world conversations that we ourselves enter into and every day. Many are simple and direct and focus on the sharing of specific information between or concerning involved parties. “Remember to pick up a loaf of bread and some organic lettuce at the store, on the way home today.” “Will do, … but I may be a little late today because I have a meeting that might run late at work that I can’t get out of. I’ll let you know if it looks like I am going to be really delayed from that. Bread and lettuce are on the way so that shouldn’t add anything to any delays there.”

But even there, and even with a brief and apparently focused conversation like this, a lot of what was said and even more of what was meant and implied, depended on what might be a rich and complex background story, and with added complexities there coming from both of the two people speaking. And they might individually be hearing and thinking through this conversation in terms of at least somewhat differing background stories at that. What, for example, does “… be a little late today” mean? Is the second speaker’s boss, or whoever is calling this meeting known for doing this, and disruptively so for the end of workday schedules of all involved? Does “a little” here mean an actual just-brief delay or could this mean everyone in the room feeling stressed for being held late for so long, and with that simply adding to an ongoing pattern? The first half of this conversation was about getting more bread and lettuce, but the second half of it, while acknowledging that and agreeing to it, was in fact very different and much more open-ended for its potential implied side-messages. And this was in fact a very simple and very brief conversation.

Chess pieces can make very specific and easily characterized moves that fit into specific patterns and types of them. Words as used in natural conversations cannot be so simply characterized, and conversations – and even short and simple ones, often fit into larger ongoing contexts, and into contexts that different participants or observers might see very differently. And this is true even if none of the words involved have multiple possible dictionary definition meanings, if none of them can be readily or routinely used in slang or other non-standard ways, and if none of them have matching homophones – if there is no confusion as to precisely which word was being used, because two or more that differ by definition sound the same (e.g. knight or night, and to, two or too.)

And this, for all of its added complexities, does not even begin to address issues of euphemism, or agendas that a speaker might have with all of the implicit content and context that would bring to any conversation, or any of a wide range of other possible issues. It does not even address the issues of accent and its accurate comprehension. But more to the point, people can and do converse about any and every one of a seemingly entirely open-ended range of topics and issues, and certainly when the more specific details addressed are considered. Just consider the conversation that would take place if the shopper of the above-cited chat were to arrive home with a nice jar of mayonnaise and some carrots instead of bread and lettuce, after assuring that they knew what was needed and saying they would pick it up at the store. Did I raise slang here, or dialect differences? No, and adding them in here still does not fully address the special combinatorial explosions of meaning at least potentially expressed and at least potentially understood that actual wide-ranging, open ended natural conversation brings with it.

And all of this brings me back to the point that I finished my above-offered discussion of chess with, and winning games in it as an example of a fully specified systems goal. Either one of the two players in a game of chess wins and the other loses, or they find themselves having to declare a draw for being unable to reach a specifically, clearly, rules-defined win/lose outcome. So barring draws that might call for another try that would at least potentially reach a win and loss, all chess games if completed lead to a single defined outcome. But there is no single conversational outcome that would meaningfully apply to all situations and contexts, all conversing participants and all natural conversation – unless you were to attempt to arrive at some overall principle that would of necessity be so vague and general as to be devoid of any real value. Open-ended systems goals, as the name implies, are open-ended. And a big part of developing and carrying through a realistic sounding natural conversational capability in an artificial agent has to be that of keeping it in focus in a way that is both meaningful and acceptable to all involved parties, where that would mean knowing when a conversation should be concluded and how, and in a way that would not lead to confusion or worse.

And this leads me – finally, to my gray area category: partly specified systems goals and the tasks and the task-performing agents that would carry them out, both on a specific instance-by-instance basis and in general. My goal for what is to follow now is to start out by more fully considering my self-driving car example, then turning to consider partly specified systems goals and the agents that would carry out tasks related to them, in general. And I begin that by making note of a crucially important detail here:

• Partly specified systems goals can be seen as gateway and transitional challenges, and while solving them as a practical matter can be important in and of itself,
• Achieving effective problem resolutions there can perhaps best be seen as a best practices route for developing the tools and technologies that would be needed for better resolving open-ended systems challenges too.

Focusing on the learning curve potential of these challenge goals, think of the collective range of problems that would fit into this mid-range task set as taking the overall form of a swimming pool with a shallow and a deep end, and where deep can become profoundly so. At the shallow end of this continuum of challenge, partly specified systems goals merge into the perhaps more challenging end of fully specified systems goals and their designated tasks. So as a starting point, let’s address low-end, or shallow end, partly specified artificial intelligence challenges. At the deeper end of this continuum, it would become difficult to fully determine whether a proposed problem should best be considered partly specified or open-ended in nature, and it might in fact start out designated one way only to evolve into the other.

I am going to continue this narrative in my next installment to this series, starting with a more detailed discussion of partly specified systems goals and their agents as might be exemplified by my self-driving car problem/example. I will begin with a focus on that particular case in point challenge and will continue from there to consider these gray area goals and their resolution in more general terms, and both in their own right and as evolutionary benchmark and validation steps that would lead to carrying out those more challenging open-ended tasks.

In anticipation of that line of discussion and as an opening orienting note for what is to come in Part 11 of this series, I note a basic assumption that is axiomatically built into the standard understanding of what an algorithm is: that all of the step by step process flows carried out in it would ultimately lead to, or at least towards, some specific, at least conceptually defined goal. (I add “towards” there to include algorithms that, for example, seek to calculate the value of the number pi (π) to an arbitrarily large number of significant digits, where complete task resolution is by definition going to be impossible. And for a second type of ongoing example, consider an agent that would manage and maintain environmental conditions such as atmospheric temperature and quality within set limits in the face of complex ongoing perturbing forces, where “achieve and done” cannot apply.)
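
To make that “towards” qualifier concrete, here is a brief illustrative sketch of my own, using the Nilakantha series simply because it converges reasonably quickly; the names used and the 1,000-step cutoff are arbitrary choices for the example. Each iteration moves the estimate of pi closer to its true value, but no iteration ever completes the goal – in contrast to an endpoint-determinable task such as sorting a list, which halts with its goal fully met.

```python
from math import pi
from typing import Iterator


def pi_estimates() -> Iterator[float]:
    """Yield successive approximations of pi via the Nilakantha series:
    pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..."""
    estimate = 3.0
    sign = 1.0
    n = 2
    while True:  # no terminal state: the goal can only ever be approached
        estimate += sign * 4.0 / (n * (n + 1) * (n + 2))
        sign = -sign
        n += 2
        yield estimate


approximation = 3.0
gen = pi_estimates()
for _ in range(1000):  # an arbitrary cutoff; the series itself never finishes
    approximation = next(gen)
print(abs(pi - approximation) < 1e-8)  # True: ever closer, but never exactly "done"
```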

Fully specified systems goals can in fact often be encapsulated within endpoint-determinable algorithms that meet the definitional requirements of that axiomatic assumption. Open-ended goals as discussed here would arguably not always fit any single algorithm in that way. There, ongoing benchmarking and performance metrics that fit into agreed-to parameters might provide the best available alternative to the kind of final goal specification presumed there.
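
And here is an equally hypothetical sketch of what that ongoing benchmarking alternative might look like, returning to my environmental control example from above; the comfort band, window size and acceptance threshold are all invented for illustration and are not prescribed values. The point is that the check is re-run indefinitely against agreed-to parameters, rather than ever being marked as finally achieved.

```python
from collections import deque


class ComfortMonitor:
    """Track what fraction of recent temperature readings stayed inside an
    agreed-to comfort band, as an ongoing performance metric rather than a
    one-time goal."""

    def __init__(self, low: float, high: float, window: int = 100):
        self.low, self.high = low, high
        self.readings = deque(maxlen=window)  # only the most recent readings count

    def record(self, temperature: float) -> None:
        self.readings.append(temperature)

    def in_band_fraction(self) -> float:
        if not self.readings:
            return 1.0
        in_band = sum(1 for t in self.readings if self.low <= t <= self.high)
        return in_band / len(self.readings)

    def acceptable(self, threshold: float = 0.95) -> bool:
        # No "achieve and done" here: this check is meant to be re-run indefinitely.
        return self.in_band_fraction() >= threshold


monitor = ComfortMonitor(low=20.0, high=23.0)
for reading in (21.0, 22.5, 24.1, 21.8):
    monitor.record(reading)
print(monitor.in_band_fraction())  # 0.75 on this short illustrative sample
print(monitor.acceptable())        # False until measured performance recovers
```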

In a natural conversation, this might mean, for example, people engaged in a conversation not finding themselves confused as to how their chat seems to have become derailed through a loss of focus on what is actually supposed to be under discussion. But even that type and level of understanding can be complex, as perhaps illustrated with my “shopping plus” conversational example of above.

So I will turn to consider middle ground, partly specified systems goals and agents that might carry out tasks that would realize them in my next installment here. And after completing that line of discussion, at least for purposes of this series, I will turn back to reconsider open-ended goals and their agents again, and more from a perspective of general principles.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 and Page 3 continuations. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Xi Jinping and his China, and their conflicted relationship with Hong Kong – an unfolding Part 2 event: 1

Posted in macroeconomics by Timothy Platt on June 19, 2019

I have recently been offering a progression of postings on Donald Trump and on Xi Jinping, where I have been analyzing, and comparing and contrasting their approaches to leadership. And as part of that larger effort, I have been writing about how, and something of why, they have both turned to authoritarian approaches both to define their leadership goals and to realize them. My more recent postings in that still ongoing narrative have focused on legacy building, first as President Trump seeks to pursue that historically defining goal, and then as Xi Jinping has. And my most recent installment to that, as can be found at:

Some Thoughts Concerning How Xi and Trump Approach and Seek to Create Lasting Legacies to Themselves 6,

was a third consecutive posting there, to discuss historical forces and events that have come to shape Xi’s approach to this, and with a goal of shedding light on both his understanding of effective leadership per se, and on how he sees his role as a leader in today’s China.

My initial intention was to continue that posting progression with a next step offering concerning Xi and his China, continuing that ongoing historical timeline-based narrative of issues and understandings that appear to have shaped Xi and his drive to succeed. And I will write and offer that posting here in this blog soon. But I have been following a succession of recent events in Hong Kong that have prompted me to interrupt that narrative, for their timely relevance both in understanding modern China and where that nation seeks to go, and in understanding Xi and how he seeks to lead it there.

I have identified this posting in its title as “… an unfolding Part 2 event.” And I begin its narrative by explaining that wording, as a starting point for putting what is to follow into at least a recent historical perspective. And I do so by noting that the Part 1 event that can be seen as prelude to the current events that I would write of here, was the Yellow Umbrella Movement (雨傘運動) of Hong Kong protestors that came to a head and erupted across the Hong Kong Special Administrative Region in 2014, and that was most actively carried out from roughly September 28 through December 15 of that year.

Angry but peaceful, passively resisting crowds, in large numbers, shut down key areas of Hong Kong and its government services, among other areas of activity, in protest over widely perceived interference from Beijing in what should have been local Hong Kong elections. And this, in many respects, was the first real testing challenge that Xi Jinping faced as General Secretary of the Communist Party of China and as leader of their overall government.

I wrote two postings as current events updates to a series that I was developing at the time: China and Its Transition Imperatives, that dealt with this then-still very actively unfolding news story: Part 12.5: an inserted news update re Hong Kong and Part 12.6: a continuation of that. And my reason for adding those extemporaneous additions to that series, and for adding them in as such, was simple. I saw a direct and immediate challenge emerging to the government of China in Beijing, to the Communist Party of China, and to Xi Jinping and his leadership, and doing so as a globally visible spectacle. And I found myself viewing and thinking about this in light of the 1989 Tiananmen Square protests, and the crackdown and massacre that ended that ordinary citizen-based reform movement attempt. I felt real concern that Xi’s China might react to and suppress this attempted reform movement through military action too, just as the China of Deng Xiaoping did in Tiananmen Square.

Xi Jinping, however, found more peaceful, if equally effective ways to curtail and then shut down that call for reform, leaving smaller numbers of protestors to carry on the struggle with their yellow umbrellas for months and even years to come, after the main protests of 2014 ended. And then a decision was made to change Hong Kong’s legal system to allow the Beijing government and its courts to impose extradition of people who would otherwise face trial in Hong Kong, under Hong Kong law with its legal protections, to other venues including Beijing itself and Taiwan, and at the complete discretion of the Beijing government and its courts. And this in fact came about as a generally applicable decision and action that was intended, at least for immediate use, to allow extradition of a specific individual who was accused of committing murder, to a Taiwan court – not to mainland China and not to a Beijing court at all!

For purposes of this discussion, it does not matter whether that intended change had its origins in Beijing, or in Hong Kong and its government – particularly as Carrie Lam, the Chief Executive of Hong Kong since 2017, is widely known to have been hand-picked for that job by the government of the People’s Republic of China in Beijing and by the Communist Party there, in what amounts to an abrogation of the local authority and control called for in the treaty under which Great Britain returned Hong Kong to China.

What is currently happening in Hong Kong that would rise to a level of impact, and of possible action, so as to make this a Part 2 continuation of the Part 1 Yellow Umbrella Movement? I would begin addressing that question by repeating a detail that I offered in passing earlier in this posting, here noting its current, immediate significance. The Tiananmen Square Massacre, an event that is considered so toxic in the People’s Republic of China that it is all but illegal to even publicly acknowledge that it happened, took place in 1989. And the current protests in Hong Kong are taking place as its 30th anniversary fast approaches, and at a time when that cautionary note of an event is back in the news again, and globally so. The people of Hong Kong certainly know at least in outline what happened then. And recent revelations as to what actually happened then, with new details emerging, have brought all of this into very sharp current focus.

You can find a brief sampling of current, as of this writing, references to this now-emerging side to that 1989 story at:

New Documents Show Power Games Behind China’s Tiananmen Crackdown,
Photos of the Tiananmen Square Protests Through the Lens of a Student Witness and
He Stayed at Tiananmen to the End. Now He Wonders What It Meant.

And you can find also-recent accounts of how this event has been all but officially obliterated from memory in mainland China, even as it is known, discussed and thought about in Hong Kong at:

Witnessing China’s 1989 Protests, 1,000 Miles From Tiananmen Square (in Hong Kong) and
Tiananmen Anniversary Draws Silence in Beijing but Emotion in Hong Kong.

And for a lessons-learned news piece as to how China’s People’s Liberation Army – the force that moved on Tiananmen Square, creating that massacre – views what it did and what came of it, see:

30 Years After Tiananmen, a Chinese Military Insider Warns: Never Forget.

And I would round out this first half of this posting by offering a brief selection of in-the-news links concerning this Part 2 protest movement itself:

Fearing China’s Rule, Hong Kong Residents Resist Extradition Plan,
Hong Kong March: Vast Protest of Extradition Bill Shows Fear of Eroding Freedoms,
Hong Kong Leader, Carrie Lam, Says She Won’t Back Down on Extradition Bill,
Why Are People Protesting in Hong Kong?,
Hong Kong Residents Block Roads to Protest Extradition Bill,
The Hong Kong Protests Are About More Than an Extradition Law,
Hong Kong’s Leader, Yielding to Protests, Suspends Extradition Bill,
China Backs Hong Kong’s Leader Despite Huge Protests, and
Hong Kong Protesters Return to the Streets, Rejecting Leader’s Apology.

I will have more to say on this, and particularly on its impact on Xi Jinping and his rule in China, in an upcoming installment to my series on Xi and Donald Trump as they approach and seek to create lasting legacies to themselves, as cited towards the start of this posting. But for now at least, I conclude this half of this posting’s narrative here, simply adding that:

• Little if anything of what I have offered here should come across as being startlingly new to anyone who follows the news at all, and certainly outside of the People’s Republic of China itself, where this unfolding story is being officially censored from view by their Golden Shield Project: their Great Firewall of China.
• But at the same time, it is impossible to fully understand the bind that China’s leadership sees itself in from all of this, and certainly since the Yellow Umbrella Movement and definitely today, without knowing and understanding something of the background history of all of this, and certainly for China as it has held and then lost and then regained hegemony over Hong Kong as part of its national territory.

I am going to delve into some of that history in a follow-up posting to this one, where I will selectively discuss trade-motivated foreign intervention in China, and particularly as that led up to the First Opium War there. And in the course of that, I will discuss how China was forced to cede ownership of Hong Kong to Great Britain as a point of humiliation imposed on the Qing emperor, under the terms of the Treaty of Nanking of 1842 that ended that conflict. And I will equally selectively discuss the treaty and its terms that China had to agree to when Great Britain finally returned Hong Kong to Chinese rule in 1997. I will simply add here, in anticipation of what is to follow in this, that the history that I will briefly outline there holds a great deal of meaning for Xi Jinping and his China’s leadership, and certainly when considered for how it fits into and supports the two-sided historical narrative mythos that by all appearances drives much of Xi’s understanding of where China is, where it can go and how it should achieve those goals (as addressed in my above-cited Xi and Trump postings.)

Meanwhile, you can find this and related Xi-oriented material at Macroeconomics and Business and its Page 2 continuation.
