Platt Perspective on Business and Technology

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 9

Posted in business and convergent technologies, strategy and planning by Timothy Platt on December 6, 2017

This is my 9th installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 342 and loosely following for Parts 1-8.)

One of my core goals for Part 8 was to in effect force a reconsideration of what “business cycle” means, expanding it out to include impact and influence from a wider range of actively involved stakeholders, within the specific business and its marketplace, and as found throughout the supply chains and other larger value chain systems that it of necessity operates in.

• Quite simply, no business operates in a vacuum. Businesses work with and depend upon other businesses, as well as increasingly complex business-to-consumer systems with increasingly complex and important feedback and two-way communications governing all of this.

But up to here, at least in the narrative of this series, I have cited “marketplace” and “consumer” as largely undefined and uncharacterized markers, while focusing on the businesses that deal with them and that seek to connect effectively to them. My goal for this next installment in this narrative progression is to at least begin to open up the black box construct of markets and consumers, in order to more fully understand them, and to more fully understand what those businesses need to do regarding them, to endure as competitively strong enterprises.

I phrase that in perhaps extreme terms, at least in part because I have been focusing on virtuous and vicious cycle decision and action patterns in business strategy and operations in the past few installments to this series, where extremes become relevant. So I approach this topic from the perspective of how strategy and operations can and do have business-effectiveness and even business-viability defining consequences. And with that noted, I turn to consider markets and consumers, and I do so from the fundamentals and with a statement that will bear explanation:

• When businesses operate in interactive networks of the type under consideration here, the distinction between business and consumer, and that between provider and consumer blur and become more matters of perspective and orientation than anything else.

As an obvious starting point for explaining that point, essentially every business in a supply chain system can legitimately be viewed as a customer and a marketplace participant for at least one other business in that interacting system. In fact, essentially every business that participates in this type of system can legitimately be considered a customer, and a loyal repeat customer, of several or even many other businesses there. And at least as significantly, when supply chain systems are robust and stable, the businesses participating in them ultimately provide value to the businesses that they service and provide for there, by helping them to more effectively and cost-effectively serve the needs of their own customers and marketplace: other businesses in those same systems included. So these relationships can come to offer success-enabling value for all concerned, in a feedback-driven and reinforced manner.

With this blurring in mind, let’s at least conceptually parse the concept of market into two basic categorical subtypes:

• A direct market for a business consists of its own current, actively involved customer base, plus whatever larger demographic those customers belong to that the business could realistically engage and convert into actively engaged customers – when that business follows its current business and marketing plans as already in place.

Obviously, a business can always at least plan for and attempt to change the target market demographic range that it would be able to effectively draw actively involved customers from, and even very significantly so. But for purposes of this discussion at least, that type of shift would require at least a measure of change in its underlying business model insofar as it specifies target markets, and certainly where more than just minor target market adjustments are being considered. So when I write here in terms of a business’ “direct market”, I do so considering it as if viewed from a snapshot-in-time perspective of where it is now and how it functions. And with that perspective in mind, I correspondingly add that:

• An indirect market for a business, in simplest terms, consists of the direct market of a second, customer business that the first enterprise services in a supply chain or related system, as a client business there.

And the positive value that a business offers in that system can ultimately be seen as a measure of how effectively and fully it offers value to the indirect markets that it is connected to through its supply chain relationships, as it brings value to its supply chain partner/client businesses. Ultimately, businesses create greater levels of value for themselves through really effective business-to-business collaborations than they could through more strictly stand-alone effort. Value creation directed to direct and to indirect markets in these systems is at the very least additive for the businesses in them that receive these benefits.

These points of conclusion fall directly out of the presumption that the real sustaining strength of a business is in how competitively effectively it can bring value to its customer base, and in ways that would prompt its members to keep buying from it, providing it with revenue through that transactional activity. Think of this as a brief discussion as to how collaboration can amplify the maximum possible level of such value creation that could be achieved.

The easiest and clearest way to parse these systems is to consider business-to-business dyads – simplest case two-business interaction models. But realistically, impact here spreads out throughout entire supply chain systems. And both direct and indirect markets can and do overlap too. As a simplest case example there, consider delivery businesses that enter into both business-to-business, and direct business-to-consumer transactions, where at least some of their customers might be members of both their own actively involved direct market and the indirect market that they face through one (or more) of their partner businesses.

If this set of distinctions highlights and at least somewhat clarifies how complex business-to-market relationships can become, and certainly in more complex systems as found in supply chains, then it serves its purpose. Apparent simplicity can arise from actual underlying simplicity, but it can also arise from lack of attention to the details, and this is a context where that is readily possible; we all tend to take terms like “market” for granted, as if they were somehow axiomatically unexaminable, where detailed examination can be essential.

I am going to turn in my next installment to this series to consider the issues of marketing and sales in the types of complex and multi-layered contexts that I have been addressing here. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 4, and also at Page 1, Page 2 and Page 3 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.


The fallacy of the singularity and the fallacy of simple linear progression, reconsidered

I have recently found myself thinking back to a posting that I first added to this blog in February, 2010 – just a few months after starting it:

Assumption 6 – The fallacy of the Singularity and the Fallacy of Simple Linear Progression – finding a middle ground.

I offered that posting in my directory: Reexamining the Fundamentals, as an installment in a series of brief notes in which I posed questions and suggested reconsideration of a succession of issues that we can find ourselves taking for granted.

My basic argument in the sixth installment to that series, as cited here, was that aside from astronomical events such as the emergence of black holes from massive supernova explosions, true singularities do not arise, and certainly not on Earth or in our range of direct and immediate ongoing experience. And if singularities do not actually arise in our direct experience, neither does simple, same-path-forward linear progression, except as it might arise in a very time-limited manner. Long-term, and certainly unending, linear developmental progression as I write of it here is just as much a simplification of more complex phenomena, and just as much of a mirage, as are putative singularities in any directly human or human societal contexts.

I specifically cited the book:

• Kurzweil, Ray. (2005) The Singularity Is Near: When Humans Transcend Biology. Penguin Books.

in that earlier posting, for how Kurzweil predicted an acceleration in the pace of innovation and change, until a true singularity for it would be reached. For purposes of this posting and its narrative, let’s construe singularity there to mean the pace of change, and of disruptively novel change, accelerating to a point where essentially no one can keep up, no matter how much of a pioneer and earliest, fastest adopter they would be when positioned on an innovation diffusion theory-based adoption curve.

In 2010, we lived in a context where that type of singularity event was all but certain never to arise, and certainly when innovation was being developed and advanced entirely from direct human initiative, and by essentially the same people and types of people who would ultimately have to accept and adapt to any given step in this process of change, and buy into it as a part of that. Think there in terms of businesses that would bring next step innovations to market: they have to be financially successful enough from their earlier efforts, and from their success in the marketplace from that, to be able to afford to design, prototype test and manufacture a next step innovation too. Innovation developers who function as such in a business or enterprise have to have the resources and the opportunity to develop their next step innovation, and the next after it. And this requires that the business that pays for this be able to afford it and still keep its doors open. That, and certainly according to the logic of my earlier Assumption 6 posting, of necessity breaks down if innovation arrives so quickly to the marketplace that it cannot achieve buy-in, and if it is impossible to achieve the necessary consumer support that would drive this innovation cycle.
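The adoption curve that I invoke here has a standard formalization: the Bass diffusion model. The following is a minimal, illustrative sketch of it; the market size m and the innovation (p) and imitation (q) coefficients are assumed values chosen purely for illustration, not figures from any posting cited here.

```python
# Minimal simulation of the Bass diffusion model, one standard way to
# formalize an innovation adoption curve. All parameter values below are
# illustrative assumptions.

def bass_adoption(m=1000.0, p=0.03, q=0.38, periods=20):
    """Return the cumulative number of adopters at the end of each period.

    New adopters per period = (p + q * N / m) * (m - N), where N is the
    cumulative adopter count so far: p captures adoption driven by external
    influence (e.g. marketing), and q adoption driven by imitation of
    existing adopters (word of mouth).
    """
    curve = []
    cumulative = 0.0
    for _ in range(periods):
        cumulative += (p + q * cumulative / m) * (m - cumulative)
        curve.append(cumulative)
    return curve

curve = bass_adoption()
```

Plotted over time, this produces the familiar S-shaped curve: slow early uptake among pioneer and early adopters, a steep middle as imitation takes over, and a long tail as the later and last adopters are finally reached.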

But even as I wrote that earlier posting, there was at least in principle, a possible way around that anthropocentric, from human to human restraint mechanism on the maximum possible sustainable pace of change. And we might be on the verge of seeing the emergence of the first simple test case proof of principle examples of how that might happen. And with that noted and as a starting point for reconsidering the line of reasoning offered in my 2010 posting, as repeated and expanded upon here, I cite a brief news story that recently appeared in the New York Times:

Building A.I. That Can Build A.I.

This is a news story about artificially intelligent machines that can build other artificially intelligent machines. It focuses on a more blue-sky research and development project taking place at Google, which they call AutoML (where ML stands for machine learning). See this Google Research Blog posting:

Using Machine Learning to Explore Neural Network Architecture

The goal of this project is to develop machine learning algorithms that can design and build next generation, improved machine learning algorithms, using a neural networks approach, as that approach is so effectively oriented towards supporting iterative, step-by-step, experience-based development and improvement.

The proximal goal of AutoML is to make it possible for less experienced and less expert artificial intelligence (AI) programmers to make significant advances in developing and refining their own AI software: software that can tap into the specific task-level knowledge and insight that they, and the organizations that they work for, have particular expertise in. And in fact, the most highly skilled and experienced AI programmers who are out there making necessary advances in their field now are, and will continue to be, in very limited supply, even as need and demand for them continues to rise. There are far more areas of specialized need for the skills that they have than there are expert professionals to do all of this possible work. But as this takes off and initiatives such as AutoML really begin to succeed, that goal will be superseded by the larger and more widespread goal of greater efficiency and cost-effectiveness in the innovative effort itself.

Where is AutoML now in its development? It still represents what will come to be seen as a more embryonic, proof of principle stage for what is to come. As of this writing, the only working examples coming out of this initiative that have come to light revolve around more effectively solving tasks such as very simple visual pattern recognition tests, so that a machine can, for example, drop a ball into a blue bowl when that bowl is randomly positioned in a grouping of bowls of other colors.

But … the principles involved there have potentially open-ended application, and for essentially any task that could conceivably be captured in an algorithm: fuzzy logic-based, as well as more deterministic as has been explored up to now.
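To make the underlying idea concrete, here is a toy sketch of the search loop at the heart of "machine learning that designs machine learning" efforts such as AutoML. To be clear, this is not Google's implementation: real systems train and validate every candidate network, whereas the score_architecture() function here is a hypothetical stand-in that simply rewards moderate depth and width.

```python
import random

# Toy sketch of automated architecture search: repeatedly propose a candidate
# network shape, score it, and keep the best one found. score_architecture()
# is a hypothetical stand-in for the expensive step of actually training and
# validating each candidate network.

def score_architecture(layer_widths):
    """Stand-in for 'train this network and return its validation accuracy'."""
    depth_penalty = abs(len(layer_widths) - 4) * 0.1
    width_penalty = sum(abs(w - 64) for w in layer_widths) / 1000.0
    return max(0.0, 1.0 - depth_penalty - width_penalty)

def random_search(trials=50, seed=0):
    """Sample candidate architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(trials):
        # A candidate architecture here is just a list of hidden-layer widths.
        candidate = [rng.choice([16, 32, 64, 128])
                     for _ in range(rng.randint(1, 6))]
        score = score_architecture(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, best_score = random_search()
```

Production systems replace the random proposal step with a learned controller (itself a neural network) and replace the stand-in scorer with actual training runs, but the outer loop, of machine-generated candidates evaluated and improved upon iteratively, is the same.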

I find myself writing this posting at a point in time of fundamental, pivotal change. And I write it with an eye towards where the fruits of projects such as AutoML will develop – not “might” but “will.” And when machine learning can effectively supplant the need for human-based expertise and experience in the design and development of AI systems, that will effectively remove one half of the system of brakes that I first wrote of here in my above cited 2010 posting.

That would not make innovation singularities possible as a matter of reaching an infinitely fast pace of development, but it will force a reconsideration of what singularity means, as I have tentatively defined it earlier in this posting. And that has the potential, at least, to significantly impact how people who would variously fit along an innovation adoption curve approach the change taking place around them, and certainly those who would naturally find themselves to be later, slower adopters of change.

I initially planned on offering this as a single, one-off posting, but writing it has prompted me to want to write a second, at least somewhat related new posting too. I am at least tentatively considering as a working title for that: “Reconsidering Information Systems Infrastructure,” and my goal for it is to expand the scope that is usually included there, beyond information storage and transmission systems per se, to include information and knowledge processing as well, and certainly as that has moved into the cloud and into the information systems backbone. The issues that I touch upon here become important there too.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find a link to this posting, appended to the end of Section I of Reexamining the Fundamentals as a supplemental entry there.

Innovation, disruptive innovation and market volatility 37: innovative business development and the tools that drive it 7

Posted in business and convergent technologies, macroeconomics by Timothy Platt on November 24, 2017

This is my 37th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business, posting 173 and loosely following for Parts 1-5 and Macroeconomics and Business 2, posting 203 and loosely following for Parts 6-36.)

I have been systematically working my way through a to-address list of topic points in recent installments to this series, which I repeat here for purposes of continuity. (Note that I append reference links to the ends of the points on this list that I have already addressed, indicating where I did so):

1. Innovation and its realization are information and knowledge driven (Part 32).
2. And the availability and effective use of raw information and of more processed knowledge developed from it, coupled with an ability to look beyond the usual blinders of how that information and knowledge would be more routinely viewed and understood, to see wider possibilities inherent in it (Part 33),
3. Make innovation and its practical realization possible and actively drive them (Part 34, Part 35 and Part 36).
4. Information availability serves as an innovation driver, and business systems friction, and the resistance to enabling and using available business intelligence that that friction creates, significantly set the boundaries that would distinguish between innovation per se and disruptively novel innovation as they would be perceived and understood,
5. And in both the likelihood and opportunity for achieving the latter, and for determining the likelihood of a true disruptive innovation being developed and refined to value-creating fruition if one is attempted.

And I turn here to consider Point 4 of that list. But before I do so, I want to at least briefly address a point that I have left hanging from the end of Part 36 as I finished up its discussion of Point 3: my promise to provide references regarding research financing here. I would at least begin that by offering two series that I have included in this blog, both appearing in my Macroeconomics and Business directory with its Page 2 continuation:

• Considering a Cost and Benefits Analysis of Innovation (that directory, postings 137 and loosely following for its Parts 1-9) and
• Building for an Effective Portfolio of Marketable Offerings (that directory, postings 196 and loosely following for its Parts 1-6.)

The first of these references takes a perhaps more macroeconomic view of that topic than would be called for here, and the second addresses it as one facet of a more widely inclusive discussion, though it does directly address the core issues that I simply allude to in this series, for developing and building a more balanced research portfolio per se. I am going to turn back to the issues of research financing in this series after addressing Point 5 of the above list, to at least begin to fill in what I now see as real gaps in my coverage of research financing and related matters.

With that stated, I turn here to more specifically address Point 4 from above. And I begin that by stating a point that should be completely obvious in the abstract, but that can become obscured in the specific context:

• Innovation offers perceivable value because it addresses what have been unmet, or at least inadequately met, needs, and does so more effectively and/or cost-effectively than anything already routinely available could. And knowing what to innovatively develop and how, on the developing and producing side of that, and knowing when a proffered new development would meet such needs, on the marketplace and consumer side, are all about information development and availability, and communications (e.g. marketing), and ultimately as much on the within-business development and production side of this as on the market and consumer facing side of it.

If this bullet point and its issues hold for more routine, incremental innovation per se, where in most cases all involved parties have a preparatory background for what is to come next, from its already being at least somewhat familiar, then all of this becomes much more pressing and more difficult to achieve when the innovation involved is disruptively new and novel, and no one starts out with any basis of familiarity to help them think about and address it.

A disruptively new and novel innovation might, if really successful, become a new essential and a source of products that come to be taken for granted as such. And to follow up on that, the tools and resources that we most fully take for granted mostly all began as disruptively novel offerings, and ones that only sufficiently wealthy early and pioneer adopters would buy into, even as people in general came with time to take them for granted. Consider the electric light, the refrigerator, the automobile and telephone, or the cell phone for a more modern disruptive innovation if you will. I could, of course, add to this list in an essentially open-ended manner, but will only cite two more examples here that are particularly relevant to this series and its discussion: personal computers and their smaller and more portable versions (laptop computers, tablets and the like), and the internet, and the wirelessly connected, anywhere-to-anywhere internet in particular.

In their beginnings, these initial disruptive innovations and the progression of step by step follow-up innovations that arose from them all started out as unknowns to the general public. And it took time for them to become embraced as so basic to all of our lives that most of us cannot readily imagine what our lives would be like without them. But their early and initial acceptance and use depended essentially entirely on effective information availability and sharing: first in the beachhead acceptance of pioneer and early adopters, and then with waves of differently focused and framed messages to progressively later and later adopters, until even the latest and last of them had adopted these changes too.

• And this information and its communication, shaped by and towards whatever adoption curve audience would be brought in next, had to both convincingly offer a case for buying into a given change, and convincingly explain what that change even is, as a source of potential value to a potential innovation adopter.

With that noted for background purposes, I am going to more directly address Point 4 and then continue on to discuss Point 5 of the above list in my next series installment. And then as promised above, I will reconsider the issues of research financing: there, in large part from a more accounting and bookkeeping perspective. Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 5

Posted in business and convergent technologies, social networking and business by Timothy Platt on November 6, 2017

This is my 5th installment to a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 354 and loosely following for Parts 1-4.)

I have been addressing change in this series, and disruptive change, both in the positive sense of improving and expanding the information technology and communications systems that we rely upon, and on the downside vulnerabilities side that these innovations bring with them too. And I have also, in that same context, been addressing how many of us, both as individuals and as participants in organizations, keep failing to address even old and known cyber-risks, even when effective means are readily available to patch them. I add that that circumstance continues to hold even when effective risk reducing measures are readily and even freely available, and when warning signs are developing that those risk sources are being systematically exploited already.

I begin this series installment by citing some recent news pieces that might focus in on specific events and specific organizations, but that hold warning for all. The three news stories that I would cite here, address very different seeming events, but I would argue that they hold more in common than might be apparent, as explicit examples of realized vulnerabilities to the issues that I raise in this series. So I offer them with that point in mind:

Identity Thieves Hijack Cellphone Accounts to Go After Virtual Currency,
Equifax Says Cyberattack May Have Affected 143 Million in the U.S. and
Every Single Yahoo Account Was Hacked – 3 billion in all.

Focusing in on the last of these three as a starting point for follow-up discussion of these events, I note that at its peak, Yahoo was valued at just over $100 billion as a company. It has lost value since then, over a period of some 15 years and for a variety of reasons. But this event had to have contributed to its devaluation from that high point, as of when Verizon bought out Yahoo for only $4.8 billion. Not to belabor the obvious, but that sale price was less than 5% of Yahoo’s peak value, and even then, right now Verizon executives must be thinking that they still paid way too much for what they got.

And turning to the first two of those news stories: the first of them simply adds to already existing concerns as to the safety and reliability of virtual and crypto-currencies such as bitcoin. And this successful hacking of the Equifax database system, with the loss of control over personal confidential records for so many: records with data in them that can be used for identity theft, has become an all but existential threat to that organization as a whole: one of the three major credit reporting agencies globally.

And this brings me to the core point that I would raise here in this posting, which I offer here in the form of a brief set of bullet points:

• The more globally interconnected we all become, and both in general
• And through the elaboration of specific organization-to-organization information sharing and communications channels,
• And through the elaboration of deeper and more pervasive organization-to-individual and individual-to-organization data sharing,
• The more difficult it becomes to both prevent security breaches there,
• And the larger they can expect to become when and as they do arise.

Quite simply, a malicious hacker does not have to be able to breach every possible point of connection and entry into an organization to steal or suborn the keys to its information holding kingdom. They only have to find one route in, through which they can identify and exploit system vulnerabilities. And if that represents a source of vulnerability that would not readily raise red flags if probed and exploited, so much the better for the hacker, and so much the worse for all of the rest of us. And one of the core consequences of the above bullet pointed observations is that every one of these new technologies creates both new points of connection and new types of points of connection: each potentially carrying its own set of still to be discovered zero-day vulnerabilities.
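The asymmetry described above has a simple back-of-envelope formalization. Assuming, as an illustrative simplification, that each of n points of connection independently carries a small probability p of harboring an exploitable flaw, the chance that at least one route in exists is 1 - (1 - p)^n, which climbs toward certainty as n grows. The value p = 0.01 below is an assumption chosen purely for illustration.

```python
# Illustrative model of the attacker's advantage: the defender must secure
# every point of connection, while the attacker needs only one exploitable
# flaw. With n independent connection points, each flawed with a small
# (assumed) probability p, the chance of at least one open route is
# 1 - (1 - p)**n.

def prob_of_at_least_one_route_in(n_connections, p_flaw=0.01):
    """P(an attacker has >= 1 exploitable entry point among n independent ones)."""
    return 1.0 - (1.0 - p_flaw) ** n_connections

for n in (10, 100, 500):
    print(n, round(prob_of_at_least_one_route_in(n), 3))
```

With these illustrative numbers, ten connection points leave roughly a one-in-ten chance of an open route in, while five hundred make one all but certain: the arithmetic behind why ever-expanding connectivity keeps tilting the field toward the attacker.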

• Ultimately, it is our race to ubiquitous connectivity, and our race to build newer and better tools and approaches for achieving it, that become our truest vulnerability here. And the pace of technological advancement in all of this, with its steady flow of the new and of the disruptively novel and different, simply adds new territory to an already large map of where to look when seeking to protect and safeguard all of those communications and information systems that we have come to absolutely rely upon.

I want to be as clear as possible here. I am NOT in any way espousing a turning back from the emerging technologies of the 21st century. Their positive value is far too great for blocking, or even just significantly limiting, them as a general cutting-off due diligence measure to make sense, ultimately for any of us. What I am proposing here is that we need to find better, more real-time effective, more resilient and more consistently followed approaches to safeguarding these systems and the resources they contain, so that they cannot so readily be turned against us for malicious reasons. And I am writing of a need for greater resiliency and flexibility in the systems that we do have in place, so we can more rapidly and effectively expand them to accommodate new challenges and their security needs.

I have written in this series, among other places, about how any such solution has to include new types of technology components in its coverage. But at least as crucially, it has to have human, and human behavior accommodating (and shaping), components too. I am going to continue discussing the issues, challenges and problems faced here in this posting, but I am also going to at least begin to more explicitly address approaches for resolving them in my next installment to this series.

I wrote at the end of Part 4 that I would:

• “Discuss threats and attacks themselves in the next installment to this series. And in anticipation of that and as a foretaste of what is to come here, I will discuss trolling behavior and other coercive online approaches, and ransomware. After that I will at least briefly address how automation and artificial intelligence are being leveraged in a still emerging 21st century cyber-threat environment. I have already at least briefly mentioned this source of toxic synergies before in this series, but will examine it in at least some more detail next.”

I will explicitly discuss those issues next. Then after more fully discussing the problems faced that I have been examining here, I will turn to consider approaches for better addressing these challenges. And in anticipation of that, I note that:

• On the human side of this conundrum, that has to mean shaping effective information security enhancing options that mesh with basic human behavior and with what we tend to do by default. And that challenge is daunting.
• And on the technology side, this means trying to stay at least one step ahead on the white hat hacker side, in the technology arms race that we are in here, vying effort to secure and safeguard against effort to breach and challenge,
• And all while making legitimate system usage easy and in accordance with the “human side of this” challenge as just noted.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 8

Posted in business and convergent technologies, strategy and planning by Timothy Platt on October 25, 2017

This is my eighth installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 342 and loosely following for Parts 1-7.)

I offered a downward, vicious cycle example of recurring bad decision making in Part 3, and a more positive virtuous cycle counterexample in Part 5 that a business might attempt in order to pull out of the downward path of the first example. And in the course of that, I at least briefly noted how blindly following what might begin as an essential corrective change can lead to disaster too, if circumstances change further but no additional course correction is allowed for or considered.

This leads me directly to the challenge of how a business would best map out possible change and its pros and cons at a higher, strategic and overall operational level, and not just act (or react) at a day-to-day, here-and-now details level.

The downward spiral, negative example that I make note of here was a brief and selective, though entirely accurate, one that I have at least occasionally seen in restaurants. It is a decision and action pattern that I have come to refer to as the restaurant death spiral, and it is a path forward based on cost cutting at the expense of customer-facing product and service: an attempt to achieve savings in the face of lost business through quality and value cutting. When that short-term, expense-limiting approach is unrelentingly followed, repeat business and then essentially all business dries up, and so does that restaurant. The alternative, positive example that I have also discussed in this series was offered as a turnaround strategy for a restaurant seeking to survive what its owners had come to see as a near-death experience from following that first approach in their enterprise. And that decision and action pattern was organized around a conscious and firmly held decision to switch to a value creating and enhancing farm to table business model, in which they would work collaboratively with local farms and dairies, bakeries and other providers, in order to bring their customers the very best while helping to support their own local communities – communities that would in turn help support them too.

Then, after a period of local success, the local farmers whom they relied upon were hit very hard by a long-term drought or other natural disaster, greatly limiting what the restaurant could source from them, including some of the core ingredients that anything like its standard menus would call for.

Yes, they would very much like to continue supporting and working with the local producers with whom they had built both mutually beneficial business relationships and friendships. But for at least a year, according to long-range forecasts, a strictly pure-play local farm to table approach might not work. Where would and should they draw what lines in what they will do while this is happening? And what of the local producers who had helped them out when they were still low on cash and still trying to pull out of the nosedive of their near-death, restaurant death spiral experience?

I stated at the end of Part 6 that I would continue its strategy and planning and operations oriented narrative here, in terms of a particular aspect of the consequences that this leads to:

• “Where decisions that have to be made can be grounded in business ethics and related terms and in how a business and its owners enter into and participate in larger communities that only begin with their customers and their potential customer bases.”

My goal for this posting is to address that, but in the perhaps larger context of creating and maintaining business agility and resiliency as organizational goals – and as buffering mechanisms against the down-sides of change.

Businesses might enter into collaborative relationships with other businesses out of enlightened self interest, in order to become more competitively strong and profitable. But when they do so from a long-term perspective, this of necessity means planning and executing on the understanding that win-win strategies that benefit all involved parties – their own business included – offer greater positive stability and more secure long-term value, for themselves too. But the key phrase in that is long-term.

Business ethics, in this type of context, does not mean long-term sacrifice, and certainly not increased risk of long-term business failure. It does mean making and allowing for short-term accommodations that, at the very least, mean supporting a partner business that has helped create mutual positive value, so that it can still be there under circumstances where it can do so again. The basic principle that I cite here arises in a wide variety of contexts. To cite a common one that I have at least occasionally mentioned in this blog in different contexts, consider how businesses can allow longer lag times before payment is due (e.g. a larger number of days receivable) for products and services provided.
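To put a rough number on that days receivable notion, the standard formula is simple; the figures below are purely illustrative and not drawn from any example in this series:

```python
def days_sales_outstanding(accounts_receivable, credit_sales, period_days=365):
    """Average number of days a business waits to collect payment:
    a standard measure of the payment lag discussed above."""
    return accounts_receivable / credit_sales * period_days

# Hypothetical figures: extending payment terms to support a partner
# raises the supplier's own outstanding receivables, and so its own
# average collection lag.
baseline = days_sales_outstanding(50_000, 600_000)   # ~30 days
extended = days_sales_outstanding(90_000, 600_000)   # ~55 days
```

The short-term accommodation is visible right in the arithmetic: the supplier carries the larger receivables balance, and with it the cash flow cost, as an investment in the longer-term relationship.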

But the basic principles that I write of here go beyond that too. In an increasingly and complexly interconnected global context, the traditional view of business process cycles, as addressed in the titles of the postings in this series, has to be expanded to become more inclusive. The most important processes and process cycles that need to be included there do not exist entirely within single businesses, or within them and the specific markets and customer bases that they work with. They have to be expanded to include larger business-to-business collaborative networks too. And I have been building up to that point throughout this series.

The one significant area of discussion that I posed as necessary to include in this series, but that I have not yet actively discussed, is the marketplace and how it fits into this here-expanded framework of understanding and action. I am going to turn to explicitly include that in my next installment. Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 4, and also at Page 1, Page 2 and Page 3 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Innovation, disruptive innovation and market volatility 36: innovative business development and the tools that drive it 6

Posted in business and convergent technologies, macroeconomics by Timothy Platt on October 15, 2017

This is my 36th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business, posting 173 and loosely following for Parts 1-5 and Macroeconomics and Business 2, posting 203 and loosely following for Parts 6-35.)

I have been systematically working my way through a to-address list of topic points in recent installments to this series, which I repeat here for purposes of continuity. (Note that I append reference links to the ends of the points on this list that I have already addressed, indicating where I did so):

1. Innovation and its realization are information and knowledge driven (Part 32).
2. And the availability and effective use of raw information and of more processed knowledge developed from it, coupled with an ability to look beyond the usual blinders of how that information and knowledge would be more routinely viewed and understood, to see wider possibilities inherent in it (Part 33),
3. Make innovation and its practical realization possible and actively drive them (Part 34 and Part 35).
4. Information availability serves as an innovation driver, and business systems friction and the resistance to enabling and using available business intelligence that that creates, significantly set the boundaries that would distinguish between innovation per se and disruptively novel innovation as it would be perceived and understood
5. And in both the likelihood and opportunity for achieving the latter, and for determining the likelihood of a true disruptive innovation being developed and refined to value creating fruition if one is attempted.

I began addressing Point 3 of this list, as just noted above, in Part 34, and continued that in Part 35 by raising a set of three issues that, I would argue, need to be addressed in order to even preliminarily resolve Point 3 for purposes of this series:

• A basic assumption as to what types of already-held and routinely used business information would be required in a genuinely disruptively innovative context, as for example might be explored and pursued in an innovation-supporting service, department or more separate facility within a business.
• An implicit financial assumption that runs counter to what would more generally be assumed when innovation, and the research and development that it calls for, are considered. I will at least briefly address that “starter” assumption here, offering some references that delve into its issues. And I will offer a basic rationale for justifying this alternative assumption for purposes of this series.
• And a fuller reconsideration of timeframes in all of this, where outside forces can easily become the driving shaping factors for all of this but where within-business factors always have to be accounted for too – and where they can be less examined in any planning that takes place.

And I delved into and discussed the first of these sub-issues there. My goal for this posting is to address the second of them and at least start addressing the third and final of them here too. Then when I have addressed all of them, as required in the context of this series discussion, I will continue on to Points 4 and 5 of the main to-address list as repeated above.

And I begin all of this with financial considerations, and a set of points that strikes at the heart of this series, certainly as organizationally summarized in its starting paragraph. And I begin that by reframing and reconsidering what applied research and pure research actually are, at least when considered from a more strictly financial perspective.

• Applied research, and its most directed practical extreme of specific product development, is channeled in what can be considered at least somewhat tested and validated directions, with the more product-specific end of that short spectrum quite securely reliable. Success there can mean adding new life to an already developed and successful product or product line, and failure – if an attempted upgrade or advancement does not cost-effectively work out – will at least offer practical insight for the next step in product advancement.
• Applied research, as considered here, offers what might be greater uncertainty, and definitely so when a new, next generation product is being attempted that would call for rethinking and for manufacturing line retooling. But conversely, this also carries potential for proportionately larger rewards. In any case, and focusing on that word “applied”: this type of research, like still more focused product development, follows a more linear development, modification and upgrade pattern, with lower levels of overall risk associated with it, in keeping with the incremental improvement, profit and value creation potential of working to create a next generation upgrade.
• Pure research, on the other hand, carries more dramatic value creation and loss possibilities, with its potential for developing successful disruptive innovation and game changing new product development – or dead end failure if that does not work out. Yes, sometimes failed pure research does bring insight that can be turned into next-attempt success, and often in completely new and otherwise unexpected directions when that happens. But this delayed and alternative success cannot be counted on.

Let’s start addressing the finances of this, with an explicitly stated assumption that can be considered a basic if mostly just presumed mantra, for those who run and lead corporate research and development facilities:

• Appropriately scaled and selected innovation that is kept in focus (no scope creep) can be cost-controlled and within budget.

This, in practice, means balancing the costs and benefits – profit and risk potentials included – of suites of pure and applied research projects and initiatives, with the more secure and reliable of them, found toward the specific product development and improvement end of the spectrum, in effect bankrolling the more pure end of the overall effort, and certainly constraining the overall risk faced. And in that, this is all about developing and carrying through on balanced overall research project portfolios: a core job responsibility for more senior managers and directors of larger, wider-ranging research facilities.
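That balancing act can be sketched in expected value terms. What follows is an illustrative toy model only – the project names, costs, success probabilities and payoffs are all hypothetical, chosen to show how the safer applied work can underwrite the long-shot pure research while the portfolio stays within budget:

```python
# Each project: (name, cost, probability of success, payoff if successful).
# The safer applied work has modest but likely returns; the pure research
# entry is a long shot with an outsized payoff. The portfolio blends them.
projects = [
    ("incremental product upgrade", 100_000, 0.85, 250_000),
    ("next-generation redesign",    300_000, 0.50, 900_000),
    ("blue-sky pure research",      200_000, 0.10, 3_000_000),
]

def expected_net(cost, p_success, payoff):
    """Expected payoff minus certain cost, for one project."""
    return p_success * payoff - cost

portfolio_cost = sum(cost for _, cost, _, _ in projects)
portfolio_net = sum(expected_net(c, p, v) for _, c, p, v in projects)
# The reliable projects' positive expectation underwrites the long shot,
# keeping the portfolio as a whole within budget and net-positive.
```

A research director's job, framed this way, is to keep that blended expectation positive while capping the worst-case loss that any one reporting period can absorb.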

I said above that I would offer an alternative financial model here for understanding and managing an organization’s overall research, and I do so with this perhaps-baseline, more standard approach held up for comparison. And I begin by noting a detail in that standard approach that I intentionally failed to acknowledge there, and that tends to be lost in most such policy and practice development: the role of timelines in all of this, and particularly of timelines that extend beyond reporting quarters.

• A business that assiduously pursues stable, risk limiting and controlling safety in how it selects, benchmarks and carries out its research, and in how it seeks to balance its overall books for its research facility, will probably do very well from that in any given short-term timeframe. It will probably succeed in middle-range timeframes too, where short and middle term are measured against the rate of advancement in its overall industry for new product development, and against the pace of change in consumer demand in its markets.
• But that same business has to assume that pursuing a simpler, short-term oriented approach here will only lead to increased risk, and even what is essentially a certainty of failure long-term. The only way that a business – and certainly one in a rapidly advancing industry that is driven by disruptive change – can succeed long-term in the face of this ongoing flow of challenge, is if it is willing to become more risk tolerant in its own next steps forward, building for its future through research and development.

This is important enough to bear repeating. Businesses that cannot and do not ever take this leap into the admittedly unknown might be secure in their current here-and-now, and in their immediate and shorter-term future. But they also run the risk, certainly longer-term, of being blindsided by competitors who do innovate and who do support the potential for innovation that their employees can offer. And with this, I argue that the “prudent and well balanced research project portfolio” of above can prove to be overly cautious in the short term and, as a result, overly risky in the long term. And safety, longer-term, can and often does mean being more risk accepting in any given shorter term.

How can an effectively, efficiently run business manage this and still remain stable and resilient? I offer an at least easy to state possibility here, as a thought point and a starting point for more focused discussion. A business can in fact set up and run a stand-alone research facility in-house, basically in accordance with the above stated research finances mantra and its risk and cost balancing. But it can also support a level of special research projects that might be carried out within this same facility, but that would be separately financed from a reserves account set up for this purpose. And researchers who were so interested would compete for these special blue sky research funds, and for the necessary space and other resources needed to carry out their projects.

Note: this can mean enlisting and developing research excellence from in-house, but it can also at least include search for new talent and new potential from outside of the business, that could be brought in with contractual promises in writing to support the research that these professionals have been striving to be able to carry out.

And with this, I have at least briefly discussed both of the remaining sets of issues that I cited early in this posting as being necessary in order to complete discussion here, of Point 3 of the basic to-address list under consideration. I am going to turn to consider Point 4 and Point 5 of the basic list from the top of this posting, at least beginning that in my next series installment:

4. Information availability serves as an innovation driver, and business systems friction and the resistance to enabling and using available business intelligence that that creates, significantly set the boundaries that would distinguish between innovation per se and disruptively novel innovation as it would be perceived and understood
5. And in both the likelihood and opportunity for achieving the latter, and for determining the likelihood of a true disruptive innovation being developed and refined to value creating fruition if one is attempted.

And yes, I will also offer the reference links that I promised in this posting, regarding research financing, in the next installment too, where they will prove relevant in the contexts of Points 4 and 5 too.

Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 4

Posted in business and convergent technologies, social networking and business by Timothy Platt on September 25, 2017

This is my fourth installment to a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 354 and loosely following for Parts 1-3.)

I have been discussing the more malicious weaponization of new and emerging cyber-technology in this series. And I have at the same time been discussing the continued vulnerabilities that we still face, seemingly without end, from already known threat vectors that arise from more established technologies too. That second thread to this discussion is one that I have recurringly returned to in the course of writing this blog and unfortunately, it remains as relevant a topic of discussion as ever when considering cyber-security, whether locally and within single organizations, or nationally and even globally.

But at the same time that I have been delving into this combined new and old technical side to cyber-attack, and to the risk and threat of it, I have been delving into the more human side of this challenge too: the risks of careless online behavior, and the challenge of social engineering attacks that would exploit it. Cyber-risk and cyber-security inextricably include both technology and human behavior aspects, and each shapes, and in fact can even help to create, the other.

And with this noted, I add the issues of clarity and transparency into this discussion too, and I do so by way of a seemingly unrelated case in point example that I would argue serves as a metaphor for the security issues and vulnerabilities that I write of here:

• I went to see a physician recently for an appointment at her office. And when I got there, I saw only one person working behind the receptionist counter instead of the usual two that I had come to expect. The now-vacant part of the counter that patients would go to when arriving had a tablet computer in place instead, with basic appointment sign-in now automated, for any scheduled return patients to use. That was not a problem in and of itself. The problem that I found was that this now-automated system was much more involved than any verbal sign-in had ever been, with requirements that every patient sign off on multiple screens, each involving an authorization approval decision on a separate issue or set of them. Most of these screens in fact presented lengthy legal documents, running to many hundreds and even thousands of words. And at least one of them meant my agreeing or declining to participate in what turned out to be patient records sharing programs that I had never heard of, and that had never come up in my dealings with that physician or with the hospital that she is affiliated with. I objected that this did not give me the opportunity to make informed consent decisions, with patients waiting to sign in after me and with my scheduled appointment start time fast approaching. And the receptionist there rolled her eyes and said something to the effect that she was “used to being yelled at” by dissatisfied and impatient people. She briefly tried explaining what those two programs were on that one very lengthy screen, but it was clear that she did not know the answer herself. So I signed as best I could, unsure of what some of my sign-in decisions actually meant, and then went to my appointment.

When an online computer user clicks a link, they might or might not realize that they are in effect signing an information access agreement too – often one where they do not know that they are doing this, and usually one where they do not understand the possible range and scope of such agreements. And this information sharing goes both ways, a fact that is often overlooked. Supposedly legitimate online businesses can and at times do insert cookies and related web browser tracking software onto their link-clicking site visitors’ computers, and some even use link clicks to their servers to push software back onto a visitor’s computer, to turn off or disable ad blocking software. And they do this without explicit warning, and certainly not on the screens that users would routinely click to on their sites: hiding any such disclosures on separate and less easily found “terms of usage” web pages. And I am writing of “legitimate” businesses there. Even they take active and proactive actions that can change the software on a visitor’s computer, without that visitor’s explicit knowledge or consent.
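As a concrete illustration of the cookie mechanism described above, here is a minimal sketch of how a site can plant a persistent tracking identifier through an ordinary HTTP response header, using only Python's standard library. The cookie name and value here are hypothetical, chosen for illustration only:

```python
from http import cookies

# Build a Set-Cookie header that plants a long-lived tracking identifier.
# Any later request the browser makes to this site will echo it back,
# letting the site link those visits together.
c = cookies.SimpleCookie()
c["visitor_id"] = "abc123"                        # hypothetical tracking ID
c["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persist for one year
c["visitor_id"]["path"] = "/"                     # sent on every page visit

header = c.output(header="Set-Cookie:")
```

Note that nothing in this exchange requires the visitor’s explicit consent: the browser stores and returns the cookie automatically, which is precisely the transparency gap under discussion here.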

When you add in the intentionally malicious, and even just the site owners who would “push the boundaries” of legality, that can have the effect of opening Pandora’s box. And my above cited example of businesses that seek to surreptitiously turn off ad blocker apps is just one of the more benign(?) of the “boundary pusher” examples that I could cite here.

The Russian hackers of my 2016 US elections example, as discussed in this series, and their overtly criminal cousins, just form an extreme end point to a continuum of outside sourced interactivity that we all face when we go online. And this ranges from sites that offer you a link to “remember” your account login on their web site, so you do not have to reenter it every time you go there from your computer, to sites that would try downloading keystroke logger software onto your computer so their owners can steal those login names and passwords, and follow wherever you go online from then on.

• Transparency and informed decision making and its follow-through, and restrictions to them that might or might not be apparent when they would count the most, are crucially important in both my more metaphorical office sign-in example, and in the cyber-involvement examples that I went on to discuss in light of it.

I have written of user training here in this series, as I have in earlier postings and series to this blog. Most of the time, the people who need this training the most tend to tune it out because they do not see themselves as being particularly computer savvy, at least for the technical details. And they are not interested in or concerned about the technical details that underlie their online experiences and activities. But the most important training here is not technical at all and is not about computers per se. It is about the possibilities and the methods of behavioral manipulation and of being conned. It is about recognizing the warning signs of social engineering attempts in progress and it is about knowing how to more effectively and protectively respond to them – and both individually and as a member of a larger group or organization that might be under threat too.

I am going to turn back to discussion of threats and attacks themselves in the next installment to this series. And in anticipation of that and as a foretaste of what is to come here, I will discuss trolling behavior and other coercive online approaches, and ransomware. After that I will at least briefly address how automation and artificial intelligence are being leveraged in a still emerging 21st century cyber-threat environment. I have already at least briefly mentioned this source of toxic synergies before in this series, but will examine it in at least some more detail next.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.

Meshing innovation, product development and production, marketing and sales as a virtuous cycle 7

Posted in business and convergent technologies, strategy and planning by Timothy Platt on September 13, 2017

This is my seventh installment to a series in which I reconsider cosmetic and innovative change as they impact upon and even fundamentally shape the product design and development, manufacturing, marketing, distribution and sales cycle, and from both the producer and consumer perspectives (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 342 and loosely following for Parts 1-6.)

I offered two case studies in this series, both based on restaurant planning and execution. The first, appearing in Part 3, represented a vicious cycle in which recurring bad decisions, acted upon as consequences mount, lead to disaster. The second, appearing in Part 5, represented a more virtuous cycle example where success can lead to further success. But both of the action and consequence cycles that the restaurants of those examples follow, if taken to their logical extremes and without possible deviation, can and do lead to problems. And yes, this holds for the virtuous cycle example too: if their basic business model and strategy cannot be adjusted and even significantly course corrected in the face of the unexpected.

With those examples in place, in order to take subsequent discussion out of the abstract, I offered a to-address list of topic points in Part 6 that I repeat here for purposes of smoother continuity of narrative, where I would:

1. Discuss what businesses respond to, and in the specific context of this series, as they respond in patterns of decision and action, review and further decision and action that can have recurringly cyclical elements to them.
2. And it means addressing how they would respond at a higher level strategic and overall operational level and not just at a day-to-day, here-and-now details level, and certainly if they do so effectively.
3. In anticipation of that point, I cite agility and resiliency as organizational goals – and as buffering mechanisms against the down-sides of change. I have already touched on this third complex of issues (e.g. in Part 5) but will return to further consider it in light of my discussion of the above Points 1 and 2.

I at least briefly discussed Point 1 of that list in Part 6, doing so in terms of those case study examples. My goal for this posting is to delve into Point 2 and its issues. And I begin doing so here, with an at least brief and selective discussion of how Point 2 is worded, and what that implies.

• I raised in Part 6 an important point of distinction between the longer-term and bigger picture understanding of a business, as considered at a “higher level strategic and overall operational level,” and the shorter-term and more situationally tactical focus of the “day-to-day, here-and-now details level.”
• My Part 3 vicious cycle example, which I refer to as falling into a “restaurant death spiral” pattern, arises because no one there is actually carrying out consistent and inclusive, open minded strategic reviews or analyses to see how the courses of action followed are actually performing. And even when the restaurant owner and their senior staff are all aware that their business is failing, none of them seems able to connect the dots on their own as to how or why that is happening. Or at the very least, none of those stakeholders can articulate such an understanding in ways that would lead to remediative change for the business, and recovery.
• My Part 5 example follows a more virtuous cycle approach – but only as long as the conditions that it was initially developed in continue unchanged and unabated. Disruptive change and challenge to that status quo hold real potential for problems even then: if, that is, this new recovery-approach business model (leading a business out of a Part 3 downward spiral and into the new) becomes an immutable given, as if set in stone too.
• Ultimately, both business model approaches fail if they are pushed to their logical extremes and left there and regardless of how circumstances change with new challenges and new opportunities arising.

Identifying those emerging changes, positive and negative, and planning and organizing so as to better address them, falls within the realm of strategy and the longer term that it should be preparing the business for. If you wait until all of this – good and bad – is already hitting you, and if you only seek to address it tactically as a first response, you can only be reactive. And you cannot become proactive unless and until you step back and start addressing all of what you face and do right now from a more specifically strategic perspective too.

Ultimately, tactics can only succeed long-term if they are grounded in effective, inclusive strategy – strategy that is not limited by the types of blind spots that led the restaurant of Part 3 into so much trouble. The best that tactics can accomplish, absent supportive underlying strategy, is an at least for now least-damaging reactive response, where longer-term effectiveness essentially always calls for stepping out ahead proactively too.

I wrote the Part 5 scenario of the farm to table restaurant in terms of that restaurant and its operations and its business success. But I also discussed it, both there and in Part 6, in terms of the larger communities that such an enterprise enters into: there, with local farmers and family owned dairies and related businesses. I stated at the end of Part 6 that I would begin addressing the issues of Point 2 of the above-repeated list, in terms of:

• “Where decisions have to be made that can be grounded in business ethics and related terms and in how a business and its owners enter into and participate in larger communities that only begin with their customers and their potential customer bases.”

I proposed that because those issues were weighing on my mind as I concluded that series installment. I in fact decided to develop some more organizing structure in this narrative, before assaying that set of issues. But I will return to consider the farm to table ethos in my next series installment, and the commitments that businesses make to other enterprises in general in supply chain and related value chain systems. And I will explicitly tie that line of discussion back to the core topical issues of this series as a whole, where businesses need to be change and innovation driven if they are to succeed. Then and in that context, I will finally turn to consider Point 3 of the above list, and:

• “Agility and resiliency as organizational goals – and as buffering mechanisms against the down-sides of change.”

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 4, and also at Page 1, Page 2 and Page 3 of that directory. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Innovation, disruptive innovation and market volatility 35: innovative business development and the tools that drive it 5

Posted in business and convergent technologies, macroeconomics by Timothy Platt on September 5, 2017

This is my 35th posting to a series on the economics of innovation, and on how change and innovation can be defined and analyzed in economic and related risk management terms (see Macroeconomics and Business, posting 173 and loosely following for Parts 1-5 and Macroeconomics and Business 2, posting 203 and loosely following for Parts 6-34.)

I have been working my way through a to-address list of topics in the past several installments to this series, which I repeat here for purposes of continuity of narrative (having already addressed specific points from this list in earlier postings, as parenthetically noted below):

1. Innovation and its realization are information and knowledge driven (Part 32).
2. And the availability and effective use of raw information and of more processed knowledge developed from it, coupled with an ability to look beyond the usual blinders of how that information and knowledge would be more routinely viewed and understood, to see wider possibilities inherent in it (Part 33),
3. Make innovation and its practical realization possible and actively drive them (Part 34).
4. Information availability serves as an innovation driver, and business systems friction – and the resistance to enabling and using available business intelligence that it creates – significantly sets the boundaries that would distinguish between innovation per se and disruptively novel innovation as it would be perceived and understood,
5. And in both the likelihood and opportunity for achieving the latter, and for determining the likelihood of a true disruptive innovation being developed and refined to value creating fruition if one is attempted.

My goal for this posting, as of when I first wrote Part 34, was to finish discussion of Points 1-3, and of Point 3 in particular, at least for purposes of this series, and to then at least begin a discussion of Point 4 and its issues. With further thought, I realized that that goal was too ambitious for one posting, so I begin here with final thoughts (for purposes of this series) related to the first three Points of the above list. And I begin that by listing three topic points that I raised in passing in Part 34, but never actually discussed there:

• A basic assumption as to what types of already-held and routinely used business information would be required in a genuinely disruptively innovative context, as for example might be explored and pursued in an innovation-supporting service, department or more separate facility within a business.
• An implicit financial assumption that runs counter to what would more generally be automatically assumed when innovation, and the research and development that it calls for are considered. I will at least briefly address that “starter” assumption here, offering some references that delve into its issues. And I will offer a basic rationale for justifying this alternative point of view assumption that I offer here for purposes of this series.
• And a fuller reconsideration of timeframes in all of this, where outside forces can easily become the driving, shaping factors but where within-business factors always have to be accounted for too – and where they can go less examined in any planning that takes place.

I will of necessity begin addressing Point 4 of the above to-address list in dealing with these issues, even if I approach them from a Points 1-3 perspective. Then I will continue on to examine Point 4 and then Point 5 of the above list. And with that reorienting note for what is to follow, I begin with the basic assumption of the first of the above three bullet points:

• The more innovative an idea is, and the more disruptively novel it is in relation to what a business more routinely does, the less it is going to have to draw on the information flow that more conventionally derives from, fits into and supports its business as usual.
• Innovation and particularly novel and disruptive innovation needs to find its own path, and with its own, new types of data and understanding.

According to that assumption, innovation can be walled off from business-as-usual for the most part, with its own separate accumulated body of proprietary data and processed knowledge, and without real need of all that much routine business information – besides basic information as to where current products or services are breaking down, if a New approach would be developed to address that. And even then, the information presumed to be needed might be very circumscribed, if only to limit introducing the biases of the past into the creative process.

This understanding would fit into and support a simple, basic default confidential and sensitive business information management system that would, for example, limit access to sensitive trade secret manufacturing knowledge to the people in production who have essential need of it. And this would also fit into and support the development of within-business research centers as essentially separate and independently run, if wholly owned, facilities, with their own pools of sensitive and confidential data and processed knowledge.
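The default need-to-know model just described can be sketched as a simple access check. Everything in the sketch below – the item labels and the role names – is invented purely for illustration, not drawn from any real system:

```python
# A minimal, hypothetical sketch of a need-to-know information access
# model: a sensitive item can be read only by roles with essential need
# of it, and access is denied by default for everything else.

NEED_TO_KNOW = {
    "trade_secret:manufacturing_process": {"production_engineer"},
    "rd:prototype_data": {"research_scientist"},
}

def may_access(role: str, item: str) -> bool:
    """Grant access only when the role is on the item's need-to-know
    list; unknown items and roles are denied by default."""
    return role in NEED_TO_KNOW.get(item, set())
```

Under this default, the research side sees none of production’s trade secrets and vice versa – which is exactly the complete partitioning whose longer-term viability this discussion goes on to question.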

It has been a long time now, but I have given talks at in-house but separately run and conceived research facilities of this type, particularly in the pharmaceutical industry, early in my professional life when I was still actively doing basic biomedical research and before I turned professionally toward organizational issues and consulting per se. So I write here of systems that I have seen up close and first hand, where I got to know the people who work at and run them. I still saw myself as a research scientist at the time, but even then I was acutely aware of and interested in the business model implications of that approach to research and development. So I actively studied that aspect of what I was allowed to see in those businesses.

How does this assumption hold up longer-term? I would argue that it does not, and certainly not in its pure information and knowledge partitioning form. Put slightly differently, and in terms of individual innovative initiatives:

• That assumption would only hold true, if it does at all,
• If and where a new and disruptively different innovation under consideration does not fit into and contribute to the ongoing business for anything in particular that it has historically done – not even in a peripherally connected new direction.
• Restated from a different direction: an assumption that essential information flow between a business’ production systems and its research and development can be completely walled off, cannot hold if the New that would be developed is to be integrated into the business as a whole and into what it does, as a new part of a consistent and coordinated larger whole.

The pharmaceutical research facilities that I got to visit, as cited above, succeeded in bringing developed value to their parent businesses precisely because the walls there were selectively impermeable where that was needed, and selectively porous where that was.

I am going to continue this discussion in a next series installment, with the second of the “clean-up” issues that I am adding here to round out my coverage in this series of Points 1-3 from above.

• An implicit financial assumption that runs counter to what would more generally be automatically assumed when innovation, and the research and development that it calls for are considered. I will at least briefly address that “starter” assumption here, offering some references that delve into its issues. And I will offer a basic rationale for justifying this alternative point of view assumption that I offer here for purposes of this series.

Then I will discuss the third and last of those bullet points and move on to address Points 4 and 5 of the main topics list that I have been working my way through here, as repeated at the top of this installment. Meanwhile, you can find this and related postings at Macroeconomics and Business and its Page 2 continuation. And see also Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation.

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 3

Posted in business and convergent technologies, social networking and business by Timothy Platt on August 16, 2017

This is my third installment to a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context, and in a ubiquitously social media connected context and when faced with a rapidly interconnecting internet of things among other disruptively new online innovations (see Part 1 and Part 2.)

I concluded Part 2 of this narrative by proffering a briefly outlined solution to a problem, and in a way that could be seen as highlighting a fundamental conundrum faced. More specifically, I wrote in Parts 1 and 2 of how new and emerging value-creating technological innovations such as online social media and cloud computing create new opportunity for more malevolent use too, even as they create whole new worlds of positive opportunity. And to pick up on just one of the many facets to the positive side of this transformation, that make its advancement inevitable:

• Consider how essentially anywhere-to-anywhere, any-time ubiquitous connectivity through small, simple smart phones and tablets has changed the world, reducing friction and barriers and bringing people together, even globally,
• And particularly when cloud computing, for both data storage and processing power, has in effect put always-connected supercomputer, super-communications devices into everyone’s hands. Think of this as ubiquitous connectivity and communications with what can amount to arbitrarily wide computational bandwidth, and equally wide-ranging data storage, retrieval and sharing capabilities supporting it.

Now consider how this capability can be exploited both by individual black hat hackers and by large organizations, governments included, that seek to exploit newly emerging cyber-weaknesses arising from these new technologies in pursuing their own plans and policies. I wrote in Part 2, at least in brief and selective outline, of how Russia, China and North Korea have done this, as case in point examples. And in the course of that, I noted and at least began to discuss how the vulnerabilities exploited there always have two faces: technological and human, and how the human side can be the more difficult to effectively address.

That led me to the quickly outlined “cyber security solution” that I made note of above and that I first offered at the end of Part 2, where I wrote of cyber-defense and security in general as calling for:

• Better computer and network user training,
• Better, more up to date and capable automated systems,
• And usage options channeling systems that reinforce good practices and discourage or even actively prevent bad, risk-creating ones.

Then, after offering that, I added that “technology fixes are always going to be important and necessary in this, but increasingly the biggest vulnerabilities faced come from human users, and particularly ones who are trusted and who have access permissions, to critically important systems.”
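As one concrete illustration of the third bullet point – usage channeling – consider an email gateway that leaves links from verified senders alone but reroutes all others through an interstitial warning page. This is a hedged sketch, not a description of any real product; the sender allowlist and gateway URL below are invented:

```python
# Hypothetical "usage channeling" sketch: rewrite links in email from
# unverified senders so that a click first lands on a warning/quarantine
# page, nudging users toward safe behavior rather than relying on
# training alone.
from urllib.parse import quote

TRUSTED_SENDERS = {"alerts@example-bank.com"}              # invented allowlist
GATEWAY = "https://safelinks.example.internal/check?url="  # invented URL

def channel_links(sender: str, links: list) -> list:
    """Pass links from trusted senders through unchanged; route all
    others via the interstitial checking gateway."""
    if sender.lower() in TRUSTED_SENDERS:
        return links
    return [GATEWAY + quote(url, safe="") for url in links]
```

The point of the design is the bullet point itself: the safe path becomes the default path, so a moment of lapsed judgment no longer connects a user directly to an attacker’s server.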

I begin addressing that ending point to Part 2, and starting point to this Part 3, by picking up on one of the Russian government sponsored and led examples made note of in Part 2, where the Russian government explicitly sought to influence and even suborn the 2016 elections in the United States, including their presidential election. One of the key attack vectors used was a phishing campaign that gave them access to the Democratic Party email server system, used for within-Party confidential communications. This attack helped Russian operatives, and private sector participants working for them, to insert malware into those servers that gave them direct access for copying the files stored there, as well as the capability to damage or delete them. And this gave them the ability to edit as desired, and to selectively leak the emails so covertly captured, according to a timing schedule that would cause the greatest harm to Hillary Clinton’s Democratic Party presidential campaign while significantly helping Donald Trump to win the White House. This attack required concerted application of weaponized technology, but that in and of itself could never have accomplished anything without help from trusted insiders in the United States Democratic Party leadership: people who had legitimate and trusted access to those servers, and who clicked to open what should have been suspicious links in emails that they received from what turned out to be malevolent Russian sources.
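One automated check that targets exactly this kind of deceptive link – only one heuristic among the many a real filter would combine, sketched here under assumed simplifications – is to flag messages whose visible link text claims one domain while the underlying href points to another:

```python
# Hedged sketch of one phishing heuristic: flag a link whose visible
# text claims a domain that differs from the actual target host.
from urllib.parse import urlparse

def claimed_host(display_text: str) -> str:
    """Return the hostname the anchor text appears to claim, or ''
    when the text does not look like a URL or bare domain at all."""
    text = display_text.strip().lower()
    host = (urlparse(text).hostname or "") if "://" in text else text.split("/")[0]
    return host if ("." in host and " " not in host) else ""

def mismatched_link(display_text: str, href: str) -> bool:
    """True when the text claims one domain but the link goes to another."""
    shown = claimed_host(display_text)
    return bool(shown) and shown != (urlparse(href).hostname or "")

# Text shows a Google address; the link goes somewhere else entirely:
mismatched_link("accounts.google.com", "http://accounts.g00gle.example/reset")  # → True
```

A production filter would add many further signals (lookalike characters, link shorteners, sender reputation), but even this one check illustrates the appeal of automation here: it never tires or gets distracted, unlike the human users it protects.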

With this noted, let’s reconsider the three “to-do”, or at least “to-attempt” bullet points that I just repeated here from Part 2, as a first-take “cyber security solution”:

• Training only works if people who receive it actually follow through and do what they have been taught.
• “Better, more up to date and capable automated systems” as an operational goal, is always going to constitute a moving target, as both new positive capabilities and the new vulnerabilities that they bring with them arise and become commonplace.
• And the ongoing emergence of this new and different, and particularly of an ongoing flow of disruptively new and different, can make good practice shaping and requiring systems, obsolete almost before they are really implemented – and particularly given the challenges of the first of these three bullet points.

How did the Russians hack into the Democratic National Committee (DNC) confidential email servers that they specifically targeted here? Setting aside the technical side of this question and only considering the social engineering side of it, all it took was one person who was trusted enough to be given access to this email system, and who clicked to open what probably should have been seen to be suspicious links in an email opened with their standard email software. Then, when they went to the DNC secure server, they delivered the malware that had just infected their computer, and the rest was history.

• This is very important: it did not matter if a thousand others had deleted the malware-carrying emails, if just that one trusted systems user opened at least one of them and clicked at least one link in it.

There is a saying to the effect that a chain can be no stronger than its weakest link. Reframing “link” in human terms rather than in hyperlink, cyber terms: all it takes is one weak human link in this type of system, among its community of trusted and vetted users, to compromise the entire system. And they only have to set aside their judgment and training once, at an inopportune moment, to become that crucially weak link.
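The weakest-link point can be put in rough quantitative terms. Assuming, purely for illustration (the figures are invented), that each trusted user independently has some small probability of clicking a malicious link, the chance that at least one of them does grows quickly with the size of the user community:

```python
# Illustrative weakest-link arithmetic with invented figures: even a 1%
# per-user chance of clicking a malicious link makes a breach nearly
# certain once enough trusted users are in the pool.

def p_at_least_one_click(per_user_p: float, n_users: int) -> float:
    """P(at least one click) = 1 - (1 - p)^n, assuming users behave
    independently of one another."""
    return 1.0 - (1.0 - per_user_p) ** n_users

# With a 1% per-user click probability and 500 trusted users:
round(p_at_least_one_click(0.01, 500), 4)  # → 0.9934
```

This is why the attacker only needs one success: against a large enough community of trusted users, the probabilities sit firmly on the attacker’s side.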

Let me add one more innovative element to the positive-value-created / negative-vulnerability-created paradigm that I have been developing and pursuing this series around: automation, and the artificial intelligence based cyber systems that enable it. These smart systems can be, and increasingly are being, developed and implemented to build automatic, nuanced flexibility into complex information and communications systems. They can be, and increasingly are being, used to promote what many if not most would consider more malevolent purposes too, such as attempting to throw national elections. Automated systems of the type that I write of here are consistent and always follow the algorithmic protocols and processes in place, and they are becoming more subtle and capable in doing this every day. They do not tire or become distracted, and they do not make out-of-pattern mistakes. And here, they are pitted against individual human users of these systems, who all at least occasionally do.

Let’s reconsider the three to-do recommendation points that I initially repeated here towards the top of this posting:

• Training only works if people who receive it actually follow through and do what they have been taught.
• “Better, more up to date and capable automated systems” as an operational goal, is always going to constitute a moving target, as both new positive capabilities and the new vulnerabilities that they bring with them arise and become commonplace.
• And the ongoing emergence of this new and different, and particularly of an ongoing flow of disruptively new and different, can make good practice shaping and requiring systems, obsolete almost before they are really implemented – and particularly given the challenges of the first of these three bullet points.

And with the issues and challenges of this posting in mind, I pair each of them with a matching question:

• How best can these technology/human user systems be kept up to date and effective from a security perspective, while still keeping them essentially intuitively usable for legitimate human users?

The faster the technologies change that these systems have to address, and the more profoundly they do so when they do, the greater the training requirements become, at least by default and according to most current practices in place – and the less likely it becomes that “potentially weaker links” will learn all of this New and incorporate it into their actual online and computer-connected behavior fast enough. So the more important it becomes that systems be made intuitively obvious and that learning curve requirements be minimized, to limit if not entirely avoid that losing race toward cyber-security safety. And yes, I intentionally conflate use per se and “safe, security-aware” use here, as they need to be one and the same in practice.

• Moving targets such as the “better, more up to date and capable automated systems” cited in the second above-repeated point tend to become harder to justify, at least as concerns the added effort and expense of keeping them secure in the face of new possible challenges. That certainly holds true when these information technology and communications systems keep working in their current iterations, and when updates to them have, up to now, seemed to work, and securely so. How do you maintain the financial and other support for this type of ongoing change when it succeeds, and continues to – in the face of pressures to hold down costs?

Unfortunately, it is all too common that ongoing success from using technologies breeds reduced awareness of the importance of maintaining equally updated, ongoing (and generally expensive) protective, preemptive capabilities in them too. And it becomes harder and harder to keep these systems updated, and supported in that, as the most recent negative consequence actually faced slips farther into the past. To put this observation into perspective, I suggest reviewing Parts 1 and 2 of this series, where I write of how easy it is to put off responding to already known and still open vulnerabilities that have struck elsewhere, but not here – at least yet.

And for Point 3 of that list, I add what is probably the most intractable of these questions:

• In principle, non-technology organizations that do not have strength in depth in cyber issues, and in how best to respond to them, can be safe in the face of already known threats and vulnerabilities – if, that is, they partner for their cyber-security with reliable businesses that do have such strengths and that really stay as up to date as possible on known threat vectors and on how they can be and are being exploited. But what of zero-day vulnerabilities and the disruptively new? How can they be at least better managed?

I am going to continue this discussion in a next series installment, starting with these questions. And I will take that next step to this narrative out of the abstract by at least briefly discussing some specific new, and old-but-rebuilt sources of information systems risk. Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.
