Platt Perspective on Business and Technology

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 11

Posted in business and convergent technologies, social networking and business by Timothy Platt on August 9, 2018

This is my 11th installment in a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context: one that is ubiquitously social media connected and that faces a rapidly interconnecting internet of things, among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 continuation, postings 354 and loosely following for Parts 1-10).

I began Part 10 of this series by repeating a brief and succinctly stated (i.e. perhaps overly simplified, first draft-only) summary of my take on the overall challenge faced, which I offer again here as I will be referring to it and building from it in what follows:

• The underlying assumptions that a potential cyber-weapon developer (and user) holds, shape their motivating rationale for developing and perhaps actively deploying and using these capabilities.
• Yes, I initially phrased that in terms of the developer and user representing the same active agent, as a developer who knowingly turns a cyber-weapon over to another, or to others, is in effect using that capability through them, with those “outside” users serving as their agents in fact. Part of what I will do in what follows in this series is challenge that assumption, and for two fundamentally important reasons. First, it presumes that a cyber-weapon developer might be able to retain control over it, and over access to it, long term. That cannot in fact be assumed with any reliability, regardless of what agency develops and safeguards such a capability – and certainly not once it is used, with that establishing that a weapon of its type can be developed and deployed. And second, most cyber-weapons are in fact built primarily from dual use technologies, and even from explicitly dual use code: capabilities that could be weaponized and deployed as such but that also serve more societally beneficial functions, entirely separate and distinct from their potential militarized use.
• Turning back to the first of these bullet points, and focusing on weaponized use of new and emerging technical capabilities: the motivating rationales that are developed and promulgated out of them both determine and prioritize how and where any new such weapons capabilities would be test used, both in-house if you will, and in outwardly facing but operationally limited live fire tests.
• And any such outwardly facing and outwardly directed tests that do take place can be used to map out and analyze both the adversarial capability of the (nation state or other) players who hold these resources, and the types of scenarios that they would be most likely to use them in if they were to more widely deploy them in a more open-ended and large scale conflict.
• And crucially important here, given the nature of cyber-weapons it is possible to launch a cyber-attack, even one with a great deal of impact on those under attack, in ways that can largely mask the source of this action – or at least preserve plausible deniability for its source, even for extended periods of time. That, at least, is a presumption that many holders of these weapons have come to make, given the history of their use and of the consequences that have followed from it.

And I proceeded from there to at least briefly discuss how cyber weapons capabilities, unlike nuclear weapons capabilities, do not tend to engender a counterpart to the MAD (mutually assured destruction) doctrine that would limit their likelihood of use in conflict. And then I offered a brief and, I admit here, inadequately stated, cartoon-like first approximation solution to this emerging globally reaching challenge, which I repeat here too, as my goal for this posting is to set the stage for challenging it and for further refining it: a first round effort for discussing how remediative measures that might actually work could be arrived at:

• Remediation of or at least significant reduction of the overall threat posed by cyber-weapon technology would require a coordinated, probably treaty-based response that would most likely have to be organized with United Nations support if not direct United Nations organizing oversight.
• Possible cyber-attack victims, at all organizational levels from nation states on down to local businesses and organizations, have to be willing to publicly acknowledge when they have been breached or compromised by malware (cyber-weapons).
• And organizations at all levels in this, from those smaller local organizations on up to national organizations and treaty groups of them, have to develop and use mechanisms for coordinating the collection and analysis of this data (see the sketch after this list), both to more fully understand the scope and nature of an attack and any pattern that it might fall into, and to help identify its source.
• And a MAD-like approach to this can only arise and work if that type of threat and incident analysis and discovery would in effect automatically lead to action, with widely supported, coordinated sanctions imposed on any offenders so identified and verified, and with opportunity built into this system to safeguard third parties whom an actual attacker might set up as appearing to be involved in an attack event when they were not. (I made note of this type of misdirection as to attack source in Part 9 and raise that very real possibility here again too.)
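To make that coordination bullet point a little more concrete, here is a minimal sketch of the kind of shared incident record and cross-victim correlation step that such a mechanism might rest on. This is purely illustrative: the IncidentReport and correlate_by_indicator constructs and their field names are my own assumptions for discussion purposes, not a reference to any existing reporting standard, though real-world threat intelligence sharing formats pursue similar goals.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional

# Hypothetical sketch only: a minimal shared incident record that a reporting
# organization (of any size, from a local business up to a national agency)
# might submit to a coordinating body under the treaty-based model above.
@dataclass
class IncidentReport:
    reporting_org: str                       # who is publicly acknowledging the breach
    detected_at: datetime                    # when the compromise was detected
    affected_systems: List[str]              # what was breached or disrupted
    malware_indicators: List[str]            # file hashes, domains, or other observables
    suspected_source: Optional[str] = None   # tentative attribution, if any is offered
    attribution_confidence: float = 0.0      # 0.0-1.0; low values flag the possibility
                                             # that an uninvolved third party was framed

def correlate_by_indicator(reports: List[IncidentReport]) -> Dict[str, List[str]]:
    """Group reporting organizations by shared malware indicators, so analysts
    can look for a common campaign or source across otherwise isolated victims."""
    grouped: Dict[str, List[str]] = {}
    for report in reports:
        for indicator in report.malware_indicators:
            grouped.setdefault(indicator, []).append(report.reporting_org)
    return grouped
```

The point of this sketch is only that coordinated attribution of the type called for above depends on victims contributing comparable, structured data, and on the correlation and attribution step being kept deliberately cautious about naming a source.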

I begin this posting’s intended line of discussion by pointing out that both technological and political factors enter into the underlying problem addressed here. And both technological and political factors and considerations are going to have to be shoehorned together into any viable solution for at least limiting the likelihood of this problem erupting into realized open conflict too. And I write that explicitly acknowledging that technological considerations, and a direct and specific awareness of what is and is not possible at that level, rarely play a significant role in setting politically-driven policy or in determining how it would be enacted operationally.

The line “scientists should be on tap, not on top,” usually attributed to Winston Churchill, comes to mind here, and particularly for how the sentiment that it expresses comes to the fore when a scientific and technological understanding of the options and their possibilities collides with more ideologically shaped and driven goals and their imperatives.

At the same time I also have to acknowledge that politicians and policy makers rarely if ever find themselves facing single issues that can be thought through, planned for and addressed as if in a vacuum. Competing needs and challenges always have to be considered, as do competing pressures from differing constituencies and stakeholders who see differing goals and priorities in what are at least nominally the same events and circumstances, and who can come to see what is seemingly the same problem from very different perspectives as a result.

Politics is sometimes posited (mostly by politicians) as representing the art of the possible. It is also compared to sausage making, with admonitions that those of faint heart and weak stomach probably do not want to look too closely at what goes into the grind. That, I add, is also said by politicians. And together, and with a direct admission of the inevitability of at least some conflict and controversy in any real decision making regarding challenges of any wide-ranging import, I turn to consider my modestly cartoonish remediative proposal as just repeated above.

My goal in that is to discuss the basic model approach as just offered above in general terms, in the specific context of some very real and pressing problem scenarios that we can all see playing out around us in the news, and on an ongoing basis:

1. Stuxnet, and how the US and Israel set a precedent with its development and use that has subsequently proven problematic – with an update that includes some current news references indicating that even the gains hoped for from that effort have proven short-lived, even as negative consequences arising from it have proliferated.
2. NSA cyber-weapon leaks and the consequences of unintended (though, with time, at least inevitable) loss of control of even just defense-oriented weapons development. This, I add here, will serve as a basis for fundamentally questioning the first of the points raised above as to the overall nature of the threat that cyber-weaponization of information technology poses, and that will in turn lead to a reconsideration of the basic remediative response model offered here too. As part of that, I will challenge the assumption that defensive and offensive capabilities can be meaningfully distinguished in a cyber-weaponization context, just as I will question the assumptions that I made in my cartoon resolution as to the organizational levels that would have to become involved in effectively addressing all of this.
3. Russia and their use of cyber-weapons. And I will discuss, in that context, what might perhaps best be thought of as a pure cyber approach versus a mixed one: cyber plus more conventional force-enabling capabilities, as might be deployed as a conflict widens in the goals sought and the range of action taken. I fully expect to offer something of an historical digression when exploring this case in point example in detail, to illustrate how long-standing national concerns and assumptions can and do drive political thought and the policy that stems from it, and shape how governments would use technological capabilities to reach political ends. In anticipation of that narrative to come, I refer here to the deep-set fear of invasion from neighboring states as held by Russia’s leaders, which goes back at least as far as the Tsars, and which among other things led post-World War II Communist Russia to set up its Warsaw Pact system as a buffer against that possibility.
4. North Korea and their use of cyber weapon capabilities for disrupting enemy infrastructure in South Korea, and for raising funds globally through the deployment of ransomware. Issues of resource limitations, and of exploiting the capabilities held in order to achieve goals, come in here. And in this context, the asymmetrical nature of cyber-warfare and of cyber-weapons development becomes crucially important: the stronger a nation is in its more conventional force capabilities, and in its overall economy, the more vulnerable it becomes to cyber-disruption.
5. China, and how they turn their cyber capabilities inward as well as outward, and most ominously for the future with their effort to achieve ubiquitous facial recognition monitoring of everyone in their country, effectively all of the time. I have been discussing the Golden Shield Project (the Great Firewall of China, as it is perhaps best known in the West) for almost as long as I have been writing to this blog. I will of necessity at least briefly reconsider that complex system of governmental resources, and China’s other resources already in place that would support the same political goals that their Golden Shield Project is aimed at. And I will discuss how new capabilities would fit into those existing surveillance-based security systems, from both a technological and a political policy perspective. What is done by whom in all of this, what is enabled as being possible through this effort, and toward what goals? All of the issues raised by these questions and ones like them have to be considered and understood when taking the generalities of any broadly framed, in-principle approach to remediating cyber-threat and making it work. And this point of observation applies to any of the scenarios and case study examples that I have just raised here, or that I might add to this list.

How does each of these scenarios stress and challenge the basic models as offered above, both for outlining and thinking about the basic problems faced, and for addressing them? I offer a link here to a specific posting that I offered in this blog approaching 8 years ago now, and that has, unfortunately, simply taken on increased significance since then for the issues that it raises: Stuxnet and the Democratization of Warfare. That earlier posting addresses a dimension to this overall problem that I will explicitly touch upon in analyzing at least several of the specific challenge scenarios just offered above, but that in fact runs through all of them.

In anticipation of discussion to come on that, I offered my initial outline of the nature of the threat faced here in nation-level terms. But this is not in fact a game that only nations can play, and the distinction between organizational levels of involvement, from that of the individual hacker through that of the nation state, can become very blurry here. So any resolving, or at least ameliorating, approach for limiting the risk and damage from any of this can only fail if it is arrived at and executed solely at a national and nation-to-nation international level, as if all of this took place at that level. As such, and turning for the moment to my cartoon remediation approach of above, simply hoping for a remediation at an international treaty and/or UN level cannot suffice. I will discuss this in more detail in what follows too, and certainly in the context of the above-cited Scenario 3: Russia and its active use of cyber-weapons as a core part of its emerging routine diplomatic arsenal.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 3, and at Page 1 and Page 2 of that directory. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.
