Platt Perspective on Business and Technology

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 6

Posted in business and convergent technologies, social networking and business by Timothy Platt on December 18, 2017

This is my 6th installment in a series on cyber risk and cyber conflict in a still emerging 21st century interactive online context: a context that is ubiquitously connected through social media and that now faces a rapidly interconnecting internet of things, among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 354 and loosely following, for Parts 1-5.)

I offered Part 5 of this series as an attempt to start a wider dialog on what may be the most complex and far-reaching online security challenge that has to be addressed and resolved in this still emerging 21st century:

• Unraveling the Gordian knot of finding an effective balance point between legitimate information access and use,
• And the necessary information security control and oversight that protects privacy and confidentiality, in our increasingly interconnected, information sharing-based world, in the face of an ongoing flood of new and disruptive technology that expands what can be done there and that brings new security vulnerabilities with it.

Collectively, we do not have a very good track record of safeguarding the computer and network systems already in place against already known vulnerabilities. And that does not bode well for how we will secure the flow of new capabilities and new tools that is coming, a flow that brings with it an ongoing stream of new zero-day vulnerabilities that we will somehow have to identify and effectively respond to as well.

Towards the end of Part 5, I listed a fairly significant number of topic points that I would address in this series, and I suggest reviewing them to put this posting in that larger perspective. The first issue from that list that I have not yet explicitly considered here, and that I will at least begin to address now, can be summarized as:

• Discuss the threats and attacks themselves in the next installment to this series. In anticipation of that, and as a foretaste of what is to come, I will discuss trolling behavior and other coercive online approaches, and ransomware. After that I will at least briefly address how automation and artificial intelligence are being leveraged in a still emerging 21st century cyber-threat environment. I have already briefly mentioned this source of toxic synergies earlier in this series, but will examine it in at least some more detail.

Let me clarify that by briefly noting what I have already covered of the first part of that point. In Part 5 I cited three significant news stories of large information security breaches, but that primarily meant noting incidents that have happened and the results that attackers achieved from them. My goal here is to begin, at least in general terms, a discussion of the attacks themselves that we face. And I begin that with the fundamentals.

As an admittedly cartoonish oversimplification of a more complex and nuanced set of phenomena, all information systems vulnerability exploits can be roughly divided into two broad categories (plus exploits that combine elements of both, in a threat vector arena where the two basic forms overlap.) And to be very clear, I am not distinguishing here between known threats and their exploitation, versus new and emerging ones that we cannot know to safeguard against until they have been exploited at least once, and even then only if their victims choose to share word of that happening. I take a more timeless approach in categorizing these vulnerabilities, and that unfortunately makes sense because so many businesses, and so many individuals, persist in not addressing even long-known vulnerabilities that have been exploited repeatedly. And when businesses are successfully victimized, it is still far too common that they do not reveal that fact, even when they have been caught up in a new and previously unknown zero-day exploit. So I focus instead on types of threat and their exploitation in broad categorical terms, with:

• Technological vulnerabilities and their exploitation, and
• Human behavioral vulnerabilities and their exploitation,
• The distinction there being between what is exploited and who is. (That distinction will become moot as artificially intelligent agents come to participate online as fully as people do, but we are not there yet, as of this writing.)

Let me add a further, perhaps complicating detail here that does in fact blur the line between those two categories right now, even though I plan on discussing them as if it did not apply. I raise it simply as a point to keep in the back of your mind as you read on, and as an indication of at least one reason why you should read what I offer here as a simplified, cartoon-style conceptual model:

• Ultimately, all of these vulnerabilities, technological and human behavioral alike, are in fact human behavioral ones. This obviously applies when overtly technological vulnerabilities in hardware and/or software are known, but system owners and managers do not apply, for example, software patches and updated anti-virus and anti-malware protection to their computers and networks. But it also holds for zero-day vulnerabilities, and certainly when their emergence can be traced to overly limited vulnerability searching and identification during alpha and beta software testing that could have prevented them, with that corner cutting allowed in order to get new software out the door faster, and at reduced cost, as a marketable product. (A brief sketch of that first, unpatched-systems case follows below.)
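To make that unpatched-systems case a bit more concrete: what follows is a minimal, purely illustrative Python sketch (every package name and version number in it is hypothetical) of the kind of check that routinely goes unrun, comparing what is actually installed against the minimum versions known to contain security fixes.

```python
# Minimal illustration: flag installed software that lags the versions known
# to contain security fixes. All names and version numbers are hypothetical.

def parse_version(v):
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical inventory of what a small network is actually running.
installed = {
    "web_server": "2.4.1",
    "mail_gateway": "1.0.9",
    "vpn_client": "5.2.0",
}

# Hypothetical minimum versions in which known vulnerabilities were patched.
patched_in = {
    "web_server": "2.4.7",
    "mail_gateway": "1.0.9",
    "vpn_client": "5.3.2",
}

for name, current in installed.items():
    required = patched_in.get(name)
    if required and parse_version(current) < parse_version(required):
        print(f"{name}: installed {current}, known fix available in {required}")
```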

But for what follows here, I set that detail aside and simply treat the two vulnerability categories just noted as if they were separate and fully distinct. Think of them as categorically defined sources of vulnerability within larger, more inclusive systems where exploitation and attack can be initiated.

Technological vulnerabilities happen, particularly in the software that we use. Hardware and its limitations can enable them too, certainly for events such as denial of service attacks. But software offers more targets, and more rapidly changing ones, that can be exploited, and it demands a more rapid, ongoing risk management response. So I focus in this series on software vulnerabilities, as the more pervasive and less easily managed source of potential risk.
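To ground that in one concrete, if simplified, example: among the oldest and still most commonly exploited classes of software vulnerability is the one that arises when user-supplied input is pasted directly into a database query. The sketch below, written in Python against the standard library's sqlite3 module with a hypothetical user table, shows the unsafe pattern and the parameterized alternative that closes it.

```python
import sqlite3

# Hypothetical in-memory user table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def lookup_unsafe(name):
    # Vulnerable: the input is pasted directly into the SQL string, so a
    # crafted value like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_unsafe("x' OR '1'='1"))  # returns every row -- the exploit
print(lookup_safe("x' OR '1'='1"))    # returns nothing -- input stays data
```

The point of the contrast is that the vulnerability lives entirely in how the software was written, which is part of what makes this category so pervasive and so dependent on ongoing remediation.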

I will focus here, at least to start, on the software side of this: on the services and capabilities that are enabled, and even made possible, by its ongoing revolutionary and evolutionary development. Hardware advances enable next-step software advances, and both are driven by disruptive reimagining of what can be done with this technology and by user demand for continued improvement of what is already available. And this, focusing on the new risk creation side of that flow of change, is where I begin discussing “trolling behavior and other coercive online approaches” as cited above: negative-side implementations of what new technology makes possible.

Let me begin that with the fundamentals too. I just wrote above, if only in passing, of the relationship between software development and change on the one hand, and information systems usage on the other. As a starting point for discussing trolling, and organized online disinformation campaigns in general, I offer an observation regarding software (and hardware) and the creation and development of new ways of using it:

• Technological advancement, and software development in particular given its faster rate of change, enables an even faster flow of change in what people do with these technologies, with that pace limited only by how quickly the smartest and most early adopter-oriented among us can think through what they seek to accomplish with these tools. And that applies to malicious users and usage as much as it does to more positively directed ones.

Trolling behavior, and intentional disinformation creation and online dissemination as a form of disruptive attack, serve as perhaps the quintessential example of how new advances in the technological bases of information and communication systems allow for new and even disruptive capabilities, and for both positive and destructive use.

Originally, intentional online disinformation creation and dissemination of the kind discussed here was carried out almost entirely by lone individuals who perhaps serially and repeatedly attacked specific single targets with, for example, bogus negative reviews and other social media postings. And they did so as separate and essentially disconnected efforts. Social media and the interactive online context have enabled the harnessing of these individuals into what amount to armies of organized disinformation sources that can be aimed and launched against specified targets in large, coordinated attacks. The very information sharing frameworks that make this possible at all have become a framework for weaponizing it, and on a large and organized scale. Think of this as organizing what would otherwise be disconnected and disjointed attacks, in a manner similar to the construction of botnet attacks.
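As a purely illustrative aside on what that botnet-style coordination can look like from the data side (this is my own sketch, not a description of how any platform actually detects such campaigns; the account names, hashtags and timestamps are hypothetical), one simple signature of an organized campaign is many nominally unrelated accounts posting about the same target within a very narrow time window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, target hashtag, timestamp).
posts = [
    ("user_a", "#targetbrand", datetime(2017, 12, 1, 9, 0, 5)),
    ("user_b", "#targetbrand", datetime(2017, 12, 1, 9, 0, 12)),
    ("user_c", "#targetbrand", datetime(2017, 12, 1, 9, 0, 40)),
    ("user_d", "#otherissue",  datetime(2017, 12, 1, 14, 3, 0)),
]

WINDOW = timedelta(minutes=2)   # how tightly clustered posts must be
MIN_ACCOUNTS = 3                # how many distinct accounts raise a flag

# Group posts by target, then look for bursts of distinct accounts.
by_target = defaultdict(list)
for account, target, when in posts:
    by_target[target].append((when, account))

for target, records in by_target.items():
    records.sort()
    for i, (start, _) in enumerate(records):
        burst = {acct for when, acct in records[i:] if when - start <= WINDOW}
        if len(burst) >= MIN_ACCOUNTS:
            print(f"Possible coordinated burst on {target}: {sorted(burst)}")
            break
```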

And with automation and the addition of artificial intelligence to that mix, it is now possible to replace many if not all of the human troll participants in those systems with artificial constructs that simply appear to be human online participants: fakes assembled from genuine personal photos and other real person-sourced material scraped from real online participants and their online presence, augmented with made-up names and profile data tailored to fit specific targeted demographics, all to make these fake content posters appear to be real.
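One blunt, and again purely illustrative, countermeasure follows from the fact that such fabricated personas reuse scraped material: the same photo or biography text tends to reappear across many supposedly different accounts. The minimal Python sketch below (hypothetical profile data; real platforms would rely on far more robust perceptual matching rather than exact hashes) flags that kind of verbatim reuse:

```python
import hashlib
from collections import defaultdict

# Hypothetical profiles: account name mapped to the raw bytes of its avatar
# image (stand-in byte strings here) and its bio text.
profiles = {
    "jane_doe_1989": (b"<avatar bytes A>", "Proud parent. Coffee lover."),
    "john_q_public": (b"<avatar bytes B>", "Veteran. Patriot. Dog person."),
    "j_doe_real":    (b"<avatar bytes A>", "Proud parent. Coffee lover."),
}

# Group accounts by a hash of their avatar plus bio; identical material
# reused across accounts ends up in the same bucket.
buckets = defaultdict(list)
for account, (avatar, bio) in profiles.items():
    digest = hashlib.sha256(avatar + bio.encode("utf-8")).hexdigest()
    buckets[digest].append(account)

for digest, accounts in buckets.items():
    if len(accounts) > 1:
        print("Identical profile material shared by:", sorted(accounts))
```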

I am going to continue this discussion in the next series installment, where I will add a discussion of ransomware (and of how it is enabled by anonymized online currencies such as Bitcoin.) I will also at least briefly discuss denial of service attacks there, as touched upon here, and how new and disruptive technologies (e.g. the internet of things) have transformed that threat source. And then I will continue addressing the to-discuss points of Part 5 from there.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.
