Platt Perspective on Business and Technology

Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 4

Posted in business and convergent technologies, social networking and business by Timothy Platt on September 25, 2017

This is my fourth installment to a new series on cyber risk and cyber conflict in a still emerging 21st century interactive online context: one that is ubiquitously social media connected and that faces a rapidly interconnecting internet of things, among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2, postings 354 and loosely following for Parts 1-3).

I have been discussing the more malicious weaponization of new and emerging cyber-technology in this series. And at the same time I have been discussing the continued, seemingly endless vulnerabilities that we still face from already known threat vectors arising from more established technologies too. That second thread is one that I have recurrently returned to in the course of writing this blog and unfortunately, it remains as relevant a topic as ever when considering cyber-security, whether locally within single organizations, or nationally and even globally.

But at the same time that I have been delving into this combined new and old technical side to cyber-attack, and to the risk and threat of it, I have been delving into the more human side of this challenge too: the risks of careless online behavior, and the challenge of social engineering attacks that would exploit it. Cyber-risk and cyber-security inextricably include both technology and human behavior aspects, and each shapes, and can in fact help create, the other.

And with this noted, I add the issues of clarity and transparency into this discussion too, and I do so by way of a seemingly unrelated case in point example that I would argue serves as a metaphor for the security issues and vulnerabilities that I write of here:

• I went to see a physician recently for an appointment at her office. And when I got there, I saw only one person working behind the receptionist counter instead of the usual two that I had come to expect. The now-vacant part of the counter that patients would go to when arriving had a tablet computer in place instead, with basic appointment sign-in now automated and for any scheduled return patients to use. That was not a problem, in and of itself. The problem that I found in this was that this now automated system was much more involved than any verbal sign-in had ever been, with requirements that every patient sign multiple screens, each involving an authorization approval decision on a separate issue or set of them. Most of these screens in fact represented lengthy legal documents, ranging into the many hundreds and even thousands of words. And at least one of them meant my agreeing to or declining to participate in what turned out to be patient records sharing programs that I had never heard of and that had never come up in my dealings with that physician or with the hospital that she is affiliated with. I objected that this did not give me the opportunity to make informed consent decisions, with patients waiting to sign in after me and with my scheduled appointment start time fast approaching. And the receptionist there rolled her eyes and said something to the effect that she was “used to being yelled at” by dissatisfied and impatient people. She briefly tried explaining what those two programs were on that one very lengthy screen, but it was clear that she did not know the answer to that herself. So I signed as best I could, unsure of what some of my sign-in decisions actually meant, and then I went to my appointment.

When an online computer user clicks a link, they might or might not realize that they are in effect signing an information access agreement too, often one where they do not know that they are doing this, and usually one where they do not understand its possible range and scope. And this information sharing goes both ways, a fact that is often overlooked. Supposedly legitimate online businesses can and at times do insert cookies and related web browser tracking software onto their link-clicking site visitors’ computers, and some even use link clicks to their servers to push software back onto a visitor’s computer to turn off or disable ad blocking software. And they do this without explicit warning, and certainly not on the screens that users would routinely click to on their sites: hiding any such disclosures on separate and less easily found “terms of usage” web pages. And I am writing of “legitimate” businesses there. Even they take active and proactive steps that can change the software on a visitor’s computer without the visitor’s explicit knowledge or consent.
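To make the mechanics of this concrete: a tracking cookie requires no action or consent from the visitor at all. It rides along as a single header line in an ordinary page response, and the browser stores and returns it automatically. The sketch below, using Python’s standard-library `http.cookies` module, builds a hypothetical header of this kind; the cookie name, value, and lifetime are illustrative assumptions, not taken from any real site.

```python
from http.cookies import SimpleCookie

# A hypothetical tracking cookie, as a server might attach it to any
# ordinary page response: the visitor clicks a link and receives this
# header with the page, without any separate consent step.
cookie = SimpleCookie()
cookie["visitor_id"] = "a1b2c3d4"                     # opaque identifier tied to this browser
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for a full year
cookie["visitor_id"]["path"] = "/"                    # sent back on every request to the site
cookie["visitor_id"]["httponly"] = True               # hidden from page scripts, not from the server

# Print the literal Set-Cookie header line the browser would receive and act on.
print(cookie.output())
```

From then on, every request the browser makes to that site carries `visitor_id` back, which is what lets the site link a year of visits to one person without ever asking.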

When you add in the intentionally malicious, and even just the site owners who would “push the boundaries” of legality, that can have the effect of opening Pandora’s box. And my above cited example of businesses that seek to surreptitiously turn off ad blocker apps is just one of the more benign(?) of the “boundary pusher” examples that I could cite here.

The Russian hackers of my 2016 US elections example as discussed in this series, and their overtly criminal cousins, just form an extreme end point to a continuum of outside-sourced interactivity that we all face when we go online. And this ranges from sites that offer you a link to “remember” your account login on their web site, so you do not have to reenter it every time you go there from your computer, to sites that would try downloading keystroke logger software onto your computer so their owners can steal your login names and passwords for wherever you go online from then on.

• Transparency, informed decision making and its follow-through, and the restrictions to them that might or might not be apparent when they would count the most, are crucially important in both my more metaphorical office sign-in example, and in the cyber-involvement examples that I went on to discuss in light of it.

I have written of user training here in this series, as I have in earlier postings and series to this blog. Most of the time, the people who need this training the most tend to tune it out because they do not see themselves as being particularly computer savvy, at least for the technical details. And they are not interested in or concerned about the technical details that underlie their online experiences and activities. But the most important training here is not technical at all and is not about computers per se. It is about the possibilities and the methods of behavioral manipulation and of being conned. It is about recognizing the warning signs of social engineering attempts in progress and it is about knowing how to more effectively and protectively respond to them – and both individually and as a member of a larger group or organization that might be under threat too.

I am going to turn back to discussion of threats and attacks themselves in the next installment to this series. And in anticipation of that and as a foretaste of what is to come here, I will discuss trolling behavior and other coercive online approaches, and ransomware. After that I will at least briefly address how automation and artificial intelligence are being leveraged in a still emerging 21st century cyber-threat environment. I have already at least briefly mentioned this source of toxic synergies before in this series, but will examine it in at least some more detail next.

Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time and its Page 2 continuation. And you can also find this and related material at Social Networking and Business 2, and also see that directory’s Page 1.
