Rethinking national security in a post-2016 US presidential election context: conflict and cyber-conflict in an age of social media 39
This is my 39th installment in a series on cyber risk and cyber conflict in a still emerging 21st century context: one that is interactively online, ubiquitously connected through social media, and faced with a rapidly interconnecting internet of things, among other disruptively new online innovations (see Ubiquitous Computing and Communications – everywhere all the time 2 and its Page 3 and Page 4 continuations, postings 354 and loosely following for Parts 1-38).
In the installments leading up to Part 30, I focused on new and emerging technologies and on how they enter into this series’ narrative. Then, starting in Part 31, I pursued a digression that is relevant to this series but a break from that narrative line: a discussion of Vladimir Putin’s war in Ukraine. What he expected to be a quick and easy victory in a minor conflict still drags on and on; nothing has really changed as the carnage continues on both sides. And there is no realistic end in sight to any of this, even now, after all this time and in the face of all of this loss and suffering.
Conventional conflict remains, and will remain, an ongoing source of threat; Vladimir Putin and his militaristic hubris offer only the latest proof of that. With that said, my goal here is to return to the narrative line that I broke away from in this series, as last addressed in my February 16, 2022 Part 30 installment: an analytical discussion of new and emerging technologies in national and global security contexts.
There is a wide variety of issues and challenges that I could recommence this narrative line with, and I begin by noting that I have identified and at least briefly touched upon a number of them in this series and in this blog as a whole. Some of them, such as the threat to national security that quantum computing based cryptanalysis poses, are longer-term and still just emerging in nature, and I will address that complex of issues in more depth as I continue this series. But I begin this technology enablement oriented discussion with a threat that is already here, and that has come to have a popular, publicly recognized name: deep faking.
• Deep fake images (both still photos and video) and deep fake voice and other sound recordings are fabricated fictions that are so convincingly accurate in their detail that it can be all but impossible to prove from their contents and formats that they are artifactual – that they are in fact just fictions created to manipulate and deceive.
It is now possible to buy a standard, commercially available cell phone with photo editing software that can edit people or objects out of pictures as if they had never been there, substituting in background where they actually were when those photos were taken. This editing is still crude enough that an expert with the right equipment can tell that those image files were altered, and systematically so. But that photo editing represents the low end of a rapidly developing and already significantly impactful new technology that, at its high end, can be essentially impossible to detect – whether from within the altered files themselves or from trace evidence left in them, such as pixel size or orientation shifts.
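To make the forensic side of this more concrete, here is a minimal sketch of one widely used detection approach for the cruder end of that spectrum: error level analysis (ELA), which looks for the recompression inconsistencies that simple edits tend to leave behind. This is an illustrative example of my own, not a reference to any particular tool or product; it assumes the Pillow imaging library is installed, and the file name sample.jpg is a placeholder.

```python
# A minimal error level analysis (ELA) sketch: re-save a JPEG at a known
# quality level and measure how much each region changes. Regions that were
# pasted in or locally edited often recompress differently from the rest of
# the image, and show up as bright areas in the difference image.
# Assumes the Pillow library (pip install Pillow); "sample.jpg" is a
# placeholder file name.

import io

from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress the image in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference: previously edited regions tend to respond to
    # recompression differently than the untouched remainder of the image.
    return ImageChops.difference(original, resaved)

if __name__ == "__main__":
    diff = error_level_analysis("sample.jpg")
    # Amplify the usually faint differences so they are visible to the eye.
    diff = diff.point(lambda value: min(255, value * 20))
    diff.save("sample_ela.png")
```

The point of this sketch is the limitation it illustrates as much as the technique itself: ELA and its relatives can flag crude, systematic alterations of the kind just described, but a high-end deep fake that is regenerated as a statistically consistent whole leaves no such recompression seams to find.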
• Think of the potential impact that carefully contrived, “effectively perfect” deep fakes can have on molding public opinion, and how that can change public behavior, even in ways that would throw the results of an otherwise open and democratic election. As one part of this, consider the production and distribution of divisively damaging fake videos and photos of public leaders and others of influence, and how they could be used to drive wedges into societies, weakening them in the process and turning them away from what should be their issues of real concern – and action.
• Think of the potential impact that this could have on national intelligence gathering, where “cooked” information gathering and communications, to use an old term for deliberately falsified intelligence, could lead to faulty planning, prioritization and operational execution, both in the private sector and governmentally, and in national defense.
I have repeatedly argued a case, and in detail, for thinking of our organizations and our societies, on all levels, as being information guided and driven. The type of deep faking that I write of here is in fact as much of a threat as any that we face: locally and for specific demographic groups, nationally, and globally. And as I will argue as part of what follows here, this type of threat carries its own MAD, mutual assured destruction potential; it is not just nuclear weapons that can destroy societies and their social structures as a whole.
I will continue this line of discussion with consideration of the lower-level but still largely uncontainable risks that can arise from this type of informational challenge. To be more specific, I will expand upon this posting’s start in the next installment to this series, where I will consider some of the possible national security implications of weaponized artificial intelligence in general.
Meanwhile, you can find this and related postings and series at Ubiquitous Computing and Communications – everywhere all the time 4, and at Page 1, Page 2 and Page 3 of that directory. And you can also find this and related material at Social Networking and Business 3 and its Page 4 continuation. And also see that directory’s Page 1 and Page 2.