Platt Perspective on Business and Technology

Monoculture and ecological diversity as a paradigm for modeling cyber risk – 1

Posted in book recommendations, strategy and planning by Timothy Platt on May 18, 2011

Corn rust is a fungal disease that attacks one of our most important global food resources. There are several forms of this disease, each caused by a specific, distinct species of fungus, with the commonest in the United States caused by a species named Puccinia sorghi. When a field of corn is infested with this, or with virtually any of the other causative fungal pathogens responsible for corn rust, the crop from that planting is usually lost, if not entirely then at least in great part. And a number of factors can cause rust to spread as an epidemic, significantly limiting, and in the extreme destroying, an entire agricultural region's corn harvest. One of the most significant of these factors arises when farmers plant a single crop species, and a single strain of it, across wide-ranging areas, following the agricultural practice known as monoculture. It should be noted in this context that the first large scale, epidemic-proportion outbreaks of corn rust to impact food production at national levels trace to the early 1950s, with the widespread adoption of monoculture as an approach to more effective agribusiness. And as a general reference on this set of issues, I cite a 1974 US Department of Agriculture sponsored study report on work carried out by the US National Research Council:

• National Research Council (U.S.), Committee on Genetic Vulnerability of Major Crops. (1974) Genetic Vulnerability of Major Crops.

And I quote one of its key findings, bolstered by a wide range of data involving both food crop and non-food crop experience: “… crop monoculture and genetic uniformity invite epidemics.” (from page 21 of this report.)

• This is very nice to know, but what does it have to do with cyber security?

My goal in this posting is to answer that question, using my answer as a basis for offering a perhaps new way to look at risk management in information processing systems. And here I turn to information management systems themselves, with two very specific types of system in mind:

In-house IT systems: Most large organizations have a great many computer and network users, and a lot of hardware to support them as they work. This may mean lots of stand-alone, full function desktop and laptop computers, plus perhaps a mix of tablet and handheld devices. And all of this has to be maintained with support from their help desk and tech support. Here, the precise mix of devices is not of crucial importance, and the same could be said as to how this support is offered: in-house from their own IT department, or as an outsourced service. The important point is that the more work this support system has to expend in maintaining these hardware and software systems, the more expensive this becomes. So pressure can be intense to build towards and maintain as close to a single standard as possible, for hardware, operating system and application software alike. Exceptions can, under these circumstances, require explicit needs-based approval on a case by case basis. Think of this as information technology monoculture as a cost saving imperative; and since agricultural monoculture is pursued to meet cost-effectiveness criteria too, the analogy is sound.
Cloud computing systems: Now consider a rapidly emerging external information storage, processing and sharing capability: cloud computing, with its various associated distributed and distant-hosted functionalities – e.g. Software as a Service (SaaS), Platform as a Service (PaaS) and so on. The underlying technology is designed to work transparently to the user, and it generally does. But these systems have to be flexible and scalable to work, and that means taking a rigidly monocultural approach, with every server, for example, set up and managed according to a common, shared pattern, and with updates to that pattern tested out and then distributed throughout the entire system.

Both of the approaches outlined in these examples make a great deal of sense. I know from personal experience how difficult it is for a help desk and technical support system, for example, to have to deal with a seemingly endless number of combinations of operating system version plus installed software applications, with their own version variations, and so on. And when everyone has their own fully functional computers on their desks, rather than more readily managed and controlled resources such as thin clients, many will install still more software on their own, regardless of what you tell them to do and not to do. Given this, a more monoculture-oriented approach sounds very attractive. But that still creates risk. Basically, if you can and do institute a uniform, monoculture-oriented approach, you may save yourself a great deal of work and expense while all is going well. But you can end up with a system in which, if any one computer is vulnerable, every computer is vulnerable. And this is where malware enters this story, as counterpart to the corn rust infestations I started this posting discussing.
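To make that "one vulnerable, all vulnerable" point a bit more concrete, here is a minimal sketch in Python. It is a toy model and nothing more: the fleet sizes and build names are invented for illustration, and it assumes an exploitable flaw is equally likely to turn up in any one configuration "strain", which real attackers certainly do not guarantee.

    # Toy model of shared-configuration exposure. Fleet sizes and build labels are
    # hypothetical, invented for illustration only.
    import random
    from collections import Counter

    def expected_loss_fraction(fleet, trials=10_000, rng=None):
        """Estimate the expected fraction of hosts knocked out when a single
        exploitable flaw appears in one (randomly chosen) configuration.

        fleet: list of configuration labels, one entry per host.
        """
        rng = rng or random.Random(42)
        shares = Counter(fleet)            # hosts per configuration "strain"
        configs = list(shares)
        n = len(fleet)
        total = 0.0
        for _ in range(trials):
            hit = rng.choice(configs)      # the flaw lands in one configuration
            total += shares[hit] / n       # every host running that strain is exposed
        return total / trials

    if __name__ == "__main__":
        monoculture = ["standard-build"] * 1000
        diversified = (["build-a"] * 400 + ["build-b"] * 300 +
                       ["build-c"] * 200 + ["build-d"] * 100)

        print("monoculture :", expected_loss_fraction(monoculture))   # always 1.0
        print("diversified :", expected_loss_fraction(diversified))   # roughly 0.25 here

In the pure monoculture case the whole fleet shares whatever flaw the standard build carries; spreading the same hosts across even a handful of distinct builds caps how much any single shared flaw can take down at once.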

And this brings me to a basic conundrum that I would leave you with, and an open question:

• It can be all but prohibitively expensive to actively and effectively maintain a large and complex IT system entirely ad hoc, with no standardization or best practices developed or followed as to what computer resources are supported. And trying to do so would create risk management vulnerabilities anyway.
• At the same time, having everything standardized, and so available to any potential threat as a single target, also creates risk.
• Where should you standardize and where should you customize, or even individualize, so as to develop an optimal balance? (A deliberately simplified cost-versus-risk sketch follows this list.)
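One way to frame that open question is as a cost minimization problem. The sketch below is only a toy framing of it, and every number in it (support cost per standard build, per-build flaw probability, incident cost scale, and the exponent encoding the assumption that one huge outage costs disproportionately more than several smaller ones) is a hypothetical placeholder rather than a figure from this posting.

    # A deliberately oversimplified framing of the standardize-versus-diversify balance.
    # All numbers below are hypothetical placeholders chosen for illustration only.

    FLEET_SIZE = 1000                  # hosts under management
    SUPPORT_COST_PER_BUILD = 50_000.0  # yearly cost of maintaining one more standard build
    P_FLAW_PER_BUILD = 0.10            # yearly chance a given build carries an exploitable flaw
    INCIDENT_COST_SCALE = 400.0        # cost scale per host hit in a single incident
    INCIDENT_COST_EXPONENT = 1.5       # >1 assumes one huge outage costs disproportionately
                                       # more than several smaller ones

    def yearly_cost(num_builds: int) -> float:
        """Support cost grows with the number of builds; expected incident loss shrinks,
        assuming hosts are spread evenly across builds and flaws strike builds independently."""
        support = num_builds * SUPPORT_COST_PER_BUILD
        hosts_per_build = FLEET_SIZE / num_builds
        incident_cost = INCIDENT_COST_SCALE * hosts_per_build ** INCIDENT_COST_EXPONENT
        expected_loss = num_builds * P_FLAW_PER_BUILD * incident_cost
        return support + expected_loss

    if __name__ == "__main__":
        for k in (1, 2, 4, 8, 16):
            print(f"{k:2d} standard builds -> estimated yearly cost {yearly_cost(k):>12,.0f}")

Under these made-up numbers the total bottoms out at a small handful of standard builds rather than at one, or at dozens, which is really the point of the conundrum: neither extreme wins.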

I will finish this posting by sharing some thoughts related to a single part of this puzzle.

• Look for core areas in your overall system where a realized vulnerability would have the greatest deleterious impact, and where exposure to vulnerability might be significant. Here, remember that even air gaps cannot eliminate exposure vulnerabilities entirely, as Stuxnet proved for the people responsible for Iranian IT systems at their Bushehr nuclear facility.
• Risk assessment and mitigation are often operationally defined in terms of identifying and filtering out specific known vulnerabilities, with ever-growing lists of virus and other malware signature definitions to screen for. Do that, but also turn this problem around conceptually and look to create variability, to limit the possibility that everything goes down from some single, universally shared vulnerability, whether a newly identified known vulnerability or a shared zero day attack vulnerability (see the measurement sketch just after this list).
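Before deciding where to introduce that variability, it helps to measure how concentrated the current configuration mix actually is. The sketch below is one hedged way to do that; the inventory in it is invented for illustration, and in practice the data would come from whatever asset inventory or configuration management records you already keep.

    # Putting a rough number on "how much of a monoculture are we?"
    # The inventory below is invented for illustration only.
    import math
    from collections import Counter

    inventory = [
        ("Windows 7 SP1", "Office 2010"),
        ("Windows 7 SP1", "Office 2010"),
        ("Windows 7 SP1", "Office 2010"),
        ("Windows 7 SP1", "Office 2007"),
        ("Ubuntu 10.04",  "OpenOffice 3.2"),
        ("Mac OS X 10.6", "Office 2011"),
    ]

    counts = Counter(inventory)
    total = len(inventory)

    # Share of hosts on the single most common configuration: the closer to 1.0,
    # the closer the fleet is to a pure monoculture, and the larger the slice that
    # one shared flaw could take down at once.
    top_config, top_count = counts.most_common(1)[0]
    print(f"largest shared configuration: {top_config} ({top_count / total:.0%} of hosts)")

    # Shannon diversity of the configuration mix, borrowed from the ecological analogy:
    # 0.0 for a pure monoculture, higher as the mix becomes more varied and more even.
    shannon = -sum((c / total) * math.log(c / total) for c in counts.values())
    print(f"Shannon diversity index: {shannon:.2f}")

The two numbers answer complementary questions: how large a slice of the fleet one shared flaw could take down at once, and, borrowing the ecological framing of this posting, how far the overall mix is from a pure monoculture.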

I am going to follow up on this posting with a discussion of networking and internet open standards and how they do and do not contribute to the problems I write of here.
