Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 1

Posted in business and convergent technologies by Timothy Platt on October 2, 2018

I offer this posting as a compact, if not entirely brief, thought piece that connects to, and hopefully helps inform, much of what I have offered in this blog regarding software development and computer technology development in general. As I develop this essay I will include artificial intelligence agent-oriented software development in its narrative, as a central orienting example of what is being developed towards now. I will at least selectively write of how we have arrived where we are now, but I will also offer a few selectively considered anticipatory notes as to how the trends and underlying forces that have brought us here might move forward too.

The issues that I would address here have roots that go way back in the history of electronic computers and in how the information technology systems based upon them have taken shape. And many of the basic paradigmatic assumptions and presumptions that have led us to where we are now, along with their direct consequences, will almost certainly continue forward too, if only because they have become so ingrained as to be essentially invisible – except when circumstances at least situationally force a more direct awareness of them. So this is not a discussion of any particular new or old technology state of the art, or of any particular hardware or software design paradigm, step, phase, or expression; it is about the framework that underlies what is developed and how.

• “Nature abhors a vacuum” …
• … and in an information technology context that means software expands in the hardware capacity that it requires, coming to fill and in time exceed whatever systems capability might currently be available for running it.

I offer the above as a starting point for a wider-ranging discussion than those paired bullet points themselves encompass. That noted, the first of those sentiments can perhaps best be viewed as a considered and examined axiomatic assumption that Aristotle developed in his writings and teachings as a foundation point for how he viewed reality, and that has since been attributed to him as its original source. The second represents a reality that essentially any computer programmer or computer hardware designer with any significant level of experience has come to see as part of their working reality, and certainly those who work on complex systems and seek to push the envelope of the doable.

Yes, this means software code expansion (and its often accompanying cousin: performance speed reduction) as a cost of adding necessary new functionalities. And this holds just as true when expanding upon and significantly improving features and functionalities already in place. But how much of this code-volume explosion primarily takes place as a bells-and-whistles-adding mechanism, intended to keep the programmers, and their managers, and their services and departments more fully and gainfully employed, with steady paychecks and business expenditure allowances to prove it? How much of it arises from an ongoing search for that next cosmetic add-on that the providing businesses can market as proof of their ongoing marketplace relevance and value? And yes, how much of it takes place with the intentionally planned goal of adding genuinely important new features that an end user would specifically see and use, or that would at the very least improve the performance of functionalities they already have, as they use them?

How can a better dynamic balance be arrived at and maintained here, given that businesses legitimately do need to change and evolve their product offerings if they are to remain competitive, even if that just means making more cosmetic, bells-and-whistles oriented changes? And how can and should they best accomplish that?

Let’s begin considering that dual question from the perspective of a software development and production paradigm that can be found, in variations, throughout information technology as a set of interconnected industries. It is a very legitimate point that software development companies, like essentially any product companies, need to maintain a steady new product development pipeline, one that ranges from emerging, new-in-concept possibilities through to first-release product offerings that are ready to bring to market and that have their roots in those earlier development steps. No software developer has just one possibility under development in this type of process flow at any one time. They of necessity find themselves with what can amount to an assembly line progression of hopefully next new and great marketable offerings, each at its own specific stage in this overall new product development pattern, and with at least something at every benchmarked step of that progression at any one time. Their goal in that is to avoid ever facing a dry period in which their competitors are bringing out their next new, while they can only offer their current and older.

But they have to simultaneously work towards developing their next New while still supporting their current customers and the more recent and older products that the developer might see as more legacy than current, but that those customers still rely upon. Customers – individual consumers or businesses – that make what for them are significant investments in purchasing new software packages, integrating them into the rest of their overall systems, and going through the learning curves needed to make effective use of all of this, would feel betrayed if the company that developed these offerings and sold rights to them were to simply walk away, its eyes entirely on its own future and its own needs, with no interest in offering service support when its customers need it.

True, most software development businesses do set and follow aged-out, support cut-off policies, under which they stop offering patches or other upgrade support for sufficiently old products once their current offerings have moved some set number of development generations beyond them. This is a cost-effectiveness necessity; no business can continue to actively support everything that it has ever offered, no matter how old or rarely used, forever. But meeting the more legitimate support needs that remain, for them and for their customers, can call for a significantly scaled and complex software development and maintenance staff, just as new product development does – certainly for larger developers that offer wide ranges of software products, and ones that are in no way fad oriented or driven. Consider office and business productivity tools as a source of working examples, or operating system developers and providers. This requirement holds particularly true where the old functionalities found in older software releases will continue to hold real importance to customers, long term and with no realistic end in sight, even as demand for inclusion of the new continues too.

This second vision of software development and production (with all of its patches and fractional-generation upgrades) calls for what effectively amounts to a maintenance pipeline for established but still supported products too, one that can be just as important to maintain as the new product pipeline if a business is to remain securely and consistently financially strong by retaining the ongoing business of a loyal, returning customer base.

How does this relate to software code expansion? Whether large and wide-ranging in what they offer, or small and specialty-software focused, all of these businesses need to keep their programming staff busy and employed across both pipeline types, if they explicitly have them, with all of their accumulated skills and experience at hand for when more fundamental, value-creating changes are needed. A new product pipeline might address this type of challenge more proactively and a maintenance one more reactively, but the basic point applies to both. And in the new development pipeline, this can and does lead to code bloat, certainly when a need to offer new is coupled with a need to keep a development team actively busy, but as cost-effectively as possible.

Think of all of this as representing just one side of a single larger dynamic that takes place in software development companies. And think of it as one that just as strongly impacts upon and shapes the next-step development worked towards by the hardware industries and their players, certainly for challenges such as next-generation chip design. And think of it as one that can catch the consumer and the marketplace in the middle, certainly when this race towards bigger and more expansive brings with it what can become significant law-of-unintended-consequences impacts for them.

How does this dynamic fairly automatically lead to software expansion, and in forms that all too often come to be perceived more as software bloat than anything else? First of all, it is easier to simply add new blocks of code to a body of software already in place than it is to work your way through the code already there with a goal of optimizing it, and of creating leaner, more efficient programs through that. I write here primarily of programs that would run on laptop computers or larger, but the same applies to their tablet and smaller cousins too. And for a very real-world example of this add-on-and-grow phenomenon, one that can be seen as fitting a basic standard pattern, I cite the continued incorporation of even early development stage DOS operating system code in Microsoft Windows operating system releases, well after the presumed end of the DOS era itself. I still find it amazing that Microsoft was still running what for it was early DOS operating system code in its new operating system software through the end of the DOS-underpinned Windows 9x line that began with Windows 95 (see this brief history of MS-DOS and this timeline of DOS operating systems). Microsoft repeatedly updated its DOS code into new versions through its explicitly DOS-based operating systems history, and in these presumably post-DOS operating system packages too. But it retained the core elements, and the core limitations, of at least some of even its earliest DOS code throughout all of that. Mostly what Microsoft did was to build upon that primordial foundation in its New and Next, and retain it in what were up-front, visibly post-DOS operating system packages, where it still functioned more behind the scenes for most users, but significantly so in at least certain functional contexts.
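To make that add-on-versus-optimize asymmetry more concrete, here is a minimal, deliberately stylized Python sketch. Everything in it – the Document type, the format names, the exporter functions – is hypothetical, invented purely for illustration; the point is only the contrasting shape of the two approaches.

```python
from collections import namedtuple

# A toy document type, invented for this sketch.
Document = namedtuple("Document", ["text", "rows"])

# The accreted version: each release bolts on one more branch, because
# nothing that already ships (and already passes its tests) gets touched.
def export_document_accreted(doc, fmt):
    if fmt == "txt":
        return doc.text
    elif fmt == "html":                       # added in a later release
        return "<html><body>" + doc.text + "</body></html>"
    elif fmt == "csv":                        # added in a still later release
        return "\n".join(",".join(row) for row in doc.rows)
    elif fmt == "legacy-dos":                 # kept for older customers
        return doc.text.replace("\n", "\r\n")
    # ... each new release tends to add another branch here ...
    else:
        raise ValueError("unknown format: " + fmt)

# The refactored, table-driven version: leaner and easier to extend, but
# producing it means reworking and re-testing code that already works.
EXPORTERS = {
    "txt": lambda doc: doc.text,
    "html": lambda doc: "<html><body>" + doc.text + "</body></html>",
    "csv": lambda doc: "\n".join(",".join(row) for row in doc.rows),
    "legacy-dos": lambda doc: doc.text.replace("\n", "\r\n"),
}

def export_document_refactored(doc, fmt):
    try:
        return EXPORTERS[fmt](doc)
    except KeyError:
        raise ValueError("unknown format: " + fmt)

doc = Document(text="hello\nworld", rows=[["a", "b"], ["c", "d"]])
print(export_document_accreted(doc, "csv"))
print(export_document_refactored(doc, "txt"))
```

The branch-accreting version is the path of least resistance in any single release; the table-driven version is what painstaking refinement would produce, but getting there means reworking shipped code – which is exactly the step that competitive release pressure argues against.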

Yes, Microsoft has gone through rounds of “redevelop from scratch and build from there” software design and coding. Apple has too, as has most any large software development company of any historical duration. But it is still easier to add on than it is to painstakingly refine, optimize, and trim back, particularly when software development businesses are always racing to be first out the door with the next Newest and Greatest, and none of their competitors will wait for them to rebuild the already-released for improvement first. Stepping back to rebuild the old is not as likely to be cost-effective, or as competitively value creating, as adding New is.

But that narrative and its issues address only one part of this story. The assumptions and presumptions that arise from the ongoing predictive success of Moore’s law work synergistically with that trend. The presumption of Moore’s law as a virtual axiom of nature – or at least an axiom for the next few technology development generations – takes the pressure out of any potential countervailing argument favoring lean and efficient, because of the presumption it brings that hardware capability expansion will always at least keep up with any software expansion need, regardless of how its volume of code (or its potential run-time demands) grows. That is the second side, or second leg, of the paradigm of expansion that I would offer here. And I add, parenthetically if in no other way, that I also alluded to a third foundational element just now when I added the phrase “or at least an axiom for the next few technology development generations.” Think in terms of a stable three-legged stool.

Business finances might be included in longer-term strategic planning, as is often carried out on an annual basis by the executive officers of a business: a process that can take on a ritualistic quality, but that when carried through effectively can lead to real value for the organization. But that means looking five years out at most, with everyone acknowledging up-front that the out-years of that plan are more intention-orienting cartoon than anything else. Publicly traded businesses often find themselves focusing more on the next fiscal quarter than on anything else, with a goal of being rated positively by stock market gurus and pundits if for no other reason. But even that five-year evaluation (three or so if you discount the longer-term cartoon) spans approximately two Moore’s law-expected technology generations. Foundation piece three – supportive leg three, as listed above – can be seen as a calculated and analytically considered, or at least a commonly adhered-to, long-term myopia. And what we do not see can still impact and even fundamentally shape what we do, and how we think about it.
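As a back-of-the-envelope illustration of that generation arithmetic, here is a minimal sketch assuming a two-year doubling period – a commonly cited figure, though 18 months is also often quoted, and the “law” is an empirical trend rather than an axiom of nature:

```python
# Back-of-the-envelope Moore's law arithmetic: if capacity doubles every
# DOUBLING_YEARS, a planning horizon of H years spans H / DOUBLING_YEARS
# technology generations, and roughly 2 ** (H / DOUBLING_YEARS) times the
# starting capacity. The two-year figure is an assumption, not a law.
DOUBLING_YEARS = 2.0

def generations(horizon_years, doubling_years=DOUBLING_YEARS):
    return horizon_years / doubling_years

def capacity_multiplier(horizon_years, doubling_years=DOUBLING_YEARS):
    return 2.0 ** generations(horizon_years, doubling_years)

for horizon in (3, 5, 10):
    print(f"{horizon:>2}-year horizon: ~{generations(horizon):.1f} generations, "
          f"~{capacity_multiplier(horizon):.1f}x capacity")
# Output:
#  3-year horizon: ~1.5 generations, ~2.8x capacity
#  5-year horizon: ~2.5 generations, ~5.7x capacity
# 10-year horizon: ~5.0 generations, ~32.0x capacity
```

On those assumptions, the three-to-five-year window that strategic planning realistically covers does indeed span only about one and a half to two and a half hardware generations – which is why the presumption of continued doubling so rarely gets stress-tested inside any one planning cycle.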

Software growth is necessary to meet genuine emerging needs; it is not just a consequence of adding extraneous, at-most-cosmetic change, with its seemingly ever-expanding code volume. And this growth continues even when efforts are made to rein in code volume and pursue at least something of a lean and agile approach. That observation gains significance as software packages grow; the bigger they are, the more easily they continue to grow bigger still. Or slightly rephrased: lean and agile coding begins to break down, and can become all but cosmetic in its own right, as the scale of the software it would be applied to crosses a shadowed, gray-area threshold beyond which it becomes essentially impossible to cost-effectively trace through all of the possible inefficiencies that might have arisen in the software package as a whole.
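A rough way to see why such a threshold exists – offered as an illustrative simplification, not a precise model: the number of potential pairwise interactions among a system’s modules grows quadratically with module count, while the staff available to trace them typically grows at best linearly. A minimal sketch:

```python
# A simplified model of why whole-system optimization stops being
# cost-effective at scale: assuming any pair of modules could in principle
# interact, the interaction paths a reviewer might need to trace grow as
# n * (n - 1) / 2, i.e. quadratically in the module count n.
def pairwise_interactions(n_modules):
    return n_modules * (n_modules - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6,} modules -> {pairwise_interactions(n):>12,} potential interactions")
# Output:
#     10 modules ->           45 potential interactions
#    100 modules ->        4,950 potential interactions
#  1,000 modules ->      499,500 potential interactions
# 10,000 modules ->   49,995,000 potential interactions
```

The absolute numbers are artificial, but the shape of the growth is the point: past some scale, exhaustively tracing inefficiencies costs more than the hardware headroom it would recover.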

And bloat can and does arise in essential function development and improvement too, not just on the more cosmetic side of software development … which leads me to more fully consider the challenge of even meaningfully trying for lean, efficient, compact code, and the obstacles that that approach faces.

I am going to turn, in the next installment of this thought piece, to more fully consider that, and software optimization in general. In anticipation of that, I note here that programmable computer software as we think of it initially arose in a context where lean and agile were not luxuries that might be thought about but not necessarily practiced: they were absolute necessities. And after an era of the type of expansionary growth – and yes, bloat and room for bloat – that I have been writing of here, we might very well find ourselves in a world that has more in common with that earliest, lean-demanding one than might be readily apparent from our current Moore’s law certainty, which still prevails as of this writing. But that will involve complex and nuanced shifts, not simply a return to a beginning paradigm. I will at least briefly discuss some of the factors and possible outcomes that might arise in that impending paradigm shift.

And yes, I will in turn bring the lines of discussion that I have begun developing here to an artificial intelligence agent context too, as promised at the start of this posting.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.
