Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 5

Posted in business and convergent technologies by Timothy Platt on July 2, 2019

This is my 5th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following, for Parts 1-4).

I have been working my way through a brief, simplified history of computer programming in this series, as a foundation-building framework for exploring that complex of issues, starting in Part 2. And I repeat that history here, at least for its key identifying points and in its current form, as I have updated this list since then:

1. Machine language programming,
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language-defining and a programming style-defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial intelligence programming, and
8. Quantum computing.

I have in fact already offered at least preliminary orienting discussions in this series of the first six entries there and of how they relate to each other, with each successive step in that progression simultaneously seeking to resolve challenges and issues that had arisen in prior steps, while opening up new possibilities in its own right. The brief sketch that follows offers a toy illustration of that progression of abstraction.
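As a brief aside of my own devising, and not anything drawn from those earlier installments: the following minimal Python sketch expresses one and the same task, summing a list of numbers, in styles that loosely echo Steps 3 through 5 of the above list. It is a toy, but it shows how each step reframes the same computation at a higher level of organization.

# A toy illustration: the same task, summing a list of numbers, written in
# styles that loosely echo Steps 3 through 5 of the list above.

numbers = [3, 1, 4, 1, 5, 9]

# Step 3 style, early higher level languages: an explicit index and
# accumulator, close in spirit to FORTRAN-era code.
total = 0
i = 0
while i < len(numbers):
    total = total + numbers[i]
    i = i + 1
print(total)

# Step 4 style, structured programming: the same logic factored into a
# named, single-entry, single-exit procedure.
def sum_values(values):
    result = 0
    for value in values:
        result += value
    return result

print(sum_values(numbers))

# Step 5 style, object-oriented programming: data and behavior bundled
# into an object that owns its own state.
class Accumulator:
    def __init__(self):
        self.total = 0

    def add_all(self, values):
        for value in values:
            self.total += value
        return self.total

print(Accumulator().add_all(numbers))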

I will also discuss Steps 7 and 8 of that list as I proceed in this series. But before I do that, and in preparation for doing so, I will step back from this historical narrative to at least briefly start an overall discussion of the economics and efficiencies of software development as they have arisen and developed, and particularly through the first six of those development steps.

I begin that by putting all eight of the technology development steps of that list into perspective with each other, as they are now perceived, with a goal of at least initially focusing on the first six of them:

• Topic Points 1-5 of the above list all represent mature technology steps at the very least, and Point 6 has deep historical roots, at least as a matter of long-considered principle. And while language-oriented programming is still to be more fully developed and implemented in a directly practical sense, current thinking about it suggests that its adoption will take a more step-by-step, evolutionary route that is at least fundamentally consistent with what has come before, when and as it is brought into active ongoing use.
• Point 7: artificial intelligence programming has been undergoing a succession of dramatically, disruptively novel changes, and the scope and reach of that overall effort is certain to expand in the coming years. That noted, it also has old and even relatively ancient roots, certainly by the standards and time frames of electronic computing per se. But it is heading into a period of unpredictably disruptive New, and my discussion of this step in my above-listed progression will reflect that.
• And Point 8: quantum computing is still, as of this writing, at most just in an early embryonic stage of actual realization as a practical, working source of new computer systems technologies, at both the software level and even just the fundamental proof-of-principle hardware level. So its future is certain to be essentially entirely grounded in an emerging, disruptively innovative flow of the new and of change.

My goal for this installment is to at least briefly discuss something of the economics and efficiencies of software development as they have arisen and developed through the first six of those development steps, where those steps can collectively be seen as representing a largely historically grounded starting point and frame of reference for more fully considering the changes that will arise as artificial intelligence agents and their underlying technologies, and as quantum computing and its own, come into fuller realization.

And I begin considering that historic, grounding framework and its economics and efficiencies by setting aside what for purposes of this discussion would qualify as disruptively innovative cosmetics, as they have arisen in this development progression to date. And yes, I am referring with that label to the steady flow of near-miraculous technological development that has taken place since the initial advent of the first electronic computers. Within a span of years that is essentially unparalleled in human history for its fast-paced brevity, that flow has led from early vacuum tube computers, to discrete single-transistor component circuitry, to early integrated circuit technology, to the chip technology of today that can routinely and inexpensively pack billions of transistor gates onto a single small integrated circuit, and with all but flawless manufacturing quality control.

• What fundamental features or constraints reside both in the earliest ENIAC and similar vacuum tube computers, and even in their earlier electronic computer precursors, and in the most powerful supercomputers of today that can surpass petaflop performance speeds (meaning they are able to perform over one thousand million million, or 10^15, floating point operations per second), that would lead to fundamental commonalities in the business models and the economics of how they are made? (See the arithmetic sketch after this list for a sense of the scale gap involved.)
• What fundamental features or constraints underlie at least most of the various and diverse computer languages and programming paradigms that have been developed for and used on these increasingly diverse and powerful machines, that would lead to fundamental commonalities in the business models and the economics of how they are used?
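To put a rough sense of scale behind that petaflop figure, here is a back-of-envelope arithmetic sketch of my own. The ENIAC rate used is a commonly cited approximation, on the order of five thousand simple additions per second, and both rates should be read as illustrative assumptions rather than as benchmarks:

# Back-of-envelope arithmetic for the scale gap just described. The ENIAC
# rate is an approximate, commonly cited figure, not a measured benchmark.

eniac_ops_per_sec = 5_000       # roughly 5,000 simple additions per second
petaflop_ops_per_sec = 1e15     # one petaflop: 10**15 floating point ops/sec

speedup = petaflop_ops_per_sec / eniac_ops_per_sec
print(f"Approximate speedup factor: {speedup:.0e}")  # about 2e11

# Time needed to perform one quadrillion (10**15) operations at each rate:
ops = 1e15
seconds_at_eniac_rate = ops / eniac_ops_per_sec
years_at_eniac_rate = seconds_at_eniac_rate / (60 * 60 * 24 * 365)
print(f"Petaflop machine: about {ops / petaflop_ops_per_sec:.0f} second(s)")
print(f"At an ENIAC-era rate: about {years_at_eniac_rate:,.0f} years")

And that gap, it should be stressed, is precisely the cosmetic flow of change that I am setting aside here, in order to focus on what has remained constant underneath it.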

I would begin approaching questions of economics and efficiencies here, for these widely diverse systems, by offering an at least brief and admittedly selective answer to those questions – noting that I will explicitly refer back to what I offer here when considering artificial intelligence programming, and certainly its modern and still-developing manifestations, and when discussing quantum computing too. My response to this set of questions in this context will, in fact, serve as a baseline starting point for discussing new issues and challenges that Points 7 and 8 and their emerging technologies raise and will continue to raise.

Computer circuit design, and in fact overall computer design, have traditionally been largely fixed within the design and construction of any given device or system, for computers developed according to the technologies and the assumptions of all of these first six steps. Circuit design and architecture, for example, have always been explicitly developed and built toward fixed product development goals that would be finalized before any given hardware that employs them would be built and used. And even in the most actively mutable Point 6: language-oriented programming scenario as currently envisioned, a customized programming language, and any supportive operating system and other software that would be deployed and used with it, is essentially always going to have been finalized and settled for form and functionality prior to its use in addressing any given computational or other information processing task that it would be developed and used for.

I am, of course, discounting hardware customization here, which usually consists of swapping different versions of also-predesigned and finalized modules into a standardized hardware framework. Yes, it has been possible to add in faster central processing unit chips out of a suite of different-price, different-capability offerings that would fit into a single same name-branded computer design. And the same type and level of flexibility, and of purchaser and user choice, has allowed for standardized, step-wise increased amounts of RAM and cache memory, and of hard drive and other forms of non-volatile storage. Considering this from a computer systems perspective, this has meant buyers and users having the option of incorporating alternative peripherals, specialty chips and even entire add-on circuit boards for specialized functions such as improved graphics and more, certainly since the advent of the personal computer. But these add-on and upgrade features and options only add expanded functionalities to what are essentially pre-established computer designs with, for them, settled overall architectures. The basic circuitry of these computers has never had the capability of ontological change based simply upon how it is used. And that change: a true capability for programming structure-level machine learning and adaptation, is going to become one of the expected and even automatically assumed features of Point 7 and Point 8 systems.

My focus in this series is on the software side of this, even if it can be all but impossible to cleanly and clearly discuss that without simultaneously considering its hardware implementation context. So I stress here that computer languages, and the code that they convey in specific software instances, have been fundamentally set by the programmers who have developed them, in ways and to degrees similar to any hardware lock-in that is built in at the assembly floor: a priori to their being loaded onto any given hardware platform and executed there, and certainly prior to their being actively used – and even in the more dynamically mutable scenarios envisioned in a Point 6 context. The brief sketch that follows illustrates that last point.
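To make that Point 6 claim more concrete, here is a minimal sketch of my own, with every name in it invented for illustration rather than drawn from any real language-oriented programming system. The point is simply that however customized the language, its form is settled before any task program written in it is run:

# A minimal, hypothetical sketch of the Point 6 idea: a tiny task-specific
# "language" whose vocabulary is defined and finalized first, and only then
# used to run task programs. All names here are invented for illustration.

# Step 1: the language itself, a fixed set of operations, settled before use.
OPERATIONS = {
    "increment": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run_program(program, value):
    """Interpret a task program written in the tiny language defined above."""
    for instruction in program:
        value = OPERATIONS[instruction](value)
    return value

# Step 2: a task-specific program, written in that now-settled language.
task = ["increment", "double", "square"]
print(run_program(task, 3))  # ((3 + 1) * 2) ** 2 = 64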

This fundamental underlying standardization led to and sustained a series of fundamental assumptions and practices that have collectively gone a long way toward shaping both these systems themselves and their underlying economics and their cost-efficiencies:

This has led to the development and implementation of a standardized, paradigmatic approach that runs from initial concept to product design and refinement, to prototyping as appropriate, and to alpha and beta testing, certainly in any realized software context and its implementations, and with every step of this following what have become well understood and expected cost-and-returns based financial models. I am not saying here that problems cannot or do not arise as specific New is and has been developed in this type of system. What I am saying here is that there is a business process and accounting-level microeconomic system that has arisen, and that can be followed according to scalable, understandable risk management and due diligence terms. And a big part of that stability comes from the simple fact that when a business, hardware or software in focus, has developed a new product and brings it to market, it is bringing what amounts to a set and settled, finalized product to market, for which it can calculate all costs paid and returns expected to be received. (See the brief break-even sketch below for a toy example of that type of calculation.)
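As that toy example, consider this minimal break-even sketch of the kind of calculation that such a set and settled product makes straightforward; every figure in it is an invented, illustrative assumption:

# A toy break-even calculation for a finalized software product. All of the
# figures below are invented, illustrative assumptions.

development_cost = 500_000.0  # one-time cost, knowable once the product is finalized
unit_price = 49.0             # revenue per license sold
unit_cost = 4.0               # distribution and support cost per license

margin_per_unit = unit_price - unit_cost
break_even_units = development_cost / margin_per_unit
print(f"Units needed to break even: {break_even_units:,.0f}")  # about 11,111

# Expected profit at a forecast sales volume:
forecast_units = 25_000
profit = forecast_units * margin_per_unit - development_cost
print(f"Profit at {forecast_units:,} units: ${profit:,.0f}")  # $625,000

It is precisely this type of simple, stable calculation that becomes problematic when the product itself is no longer set and settled at the time of sale.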

The basic steps and performance benchmarks that arise in these business and economic models and process flows, and that are followed in developing these products, can and do vary in detail of course, and certainly when considering computer technologies as drawn from different steps in my first five points, above. And the complexity of those steps has gone up, of necessity, as the computer systems under consideration have become more complex. But at least categorically, the basic types of business and supportive due diligence steps that I refer to here have become more settled, even in the face of the ongoing technological change that they would manage.

But looking ahead for a moment, consider one step in that process flow, and from a software perspective. What happens to beta testing (as touched upon above) when any given computer system: any given artificial intelligence agent, can and most likely will continue to change and evolve on its own, starting the instant that it is first turned on and running, and with every one of a perhaps very large number of at least initially identical agents coming to individuate in its own potentially unique ontological development direction? How would this type of change impact upon economic modeling, microeconomic or macroeconomic, that might have to be determined for this type of New? The sketch below shows this individuation problem in miniature.
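To show that concern concretely, here is a deliberately trivial sketch of my own, not a model of any real artificial intelligence system: two agents that begin identical, with each then adapting to its own input stream, and so ceasing to be interchangeable, which is exactly what fixed-product beta testing assumes they will remain:

# A deliberately trivial illustration of individuation: two initially
# identical agents diverge once each learns from its own input stream.

import random

class TinyAgent:
    """A minimal adaptive "agent": it tracks a running mean of what it sees."""
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def observe(self, value):
        # Incremental mean update: the agent's state now depends on its
        # entire individual history of observations.
        self.count += 1
        self.estimate += (value - self.estimate) / self.count

agent_a, agent_b = TinyAgent(), TinyAgent()  # identical at the moment of release

rng_a, rng_b = random.Random(1), random.Random(2)  # different lived experiences
for _ in range(1000):
    agent_a.observe(rng_a.gauss(10.0, 3.0))
    agent_b.observe(rng_b.gauss(10.0, 3.0))

# Same design, same starting state, yet no longer interchangeable. Which of
# these, exactly, did the beta test certify?
print(agent_a.estimate, agent_b.estimate)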

I am going to continue this discussion in my next installment to this series, with at least a brief discussion of the balancing that has to be built for, when managing both the in-house business model and financial management requirements of the companies that produce these technologies, and the pressures that they face if they are to be, and are to remain, effective when operating in demanding competitive markets. Then after that I will at least begin to discuss Point 7: artificial intelligence programming, with a goal of addressing the types of questions that I have begun raising here as to business process and its economics. And in anticipation of that, I will add more such questions to complement the basic one that I have just started that line of discussion with. Then I will turn to and similarly address the above Point 8: quantum computing and its complex of issues.

Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.
