Platt Perspective on Business and Technology

Rethinking the dynamics of software development and its economics in businesses 6

Posted in business and convergent technologies by Timothy Platt on September 9, 2019

This is my 6th installment to a thought piece that at least attempts to shed some light on the economics and efficiencies of software development as an industry and as a source of marketable products, in this period of explosively disruptive change (see Ubiquitous Computing and Communications – everywhere all the time 3, postings 402 and loosely following for Parts 1-5.)

I have been, at least somewhat systematically, discussing a series of historically grounded benchmark development steps in both the software that is deployed and used, and by extension the hardware that it is run on, since Part 2 of this series:

1. Machine language programming
2. And its more human-readable and codeable upgrade: assembly language programming,
3. Early generation higher level programming languages (here, considering FORTRAN and COBOL as working examples),
4. Structured programming as a programming language defining and a programming style defining paradigm,
5. Object-oriented programming,
6. Language-oriented programming,
7. Artificial Intelligence programming, and
8. Quantum computing.

And I have successively delved into and discussed the first six of those development steps since then, noting how each successive step in that progression has simultaneously sought to resolve challenges and issues that had arisen in prior steps of that list, while opening up new positive possibilities in its own right (and, with that, creating new potential problems for a next development step beyond it to similarly address in turn.)

My goal beyond that has included, and continues to include, an intention to similarly discuss Points 7 and 8 of the above-repeated list. But in anticipation of, and in preparation for doing so, I switched directions in Part 5 and began to at least lay a foundation for explicitly discussing the business model and economic issues that comprise the basic topics goal of this series as a whole. And I focused on the above Points 1-6 for that, as Points 1-5 are all solidly historically grounded in their development and implementation, and Point 6 is likely to continue to develop along more stable, non-disruptive evolutionary lines. That is a presumption that could not realistically be made when considering Points 7 and 8.

I focused in Part 5 of this series on issues of what might be called anticipatory consistency, where systems, hardware and software alike, are determined and designed in detail before they are built and run, and in largely standardized, risk-consistent forms. And in that, I include tightly parameterized flexibility in what is offered and used, as would be found, for example, in a hardware setting where purchasing customers and end users can select among pre-set component options (e.g. which specific pre-designed and built graphics card they get, or how much RAM their computer comes with.)

This anticipatory consistency can only be expected to create and enforce basic assumptions, both for the businesses that would develop and offer these technologies, for their hardware and software alike, and for how this would shape their business models. And this would be expected to create matching, all but axiomatic assumptions when considering their microeconomics too, both within specific manufacturing and selling businesses and across their business sectors in general.

I included Point 6: language-oriented programming there, as offering a transition step that would lead me from considering the more historically set issues of Points 1-5, to a discussion of the still very actively emerging Points 7 and 8. And I begin this posting’s main line of discussion here by noting a very important detail. I outlined something of language-oriented programming as it has more traditionally been conceived, when raising and first discussing it in Part 4 of this series. And I kept to that understanding of this software development step in Part 5, insofar as it came up there. But that is not the only way to view this technology, and developments to come in it are likely to diverge very significantly from what I offered there.

Traditionally, and at least as a matter of concept and possibility, language-oriented programming has been seen as an approach for developing problem-specific computer languages for computing and information management: languages that would be developed, alpha tested, at least early beta tested, and otherwise vetted prior to being offered publicly, and prior to being used on real-world, client-sourced problems as marketable-product tools. The nature of this approach, as a more dynamic methodology for resolving problems that do not readily fit the designs and the coding grammars of already-available computer languages, at least at the speed and efficiency those languages would offer, is such that this vetting would have to be streamlined and fast if the language-oriented programming protocols involved are to offer competitive value and in fact become more widely used. But the basic paradigm, going back to 1994 as noted in Part 4, fits the same pattern outlined there when considering development steps 1-5.

And with that, I offer what could be called a 2.0 version of that technology and its basic paradigm:

6.0 Prime: In principle, a new, problem type-specific computer language, with a novel syntax and grammar that are selected and designed in order to optimize computational efficiency for resolving that problem, or class of them, might start out as a “factory standard” offering. But there is no reason why a self-learning and a within-implementation capacity for further ontological self-development and change, could not be built into that.
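To make that Point 6 Prime idea a bit more tangible, here is a minimal toy sketch, written for this discussion and not drawn from any real language-oriented programming product. A tiny problem-specific "language" ships with a factory-standard vocabulary of operations, and then extends its own vocabulary from within a running instantiation, by fusing operation pairs it sees repeated in the programs it is given. All names and the fusion threshold are illustrative assumptions:

```python
from collections import Counter

class SelfTuningInterpreter:
    """Toy problem-specific language: a program is a list of named ops
    applied in sequence to a number. The interpreter starts with a
    'factory standard' vocabulary and, within a given instantiation,
    learns fused ops for operation pairs it sees often enough -- a toy
    stand-in for ontological self-development of the language itself."""

    def __init__(self, fuse_threshold=3):
        # Factory-standard primitives, identical in every shipped copy.
        self.ops = {
            "inc": lambda x: x + 1,
            "dbl": lambda x: x * 2,
            "sq":  lambda x: x * x,
        }
        self.pair_counts = Counter()
        self.fuse_threshold = fuse_threshold

    def run(self, program, value):
        # Observe adjacent op pairs; once a pair has been seen often
        # enough, add a fused composite op to this copy's vocabulary.
        for a, b in zip(program, program[1:]):
            self.pair_counts[(a, b)] += 1
            if self.pair_counts[(a, b)] == self.fuse_threshold:
                fa, fb = self.ops[a], self.ops[b]
                self.ops[a + "+" + b] = lambda x, fa=fa, fb=fb: fb(fa(x))
        for op in program:
            value = self.ops[op](value)
        return value
```

Two customers who buy identical copies of this interpreter, but feed them different workloads, will end up with different vocabularies; the divergence is driven entirely by use, not by any vendor update.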

Let’s reconsider some of the basic and even axiomatic assumptions that are in effect built into Points 1-5 as product offerings, as initially touched upon in Part 5 here, with this Point 6 Prime possibility in mind. And I will frame that reconsideration in terms of a basic biological systems evolutionary development model: the adaptive peaks, or fitness landscape model.

Let’s consider a computational challenge that would in fact likely arise in circumstances where more standard computer languages would not cleanly or efficiently code the problem at hand. A first-take approach to developing a better language for coding it, with a more efficient grammar for that purpose, might in fact be well crafted and prove to be very efficient for that purpose. But if this is a genuinely novel problem, or one that current and existing computer languages are not well suited for, it is possible that this first, directly human-crafted version will not be anywhere near as efficient as a genuinely fully optimized language would be. It might, when considered in comparison to a large number of alternative possible new languages, fit onto the slope of a fitness (e.g. performance efficiency) peak that, at its best and most developed, would still offer much less than would be possible overall when considering that performance landscape as a whole. Or it might in fact best fit into a lower-level position in a valley there, from which self-directed ontological change in a given instantiation of this language could conceivably lead it towards any of several possible peaks, each leading to improved efficiency but each carrying its own maximum efficiency potential. So instantiation A, as purchased by one customer for their use, self-learns and ontologically develops by marching up what is in fact a lower peak in that landscape, and its overall efficiency plateaus out as a still relatively inefficient tool. Instantiation B, on the other hand, finds and climbs a higher peak, and that leads to much better performance. And instantiation C manages to find and climb a veritable Mount Everest for that fitness landscape. And the company that bought that instantiation publishes this fact, publicly announcing how effectively its version of this computer language, as started in a same and standardized form, has evolved in their hands, simply from its own performance and its own self-directed changes.
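The adaptive peaks picture behind that scenario can be sketched in code. The following is a deliberately simple illustration, with an invented one-dimensional fitness landscape and peak positions chosen purely for demonstration: a greedy local search, standing in for an instantiation's self-directed ontological improvement, climbs whichever peak's slope its starting conditions place it on, and plateaus at that peak's height, whether or not a taller peak exists elsewhere on the landscape:

```python
import random

def fitness(x):
    # Invented 1-D fitness landscape with three peaks of differing
    # heights: local maxima near x=2 (low), x=5 (mid), x=8 (the
    # "Mount Everest" of this toy landscape).
    peaks = [(2.0, 1.0), (5.0, 2.0), (8.0, 3.0)]
    return max(h / (1.0 + (x - c) ** 2) for c, h in peaks)

def hill_climb(start, steps=2000, step_size=0.5, seed=0):
    # Greedy stochastic local search: try a random nearby move and
    # accept it only if fitness improves. The climber is therefore
    # captured by whichever peak's slope it starts on, and cannot
    # cross a valley to reach a taller but more distant peak.
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x, fitness(x)
```

Run from a start near x = 4, the climber is drawn to the mid-height peak near x = 5 and stalls there at a fitness of about 2; a climber whose starting conditions happen to place it in the basin of the peak near x = 8 plateaus at about 3 instead. That difference in final plateau, from the same search procedure, is the instantiation A versus instantiation C situation described above.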

• What will the people who run and own the client businesses that purchased instantiations A and B think when they learn of this, and particularly if they see their having acquired this new computer language as having represented a significant up-front cost expenditure for them?

I am going to leave that as an open question here, and will begin to address it in my next series installment. In anticipation of that discussion to come, I will discuss both the business model of the enterprise that develops and markets this tool, and how it would offer this product to market, selling or in some manner leasing its use. And that means I will of necessity discuss the possible role that an acquiring business’ own proprietary data, as processed through this new software, might have helped shape its ontological development in their hands. Then after delving into those and related issues, I will begin to more formally discuss development step 7: artificial intelligence and the software that will come to contain it, and certainly as it is being fundamentally reshaped with the emergence of current and rapidly arriving artificial intelligence agents. Meanwhile, you can find this and related material at Ubiquitous Computing and Communications – everywhere all the time 3, and also see Page 1 and Page 2 of that directory.
