Platt Perspective on Business and Technology

Some thoughts concerning a general theory of business 27: a second round discussion of general theories as such, 2

Posted in blogs and marketing, book recommendations, reexamining the fundamentals by Timothy Platt on January 30, 2019

This is my 27th installment to a series on general theories of business, and on what general theory means as a matter of underlying principle and in this specific context (see Reexamining the Fundamentals directory, Section VI for Parts 1-25 and Reexamining the Fundamentals 2, Section IX for Part 26). I began this series in its Parts 1-8 with an initial orienting discussion of general theories per se. I then turned, in Parts 9-25, to at least begin to outline a lower-level, more reductionistic approach to businesses and to thinking about them, one based on interpersonal interactions. And in Part 26 I began a second round of discussion of general theories per se, both to add to the foundation on which I have been discussing theories of business and to continue the Parts 1-8 narrative that began all of this.

More specifically, and focusing here on my Section IX continuation of this overall series, my primary focus of attention in Part 26 was on the nature of axioms and of systems of them, as would form the basis for developing a more comprehensive body of theory. I continue that discussion thread here, beginning by explicitly noting and discussing a point of distinction that I made only in passing in Part 26 but that merits more explicit discussion here: the distinction between abstractly grounded theories and empirically grounded, evidence-based theories.

• Much of mathematics could arguably be offered as being abstract in its fundamental nature, and separate and distinct from any possible physical sciences or other empirically grounded applications as currently conceived, even if some fairly large areas of math that were initially viewed in that way have found their way into mathematical physics and related empirically grounded fields too.
• See Hardy, G.H. (1940) A Mathematician’s Apology for a telling discussion of how areas of mathematics that have traditionally been considered devoid of “practical” application can become anything but that. Note: the link offered here is to a full text electronic file version of the original print edition of this book, as republished by the University of Alberta Mathematical Sciences Society.
• But I would hold up abstract mathematics as a source of axiom-based theory that is developed and elaborated free of outside constraints, at least as a matter of expectation: free for example from intrinsically having to be consistent with the patterns of behavior observed in empirically based systems of any sort, and certainly in its axiomatic assumptions.
• General theories of business fit into a special category of general theories per se, as they would be logically developed in accordance with deductive reasoning, as abstract theories are. But such theories are also bound by empirically observable data: real-world, observably validated fact and the inductive reasoning that its inclusion brings with it too.

I posit empirically grounded bodies of theory such as general theories of business this way, for a very specific reason. And my goal in this series, here, is to elaborate on that point of observation as a next step in this developing narrative. I begin addressing that here by more fully considering empirical data per se.

• Empirically derived data of the type that would enter into and inform a body of theory such as a general theory of business is never going to be perfect.
• There is always going to be at least some measurement error in it, even when it is entered into theory-based calculations and related analyses as-is, as for example the way a temperature reading might be entered into a thermodynamics calculation, free of any preparatory calculations as might be made upon it.
• But the term “empirical data” is also more loosely interpreted at times, to include more pre-processed findings as well, that happen to have been initially based on observation.
• And observed data per se is not all alike in how it is arrived at, as far as replicability and freedom from possible bias are concerned. If you identify data such as the above-cited temperature readings as hard data, then a body of theory such as a theory of business relies upon, and is grounded in, significant amounts of softer data too.
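To make the measurement-error point concrete, here is a minimal sketch of how noise in even a "hard" temperature reading propagates into a downstream thermodynamics calculation, here via the ideal gas law. The ±0.2 K sensor noise figure and the container parameters are assumed purely for illustration, not drawn from any particular instrument:

```python
import random
import statistics

random.seed(42)

R = 8.314           # ideal gas constant, J/(mol*K)
n_moles = 1.0       # assumed amount of gas, mol
volume = 0.025      # assumed container volume, m^3
true_temp = 298.15  # assumed true temperature, K

# Simulate 10,000 thermometer readings with assumed Gaussian sensor
# noise (standard deviation 0.2 K), feeding each "measured" value
# into the ideal gas law P = nRT / V.
pressures = [
    n_moles * R * random.gauss(true_temp, 0.2) / volume
    for _ in range(10_000)
]

true_pressure = n_moles * R * true_temp / volume
print(f"true pressure:      {true_pressure:.1f} Pa")
print(f"mean of estimates:  {statistics.mean(pressures):.1f} Pa")
print(f"spread (std. dev.): {statistics.stdev(pressures):.1f} Pa")
```

Even here, where the individual readings average out well, every single calculated pressure carries the sensor's noise with it; "hard" data is hard only in a statistical, aggregate sense.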

But even if all data entering into a theory of business and its application could be considered hard data, as loosely noted above, actually developing and using such a theory would still always involve the dynamic interplay of two sometimes opposing approaches and visions.

• On the one hand you have the axioms and theorem-based elaborations of them that have already been established to at least a workable level of certainty of reliability for predictively describing at least aspects of the real, observable world: here business systems and aspects of them.
• And these theory-based systems would be developed according to the internal logic of their structure and content, in developing deductively derived testable predictions: hypotheses (as well as less directly testable Gedanken experiments: thought experiments). This half of what amounts to a larger, dual-sided process follows the pattern of abstractly based theories as noted above when I cited mathematical systems.
• And on the other hand, you have empirically based reality-check tests of those predictions, and the inevitable likelihood that such outside-sourced data and insight will with time come to force a reconsideration, both of what is concluded from the axioms in play and even of those fundamental axioms themselves. Experiments set up to test those deductively derived predictions do not always yield the predicted results, either in detail or even in general form. Unexpected phenomena do arise, and are found and observed and measured, that would have to be included in an already ongoing body of theory that cannot, as-is, account for them.

I just made an assumption there that can also be considered axiomatic in nature, but that I would immediately challenge: my distinction between hard data that might be considered more empirically certain, consistent and reliable, and soft data that might require more rigorous validation, and from several sources, for it to be considered reliable – assuming such validation is even possible.

Let’s consider two working examples of those proposed data types here:

• Temperature readings as here identified as being emblematic of hard data as a general categorical form, and
• Wealth and income data as assembled demographically, but as developed from numerous individual sources that would among other things seek to maintain confidentiality over their personal finances and any data that would reveal them, and who would follow different investment and income patterns for different individuals. This, I offer here as a soft data example.

Let’s start by considering my hard data example. When you really begin considering what goes into those temperature readings, their apparent hardness starts to become more illusory than real. How do you measure the precise temperature of a system at one of its more directly observed and measured nodes? More specifically, what physical systems do you measure those temperature values from, and how do you determine precisely how to calibrate your measuring equipment so as to arrive at specific, reliable temperature values? Where can you assume linearity in what you measure, such that an increase in the measured value corresponds directly to a same-scale shift in actual temperature? And how would you best deal with nonlinearities in measuring equipment response, where that becomes necessary? Let me take that out of the abstract by citing thermistors, which measure temperature indirectly by measuring changes in electrical resistance in sensors, and bimetallic thermometers, which measure changes in the curvature of fused bimetallic strips as a surrogate for measuring temperature per se. Or I could have cited standard tinted alcohol or mercury thermometers, which actually measure changes in the volume of the specific sensor materials in use.
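To illustrate the thermistor case: such a sensor does not report temperature at all, but resistance, which must be converted through a nonlinear calibration model. A common choice is the Steinhart–Hart equation; the coefficients below are typical published values for a generic 10 kΩ NTC thermistor, used here only as an assumed example (real devices are calibrated individually against reference temperatures):

```python
import math

# Steinhart-Hart coefficients: typical published values for a
# 10 kOhm NTC thermistor, assumed here purely for illustration.
A = 1.129148e-3
B = 2.34125e-4
C = 8.76741e-8

def thermistor_temperature_kelvin(resistance_ohms: float) -> float:
    """Convert a measured resistance to temperature via the
    Steinhart-Hart equation: 1/T = A + B*ln(R) + C*(ln R)^3."""
    ln_r = math.log(resistance_ohms)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)

# At its nominal 10 kOhm resistance this thermistor reads roughly
# 25 C (298.15 K); nearby resistances map to temperature nonlinearly,
# and for an NTC device resistance falls as temperature rises.
for r in (5_000, 10_000, 20_000):
    t = thermistor_temperature_kelvin(r)
    print(f"{r:>6} ohms -> {t:.2f} K ({t - 273.15:.2f} C)")
```

The point is that every "measured temperature" from such a device is already the output of a theory-laden model with empirically fitted coefficients.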

Actually calibrating this equipment and making the necessary measurements from it rests on the absolute presumption of reliability and accuracy of large amounts of underlying physical theory – even when those measurements are going to be used in testing and validating (in my above example, thermodynamic) theory that might be based on essentially the same set of underlying axiomatic assumptions. In this case, overall consistency throughout the entire conceptual system in play becomes the crucial defining criterion for accuracy and reliability in this “hard data” example.

Now let’s consider my soft data example, and the demographically aggregated but ultimately individually sourced financial information as would be used in testing out the predictive value of an economic measure such as the Gini coefficient. I picked this example for several reasons, one of the more important of which is that as with many grandly conclusive business systems and economic modeling endeavors, calculating this value almost always involves at least some aggregation of diversely sourced raw, original data.
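For readers unfamiliar with the measure itself, the Gini coefficient can be computed directly from its definition: the mean absolute difference between all pairs of incomes, normalized by twice the mean. A minimal sketch follows; the income figures are invented for illustration and are not real demographic data:

```python
def gini(incomes: list[float]) -> float:
    """Gini coefficient: sum of absolute differences over all ordered
    pairs of values, divided by 2 * n^2 * mean. 0.0 means perfect
    equality; values near 1.0 mean one unit holds nearly everything."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Invented illustrative samples, not real survey data:
print(gini([40_000, 40_000, 40_000, 40_000]))  # perfect equality -> 0.0
print(gini([10_000, 25_000, 40_000, 125_000]))  # moderate inequality
print(gini([1, 1, 1, 1_000_000]))               # extreme concentration
```

The arithmetic is trivial; the softness lies entirely in where those income numbers come from, which is what the questions below probe.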

• How consistently are the specific data types that are gathered functionally defined, and how consistently are they actually measured?
• And how best should perhaps differing value scales be reconciled, when that proves necessary?
• How current is all of this data and for each of the basic data sources that would be aggregated for overall analysis here?
• And how has this data been “cleansed,” if it has been, to weed out anomalies that fall too far from some observed mean or median value, raising questions as to fringe-value accuracy among other issues?
• And cutting to the core of the issues that I raise here regarding the hardness/softness of this data, precisely what demographics were queried for it and how, and is the entire data set consistent for all of this too?
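The “cleansing” question in particular is not cosmetic: the choice of which outliers to drop can materially shift the aggregate being reported. A minimal sketch with invented numbers, where the trimming rule (dropping values beyond two standard deviations from the mean) is one common convention among many:

```python
import statistics

# Invented income sample with one extreme value, for illustration only.
incomes = [32_000, 35_000, 38_000, 41_000, 44_000, 47_000, 950_000]

mean = statistics.mean(incomes)
stdev = statistics.stdev(incomes)

# One common (and contestable) cleansing rule: drop observations
# more than two standard deviations from the sample mean.
cleansed = [x for x in incomes if abs(x - mean) <= 2 * stdev]

print(f"raw mean:      {statistics.mean(incomes):,.0f}")
print(f"cleansed mean: {statistics.mean(cleansed):,.0f}")
print(f"dropped:       {sorted(set(incomes) - set(cleansed))}")
```

Two analysts applying different but equally defensible trimming rules to the same raw data can report very different aggregates, which is exactly why this data is soft.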

Note: I more fully examined some of the details that might be hidden behind the seemingly clear-cut data measurements for my second, admittedly soft data example than I did for the first, but ultimately they both share the same basic limitations. As such, I rephrase my hard and soft data distinction as being only somewhat useful, and only when speaking and thinking in very general terms, free of the specific analytical precision demanded by any particular theory-based description or prediction.

• And with this, I suggest that while more strictly abstractly framed theories, such as for example an approach to algebraic topology that would address spaces with fractional dimensions, might only involve or require one basic within-theory type of axiomatic support,
• An empirically grounded body of theory such as a general theory of business is going to in fact rest on two categorically distinct types of axiom:
• A deductively grounded set of axioms that would describe and predict on the basis of data already in hand, and of the basic axioms and proven theorems that have been derived from that,
• And a set of axioms that might in detail overlap with the first, that would underlie any new empirically developed data that might be brought into this – and particularly importantly where that would include divergently novel, new types of data.

I am going to continue this line of discussion in a next series installment, where I will consider how new axioms would be added into an already developing body of theory, and how old ones might be reframed, generalized, limited in their expected range of validity, or discarded as axioms per se. Then, after addressing that set of issues, I will turn to consider the issues of completeness and consistency, as those terms are defined and used in a mathematical logic context and as they would be used in any theory that is grounded in descriptive and predictive enumerable form. That line of discussion will be used to address issues of scope expansion for the set of axioms assumed in a given theory-based system.

I will also more fully consider issues raised in

• Stillwell, J. (2018) Reverse Mathematics: proofs from the inside out. Princeton University Press.

as initially cited in Part 26 in this series, with a goal of more fully analytically discussing optimization for the set of axioms presumed, and what that even means.

Meanwhile, you can find this and related material about what I am attempting to do here at About this Blog and at Blogs and Marketing. And I include this series in my Reexamining the Fundamentals directory and its Page 2 continuation, as topics Sections VI and IX there.
