Platt Perspective on Business and Technology

Building a startup for what you want it to become 35: moving past the initial startup phase 21

Posted in startups by Timothy Platt on December 1, 2018

This is my 35th installment to a series on building a business that can become an effective and even a leading participant in its industry and its business sector, and for its targeted marketplaces (see Startups and Early Stage Businesses and its Page 2 continuation, postings 186 and loosely following for Parts 1-34.)

I have been working my way through a briefly stated to-address list of topic points in recent installments to this series, which I repeat here as I continue to discuss them (with parenthetical notes added as to where I have discussed what of this so far):

1. An at least brief discussion of businesses that gather in, aggregate and organize information for other businesses, as their marketable product and in accordance with the business models of those client enterprises. (See Part 31, Part 32 and Part 33.)
2. The questions of where all of this business intelligence comes from, and how it would be error corrected, deduplicated, and kept up to date, as well as kept free from what should be avoidable risk from holding and using it. (I began addressing this point in Part 34; a brief illustrative sketch of the deduplication side of it follows this list.)
3. And that will mean addressing the sometimes mirage of data anonymization, where the more comprehensive the range and scale of such data collected, and the more effectively it is organized for practical use, the more likely it becomes that it can be linked to individual sources that it ultimately came from, from the patterns that arise within it.
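
To make the data-hygiene side of Point 2 a bit more concrete, here is a minimal sketch, in Python, of the kind of deduplication and record-merging step that any holder of aggregated business intelligence has to perform at scale. The record fields, the normalization rule and the matching key used here are my own illustrative assumptions, not a description of any particular vendor's system; real record-linkage pipelines use far more sophisticated matching than this.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Tuple

# Hypothetical shape for an individually sourced record; illustrative only.
@dataclass
class Record:
    name: str
    email: str
    city: str
    updated_at: datetime

def match_key(rec: Record) -> Tuple[str, str]:
    """Build a crude match key from normalized fields.
    Real record-linkage systems use far more robust rules than this."""
    return (rec.name.strip().lower(), rec.email.strip().lower())

def deduplicate(records: List[Record]) -> List[Record]:
    """Merge records that share a match key, keeping the most recent version."""
    best: Dict[Tuple[str, str], Record] = {}
    for rec in records:
        key = match_key(rec)
        if key not in best or rec.updated_at > best[key].updated_at:
            best[key] = rec
    return list(best.values())

if __name__ == "__main__":
    rows = [
        Record("Jane Doe", "jane@example.com", "Albany", datetime(2018, 3, 1)),
        Record("jane doe ", "JANE@example.com", "Troy", datetime(2018, 9, 15)),
    ]
    for rec in deduplicate(rows):
        print(rec)  # one merged record survives: the newer Troy entry
```

Even this toy version shows why keeping such stores clean is a standing cost: every new record, and every new field added to a record, widens the surface that has to be normalized, matched and refreshed.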

My goal for this posting is to complete my discussion of the above Point 2 and its issues, at least for purposes of this series and this phase of it. And I begin doing so by making note of a two-part news and information series that is currently running on Public Broadcasting Service (PBS) television stations in the United States as I write this, as part of their Frontline series: The Facebook Dilemma. I wrote in Part 34 of this series that we have only seen the tip of an iceberg so far: one that threatens Facebook to its core for how it gathers, organizes and then sells user information, while proclaiming that it safeguards it. This televised news piece, with its on-air insider interviews and its in-depth research and reporting, puts a live-action face to that news story and its emerging consequences. More will come out about that unfolding news story too; it is not going to end any time soon, either for Facebook, or for the businesses and other organizations that have been purchasing use of its members’ personal data, or for those member users.

I begin this posting on that note to illustrate, in real time as of this writing, how pressingly important the issues of Point 2 are for all concerned:

• Businesses that gather and sell access to user or customer data,
• Businesses that acquire access to it for their own use,
• And the people from whom this data is gathered, who might in effect be marketed and sold through this business practice, to their direct detriment,
• And even to the detriment of society as a whole.

I would argue that the issues that are included in the above Points 1-3 are going to prove to be among the most important and impactful issues that we will face societally, and certainly through the coming decades. They in fact already are, certainly insofar as massive volumes of individually sourced data have already been weaponized to skew and even throw national elections, and have been used as a tool for advancing ethnic conflict and international aggression.

• When big data reaches a threshold scale of comprehensive reach and of fineness of detail and granularity, its growing range of utility and its cost-effectiveness in providing such value create undeniable pressures to expand it even further.
• When big data gets big enough, its own inner dynamics and its value to those who would develop and use it compel its becoming even bigger, in a seemingly open-ended positive feedback loop.

I have at least briefly touched on the issues of where this data would come from, and the issues of its use and misuse, in this discussion up to here. And that brings me to the issues of data quality and the challenges of keeping it up to date and relevant (i.e. valuable), at least potentially, to both the organization that holds it and to the people and organizations that it seeks to describe.

• The bigger a big data store is, the more of a challenge it becomes to keep the data in it cleansed of error and up to date. And this challenge expands in both the context of increasing numbers of individually sourced records, and in the context of increasingly complex records with more and more data and types of data gathered and held in them, regarding any given individual source so captured.
• But the sources of increase in the potential value inherent in bigger and bigger big data (more records describing more individual data sources, and more comprehensive records to be tapped into there) should, at least in principle, drive holders of such data resources to expend the financial and other resources needed both to expand and to maintain these systems more carefully, and to keep them up to date and accurate. (See the brief maintenance-audit sketch that follows this list.)
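
As a companion to that point, here is a minimal, hypothetical sketch of the kind of freshness audit such a data holder might run to decide where maintenance spending is needed. The field names and the allowed-age limits are assumptions made up for illustration; the point is simply that the audit's cost grows with both the number of records and the number of fields held per record.

```python
from datetime import datetime, timedelta

# Hypothetical freshness policy: field names and the maximum age tolerated
# before a value must be re-verified. Purely illustrative values.
FRESHNESS_LIMITS = {
    "mailing_address": timedelta(days=365),
    "employer": timedelta(days=180),
    "phone": timedelta(days=365),
}

def audit_record(record, now):
    """Return the fields in one record that are missing or stale.

    `record` maps each field name to a (value, last_verified) pair. A full
    audit costs roughly rows * fields, which is why ever larger and ever
    richer data stores get progressively harder to keep clean.
    """
    problems = []
    for field, max_age in FRESHNESS_LIMITS.items():
        value, last_verified = record.get(field, (None, None))
        if value is None:
            problems.append((field, "missing"))
        elif now - last_verified > max_age:
            problems.append((field, "stale"))
    return problems

if __name__ == "__main__":
    sample = {
        "mailing_address": ("12 Main St", datetime(2016, 5, 1)),
        "employer": ("Acme Corp", datetime(2018, 10, 1)),
        # "phone" was never collected at all
    }
    print(audit_record(sample, datetime(2018, 12, 1)))
    # -> [('mailing_address', 'stale'), ('phone', 'missing')]
```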

I cited utility as flowing to the original sources of this data accumulation just now, and after discussing big data in a Facebook context I see a need to justify that presumption. Utility and positive value can run just one way, accruing only to data collectors and users. But in stable systems, value and utility can flow in many and even in all possible directions.

Consider, by way of example, a massive emergency services database that first responders would turn to when responding to emergency calls such as building fires or health crises. And more specifically, consider fire department personnel who need to be able to access up to date building plans for structures that they might have to enter, both to save lives and to limit damage. In principle, every building in their catchment area (their geographic area of responsibility) has been inspected by fire safety and other inspectors, including building inspectors if and when any structural changes are made there. And those inspectors would make note of and report any changes made, certainly insofar as they would affect building accessibility. And in principle all of these presumably up to date building blueprints can be, and for more up to date systems are, available through wireless online access by first responders when needed. Now what happens when first responder firemen enter a burning building, such as an apartment building, only to find that entrance and egress routes that show on their screens as available have been closed off through illegal and unreported construction that partitioned larger apartments into larger numbers of smaller ones? This endangers the lives of those firemen and the lives of anyone who might be trapped in these buildings.

• The same challenges would arise if this was in fact legally reported construction, but the access route and related changes that were carried out had not yet been added into this system.
• My point in this example is that negative impact from faulty and out of date information in big data stores can and does flow in all possible directions, including ones that might not always be appreciated in advance. And accurate and up to date data can create positive value that flows in all directions too. (A brief, hypothetical sketch of such a freshness check follows.)
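
To make that flow-of-impact point concrete in code, here is a minimal, hypothetical sketch of the kind of check a first-responder lookup system might apply before trusting a stored floor plan. The record shapes, field names and the trust rule are all my own illustrative assumptions, not a description of any real dispatch system; note too that this only helps with the legally reported case, since wholly unreported construction never generates a filing to compare against.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical record shapes for a building-plan lookup; illustrative only.
@dataclass
class BuildingPlan:
    address: str
    floor_plan_url: str
    last_verified: date        # last inspection that confirmed this plan

@dataclass
class Filing:
    address: str
    filed_on: date             # construction permit or complaint on record

def plan_is_current(plan: BuildingPlan, filings: List[Filing]) -> bool:
    """Treat a plan as current only if no permit or complaint for the same
    address was filed after the plan was last verified."""
    return not any(f.address == plan.address and f.filed_on > plan.last_verified
                   for f in filings)

if __name__ == "__main__":
    plan = BuildingPlan("45 Oak Ave", "https://example.invalid/45-oak.pdf",
                        date(2017, 6, 1))
    filings = [Filing("45 Oak Ave", date(2018, 4, 20))]
    if not plan_is_current(plan, filings):
        print("WARNING: stored floor plan may be out of date; "
              "treat marked entrance and egress routes as unverified.")
```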

I am going to turn to Point 3 of the above topics list in my next series installment, and to the issues and challenges of how anonymous seemingly anonymized data really is, certainly in an ever-expanding big data context where new raw data, and new knowledge derived from it, is added into the already stored raw data records and files, and into the processed knowledge base already developed from all of this, and used in real time to analyze and understand both old data already held and new data as it comes in. (A brief illustrative sketch of that re-identification risk follows the list below.) Then, after at least preliminarily addressing that complex of issues, I will circle back to reconsider the impact that all of this has on:

• Businesses that provide big data as a marketable commodity,
• Businesses that buy access to it (startups included), and
• The ultimate sources of all of this data, with consumers and other individuals prominently included there.
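
Before turning to Point 3 in detail, the re-identification concern flagged in that point can be illustrated with a small, hypothetical sketch. The records below are invented and the fields are arbitrary; the only claim being made is the general one that as more "anonymized" attributes are combined, more records become unique within a data set, and uniqueness is what makes linkage back to individual sources possible.

```python
from collections import Counter
from itertools import combinations

# Invented "anonymized" records: no names, but several quasi-identifiers.
records = [
    {"zip": "12180", "birth_year": 1975, "gender": "F", "employer_type": "hospital"},
    {"zip": "12180", "birth_year": 1975, "gender": "F", "employer_type": "school"},
    {"zip": "12180", "birth_year": 1982, "gender": "M", "employer_type": "hospital"},
    {"zip": "12144", "birth_year": 1975, "gender": "F", "employer_type": "hospital"},
]

def unique_fraction(fields):
    """Fraction of records that the given fields, taken together, single out
    as unique within the data set."""
    combos = Counter(tuple(r[f] for f in fields) for r in records)
    singled_out = sum(1 for r in records
                      if combos[tuple(r[f] for f in fields)] == 1)
    return singled_out / len(records)

all_fields = ["zip", "birth_year", "gender", "employer_type"]
for k in range(1, len(all_fields) + 1):
    best = max(combinations(all_fields, k), key=unique_fraction)
    print(f"{k} field(s): {best} -> {unique_fraction(best):.0%} unique")
```

With only four toy records the effect appears immediately; in a genuinely comprehensive store the same arithmetic plays out across millions of records and far richer field sets.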

Meanwhile, you can find this and related material at my Startups and Early Stage Businesses directory and at its Page 2 continuation.
