Platt Perspective on Business and Technology

On the importance of disintermediating real, 2-way communications in business organizations 13

Posted in social networking and business, strategy and planning by Timothy Platt on December 31, 2018

This is my 13th installment to a brief series on coordinating information sharing and communications needs, and information access filtering and gatekeeping requirements (see Social Networking and Business 2, postings 275 and loosely following for Parts 1-12).

In Part 12 I began working my way through a brief list of topics to address, which I repeat here for smoother continuity of narrative as I continue that process:

• Reconsider the basic issues of communications and information sharing, and their disintermediation, in light of the trends and possibilities that I have been writing about in this series, and certainly since its Part 6, where I first began to more explicitly explore insider versus outside employee issues.
• Begin that with a focus on the human-to-human communications and information sharing context.
• And then build from that to at least attempt to anticipate a complex of issues that I see as inevitable challenges as artificial agents develop into the gray area of capability that I made note of in Part 11. More specifically, how can and should these agents be addressed and considered in an information communications and security context? In anticipation of that line of discussion to come, I will at least raise the possibility that businesses will find themselves compelled to confront the issues of personhood, and of personal responsibility and liability, for gray area artificial agents, and early in that societal debate. And the issues that I raise and discuss in this series will, among other factors, serve as compelling reasons for their having to address that complex of issues.

I offered at least a preliminary, first-cut response to the first of those topic points in Part 12, at least as far as our current workplace context is concerned: a context in which all involved parties are human, and in which our still simple, pre-intelligent artificial intelligence agents are just tools per se. But I add here that one of the key threads of discussion in this series has centered on what is inevitably to come, as more genuinely intelligent, sentient artificial intelligence agents arise and reach a level of mental capacity where they should be considered people. So with that background in mind, I ended Part 12 on what should be a disconcerting note, posing the following comment and question:

• All of our current, as of this writing, artificial intelligence agents in place in businesses and throughout the workplace are still special-function and single-function for the most part, driven by single algorithms of very constrained and limited scope. They are still just tools. Security software and patches make sense for tools, and for any tool-level computer, network, or otherwise information access-vulnerable device. But how do you address the issues that would lead to the use of such imposed, overriding and in fact overwriting solutions when faced with what would best be considered AI people, as discussed as a real possibility in Parts 10 and 11 of this series? And perhaps more importantly, how would and should a business parse out valid, and I add ethical and moral, resolutions to this type of question?

I raised the above-repeated note in the context of having just touched on how software-based resources can be, and routinely are, updated with code patches for both functionality and security purposes, where the systems so acted upon are tools and are viewed as such.

My goal moving forward in this series is to address the human-centric second topic point as listed above, and then the artificial intelligence agent-centric third point of that list. But before doing so, and as an extension of my response to the first of those points, I want to more explicitly discuss what a true artificial general intelligence agent is, certainly when recognized as such and when considered from a legal and an ethical perspective. And I note in anticipation of that line of discussion that some of the issues that bear consideration there will apply in a more strictly human agent context too: both as technologies advance that would make artificial intelligence agents direct competitors with humans, and as technologies advance that would enable the manipulation and even the fundamental reshaping of humans, both physically and mentally. That distinction, I add, is already becoming moot as pharmaceutically based interventions advance that affect and even transform mental functioning at a basic biochemical and physiological level.

I am writing this at a time when people are people, and machines are tools and very separate from people. And I am writing this at a time when humans are generally thought of as being essentially entirely in control of who they are and of how they think. The perspectives taken on these presumptions, these automatically assumed axioms, will change, and even just within the next decades of this century.

• Would it be ethical to “patch” a human mind, overriding and even fundamentally overwriting parts of it, for purposes of interest and value to others, or in support of a state or other organized entity?
• Most people would say no to that, viewing any attempt at that type or level of control as dystopian evil. The one generally accepted exception that we now face can be found in psychiatric intervention, using medication and restraint as needed, in addressing the challenges faced by people deemed to pose a direct threat to themselves or others as a result of mental illness. But authoritarian states can and do interpret what that means to serve their own purposes, so even that exception can be fraught with uncertainty and even overt controversy.
• Can you ethically “patch” an artificial intelligence agent that has reached a level of cognitive capacity that would lead it to be considered fully, generally intelligent, in the manner that people are when fully mentally capable? And if so:
• How and to what degree?
• And with what informed consent requirements and from whom?
• And for what purposes?
• And according to what criteria, for addressing those and similar gatekeeper questions, that would have to be satisfactorily met to justify potentially mind-altering change or intervention?
• And with those criteria and the standards that would have to be met for them determined by whom?
• And with what legal protections in place to at least limit if not entirely prevent abuse?

To connect this line of reasoning back to this series and its communications and information management focus: can it be ethical to “patch” a genuine artificial general intelligence agent, with as reasoning and intelligent a mind as a normal human’s, in order to externally control its use and sharing of information that a third party would seek to manage? Consider that question from the more strictly human agent context of when it will become possible to reshape or even fully delete specific memories or knowledge held in a mind, by a combination of psychopharmaceutical and other means.

• On one level I am writing here, in this series, of our current here-and-now and in an entirely human agent context. But I am also looking forward, in order to consider the disruptively new and different contexts and challenges that we will see emerging into our everyday reality, and even before the middle of this century for their opening stages.

And with that noted, and at least starting in our still-present here-and-now, I turn to consider the second topic point as offered above, with:

• A focus on the human-to-human communications and information sharing context.

And I will also address the third of those topic points, with its broader, more inclusive assumptions as to what it means to be a person. In anticipation of that discussion to come, I will raise and discuss the question of whether it would make any more sense to have separate information security management and communications rules and processes for humans and artificial general intelligence agents respectively, than it would to have such policy and practice distinctions when addressing full-time in-house employees and temporary, outside-sourced employees.

Meanwhile, you can find this and related postings and series at Business Strategy and Operations – 5, and also at Page 1, Page 2, Page 3 and Page 4 of that directory. And also see Social Networking and Business 2 and that directory’s Page 1 for related material.
