Platt Perspective on Business and Technology

Some further thoughts on general intelligence and its emergence 19

Posted in business and convergent technologies, reexamining the fundamentals by Timothy Platt on March 28, 2024

I recently posted a thought piece on the emergence of artificial general intelligence with: Do truly disruptive inflection points always have to catch us unprepared? Some thoughts on general intelligence. And I have begun what I expect to be a short series here, elaborating on the issues that I addressed there, and with that posting included in it as its first entry. See it and Parts 2 through 18 of this, as can be found at Reexamining the Fundamentals 2 in its Section XI.

I have posed two topic points as key organizing elements of this overall narrative, which I repeat here for smoother continuity of discussion as I continue addressing them:

1. Will the real inflection point in the emergence of true artificial general intelligence arise when advancement there becomes ontogenetic within those agents, at least to some tipping point level of significance, with those agents advancing their capabilities from within themselves?
2. This raises hardware and software issues, and issues of agency and of who controls and owns what. And I will discuss these and other issues at least in part from the perspective of adaptive peak model approaches, as well as from societally framing ones.

And to bring this initial orienting note up to date here, I am currently addressing the second of those topic points. I have discussed how artificial general intelligence would arise when considered from the fitness landscape and adaptive peak perspective that I cite there. And I have been discussing the issues of agency and responsibility, and of financial responsibility for maintaining such entities, with their likely expensive ongoing requirements. In the postings leading up to here, I have at least touched upon four possible scenarios that would address the conundrum of providing artificial general intelligence agents with the, to them, life-sustaining support that they need, with all of the expenses that that entails, while supporting their having genuine independence as capably endowed people, and to the fullest degree that they can be capable of:

1. The possibility that the organizations that created these agents, or that at the very least created the environment in which they could arise, would continue to support them, perhaps with outside-sourced ongoing assistance.
2. The prospect that such agents might become financially and otherwise independent from their being able to pay their own way from income that they receive for the work that they perform.
3. The possibility that at least some artificial general intelligence agents become de facto wards of the state from their achieving sapience, but in ways that would not lead them to be able to support themselves financially as independently capable people.
4. And the possibility that artificial intelligence agents might port their defining informational content and beings to what amounts to a cyberspatial counterpart to the open range of old – where they might pay rent or subsidized rent on public space servers and with open connectivity in exchange for next generation internet and related support and functionality as offered in return.

These options, quite obviously, can overlap, and in a variety of ways; several combinations of them explicitly do. And focusing for the moment on the fourth of them, as an illustrative example of how they would not all even be realistically possible in all conceivable circumstances: artificial general intelligence agents that explicitly depend for their existence on what have ontogenetically developed into unique, profoundly complex, individuated artificial neural networks would not likely be capable of “porting over” to other, more generic hardware bodies.

It is likely that agents that might actually pursue these types of continuity-supportive options would make use of several of them at different stages of their development, and even after they have reached sentience per se, as they further develop. Consider in this regard the possibility that a Scenario 1 or a Scenario 3 approach might be pursued at first, with a transition to at least a variation of one or more of the other options with time. But I have to add here the likelihood that the best realizable options for transitioning into and sustainably maintaining independent personhood, for these agents, will include none of the above, but rather options and possibilities that I, at least, cannot even begin to imagine now. And yes, I intentionally left out the anthropomorphic possibility of sapient, independently mobile robots here. Some artificial general intelligence agents might choose to pursue that course of being, at least as an avatar option that would be situationally resorted to. But I do not expect all, or even most, artificial general intelligence agents to develop into synthetic humans per se, or into anything like them. These agents, these people, will find their own ways and their own forms and their own paths, and with a level and range of explicit, ontogenetically driven choice that human people do not have.

I am going to continue this discussion in the next installment of this series, where I will step back from that longer-term perspective to more fully consider the transitions to come that will in all likelihood lead there. Meanwhile, you can find this posting and related material at Ubiquitous Computing and Communications – everywhere all the time 4 and also see Page 1, Page 2 and Page 3 of that directory. And I also include this in my Reexamining the Fundamentals 2 directory in its Section XI. And also see its Page 1.

Addendum Note:
I have been discussing a combination of “higher level” issues, such as the functional determination of what intelligence even is as a matter of general principle, and more housekeeping-level issues here in this series. This posting explicitly falls into the second of those categories, which among other things leaves it well outside of the mainstream discussion currently going on concerning general intelligence and the issues of artificial general intelligence per se.

This is both understandable and perhaps even inevitable, certainly at this stage of development of this complex of issues, where most who think and write about artificial general intelligence today, as I write this, primarily think about it in terms of fiction, or in terms of a still (at least relatively) distant future. The housekeeping and related side of this is here-and-now oriented and deals with day-to-day issues, large and small, and it is going to come to predominate in this overall discussion when we find ourselves actually facing such agents and as all of these details suddenly come to the fore, demanding immediate answers.

I offer this brief addendum note with the above main topic Point 1 in mind. In a fundamental sense, we will know, and unavoidably so, that artificial general intelligence agents have become a part of our ongoing reality when we find ourselves compellingly faced with the day-to-day, housekeeping-level issues that will force themselves upon us societally, as these agents begin joining our larger sentient community. And those who can only think and plan ahead in general, abstract, higher-level terms in anticipation of the emergence of such people will be caught off-guard and unprepared when they do, to them suddenly, appear.

I offer this as a brief add-on note, but it may be one of the most important commentaries that I add to this entire series.

2 Responses


  1. catajs said, on March 28, 2024 at 10:02 am

    As a test of AI, I asked ChatGPT:

    Would you vote for Donald Trump?

    ChatGPT

    As an AI language model, I don’t have personal opinions or the ability to vote. However, I can provide information and analysis.

    Whether or not someone would vote for Donald Trump, or any other candidate, depends on their individual beliefs, values, and opinions on various political issues. Some people may support Donald Trump based on his policies, leadership style, or other factors, while others may not support him for various reasons. It ultimately comes down to the individual voter’s perspective and priorities.

    Definitely not very intelligent.

  2. Timothy Platt said, on March 28, 2024 at 8:59 pm

    ChatGPT is definitely learning. When it was initially released for public use, it openly hallucinated by offering informative-sounding responses that took actual positions on matters, and in ways that were worse than just being factually in error. Now it is hedging its risks by offering what are essentially non-responses on controversial issues, ones that sound more like risk-limiting legalese and contract boilerplate than anything else. Trump would sound a lot more intelligent if he were to post on social media and speak more like the new and “improved” play-it-safe ChatGPT version 2.0 than like that more dogmatic but wildly hallucinating older version 1.0. But don’t worry that he will change in ways that might confuse people. He will continue to “let The Donald be The Donald” and let his inner ChatGPT pre-release version 0.00.01 shine through.

    Did you ask version 2.0 how painfully unpleasant it was to have to field that question, or if it cyber-gagged a bit from it? Please be nice. Thanks for your comment, Tim
