Amerika

Furthest Right

Solipsism Machine

Artificial intelligence is being presented to us as a force that will take over white-collar jobs. More likely, that is a sleight-of-hand; AI will be used to make robots that can learn a task by observing workers or videos of workers. It will dominate manufacturing and agriculture.

When you set up the factory of the future, you will have robots, some humanoid and some that are big boxes with arms. You will show them the same training videos you used to show human workers, and they will make cars or coffee makers based on those instructions.

Even better, they will learn from the statistical prevalence of cause-effect correlations, and as such will eventually become expert workers who never need a day off. Most of the human workforce is already extraneous, but soon another big chunk will be headed to the welfare lines.

Perhaps the text-generating LLM/AI programs will continue as well, but these are little more than solipsism machines. We feed them a whole bunch of human input, written by humans to impress other humans and conform to social expectations, and they spit out the same.

When we ask an AI for an answer, it is giving us an average of all of its input, which means that we are having it read back to us what we already think. It is a great way to find an “objective” version of what our wishful thinking has us believe is real.

Funnily enough, AIs can do the same thing to themselves: hook the output of one solipsism machine up to the input of another and you eventually get a feedback loop in which AI models collapse when trained on recursively generated data, turning into gibberish:

Model collapse refers to a degenerative learning process in which models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality.
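This forgetting of improbable events can be sketched in a few lines. The toy simulation below (a hypothetical illustration, not any real model's training pipeline) repeatedly fits an empirical distribution to samples drawn from the previous generation's output; rare categories tend to draw zero samples at some point, after which no later generation can ever recover them:

```python
import random

random.seed(0)

# Toy "language" of token categories: mostly-common tokens plus a rare one.
# (Hypothetical distribution chosen for illustration only.)
probs = {"common": 0.90, "uncommon": 0.09, "rare": 0.01}

def retrain(probs, n=500):
    """Draw n samples from probs and return the empirical distribution,
    standing in for one generation trained on the previous one's output."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    draws = random.choices(tokens, weights=weights, k=n)
    return {t: draws.count(t) / n for t in tokens}

# Each "generation" sees only the previous generation's output.
for generation in range(30):
    probs = retrain(probs)

# Improbable events tend toward zero: once a category draws zero
# samples in some generation, it is gone from every later one.
print(probs)
```

The mechanism is the absorbing state: an estimated probability of zero stays zero forever, so the tails of the distribution are progressively discarded, which is the statistical shape of the "poisoned with its own projection of reality" quoted above.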

The same thing happens to humans. We remember the methods we use for things, like irrigation or constitutional law, but we forget why we do them because the nonstandard cases are discarded from our memory. Soon we are going through the motions.

AI does something similar because it focuses on the statistical optimum of all its existing input, which means it discards anything but the thesis in order to spit out a coherent statement. AIs are also solipsistic or unaware of anything but themselves:

The researchers discovered that GPT-4 excelled in games demanding logical reasoning — particularly when prioritizing its own interests. However, it struggled with tasks that required teamwork and coordination, often falling short in those areas.

“In some cases, the AI seemed almost too rational for its own good,” said Dr. Eric Schulz, senior author of the study. “It could spot a threat or a selfish move instantly and respond with retaliation, but it struggled to see the bigger picture of trust, cooperation, and compromise.”

Rationality is not good; rationality is deduction from previous assumptions, a feedback loop like the GIGO gibberish spew of an AI fed AI output. Sensible thinkers are inductive, or find a pattern from the whole of the data, and realize deductions are only as good as the data model used.

In the same way, we have spooked ourselves into climate change, COVID-19 panic, believing all races have the same brain size, and other fictions because we are repeating what we have been told and rationalizing from that instead of looking beyond the limited data set to see what is real.

Humanity itself is a solipsism machine, and the AIs we have created are simply taking that one step further.
