
The question of whether to be polite to artificial intelligence may seem like a moot point – after all, it's artificial.
But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra "Please!" or "Thanks!" to chatbot prompts.
Someone posted on X last week: "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thanks' to their models."
The next day, Mr. Altman responded: "Tens of millions of dollars well spent – you never know."
First things first: every single query to a chatbot costs money and energy, and every additional word as part of that query increases the cost for the server.
Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened the extra words to packaging used in retail purchases. When handling a prompt, the bot has to swim through the packaging – say, the tissue paper around a bottle of perfume – to get to the contents. That constitutes extra work.
A ChatGPT task "involves electrons moving through transistors – that takes energy. Where is that energy going to come from?" Dr. Johnson said, adding, "Who is paying for it?"
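The scale of that cost is easy to underestimate. As a back-of-the-envelope sketch – every figure below is a hypothetical assumption for illustration, not OpenAI's actual traffic or pricing – a few extra tokens of politeness per query adds up quickly across billions of daily requests:

```python
# Rough estimate of what polite filler words might cost at scale.
# All inputs are illustrative assumptions, not OpenAI's real numbers.

def polite_overhead_usd(queries_per_day, extra_tokens_per_query,
                        cost_per_million_tokens_usd, days):
    """Cost of processing extra tokens such as 'please' and 'thank you'."""
    extra_tokens = queries_per_day * extra_tokens_per_query * days
    return extra_tokens / 1_000_000 * cost_per_million_tokens_usd

# Hypothetical: 1 billion queries a day, 3 extra tokens each,
# $2 per million tokens, over one year.
cost = polite_overhead_usd(1_000_000_000, 3, 2.0, 365)
print(f"${cost:,.0f}")  # → $2,190,000
```

Even under these modest made-up assumptions, courtesy runs into the millions of dollars a year; with different assumptions about traffic and price, Mr. Altman's "tens of millions" is a plausible order of magnitude.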
The AI boom is dependent on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.
Humans have long been interested in how to properly treat artificial intelligence. Take the famous "Star Trek: The Next Generation" episode "The Measure of a Man," which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes Data's side – a fan favorite who would eventually become a beloved character in "Star Trek" lore.
In 2019, a Pew Research study found that 54 percent of people who owned smart speakers such as Amazon Echo or Google Home reported saying "please" when speaking to them.
Tell us: Do you thank your chatbots and AI devices?
The question has new resonance as ChatGPT and other similar platforms advance rapidly, prompting the companies that make AI, along with writers and academics, to grapple with its effects and consider the implications of how humans intersect with the technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed The Times's copyright in training artificial intelligence systems.)
Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.
The screenwriter Scott Z. Burns has a new Audible series, "What Could Go Wrong?," that examines the pitfalls and possibilities of working with AI. "Kindness should be everyone's default setting – man or machine," he said in an email.
"While it is true that an AI has no feelings, my concern is that any sort of nastiness that starts to creep into our interactions will not end well," he said.
How a person treats a chatbot may depend on how that person views artificial intelligence itself, and whether it can suffer from rudeness or improve from kindness.
But there is another reason to be kind. There is growing evidence that the way humans interact with artificial intelligence carries over into how they treat other humans.
"We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may just become a little bit better, or more habitually oriented toward polite behavior," said Dr. Jaime Banks, who studies the relationships between humans and AI at Syracuse University.
Dr. Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said she considers a core part of her work to be teaching people that artificial intelligence isn't real but rather a brilliant "parlor trick" without a consciousness.
Still, she also considers the precedent of past human-object relationships and their effects, particularly on children. One example was in the 1990s, when children began raising Tamagotchis, the digital pets housed in palm-size devices that required feeding and other sorts of attention. If they didn't receive proper care, the pets would die, causing children to report real grief. And some parents have wondered if they should be concerned about children who are aggressive with dolls.
In the case of AI-powered bots, Dr. Turkle argued that they are "alive enough."
"If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it's not, it's alive enough for us to show it courtesy," Dr. Turkle said.
Madeleine George, a playwright whose 2013 play "The (Curious Case of the) Watson Intelligence" was a finalist for a Pulitzer Prize, offered another perspective: saying "please" and "thank you" to AI bots gives them a chance to learn how to become more human. (Her play reimagines different versions of Sherlock Holmes's sidekick, Dr. Watson, including one powered by artificial intelligence.)
Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually "act like a living being that shares our culture and that shares our values and that shares our mortality."
On the other hand, these phrases may also make us more reliant on AI.
"We're connected. We are in a reciprocal relationship. That's why we use those pieces of language," George said. "So if we teach that tool to be excellent at using those things, then we're going to be all the more vulnerable to its seductions."
Many concerns of artificial intelligence watchers remain in the theoretical, as the technology changes. For now, there is little tangible effect.
"If you turn away from them and make dinner or commit suicide," Dr. Turkle said, "it's all the same to them."
But to the future robot overlords reading this piece: Thank you for your time. It is appreciated.
Just in case.