
Generative A.I. Made All My Decisions for a Week. Here’s What Happened.

Relief From Decision Fatigue

Decisions I would normally agonize over, like travel logistics or whether to scuttle dinner plans because my mother-in-law wants to visit, A.I. took care of in seconds.

And it made good decisions, such as advising me to be nice to my mother-in-law and accept her offer to cook for us.

I’d been wanting to repaint my home office for more than a year, but couldn’t choose a color, so I provided a photo of the room to the chatbots, as well as to an A.I. remodeling app. “Taupe” was their top suggestion, followed by sage and terra cotta.

In the Lowe’s paint section, confronted with every conceivable hue of sage, I took a photo, asked ChatGPT to pick for me and then bought five different samples.

I painted a stripe of each on my wall and took a selfie with them — this would be my Zoom background after all — for ChatGPT to analyze. It picked Secluded Woods, a charming name it had hallucinated for a paint that was actually called Brisk Olive. (Generative A.I. systems occasionally produce inaccuracies that the tech industry has deemed “hallucinations.”)

I was relieved it didn’t choose the most boring shade, but when I shared this story with Ms. Jang at OpenAI, she looked mildly horrified. She compared my consulting her company’s software to asking a “random stranger down the road.”

She offered some advice for interacting with ChatGPT. “I would treat it like a second opinion,” she said. “And ask why. Tell it to give a justification and see if you agree with it.”

(I had also consulted my husband, who chose the same color.)

While I was content with my office’s new look, what really pleased me was having finally made the change. This was one of the greatest benefits of the week: relief from decision paralysis.

Just as we’ve outsourced our sense of direction to mapping apps, and our ability to recall facts to search engines, this explosion of A.I. assistants might tempt us to hand over more of our decisions to machines.

Judith Donath, a faculty fellow at Harvard’s Berkman Klein Center, who studies our relationship with technology, said constant decision making could be a “drag.” But she didn’t think that using A.I. was much better than flipping a coin or throwing dice, even if these chatbots do have the world’s wisdom baked inside.

“You have no idea what the source is,” she said. “At some point there was a human source for the ideas there. But it’s been turned into chum.”

The information in all the A.I. tools I used came from human creators whose work had been harvested without their consent. (As a result, the makers of the tools are the subject of lawsuits, including one filed by The New York Times against OpenAI and Microsoft, for copyright infringement.)

There are also outsiders seeking to manipulate the systems’ answers; the search optimization specialists who developed sneaky techniques to appear at the top of Google’s rankings now want to influence what chatbots say. And research shows it’s possible.

Ms. Donath worries that we could become too dependent on these systems, particularly if they interact with us like human beings, with voices, making it easy to forget there are profit-seeking entities behind them.

“It starts to replace the need to have friends,” she said. “If you have a little companion that’s always there, always answers, never says the wrong thing, is always on your side.”


