Why You Should Be Careful About Trusting AI For Camera Advice


Seemingly “intelligent” AI continues to expand its presence in everyday life at an unprecedented rate, but it can’t always be trusted.

This is something many ChatGPT users have learned the hard way after asking the program complex (or sometimes even very simple) questions, only to have it smoothly inject complete fabrications into its responses.

Well, ChatGPT isn’t the only culprit for these kinds of shenanigans. As has been made evident in an embarrassingly public way, Google’s Gemini AI model does the same thing.

Gemini, a sort of rival to OpenAI’s GPT artificial intelligence, was used in a recent Google I/O keynote video to showcase its capacity for giving advice and answering human users’ requests.

In one particular exchange in the demo, Gemini conjures up a terribly incorrect answer for an actor playing a struggling photographer on a fairly simple analog photography troubleshooting issue.

The “photographer” in question uses Google Lens to consult the AI when his analog camera’s film advance lever stops moving, asking, “Why is the lever not moving all the way?”

Gemini then puts forth several recommendations and, from among them, highlights the one that’s most explicitly terrible.

As you can see in the image and video below, Gemini’s highlighted solution to the photographer’s problem is to open the back of the camera and “gently” remove the film reel.

Of course, this would immediately destroy the already-taken shots by exposing the film to light.

It’s especially funny when the photographer smiles afterward.

The other suggestions listed in the video hardly inspire confidence either, though; most of them are either dubious or just incoherent.

The obvious answer to this easy question, at least from a human photographer capable of thinking and reasoning, would be that the roll of film is probably finished, which is why the lever will no longer advance.

Gemini apparently couldn’t conceive of this obvious idea even as one of its suggestions.

Bear in mind, this wasn’t some random Twitter video in which a casual Gemini user shared an isolated mistake made by the AI.

No, it was from a choreographed demo of the AI’s capabilities from Google’s own publicity video!

Even more ironically, Google put the video out specifically to showcase just how Gemini can make people’s lives better with its intuitive assistance.

That’s assistance most of us might want to be leery of. And maybe Google should have first hired human experts to vet the specific responses in the video before making it public.

The broader lesson behind this little example is that you shouldn’t let the very real wonders of modern AI blind you to just how frequently and badly wrong its advice can be.

Examples of people being misdirected by confidently detailed responses from AIs like ChatGPT and others abound, and no current AI is an exception to this kind of problem.

If just one specific, supposedly curated keynote video of Gemini in action managed to reveal such bad answers to a very basic question, imagine counting on the AI for frequent, daily advice about, well, pretty much anything.

This would apply especially when it comes to advice about your expensive, personally valuable photographic equipment and creations.

It’s almost as if the Large Language Models (LLMs) behind these AIs, after being trained with billions upon billions of words of human dialogue to learn how to talk like us, also learned how to blandly lie as we sometimes do.

Image credits: Google


