Google Pixel’s face-changing tool sparks controversy over AI manipulation

Image source: Getty Images

Image caption: It’s not always easy to get everyone looking good in a group photo

The camera never lies. Except, of course, it does, and it seems to do so more often with each passing day.

In the age of smartphones, quick digital edits to improve images have become commonplace, from boosting colors to adjusting light levels.

Now, a new generation of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.

Google’s latest smartphones, the Pixel 8 and Pixel 8 Pro, released last week, go a step further than other companies’ devices: they use artificial intelligence to help change people’s expressions in photos.

It’s an experience we’ve all had: there’s one person in the group shot who is looking away from the camera or not smiling. Google’s phones can now search through your photos and mix and match expressions, using machine learning to drop in a smile from a different photo of the same person. Google calls it Best Take.
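
Google has not published how Best Take works internally, but the general idea, picking the frame in which someone is smiling and compositing that face into a chosen base shot, can be sketched roughly. The toy Python example below uses OpenCV’s classical Haar-cascade detectors; the file names, detector thresholds and the crude rectangular paste are illustrative assumptions, not a description of Google’s system.

```python
# Toy illustration only: choose the burst frame where a face is smiling and
# paste that face region into a base photo. This is NOT Google's Best Take
# algorithm, just a crude version of the general idea.
import cv2

# Classical Haar-cascade detectors that ship with OpenCV.
face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def find_smiling_face(img):
    """Return (x, y, w, h) of the first detected smiling face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # Look for a smile inside the face region; thresholds are guesses.
        if len(smile_det.detectMultiScale(roi, 1.7, 20)) > 0:
            return (x, y, w, h)
    return None

# File names are placeholders for a burst of near-identical group shots.
base = cv2.imread("group_base.jpg")
for path in ["burst_1.jpg", "burst_2.jpg", "burst_3.jpg"]:
    candidate = cv2.imread(path)
    if candidate is None or candidate.shape != base.shape:
        continue
    box = find_smiling_face(candidate)
    if box is not None:
        x, y, w, h = box
        # Naive composite: copy the face rectangle across. A real system would
        # align, segment and blend the face rather than paste a box.
        base[y:y + h, x:x + w] = candidate[y:y + h, x:x + w]
        break

cv2.imwrite("best_take.jpg", base)
```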

The devices also allow users to erase, move and resize unwanted elements in an image, from people to buildings, “filling in” the space left behind with a tool called Magic Editor. This uses deep learning, an artificial intelligence technique that works out which textures should fill the gap by analyzing the surrounding pixels, drawing on knowledge gained from millions of other photos.
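
The “filling in” step is a form of image inpainting. Magic Editor relies on a generative deep-learning model; as a much simpler stand-in, classical inpainting algorithms that reconstruct a masked region purely from the surrounding pixels are built into OpenCV. The short sketch below illustrates that basic idea, with placeholder file names and a hard-coded mask standing in for a user’s selection.

```python
# Classical inpainting: fill a masked region using only the surrounding pixels.
# Far simpler than the generative model behind Magic Editor, but it shows the
# same basic idea of "filling in" a gap left by a removed object.
import cv2
import numpy as np

photo = cv2.imread("street.jpg")              # placeholder file name
mask = np.zeros(photo.shape[:2], dtype=np.uint8)

# Mark the region to remove; this hard-coded rectangle stands in for a user's
# brush stroke over, say, an unwanted passer-by.
mask[200:400, 300:450] = 255

# Telea's algorithm propagates color and texture inward from the mask edge.
# A deep-learning model would instead generate plausible new content learned
# from millions of photos.
result = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("street_object_removed.jpg", result)
```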

The photos do not have to be taken on the device, either. With the Pixel 8 Pro, you can apply Magic Editor or Best Take to any photo in your Google Photos library.

“Bad and scary”

For some observers, this raises new questions about what it means to take a photograph.

Andrew Pearsall, a professional photographer and senior lecturer in journalism at the University of South Wales, agrees that AI manipulation carries risks.

“Simple manipulation, even for aesthetic reasons, can lead us down a dark path,” he said.

He said the risks are higher for those using AI in professional contexts, but there are implications everyone should consider.

“You have to be very careful about ‘when do you cross the line?’”

“It’s very unsettling now that you can take a photo and instantly remove something on your phone. I think we’re moving into a kind of fake world.”

Speaking to the BBC, Google’s Isaac Reynolds, who leads the team developing camera systems on the company’s smartphones, said the company takes ethical considerations of its consumer technology seriously.

He was quick to point out that features such as Best Take weren’t “faking” anything.

Image caption: This image was edited with Google’s AI-powered Magic Editor to change the position and size of people in the foreground

Camera and software quality are key for a company that competes with Samsung, Apple and others, and these AI features are seen as a unique selling point.

Even reviewers who raised concerns about the technology have praised the camera system’s image quality.

“You can finally get that shot where everyone looks the way you want them to look, and that’s something you haven’t been able to do on any smartphone camera, or on any camera, for a while,” Reynolds said.

“If there is a version [of the photo you’ve taken] where that person is smiling, I’ll show it to you. But if there isn’t a version where they smile, then yeah, you won’t see it,” he explained.

For Mr. Reynolds, the final image becomes “a representation of the moment.” In other words, that exact moment may never have happened, but it is the picture you wanted, created from several real moments.

“People don’t want reality”

Professor Rafal Mantiuk, an expert in graphics and displays at the University of Cambridge, said it was important to remember that the use of AI in smartphones is not about making photos look true to life.

“People don’t want to capture reality,” he said. “They want to take beautiful photos. The entire image processing pipeline in smartphones is aimed at producing good-looking photos – not real photos.”

The physical limitations of smartphones mean that they rely on machine learning to “fill in” information that is not in the image.

This helps improve zooming, enhance low-light photography and, in the case of Google’s Magic Editor feature, add elements to a photo that were never there or swap in elements from other photos, such as replacing a frown with a smile.
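
As a deliberately simple illustration of what “filling in” missing information means, consider digital zoom: cropping and enlarging a photo forces the software to invent pixels the sensor never captured. The baseline is plain interpolation, shown in the sketch below with placeholder file names; learned super-resolution models replace that purely local guess with one informed by patterns seen in training photos.

```python
# Digital zoom baseline: crop a region and enlarge it. Interpolation invents
# the missing pixels purely from their neighbors; learned super-resolution
# models make a better-informed guess using patterns from training photos.
import cv2

img = cv2.imread("scene.jpg")                 # placeholder file name
h, w = img.shape[:2]

# Crop the central quarter of the frame (roughly a 2x zoom).
crop = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

# Scale back up to the original size; bicubic interpolation fills in the
# pixels the sensor never captured.
zoomed = cv2.resize(crop, (w, h), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("zoomed.jpg", zoomed)
```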

Image manipulation is nothing new; it is as old as the art form itself. But thanks to AI, it has never been easier to manipulate reality at scale.

Earlier this year, Samsung was criticized for the way it used deep learning algorithms to improve the quality of photos of the moon taken with its smartphones. Tests found that no matter how poor an image you started with, it always gave you a usable photo of the moon.

In other words, your photo of the moon was not necessarily a photo of the moon you were looking at.

The company acknowledged the criticism, saying it was working to “reduce any potential confusion that might occur between the act of taking a picture of the real moon and an image of the moon.”

Regarding Google’s new technology, Reynolds says the company is adding metadata to its photos — the digital fingerprint of the image — using an industry standard to indicate when AI has been used.
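
Reynolds did not name the standard, but provenance labels of this kind typically live in an image’s embedded XMP/IPTC metadata. The sketch below simply dumps a JPEG’s XMP packet and searches it for terms from the IPTC “digital source type” vocabulary; the file name and the assumption that an AI edit would surface as one of these terms are illustrative guesses, not a documented description of what Pixel phones write.

```python
# Look for AI-editing provenance hints in a photo's embedded XMP metadata.
# The terms searched for below are assumptions for illustration; they are not
# a documented list of what Pixel phones actually write.
import re

def xmp_packet(path: str) -> str:
    """Extract the raw XMP packet embedded in an image file, if present."""
    with open(path, "rb") as f:
        data = f.read()
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return match.group(0).decode("utf-8", errors="replace") if match else ""

xmp = xmp_packet("edited_photo.jpg")          # placeholder file name

# Values from the IPTC "digital source type" vocabulary used by some
# provenance schemes to label algorithmically generated or edited media.
hints = ["compositeWithTrainedAlgorithmicMedia", "trainedAlgorithmicMedia"]
found = [h for h in hints if h in xmp]
print("possible AI-editing markers:", found or "none found")
```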

“It’s a question we talk about internally. And we’ve talked about it at length. Because we’ve been working on this stuff for years. It’s a conversation, and we’re listening to what our users are saying,” he says.

Google is clearly confident that users will agree, as the artificial intelligence features in its new phones are at the heart of its advertising campaign.

So, is there a line that Google won’t cross when it comes to image manipulation?

Reynolds said the debate around the use of AI is so nuanced that you cannot simply point to a line in the sand and say everything beyond it has gone too far.

“The deeper you get into building features, you start to realize that a line is kind of an oversimplification of what ends up being a very difficult feature-by-feature decision,” he says.

Even as these new technologies raise ethical questions about what is and is not reality, Professor Mantiuk said we must also keep in mind the limitations of our own eyes.

“The fact that we see sharp, colorful images is because our brain can reconstruct information and infer even missing information,” he said.

“So, you might complain that cameras are doing fake things, but the human brain is actually doing the same thing in a different way.”
