Google Will Begin Labeling AI-Generated Images In Search

Google said in a blog post today that it will begin labeling AI-generated and AI-edited image search results later this year. Digital Trends reports: The company will flag such content through the "About this image" window, and the labels will be applied to Search, Google Lens, and Android's Circle to Search features. Google is also applying the technology to its ad services and is considering adding a similar flag to YouTube videos, but will "have more updates on that later in the year," per the announcement post.

Google will rely on Coalition for Content Provenance and Authenticity (C2PA) metadata to identify AI-generated images. C2PA is an industry group that Google joined as a steering committee member earlier this year. The C2PA metadata will be used to track an image's provenance, identifying when and where the image was created, as well as the equipment and software used in its generation.
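For context on the mechanism: C2PA manifests are typically embedded in JPEG files as JUMBF boxes carried in APP11 segments. Below is a rough, standard-library-only Python sketch that merely spots those segments; it is not a real C2PA verifier, which would also have to parse the manifest and validate its cryptographic signatures, and the file name is only a placeholder.

    # Crude sketch, not a real C2PA verifier: scan a JPEG for APP11 (0xFFEB)
    # segments, where C2PA manifests are typically embedded as JUMBF boxes.
    import struct
    import sys

    def find_app11_segments(path):
        """Yield (offset, length) of APP11 segments in a JPEG file."""
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
            return
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:
                break                           # lost sync; stop scanning
            marker = data[i + 1]
            if marker in (0xD8, 0xD9, 0x01) or 0xD0 <= marker <= 0xD7:
                i += 2                          # standalone markers, no payload
                continue
            seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
            if marker == 0xEB:                  # APP11: JUMBF / C2PA container
                yield i, seg_len
            if marker == 0xDA:                  # SOS: entropy-coded data follows
                break
            i += 2 + seg_len

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "image.jpg"
        hits = list(find_app11_segments(path))
        print(f"APP11 segments found: {len(hits)}")
        for offset, length in hits:
            print(f"  offset={offset} length={length}")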


Comments Filter:
  • They were always going to try to censor AI-generated video and stills because they can be used to make the government and elites look bad. They cannot stand that and clearly will censor the daylights out of anyone who tries. So, can we just skip to the part where AI image generators throw in whatever breaks the filter-bots and start that spy-vs-spy game? I'm guessing the porn hounds will win. My money is on them, not the shit-eating government censors.

    So, let's just skip to the part where they'd like to censor all the AI content, but cannot. This part of the movie is boring.
    • by Fly Swatter ( 30498 ) on Tuesday September 17, 2024 @09:42PM (#64794801) Homepage
      You are confusing labeling with censorship.

      If you've got it, flaunt it! It works for boobs. Although the newer fake boobs are almost undetectable now at first sight, maybe they need labels too!
      • Labeling is the first step. You generally need to classify something as "bad" before you make laws and policies against it and take action.
        • It might be a long, long wait for Google to exclude AI content by default.

          Think of 'active congress members' who employ 'a team of millions of AI processors' to generate 'a positive social message' about the 'beneficial work' the member of congress is doing for 'the children, sea lion pups, helpless women, threatened species, vanishing cultures, ...'

          • by Rei ( 128717 )

            It depends on the context. For example, if it's a piece of art, all I care about is how much it serves the purpose of art (causing an emotional or thought-provoking reaction), not how it was made. The only complaint therein is bad AI images or uncreative AI images (especially how like 90% of them use the exact same "Midjourney Default" styling because the creator doesn't have a single creative bone in their body :P).

            By contrast, it gets really annoying when searching for actual specific things for an educat

        • Nudity and porn get labeled and hidden by default, but they are still there; AI content will end up the same way.
          • It's all good. Google is pretty irrelevant anyway. If they keep sucking hind tit when it comes to search results, their filtering and taxonomies won't save them. They are headed for Altavista/Webcrawler search-heaven.
    • So, let's just skip to the part where they'd like to censor all the AI content, but cannot. This part of the movie is boring.

      How about we first worry about how Google likely won't filter its own AI-generated content? Content that may be prioritized above others', or even worse, made available to the highest bidder. The actors in this movie? We know how they got rich.

      • Good point. I'd suggest starting by boycotting Google. They aren't a good search engine anymore anyway. I'd say do yourself a favor and find some new engines.
    • by Askmum ( 1038780 )
      You are so off the mark with your sentiment. It is good to label AI-generated/fake images as such. In most cases AI is still not good enough to fool all the people, but less discerning people are already fooled by AI and will shout "I told you so!" when someone makes a fake image that fits their narrative. And that is not a good thing.
      Also, this is not censorship.
      • No, what's good is for folks to be skeptical enough that they don't automatically assume anything they see is real. We don't need nannies and labelers (which is the first step on the road to censoring them). I'm expecting people to embrace a buyer-beware mentality and be media savvy, not for Big Brother to come along and spoonfeed them labels and warnings.
        • by Askmum ( 1038780 )
          Unfortunately, we do not live in such a utopian world. I hope you do realise that the majority of this world is not media savvy enough to spot AI fakes immediately, or at all. And given sufficient development time for AI, the same may apply to you and me.
  • The Internet will be divided into "safe" and "unsafe" information, and anything Google dislikes will be sanctioned fairly or unfairly. They don't care about the truth. They just want control.

    "Unsafe" information will be falsely and maliciously labeled in the browser (to harm competitors and people who don't vote for the right candidate) and will disenfranchise millions.

    Within five years the Internet will be destroyed.

    • Within five years the Internet will be destroyed.

      (The Internet) "Man, if I had a bit(coin) for every time I heard this shit..."

      With VPNs and onion routing fracturing access to online data, what IS the "internet" anyway, other than all the shit left out in the proverbial streets to be picked through? Facebook will eventually become the world's largest online cemetery, with the dead profiles exceeding the living. Wonder when we should stop calling it "social media".

    • Conservatives and trumpers get so butthurt over this stuff. How dare Google label AI content!

    • The Internet's already been destroyed, if you mean the internet where legitimate content outweighed the malicious. It's been that way for at least a decade, pushing on two.

      What this is, is another PR campaign from a tech giant to parade around how "responsible" they are. Once they figure out they can't accurately label the content, and nobody pays attention to the labels anyway, the feature will quietly disappear. You'll claim victory, the internet will continue to be useless, and you'll tell us about how great Trump 2032 will

  • Websites that accept user images almost always strip out the metadata. They'll have nothing to work with.
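That point is easy to demonstrate: a routine server-side resize-and-re-encode drops embedded metadata unless the site goes out of its way to copy it. A small Pillow sketch follows; the file names are placeholders and the behavior assumes Pillow's default JPEG writer.

    # Illustrates how a typical re-encode discards metadata: Pillow writes a
    # fresh JPEG and does not carry EXIF/XMP (or C2PA APP11 segments) over
    # unless they are explicitly passed to save().
    from PIL import Image

    with Image.open("upload.jpg") as im:
        print("incoming EXIF tags:", len(im.getexif()))
        im.thumbnail((1024, 1024))           # the usual "make it web-sized" step
        im.save("served.jpg", quality=85)    # no exif= argument -> metadata gone

    with Image.open("served.jpg") as out:
        print("surviving EXIF tags:", len(out.getexif()))   # typically 0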

  • Glue cheese to pizza? AI generated.
    Run with scissors for better health? AI generated.
    Obnoxious wall of pointless text displayed in 30pt font at the top of your Google search? AI generated.
    On the first page of search results? AI generated (and advertisement).

    In short: click Next Page to maybe get relevant search results you might actually want.

    • by Rei ( 128717 )

      I don't know how to break this to you, but the examples you cite were written by humans. For example, here's the search result [archive.org] (since deleted) about glue on pizza. Google's "AI Summaries" are pure RAG with a very lightweight specialized summary model that simply sums up what's in the top search results, with no attempt to assess their validity. Get a stupid Reddit page or whatnot in your top search results and it'll end up in the AI summary.

      IMHO, with "don't trust what you see on the internet" being a we
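To make the point in the comment above concrete, here is a toy sketch of that "pure RAG" pattern: take whatever ranks highly, summarize it verbatim, and never ask whether the source is credible. The fetch_top_snippets() helper and the naive extractive "summarizer" are invented stand-ins, not Google's actual pipeline.

    # Toy "retrieve then summarize" loop with no source-quality check anywhere.
    def fetch_top_snippets(query: str) -> list[str]:
        # Stand-in for a search backend; returns whatever happens to rank,
        # including a joke Reddit post about putting glue on pizza.
        return [
            "You can add about 1/8 cup of non-toxic glue to the sauce.",
            "Cheese slides off pizza when the sauce is too watery.",
        ]

    def naive_summary(query: str, k: int = 2) -> str:
        snippets = fetch_top_snippets(query)[:k]
        # Nothing here assesses validity; junk in, junk out.
        return " ".join(snippets)

    print(naive_summary("why does cheese slide off my pizza"))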

  • by PPH ( 736903 )

    Just count the fingers.

  • by WaffleMonster ( 969671 ) on Tuesday September 17, 2024 @10:31PM (#64794901)

    There is nothing that makes AI-generated content inherently any more or less truthful than CG content, Photoshop, hand drawings, or deceptive photography. The modality of an image's creation has no relationship to the information the image conveys.

    Not only does this create a false/meaningless indicator that runs a real risk of being relied upon as a marker of truth, but whatever discriminator is employed to tell the difference will be gamed.

    • There is nothing that makes AI-generated content inherently any more or less truthful than CG content, Photoshop, hand drawings, or deceptive photography. The modality of an image's creation has no relationship to the information the image conveys.

      That depends heavily on what the image is claiming to convey. If the image appears to convey "This is a photo of a thing that happened", then it being AI-generated rather than a photograph (even if edited somewhat) is a very important distinction. For allegedly journalistic photos or videos, what we really want is a complete description of how the image was created, starting with what type of film/sensor it was captured with, and covering all of the editing steps leading to the final image. But "this was
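For what it's worth, the "complete description" the comment above asks for is roughly what a provenance chain would record. A toy sketch follows; the field names are made up for illustration, and a real system would rely on signed C2PA assertions rather than a self-reported record.

    # Toy provenance trail: capture device plus every editing step, in order.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EditStep:
        tool: str       # e.g. "Photoshop 25.0" or a generative model name
        action: str     # e.g. "crop", "color grade", "generative fill"

    @dataclass
    class ProvenanceRecord:
        capture_device: str                    # "none" for fully synthetic images
        captured_at: Optional[str] = None
        edits: List[EditStep] = field(default_factory=list)

        def is_ai_generated(self) -> bool:
            return self.capture_device == "none" or any(
                "generative" in step.action for step in self.edits
            )

    record = ProvenanceRecord(
        capture_device="none",
        edits=[EditStep(tool="image model", action="generative text-to-image")],
    )
    print(record.is_ai_generated())            # True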

  • Why? So they filter it? Block it? Track it? Does a search engine really need to know how content was made? I think not. All these roads lead to government-controlled censorship.
  • I would find Google much more usable if there were some selection switches to turn off AI images and turn off the stock images.


    Google would also be more helpful if you could tell it "yes, I'm looking to purchase something" or "no, I'm not." Sometimes it pushes all the retail links when I don't want them, and other times, when I am actually searching for something I need to buy, the other sites are clutter.

    Altavista was good at letting you define what you were looking for.

  • Nope, I don't trust them to be accurate. So, more misinformation peddled as truth.
