{"id":258029,"date":"2024-09-02T09:23:57","date_gmt":"2024-09-02T09:23:57","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2024\/09\/02\/how-to-avoid-being-fooled-by-ai-generated-misinformation\/"},"modified":"2025-06-25T17:11:37","modified_gmt":"2025-06-25T17:11:37","slug":"how-to-avoid-being-fooled-by-ai-generated-misinformation","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2024\/09\/02\/how-to-avoid-being-fooled-by-ai-generated-misinformation\/","title":{"rendered":"How to avoid being fooled by AI-generated misinformation"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div id=\"\">\n<figure class=\"ArticleImage\">\n<div class=\"Image__Wrapper\"><img fetchpriority=\"high\" decoding=\"async\" class=\"Image\" width=\"1350\" height=\"900\" alt=\"An AI-generated image of a family posing outside with a mountain in the distance\" src=\"https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg\" sizes=\"(min-width: 1288px) 837px, (min-width: 1024px) calc(57.5vw + 55px), (min-width: 415px) calc(100vw - 40px), calc(70vw + 74px)\" srcset=\"https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=300 300w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=400 400w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=500 500w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=600 600w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=700 700w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=800 800w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=837 837w, 
https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=900 900w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1003 1003w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1100 1100w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1200 1200w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1300 1300w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1400 1400w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1500 1500w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1600 1600w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1674 1674w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1700 1700w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1800 1800w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=1900 1900w, https:\/\/images.newscientist.com\/wp-content\/uploads\/2024\/08\/28151924\/SEI_218925972.jpg?width=2006 2006w\" loading=\"eager\" fetchpriority=\"high\" data-image-context=\"Article\" data-image-id=\"2445540\" data-caption=\"Many AI-generated images look realistic until you take a closer look\" data-credit=\"MidJourney\"\/><\/div><figcaption class=\"ArticleImageCaption\">\n<div class=\"ArticleImageCaption__CaptionWrapper\">\n<p class=\"ArticleImageCaption__Title\">Many AI-generated images look realistic until you take a closer look<\/p>\n<p class=\"ArticleImageCaption__Credit\">MidJourney<\/p>\n<\/div>\n<\/figcaption><\/figure>\n<\/p>\n<p>Did you 
notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, video, audio and text at a time when technological advances are making them increasingly indistinguishable from human-created content, leaving us open to manipulation by disinformation. But by knowing the current state of the AI technologies used to create misinformation, and the range of telltale signs that what you are looking at might be fake, you can help protect yourself from being taken in.

World leaders are concerned. According to a report by the World Economic Forum (weforum.org/publications/global-risks-report-2024), misinformation and disinformation may “radically disrupt electoral processes in several economies over the next two years”, while easier access to AI tools has “already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites”.

The terms misinformation and disinformation both refer to false or inaccurate information, but disinformation is deliberately intended to deceive or mislead.

“The issue with AI-powered disinformation is the scale, speed and ease with which campaigns can be launched,” says Hany Farid at the University of California, Berkeley. “These attacks will no longer take state-sponsored actors or well-financed organisations – a single individual with access to some modest computing power can create massive amounts of fake content.”

He says that generative AI (see glossary, below) is “polluting the entire information ecosystem, casting everything we read, see and hear into doubt”.
He says his research suggests that, in many cases, AI-generated images and audio are “nearly indistinguishable from reality”.

However, research by Farid and others reveals that there are strategies you can follow to reduce your risk of falling for social media misinformation or disinformation created by AI.

How to spot fake AI images

Remember seeing a photo of Pope Francis wearing a puffer jacket? Such fake AI images have become more common as new tools based on diffusion models (see glossary, below) have allowed anyone to start churning out images from simple text prompts. One study by Nicholas Dufour at Google and his colleagues (arxiv.org/abs/2405.11697) found a rapid increase in the proportion of AI-generated images in fact-checked misinformation claims from early 2023 onwards.

“Nowadays, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study (arxiv.org/abs/2406.08651), she and her colleagues identified five categories of errors in AI-generated images (outlined below) and provided guidance on how people can spot them for themselves. The good news is that their research suggests people are currently about 70 per cent accurate at detecting fake AI images of people.
You can use their online image test (detectfakes.kellogg.northwestern.edu) to assess your own sleuthing skills.

5 common types of errors in AI-generated images:

1. Sociocultural implausibilities: Is the scene depicting behaviour that would be rare, unusual or surprising for a particular culture or historical figure?
2. Anatomical implausibilities: Take a close look: are body parts such as hands unusually shaped or sized? Do the eyes or mouths look strange? Have any body parts merged?
3. Stylistic artefacts: Does the image look unnatural, almost too perfect or stylised? Does the background look odd, or as if it is missing something? Is the lighting strange or inconsistent?
4. Functional implausibilities: Do any objects look bizarre, unreal or non-functional? For example, are buttons or belt buckles in weird places?
5. Violations of physics: Are shadows pointing in different directions? Are mirror reflections consistent with the world depicted in the image?
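The five categories above are a manual checklist rather than an algorithm, but the tallying is easy to mechanise. The sketch below is purely illustrative: the category names come from Kamali and colleagues' study, while the scoring scheme and the "suspicion" threshold are invented for this example and are no substitute for a trained detector or human judgment.

```python
# Illustrative checklist tally for the five error categories above.
# The categories are from the study; the scoring is a made-up example.
CATEGORIES = [
    "sociocultural implausibilities",
    "anatomical implausibilities",
    "stylistic artefacts",
    "functional implausibilities",
    "violations of physics",
]

def suspicion_score(observations: dict[str, bool]) -> float:
    """Fraction of checklist categories in which the viewer spotted a problem."""
    hits = sum(bool(observations.get(c, False)) for c in CATEGORIES)
    return hits / len(CATEGORIES)

def verdict(score: float, threshold: float = 0.4) -> str:
    # The threshold is arbitrary; per the study, people are only about
    # 70 per cent accurate at this task even when paying attention.
    return "likely AI-generated" if score >= threshold else "no obvious artefacts"
```

For example, spotting problems in two of the five categories gives a score of 0.4, which this toy threshold already treats as suspicious.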
[Image: A man brushing his teeth with two toothbrushes, one of which looks strange, generated by an AI program. Strange objects and behaviour can be clues that an image was created by AI. Credit: MidJourney]

How to identify video deepfakes

AI technology known as generative adversarial networks (see glossary, below) has allowed tech-savvy individuals to create video deepfakes since 2014 – digitally manipulating existing videos of people to swap in different faces, create new facial expressions and insert new spoken audio with matching lip-syncing.
This has enabled a growing array of scammers, state-backed hackers and other internet users to produce video deepfakes in which celebrities such as Taylor Swift and ordinary people alike may find themselves unwillingly featured in non-consensual deepfake pornography, scams and political misinformation or disinformation.

The techniques for spotting fake AI images (see above) can be applied to suspect videos too. In addition, researchers at the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled some tips for spotting such deepfakes (media.mit.edu/projects/detect-fakes), while acknowledging that there is no foolproof method that always works.

6 tips for spotting AI-generated video:

1. Mouth and lip movements: Are there moments when the video and audio aren’t completely synced?
2. Anatomical glitches: Does the face or body look weird or move unnaturally?
3. Face: Look for inconsistencies in skin smoothness, in wrinkles around the forehead and cheeks, and in facial moles.
4. Lighting: Is the lighting inconsistent? Do shadows behave as you would expect? Pay particular attention to a person’s eyes, eyebrows and glasses.
5. Hair: Does facial hair look weird or move in strange ways?
6. Blinking: Too much or too little blinking can be a sign of a deepfake.

A newer category of video deepfakes is based on diffusion models (see glossary, below) – the same AI technology behind many image generators – which can create completely AI-generated video clips from text prompts.
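Of the video checks above, blinking is one of the few that is easy to quantify. The sketch below assumes a hypothetical upstream eye-state classifier that labels each video frame as eyes-open or eyes-closed; the "normal" range of roughly 8 to 30 blinks per minute is an illustrative placeholder, not a validated detector.

```python
def blink_rate_per_minute(eyes_closed: list[bool], fps: float) -> float:
    """Count blink onsets (open -> closed transitions) and scale to per minute."""
    blinks = sum(1 for prev, cur in zip(eyes_closed, eyes_closed[1:])
                 if cur and not prev)
    if eyes_closed and eyes_closed[0]:
        blinks += 1  # a blink already in progress at the first frame
    duration_min = len(eyes_closed) / fps / 60
    return blinks / duration_min if duration_min else 0.0

def blink_rate_suspicious(rate: float, low: float = 8.0, high: float = 30.0) -> bool:
    # Thresholds are illustrative; real blink rates vary with person and task.
    return not (low <= rate <= high)
```

A clip whose subject blinks around 15 times a minute would pass this check, while one who almost never blinks would be flagged for a closer look.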
Companies are already testing and releasing commercial versions of AI video generators that could make it easy for anyone to do this without special technical knowledge. So far, the resulting videos tend to feature distorted faces or bizarre body movements.

“These AI-generated videos are probably easier for people to detect than images, because there is a lot of movement and there is a lot more opportunity for AI-generated artefacts and impossibilities,” says Kamali.

[Video: How to spot deepfakes and AI-generated images]

How to identify AI bots

Social media accounts controlled by computer bots have become common on many social media and messaging platforms. A growing number of these bots have also been taking advantage of generative AI technologies such as large language models (see glossary, below) since 2022.
These make it easy and cheap to churn out AI-written content through thousands of bots – content that is grammatically correct and convincingly customised to different situations.

It has become much easier “to customise these large language models for specific audiences with specific messages”, says Paul Brenner at the University of Notre Dame in Indiana.

In their research, Brenner and his colleagues found that volunteers could distinguish AI-powered bots from humans only about 42 per cent of the time (arxiv.org/abs/2402.07940), despite the participants being told they were potentially interacting with bots. You can test your own bot detection skills at nd.qualtrics.com/jfe/form/SV_dgy2Ymsq74ZkNOm.

Some strategies can help identify less sophisticated AI bots, says Brenner.

5 ways to determine whether a social media account is an AI bot:

1. Emojis and hashtags: Excessive use of these can be a sign.
2. Uncommon phrasing, word choices or analogies: Unusual wording could indicate an AI bot.
3. Repetition and structure: Bots may use repeated wording that follows similar or rigid forms, and they may overuse certain slang terms.
4. Ask questions: These can reveal a bot’s lack of knowledge about a topic – particularly when it comes to local places and situations.
5. Assume the worst: If a social media account isn’t a personal contact and its identity hasn’t been clearly validated or verified, it could well be an AI bot.

How to detect audio cloning and speech deepfakes

Voice cloning (see glossary, below) AI tools have made it easy to generate new spoken audio that can
mimic practically anyone. This has led to the rise of audio deepfake scams that clone the voices of family members, company executives and political leaders such as US President Joe Biden. These can be much more difficult to identify than AI-generated videos or images.

“Voice cloning is particularly challenging to distinguish between real and fake because there aren’t visual components to support our brains in making that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organisation.

Detecting such AI audio deepfakes can be especially tricky when they are used in video and phone calls. But there are some common-sense steps you can follow to distinguish authentic human voices from AI-generated ones.

4 steps for recognising if audio has been cloned or faked using AI:

1. Public figures: If the audio clip is of an elected official or celebrity, check whether what they are saying is consistent with what has already been publicly reported or shared about their views and behaviour.
2. Look for inconsistencies: Compare the audio clip with previously authenticated video or audio clips featuring the same person’s voice. Are there any inconsistencies in the sound of their voice or their speech mannerisms?
3. Awkward silences: If you are listening to a phone call or voicemail and the speaker takes unusually long pauses while speaking, they may be using AI-powered voice-cloning technology.
4. Weird and wordy: Robotic speech patterns or an unusually verbose manner of speaking could indicate a combination of voice cloning to mimic a person’s voice and a large language model to generate the exact wording.
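The "awkward silences" check above can be partially automated if you have access to the raw audio. The sketch below is a toy illustration that scans 16-bit PCM samples for stretches of near-silence; the amplitude threshold and minimum gap length are arbitrary placeholders, and real phone audio is far noisier and more compressed than this assumes.

```python
def long_pauses(samples: list[int], framerate: int,
                silence_thresh: int = 500,
                min_gap_s: float = 2.0) -> list[tuple[float, float]]:
    """Return (start_s, end_s) spans where the absolute sample amplitude stays
    below silence_thresh for at least min_gap_s seconds (mono PCM assumed)."""
    gaps: list[tuple[float, float]] = []
    start = None  # index where the current quiet stretch began
    for i, sample in enumerate(samples):
        if abs(sample) < silence_thresh:
            if start is None:
                start = i
        else:
            if start is not None and (i - start) / framerate >= min_gap_s:
                gaps.append((start / framerate, i / framerate))
            start = None
    if start is not None and (len(samples) - start) / framerate >= min_gap_s:
        gaps.append((start / framerate, len(samples) / framerate))
    return gaps
```

On a real recording you would first decode the file into samples (for example with Python's wave module) and then judge whether any long pauses line up with natural turn-taking in the conversation.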
[Image: A still from an AI-generated video of Narendra Modi dancing to the song Gangnam Style. Public figures such as Narendra Modi behaving out of character can be an AI giveaway. Credit: @the_indian_deepfaker]

The technology will only get better

As it stands, there are no consistent
rules that can always distinguish AI-generated content from authentic human content. AI models capable of generating text, images, video and audio will almost certainly continue to improve, and they can often quickly produce authentic-seeming content without obvious artefacts or mistakes. “Be politely paranoid and realise that AI has been manipulating and fabricating pictures, videos and audio fast – we’re talking completed in 30 seconds or less,” says Tobac. “This makes it easy for malicious individuals who are looking to trick folks to turn around AI-generated disinformation quickly, hitting social media within minutes of breaking news.”

While it is important to hone your eye for AI-generated false information and to question more of what you read, see and hear, ultimately this won’t be enough to stop harm, and the responsibility for detecting fakes can’t fall entirely on individuals. Farid is among the researchers who say that government regulators must hold to account the largest tech companies – along with start-ups backed by prominent Silicon Valley investors – that have developed many of the tools flooding the internet with fake AI-generated content. “Technology is not neutral,” says Farid.
“This line that the technology sector has sold us – that somehow they don’t have to absorb liability where every other industry does – I simply reject it.”

An AI glossary

Diffusion models: AI models that learn by first adding random noise to data – such as blurring an image – and then reversing the process to recover the original data.

Generative adversarial networks: A machine learning method that pits two neural networks against each other: one generates or modifies data, while the other tries to predict whether the data it sees is real or generated.

Generative AI: A broad class of AI models that can produce text, images, audio and video after being trained on similar forms of such content.

Large language models: A subset of generative AI models that can produce different forms of written content in response to text prompts, and can sometimes translate between languages.

Voice cloning: The use of AI models to create a digital copy of a person’s voice, which can then generate new speech samples in that voice.

Source: New Scientist (newscientist.com/article/2445475-how-to-avoid-being-fooled-by-ai-generated-misinformation)