<h1>Children’s photos are being ‘illegally used to train AI’</h1>
<p><em>June 10, 2024</em></p>
<p>Personal photos of Brazilian children are being used without their knowledge or consent to develop sophisticated <a href="https://readwrite.com/category/ai/">artificial intelligence</a> (AI) tools, according to <a href="https://www.hrw.org/news/2024/06/10/brazil-childrens-personal-photos-misused-power-ai-tools" target="_blank" rel="noopener">Human Rights Watch</a>. These images are reportedly scraped from the internet and compiled into extensive datasets, which companies use to train their AI technologies.</p>
<p>These tools are then said to be used to produce harmful <a href="https://readwrite.com/ais-dark-side-deepfakes-explained-as-it-poses-growing-threat/">deepfakes</a>, putting even more children at risk of exploitation and harm.</p>
<blockquote>
<p>📢NEW: The personal photos of Brazilian children are being secretly used to build powerful AI tools.</p>
<p>Others are then using these tools to create malicious deepfakes, putting even more children at risk of serious harm. <a href="https://t.co/Bf3f8SOK5M">https://t.co/Bf3f8SOK5M</a></p>
<p>— Hye Jung Han (@techchildrights), <a href="https://twitter.com/techchildrights/status/1800124357790048282">June 10, 2024</a></p>
</blockquote>
<p>Hye Jung Han, a children’s rights and technology researcher and advocate at Human Rights Watch, said: “Children should not have to live in fear that their photos might be stolen and weaponized against them.</p>
<p>“The government should urgently adopt policies to protect children’s data from AI-fueled misuse,” Han continued, warning: “Generative AI is still a nascent technology, and the associated harm that children are already experiencing is not inevitable.”</p>
<p>Han added: “Protecting children’s data privacy now will help to shape the development of this technology into one that promotes, rather than violates, children’s rights.”</p>
<h2>Are children’s images being used to train AI?</h2>
<p>The investigation revealed that LAION-5B, a major dataset compiled by crawling vast amounts of online content and used by prominent AI applications, includes links to identifiable images of Brazilian children.</p>
<p>These images often include the children’s names, either in the caption or in the URL where the image is hosted. In several cases, the children’s identities are traceable, revealing details about when and where the photos were taken. According to <a href="https://www.wired.com/story/ai-tools-are-secretly-training-on-real-childrens-faces/" target="_blank" rel="noopener">WIRED</a>, the dataset has been used to train several AI models, including Stability AI’s Stable Diffusion image generation tool.</p>
<p>Human Rights Watch identified 170 images of children across at least 10 Brazilian states, including Alagoas, Bahia, Ceará, Mato Grosso do Sul, Minas Gerais, Paraná, Rio de Janeiro, Rio Grande do Sul, Santa Catarina, and São Paulo, dating back as far as the mid-1990s. This figure likely represents only a fraction of the children’s personal data in LAION-5B, as less than 0.0001 per cent of the dataset’s 5.85 billion images and captions were reviewed.</p>
<p>The organization claims that at least 85 girls from some of these Brazilian states have been subjected to harassment: their classmates misused AI technology to create sexually <a href="https://readwrite.com/uk-criminalizes-creation-of-sexually-explicit-deepfakes/">explicit deepfakes</a> from photographs on the girls’ social media profiles, then distributed the manipulated images online.</p>
<h2>LAION responds to claims</h2>
<p>In response, LAION, the German AI nonprofit that maintains LAION-5B, acknowledged that the dataset contained some of the children’s photos identified by Human Rights Watch and committed to removing them. However, it disputed that AI models trained on LAION-5B can reproduce personal data verbatim. LAION also said it was the responsibility of children and their guardians to remove personal photos from the internet, arguing this was the best way to prevent misuse.</p>
<p>In December, <a href="https://purl.stanford.edu/kh752sm9123" target="_blank" rel="noopener">Stanford University’s Internet Observatory</a> reported that LAION-5B contained thousands of suspected child sexual abuse images. LAION took the dataset offline and released a <a href="https://laion.ai/notes/laion-maintenance/" target="_blank" rel="noopener">statement</a> saying it “has a zero tolerance policy for illegal content”.</p>
<p>ReadWrite has reached out to LAION and Stability AI for comment.</p>
<p><em>Featured image: Canva / Ideogram</em></p>
<p><a href="https://readwrite.com/childrens-photos-are-being-illegally-used-to-train-ai/">Source: ReadWrite</a></p>