{"id":214098,"date":"2024-03-16T18:09:52","date_gmt":"2024-03-16T18:09:52","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2024\/03\/16\/hackers-can-read-your-encrypted-ai-assistant-chats\/"},"modified":"2025-06-25T17:20:31","modified_gmt":"2025-06-25T17:20:31","slug":"hackers-can-read-your-encrypted-ai-assistant-chats","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2024\/03\/16\/hackers-can-read-your-encrypted-ai-assistant-chats\/","title":{"rendered":"Hackers can read your encrypted AI-assistant chats"},"content":{"rendered":"<p> [ad_1]<br \/>\n<br \/><img decoding=\"async\" src=\"https:\/\/readwrite.com\/wp-content\/uploads\/2024\/03\/oO3jM8KgQHKw0jeGsuivUw-719x719.png\" \/><\/p>\n<div>\n<p>Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based <a href=\"https:\/\/readwrite.com\/ai-that-helps-us-be-more-human-with-each-other\/\">AI<\/a> assistants like Chat GTP. The vulnerability, according to researchers, means that hackers are able to intercept and decrypt conversations between people and these AI assistants.<\/p>\n<p><span style=\"font-weight: 400;\">The researchers found that chatbots such as <a href=\"https:\/\/readwrite.com\/chatgpt-everything-you-need-to-know-consistent-updates\/\">Chat-GPT<\/a> send responses in small tokens broken into little parts in order to speed up the encryption process. But by doing this, the tokens can be intercepted by hackers. These hackers in turn can analyze the length, size, and sequence of these tokens in order to decrypt their responses. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cCurrently, anybody can read private chats sent from <\/span><span style=\"font-weight: 400;\">ChatGPT<\/span><span style=\"font-weight: 400;\"> and other services,\u201d Yisroel Mirsky, head of the Offensive AI Research Lab, told <\/span><a href=\"https:\/\/arstechnica.com\/security\/2024\/03\/hackers-can-read-private-ai-assistant-chats-even-though-theyre-encrypted\/\"><span style=\"font-weight: 400;\">Ars Technica<\/span><\/a> in an email.<\/p>\n<p><span style=\"font-weight: 400;\">\u201cThis includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet\u2014anyone who can observe the traffic. The attack is passive and can happen without OpenAI or the client\u2019s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.\u201d<\/span><\/p>\n<p>\u201cOur investigation into the network traffic of several prominent AI assistant services uncovered this vulnerability across multiple platforms, including Microsoft Bing AI (Copilot) and OpenAI\u2019s ChatGPT-4. 
We conducted a thorough evaluation of our inference attack on GPT-4 and validated the attack by successfully deciphering responses from four different services from OpenAI and Microsoft.\u201d<\/p>\n<p><span style=\"font-weight: 400;\">According to the researchers, there are two main mitigations: either stop sending tokens one by one, or \u201cpad\u201d each token to the length of the largest possible packet so that every packet looks the same size, which would make the token sequence much harder to analyze.<\/span><\/p>\n<p><em><strong>Featured image: Image generated by Ideogram<\/strong><\/em><\/p>\n<\/div>\n<p><a href=\"https:\/\/readwrite.com\/hackers-can-read-your-encrypted-ai-assistant-chats\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants like ChatGPT. The vulnerability, according to researchers, means that hackers are<\/p>\n","protected":false},"author":1,"featured_media":214099,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[152],"tags":[],"_links":{"self":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/214098"}],"collection":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/comments?post=214098"}],"version-history":[{"count":2,"href":"https:\/\/michigan
digitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/214098\/revisions"}],"predecessor-version":[{"id":337029,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/214098\/revisions\/337029"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media\/214099"}],"wp:attachment":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media?parent=214098"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/categories?post=214098"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/tags?post=214098"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}