{"id":235361,"date":"2024-06-22T15:38:26","date_gmt":"2024-06-22T15:38:26","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2024\/06\/22\/openais-cybersecurity-grant-program-highlights-pioneering-projects\/"},"modified":"2025-06-25T17:16:26","modified_gmt":"2025-06-25T17:16:26","slug":"openais-cybersecurity-grant-program-highlights-pioneering-projects","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2024\/06\/22\/openais-cybersecurity-grant-program-highlights-pioneering-projects\/","title":{"rendered":"OpenAI&#8217;s Cybersecurity Grant Program Highlights Pioneering Projects"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div>\n<figure class=\"figure mt-2\">&#13;<br \/>\n                        <a href=\"https:\/\/image.blockchain.news:443\/features\/D11B7CFCA58E34BD7D45FE96B9319DC677103B086D2B5DC6241654AB7083E58E.jpg\">&#13;<br \/>\n                            <img decoding=\"async\" class=\"rounded\" src=\"https:\/\/image.blockchain.news:443\/features\/D11B7CFCA58E34BD7D45FE96B9319DC677103B086D2B5DC6241654AB7083E58E.jpg\" alt=\"OpenAI's Cybersecurity Grant Program Highlights Pioneering Projects\"\/>&#13;<br \/>\n&#13;<br \/>\n                        <\/a>&#13;<br \/>\n                    <\/figure>\n<p>OpenAI&#8217;s Cybersecurity Grant Program has been instrumental in supporting a diverse array of projects aimed at enhancing AI and cybersecurity defenses. Since its inception, the program has funded several groundbreaking initiatives, each contributing significantly to the field of cybersecurity.<\/p>\n<h2>Wagner Lab from UC Berkeley<\/h2>\n<p>Professor David Wagner\u2019s security research lab at UC Berkeley is at the forefront of developing techniques to defend against prompt-injection attacks in large language models (LLMs). 
By collaborating with OpenAI, Wagner's team aims to make these models more trustworthy, secure, and resilient to cybersecurity threats.</p>
<h2>Coguard</h2>
<p>Albert Heinle, co-founder and CTO at <a rel="nofollow" href="https://www.coguard.io/">Coguard</a>, is using AI to mitigate software misconfiguration, a prevalent cause of security incidents. Heinle's approach automates the detection and updating of software configurations, improving security and reducing reliance on outdated rules-based policies.</p>
<h2>Mithril Security</h2>
<p>Mithril Security has developed a proof of concept for hardening LLM inference infrastructure. The project provides open-source tools for deploying AI models on GPUs inside secure enclaves attested with Trusted Platform Modules (TPMs), keeping data private even from infrastructure administrators. The code is publicly available on <a rel="nofollow" href="https://github.com/mithril-security/blindllama-v2">GitHub</a>, and the design is detailed in a comprehensive <a rel="nofollow" href="https://github.com/mithril-security/blind_llama/blob/main/docs/docs/whitepaper/blind_llama_whitepaper.pdf">whitepaper</a>.</p>
<h2>Gabriel Bernadett-Shapiro</h2>
<p>Individual grantee Gabriel Bernadett-Shapiro created the AI OSINT workshop and the AI Security Starter Kit, which provide technical training and free tools for students, journalists, investigators, and information-security professionals. His work is particularly valuable for international atrocity-crime investigators and intelligence-studies students at Johns Hopkins University, equipping them with advanced AI tools for critical environments.</p>
<h2>Breuer Lab at Dartmouth</h2>
<p>Professor Adam Breuer's lab at Dartmouth is developing defense techniques that protect neural networks from attacks that reconstruct private training data. 
Their approach aims to prevent such reconstruction without sacrificing model accuracy or efficiency, a significant open challenge in AI security.</p>
<h2>Security Lab Boston University (SeclaBU)</h2>
<p>At Boston University, Ph.D. candidate Saad Ullah and Professors Gianluca Stringhini and Ayse Coskun are working to improve the ability of LLMs to detect and fix code vulnerabilities. Their research could enable cyber defenders to identify and patch exploitable code before it can be maliciously used.</p>
<h2>CY-PHY Security Lab from the University of California, Santa Cruz (UCSC)</h2>
<p>Professor Alvaro Cardenas' research group at UC Santa Cruz is investigating the use of foundation models to design autonomous cyber-defense agents. Their project compares foundation-model agents against reinforcement-learning (RL) counterparts on network-security and threat-information-triage tasks.</p>
<h2>MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL)</h2>
<p>Researchers Stephen Moskal, Erik Hemberg, and Una-May O'Reilly of MIT CSAIL are exploring how prompt engineering can automate decision processes and actionable responses in a plan-act-report loop for red-teaming. 
Additionally, they are examining LLM-agent capabilities in Capture-the-Flag (CTF) challenges, exercises designed to surface vulnerabilities in a controlled environment.</p>
<p><em>Image source: Shutterstock</em></p>
<p><a href="https://blockchain.news/news/openai-cybersecurity-grant-program-highlights-pioneering-projects">Source link</a></p>