{"id":245877,"date":"2024-07-20T16:41:06","date_gmt":"2024-07-20T16:41:06","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2024\/07\/20\/claude-3-5-sonnet-empowers-audio-data-analysis-with-python\/"},"modified":"2025-06-25T17:14:19","modified_gmt":"2025-06-25T17:14:19","slug":"claude-3-5-sonnet-empowers-audio-data-analysis-with-python","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2024\/07\/20\/claude-3-5-sonnet-empowers-audio-data-analysis-with-python\/","title":{"rendered":"Claude 3.5 Sonnet Empowers Audio Data Analysis with Python"},"content":{"rendered":"<div>\n<figure class=\"figure mt-2\">\n<p><a href=\"https:\/\/blockchain.news\/Profile\/Terrill-Dicki\">Terrill Dicki<\/a> <span class=\"publication-date ml-2\">Jul 20, 2024 11:23<\/span><\/p>\n<p class=\"lead\">Learn to use Claude 3 models with audio data in Python, leveraging AssemblyAI&#8217;s LeMUR framework for seamless integration.<\/p>\n<a href=\"https:\/\/image.blockchain.news:443\/features\/DC3788979712BF4DFF603597AAC46E7C52F8B5EF76BC21453D757F37CDB271FE.jpg\"><img decoding=\"async\" class=\"rounded\" src=\"https:\/\/image.blockchain.news:443\/features\/DC3788979712BF4DFF603597AAC46E7C52F8B5EF76BC21453D757F37CDB271FE.jpg\" alt=\"Claude 3.5 Sonnet Empowers Audio Data Analysis with Python\"\/><\/a>\n<\/figure>\n<p>Claude 3.5 Sonnet, recently <a rel=\"nofollow\" href=\"https:\/\/www.anthropic.com\/news\/claude-3-5-sonnet?ref=assemblyai.com\">announced by Anthropic<\/a>, sets new industry benchmarks for various LLM tasks. 
The model excels at complex coding and nuanced literary analysis, and shows exceptional context awareness and creativity.<\/p>\n<p>According to <a rel=\"nofollow\" href=\"https:\/\/www.assemblyai.com\/blog\/claude-3-5-sonnet-with-audio-data-python\/\">AssemblyAI<\/a>, users can now learn how to use Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku with audio or video files in Python.<\/p>\n<figure><img decoding=\"async\" src=\"https:\/\/www.assemblyai.com\/blog\/content\/images\/2024\/07\/claude3_lemur_pipeline.png\" alt=\"claude3_lemur_pipeline.png\"\/><figcaption>Pipeline for applying Claude 3 models to audio data<\/figcaption><\/figure>\n<p>Here are a few example use cases for this pipeline:<\/p>\n<ul>\n<li>Creating summaries of long podcasts or YouTube videos<\/li>\n<li>Asking questions about the audio content<\/li>\n<li>Generating action items from meetings<\/li>\n<\/ul>\n<h2>How Does It Work?<\/h2>\n<p>Language models work primarily with text, so audio data must be transcribed first. Multimodal models that accept audio directly could eventually remove this step, but they are still in the early stages of development.<\/p>\n<p>This is where AssemblyAI&#8217;s LeMUR framework comes in. LeMUR lets you combine industry-leading Speech AI models with LLMs in just a few lines of code.<\/p>\n<h2>Set Up the SDK<\/h2>\n<p>To get started, install the <a rel=\"nofollow\" href=\"https:\/\/github.com\/AssemblyAI\/assemblyai-python-sdk?ref=assemblyai.com\">AssemblyAI Python SDK<\/a>, which includes all LeMUR functionality.<\/p>\n<pre><code>pip install assemblyai<\/code><\/pre>\n<p>Then, import the package and set your API key. 
You can get one for free <a rel=\"nofollow\" href=\"https:\/\/www.assemblyai.com\/?ref=assemblyai.com\">here<\/a>.<\/p>\n<pre><code>import assemblyai as aai\n\naai.settings.api_key = \"YOUR_API_KEY\"<\/code><\/pre>\n<h2>Transcribe an Audio or Video File<\/h2>\n<p>Next, transcribe an audio or video file by setting up a <code>Transcriber<\/code> and calling its <code>transcribe()<\/code> function. You can pass in any local file or publicly accessible URL. For instance, you can use an <a rel=\"nofollow\" href=\"https:\/\/www.lennyspodcast.com\/lessons-from-1000-yc-startups-pivoting-resilience-avoiding-tar-pit-ideas-more-dalton-caldwel\/?ref=assemblyai.com\">episode of Lenny&#8217;s Podcast featuring Dalton Caldwell<\/a> of Y Combinator.<\/p>\n<pre><code>audio_url = \"https:\/\/storage.googleapis.com\/aai-web-samples\/lennyspodcast-daltoncaldwell-ycstartups.m4a\"\n\ntranscriber = aai.Transcriber()\ntranscript = transcriber.transcribe(audio_url)\n\nprint(transcript.text)<\/code><\/pre>\n<h2>Use Claude 3.5 Sonnet with Audio Data<\/h2>\n<p>Claude 3.5 Sonnet is Anthropic&#8217;s most advanced model to date, outperforming Claude 3 Opus on a wide range of evaluations while remaining cost-effective.<\/p>\n<p>To use Claude 3.5 Sonnet, call <code>transcript.lemur.task()<\/code>, a flexible endpoint that lets you specify any prompt. It automatically adds the transcript as additional context for the model.<\/p>\n<p>Specify <code>aai.LemurModel.claude3_5_sonnet<\/code> for the model when calling the LLM. 
Here\u2019s an example of a simple summarization prompt:<\/p>\n<pre><code>prompt = \"Provide a brief summary of the transcript.\"\n\nresult = transcript.lemur.task(\n    prompt, final_model=aai.LemurModel.claude3_5_sonnet\n)\n\nprint(result.response)<\/code><\/pre>\n<h2>Use Claude 3 Opus with Audio Data<\/h2>\n<p>Claude 3 Opus is adept at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks.<\/p>\n<p>To use Opus, specify <code>aai.LemurModel.claude3_opus<\/code> for the model when calling the LLM. Here\u2019s an example of a prompt to extract specific information from the transcript:<\/p>\n<pre><code>prompt = \"Extract all advice Dalton gives in this podcast episode. Use bullet points.\"\n\nresult = transcript.lemur.task(\n    prompt, final_model=aai.LemurModel.claude3_opus\n)\n\nprint(result.response)<\/code><\/pre>\n<h2>Use Claude 3 Haiku with Audio Data<\/h2>\n<p>Claude 3 Haiku is the fastest and most cost-effective model, ideal for executing lightweight actions.<\/p>\n<p>To use Haiku, specify <code>aai.LemurModel.claude3_haiku<\/code> for the model when calling the LLM. Here\u2019s an example of a simple prompt for asking a question about the content:<\/p>\n<pre><code>prompt = \"What are tar pit ideas?\"\n\nresult = transcript.lemur.task(\n    prompt, final_model=aai.LemurModel.claude3_haiku\n)\n\nprint(result.response)<\/code><\/pre>\n<h2>Learn More About Prompt Engineering<\/h2>\n<p>Applying Claude 3 models to audio data with AssemblyAI and the LeMUR framework is straightforward. 
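<\/p>\n<p>Putting the pieces together, the whole pipeline fits in one short script. The following is a minimal sketch rather than an official AssemblyAI example: it combines the snippets above, assumes a valid API key and a reachable audio URL, and simply loops over the three model constants shown earlier.<\/p>\n<pre><code># Minimal end-to-end sketch: transcribe once, then prompt each Claude 3 model\nimport assemblyai as aai\n\naai.settings.api_key = \"YOUR_API_KEY\"\n\n# Transcribe the episode (any local file or publicly accessible URL works)\ntranscript = aai.Transcriber().transcribe(\n    \"https:\/\/storage.googleapis.com\/aai-web-samples\/lennyspodcast-daltoncaldwell-ycstartups.m4a\"\n)\n\n# Run the same prompt against each model to compare their responses\nfor model in (\n    aai.LemurModel.claude3_5_sonnet,\n    aai.LemurModel.claude3_opus,\n    aai.LemurModel.claude3_haiku,\n):\n    result = transcript.lemur.task(\n        \"Summarize this episode in one sentence.\", final_model=model\n    )\n    print(result.response)<\/code><\/pre>\n<p>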
To maximize the benefits of LeMUR and the Claude 3 models, refer to the additional resources provided by AssemblyAI.<\/p>\n<p><span><i>Image source: Shutterstock<\/i><\/span><\/p><\/div>\n<p><a href=\"https:\/\/blockchain.news\/news\/claude-3-5-sonnet-audio-data-analysis-python\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Terrill Dicki, Jul 20, 2024 11:23. Learn to use Claude 3 models with audio data in Python, leveraging AssemblyAI&#8217;s LeMUR<\/p>\n","protected":false},"author":1,"featured_media":245878,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[171],"tags":[],"_links":{"self":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/245877"}],"collection":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/comments?post=245877"}],"version-history":[{"count":0,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/245877\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media\/245878"}],"wp:attachment":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json
\/wp\/v2\/media?parent=245877"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/categories?post=245877"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/tags?post=245877"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}