{"id":278002,"date":"2025-06-07T15:17:44","date_gmt":"2025-06-07T15:17:44","guid":{"rendered":"https:\/\/michigandigitalnews.com\/index.php\/2025\/06\/07\/how-androidify-leverages-gemini-firebase-and-ml-kit\/"},"modified":"2025-06-25T17:08:09","modified_gmt":"2025-06-25T17:08:09","slug":"how-androidify-leverages-gemini-firebase-and-ml-kit","status":"publish","type":"post","link":"https:\/\/michigandigitalnews.com\/index.php\/2025\/06\/07\/how-androidify-leverages-gemini-firebase-and-ml-kit\/","title":{"rendered":"How Androidify leverages Gemini, Firebase and ML Kit"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div>\n<meta content=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgit06SKK3XzFZSCJhq4ygask3mob-5PrLcFiDgsT7qXwo5gMmrrlufpNXWev3IZjqESamgtrAtMBsUK4ZVA-Zt9iqXr-th0L8cvQK057_xByadvbJWCzm16yCUBove2WCnhHijUqQHqILY1-aJxfap2BRDK_iPs4egL6J40WeYhQH4fRpnCER3kx-66Ic\/s1600\/O25-BHero-Android-1-Meta.png\" name=\"twitter:image\"\/><br \/>\n<img decoding=\"async\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgit06SKK3XzFZSCJhq4ygask3mob-5PrLcFiDgsT7qXwo5gMmrrlufpNXWev3IZjqESamgtrAtMBsUK4ZVA-Zt9iqXr-th0L8cvQK057_xByadvbJWCzm16yCUBove2WCnhHijUqQHqILY1-aJxfap2BRDK_iPs4egL6J40WeYhQH4fRpnCER3kx-66Ic\/s1600\/O25-BHero-Android-1-Meta.png\" style=\"display: none;\"\/><\/p>\n<p><em>Posted by Thomas Ezan \u2013 Developer Relations Engineer, Rebecca Franks \u2013 Developer Relations Engineer, and Avneet Singh \u2013 Product Manager<\/em><\/p>\n<p><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgGn8cQLt5yYm4cg6RnPqvgDZ2yj-M1Bd-oqsfJAKyfTlDhX3D2FWDzzN0k4NjEMeR0VdUBOc39Sh9WTTNnuHvnqGAkKB45pGw0fr6Gc71UwdhrBrAQSK_xGvNthqD8kHHNDkDFXf4b1l9KzbfJ2FoFEFL3Cii1xUEGH4XDSgI8zeyEb4lKCvi3wIJC4MI\/s1600\/O25-BHero-Android-1.png\"><img decoding=\"async\" border=\"0\" data-original-width=\"100%\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgGn8cQLt5yYm4cg6RnPqvgDZ2yj-M1Bd-oqsfJAKyfTlDhX3D2FWDzzN0k4NjEMeR0VdUBOc39Sh9WTTNnuHvnqGAkKB45pGw0fr6Gc71UwdhrBrAQSK_xGvNthqD8kHHNDkDFXf4b1l9KzbfJ2FoFEFL3Cii1xUEGH4XDSgI8zeyEb4lKCvi3wIJC4MI\/s1600\/O25-BHero-Android-1.png\"\/><\/a><\/p>\n<p>We\u2019re bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we\u2019re releasing a new <a href=\"http:\/\/github.com\/android\/androidify\" target=\"_blank\" rel=\"noopener\">open source demo app for Androidify<\/a> as a great example of how Google is using its Gemini AI models to enhance app experiences.<\/p>\n<p>In this post, we&#8217;ll dive into how the Androidify app uses Gemini models and Imagen via the <a href=\"https:\/\/firebase.google.com\/docs\/vertex-ai\/get-started?platform=android\" target=\"_blank\" rel=\"noopener\">Firebase AI Logic SDK<\/a>, and we&#8217;ll provide some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the <a href=\"https:\/\/android-developers.googleblog.com\/2025\/05\/androidify-building-ai-driven-experiences-jetpack-compose-gemini-camerax.html\" target=\"_blank\" rel=\"noopener\">Androidify demo app<\/a>.<\/p>\n<h2><span style=\"font-size: x-large;\">App flow<\/span><\/h2>\n<p>The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:<\/p>\n<p><image><\/p>\n<div style=\"text-align: center;\"><img decoding=\"async\" alt=\"flow chart demonstrating Androidify app flow\" border=\"0\" id=\"imgCaption\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEilK6O_vSvlOxVHNiTPOyF-ebqbR_ynIwGuIfA6jwQjbqwt8ymA-fa9TK4quvVxc9VDvlbq8cEboOTyZ4WUtfkWuLX3GOlVmUQ0biVyHiytRYiTn685he44PoE0KtpZ4XnEx4frRoZyICArEOWFKfchCYxE18L45a1gD0lRhIqsun45GTneT3fD7bKBGGk\/s16000\/androidify-app-flow-architecture.png\"\/><\/div>\n<p><\/image><\/p>\n<h3><span style=\"font-size: large;\">Gemini and image validation<\/span><\/h3>\n<p>To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.<\/p>\n<p>Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person, that the person is in focus, and assessing image safety, including whether the image contains abusive content.<\/p>\n<p><!-- Kotlin --><\/p>\n<div style=\"background: #f8f8f8; overflow:auto;width:auto;border:0;\">\n<pre style=\"margin: 0; line-height: 125%\"><span style=\"color: #008000; font-weight: bold\">val<\/span> jsonSchema = Schema.obj(\n   properties = mapOf(<span style=\"color: #BA2121\">\"success\"<\/span> to Schema.boolean(), <span style=\"color: #BA2121\">\"error\"<\/span> to Schema.string()),\n   optionalProperties = listOf(<span style=\"color: #BA2121\">\"error\"<\/span>),\n   )\n   \n<span style=\"color: #008000; font-weight: bold\">val<\/span> generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())\n   .generativeModel(\n            modelName = <span style=\"color: #BA2121\">\"gemini-2.5-flash-preview-04-17\"<\/span>,\n   \t     generationConfig = generationConfig {\n                responseMimeType = <span style=\"color: #BA2121\">\"application\/json\"<\/span>\n                responseSchema = jsonSchema\n            },\n            safetySettings = listOf(\n                SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),\n                SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),\n                SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),\n                SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),\n                SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),\n    \t),\n    )\n\n <span style=\"color: #008000; font-weight: bold\">val<\/span> response = generativeModel.generateContent(\n            content {\n                text(<span style=\"color: #BA2121\">\"You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (more details see the full sample)\"<\/span>)\n                image(image)\n            },\n        )\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> jsonResponse = Json.parseToJsonElement(response.text)\n<span style=\"color: #008000; font-weight: bold\">val<\/span> isSuccess = jsonResponse.jsonObject[<span style=\"color: #BA2121\">\"success\"<\/span>]?.jsonPrimitive?.booleanOrNull == <span style=\"color: #008000; font-weight: bold\">true<\/span>\n<span style=\"color: #008000; font-weight: bold\">val<\/span> error = jsonResponse.jsonObject[<span style=\"color: #BA2121\">\"error\"<\/span>]?.jsonPrimitive?.content\n<\/pre>\n<\/div>\n<p>In the snippet above, we\u2019re leveraging <a href=\"https:\/\/firebase.google.com\/docs\/vertex-ai\/structured-output?platform=android\" target=\"_blank\" rel=\"noopener\">structured output<\/a> capabilities of the model by defining the schema of the response. We\u2019re passing a <span style=\"color: #0d904f ; font-family: courier ;\"><b>Schema<\/b><\/span> object via the <span style=\"color: #0d904f ; font-family: courier ;\"><b>responseSchema<\/b><\/span> param in the <span style=\"color: #0d904f ; font-family: courier ;\"><b>generationConfig<\/b><\/span>.<\/p>\n<p>We want to validate that the image has enough information to generate a nice Android avatar. So we ask the model to return a json object with <span style=\"color: #0d904f ; font-family: courier ;\"><b>success = true\/false<\/b><\/span> and an optional <span style=\"color: #0d904f ; font-family: courier ;\"><b>error<\/b><\/span> message explaining why the image doesn&#8217;t have enough information.<\/p>\n<p>Structured output is a powerful feature enabling a smoother integration of LLMs to your app by controlling the format of their output, similar to an API response.<\/p>\n<h3><span style=\"font-size: large;\">Image captioning with Gemini Flash<\/span><\/h3>\n<p>Once it&#8217;s established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.<\/p>\n<p><!-- Kotlin --><\/p>\n<div style=\"background: #f8f8f8; overflow:auto;width:auto;border:0;\">\n<pre style=\"margin: 0; line-height: 125%\"><span style=\"color: #008000; font-weight: bold\">val<\/span> jsonSchema = Schema.obj(\n            properties = mapOf(\n                <span style=\"color: #BA2121\">\"success\"<\/span> to Schema.boolean(),\n                <span style=\"color: #BA2121\">\"user_description\"<\/span> to Schema.string(),\n            ),\n            optionalProperties = listOf(<span style=\"color: #BA2121\">\"user_description\"<\/span>),\n        )\n<span style=\"color: #008000; font-weight: bold\">val<\/span> generativeModel = createGenerativeTextModel(jsonSchema)\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> prompt = <span style=\"color: #BA2121\">\"You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model...\"<\/span>\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> response = generativeModel.generateContent(\ncontent { \n       \ttext(prompt) \n             \timage(image) \n\t})\n        \n<span style=\"color: #008000; font-weight: bold\">val<\/span> jsonResponse = Json.parseToJsonElement(response.text!!) \n<span style=\"color: #008000; font-weight: bold\">val<\/span> isSuccess = jsonResponse.jsonObject[<span style=\"color: #BA2121\">\"success\"<\/span>]?.jsonPrimitive?.booleanOrNull == <span style=\"color: #008000; font-weight: bold\">true<\/span>\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> userDescription = jsonResponse.jsonObject[<span style=\"color: #BA2121\">\"user_description\"<\/span>]?.jsonPrimitive?.content\n<\/pre>\n<\/div>\n<p>The other option in the app is to start with a text prompt. You can enter in details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.<\/p>\n<h3><span style=\"font-size: large;\">Android generation via Imagen<\/span><\/h3>\n<p>We\u2019ll use this detailed description of your image to enrich the prompt used for image generation. We\u2019ll add extra details around what we would like to generate and include the bot color selection as part of this too, including the skin tone selected by the user.<\/p>\n<p><!-- Kotlin --><\/p>\n<div style=\"background: #f8f8f8; overflow:auto;width:auto;border:0;\">\n<pre style=\"margin: 0; line-height: 125%\"><span style=\"color: #008000; font-weight: bold\">val<\/span> imagenPrompt = <span style=\"color: #BA2121\">\"A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and straightforward, facing directly forward [...] The bot looks as follows $userDescription [...]\"<\/span>\n<\/pre>\n<\/div>\n<p>We then call the Imagen model to create the bot. Using this new prompt, we create a model and call <span style=\"color: #0d904f ; font-family: courier ;\">generateImages<\/span>:<\/p>\n<p><!-- Kotlin --><\/p>\n<div style=\"background: #f8f8f8; overflow:auto;width:auto;border:0;\">\n<pre style=\"margin: 0; line-height: 125%\"><span style=\"color: #408080; font-style: italic\">\/\/ we supply our own fine-tuned model here but you can use \"imagen-3.0-generate-002\" <\/span>\n<span style=\"color: #008000; font-weight: bold\">val<\/span> generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(\n            <span style=\"color: #BA2121\">\"imagen-3.0-generate-002\"<\/span>,\n            safetySettings =\n            ImagenSafetySettings(\n                ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,\n                personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,\n            ),\n)\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> response = generativeModel.generateImages(imagenPrompt)\n\n<span style=\"color: #008000; font-weight: bold\">val<\/span> image = response.images.first().asBitmap()\n<\/pre>\n<\/div>\n<p>And that\u2019s it! The Imagen model generates a bitmap that we can display on the user\u2019s screen.<\/p>\n<h3><span style=\"font-size: large;\">Finetuning the Imagen model<\/span><\/h3>\n<p>The Imagen 3 model was finetuned using <a href=\"https:\/\/arxiv.org\/abs\/2106.09685\" target=\"_blank\" rel=\"noopener\">Low-Rank Adaptation (LoRA)<\/a>. LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable &#8220;adapters&#8221; that make small changes to the model&#8217;s performance. We ran a fine tuning pipeline on the Imagen 3 model generally available with Android bot assets of different color combinations and different assets for enhanced cuteness and fun. We generated text captions for the training images and the image-text pairs were used to finetune the model effectively.<\/p>\n<p>The current <a href=\"http:\/\/github.com\/android\/androidify\" target=\"_blank\" rel=\"noopener\">sample app<\/a> uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of Firebase AI Logic SDK was demoed at Google I\/O. This app will be released later this year and we are also planning on adding support for fine-tuned models to Firebase AI Logic SDK later in the year.<\/p>\n<p><image><\/p>\n<div style=\"text-align: center;\"><img decoding=\"async\" alt=\"moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack\" border=\"0\" id=\"imgCaption\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEiUWUfAjfjlb9Dim7K4vGfc-SRCnAR6t2a5A5kfpv7L1-ygYFLPNxSK9bzN6diYRxICZuL9cC-GjMVsE4FM8JgS5luQxJZqiQ7hpa9oxvhK5OG5K3Y_VfegvpFJDZDHdGSG8Xdj2EcZTjyGv-4vUSq9zLVfdRMs-UTVFLJq5nzG3p6f_gJj6xWm50jtAkU\/s1600\/androidify-app-demo.gif\" width=\"35%\"\/><\/div>\n<p><imgcaption><center><em>The original image&#8230; and <i>Androidifi-ed<\/i> image<\/em><\/center><\/imgcaption><\/image><\/p>\n<h2><span style=\"font-size: x-large;\">ML Kit<\/span><\/h2>\n<p>The app also uses the <a href=\"https:\/\/developers.google.com\/ml-kit\/vision\/pose-detection\" target=\"_blank\" rel=\"noopener\">ML Kit Pose Detection SDK<\/a> to detect a person in the camera view, which triggers the capture button and adds visual indicators.<\/p>\n<p>To do this, we add the SDK to the app, and use <span style=\"color: #0d904f; font-family: courier\">PoseDetection.getClient()<\/span>. Then, using the <span style=\"color: #0d904f; font-family: courier\">poseDetector<\/span>, we look at  the <span style=\"color: #0d904f; font-family: courier\">detectedLandmarks<\/span> that are in the streaming image coming from the Camera, and we set the <span style=\"color: #0d904f; font-family: courier\">_uiState.detectedPose<\/span> to true if a nose and shoulders are visible:<\/p>\n<p><!-- Kotlin --><\/p>\n<div style=\"background: #f8f8f8; overflow:auto;width:auto;border:0;\">\n<pre style=\"margin: 0; line-height: 125%\"><span style=\"color: #008000; font-weight: bold\">private<\/span> suspend <span style=\"color: #008000; font-weight: bold\">fun<\/span> <span style=\"color: #0000FF\">runPoseDetection<\/span>() {\n    PoseDetection.getClient(\n        PoseDetectorOptions.Builder()\n            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)\n            .build(),\n    ).use { poseDetector -&gt;\n        <span style=\"color: #408080; font-style: italic\">\/\/ Since image analysis is processed by ML Kit asynchronously in its own thread pool,<\/span>\n        <span style=\"color: #408080; font-style: italic\">\/\/ we can run this directly from the calling coroutine scope instead of pushing this<\/span>\n        <span style=\"color: #408080; font-style: italic\">\/\/ work to a background dispatcher.<\/span>\n        cameraImageAnalysisUseCase.analyze { imageProxy -&gt;\n            imageProxy.image?.let { image -&gt;\n                <span style=\"color: #008000; font-weight: bold\">val<\/span> poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)\n                _uiState.update { it.copy(detectedPose = poseDetected) }\n            }\n        }\n    }\n}\n\n<span style=\"color: #008000; font-weight: bold\">private<\/span> suspend <span style=\"color: #008000; font-weight: bold\">fun<\/span> PoseDetector.detectPersonInFrame(\n    image: Image,\n    imageInfo: ImageInfo,\n): Boolean {\n    <span style=\"color: #008000; font-weight: bold\">val<\/span> results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()\n    <span style=\"color: #008000; font-weight: bold\">val<\/span> landmarkResults = results.allPoseLandmarks\n    <span style=\"color: #008000; font-weight: bold\">val<\/span> detectedLandmarks = mutableListOf<int>()\n    <span style=\"color: #008000; font-weight: bold\">for<\/span> (landmark <span style=\"color: #008000; font-weight: bold\">in<\/span> landmarkResults) {\n        <span style=\"color: #008000; font-weight: bold\">if<\/span> (landmark.inFrameLikelihood &gt; <span style=\"color: #666666\">0.7<\/span>) {\n            detectedLandmarks.add(landmark.landmarkType)\n        }\n    }\n\n    <span style=\"color: #008000; font-weight: bold\">return<\/span> detectedLandmarks.containsAll(\n        listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),\n    )\n}\n<\/int><\/pre>\n<\/div>\n<p><image><\/p>\n<div style=\"text-align: center;\"><img decoding=\"async\" alt=\"moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame\" border=\"0\" id=\"imgCaption\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgMtKfFPhqMDNlNuOga6sqMhSvvBlEoPLmfefQ5CT7jBG97equ7u8pEgDRq2iCa2_TUyHRMtf4P0vYxWBOdzHhONW6bYt1_3uIbiNZh_JMGHcoZwofagwZi0dh-zVLFadeu4ZmeAHNVImfs1tuqcXRDecF7z72I5uOK1tsmPBKX_zHucdsE7M8ewx8znsA\/s1600\/camera-shutter-button-activates-androidify-google-io.gif\" width=\"35%\"\/><\/div>\n<p><imgcaption><center><em>The camera shutter button is activated when a person (or a bot!) enters the frame.<\/em><\/center><\/imgcaption><\/image><\/p>\n<h2><span style=\"font-size: x-large;\">Get started with AI on Android<\/span><\/h2>\n<p>The Androidify app makes an extensive use of the Gemini 2.5 Flash to validate the image and generate a detailed description used to generate the image. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.<\/p>\n<p>To get started with AI on Android, go to the <a href=\"https:\/\/developer.android.com\/ai\/gemini\" target=\"_blank\" rel=\"noopener\">Gemini<\/a> and <a href=\"http:\/\/developer.android.com\/ai\/imagen\" target=\"_blank\" rel=\"noopener\">Imagen<\/a> documentation for Android.<\/p>\n<p>Explore this announcement and all Google I\/O 2025 updates on <a href=\"https:\/\/io.google\/2025\/?utm_source=blogpost&amp;utm_medium=pr&amp;utm_campaign=event&amp;utm_content=\" target=\"_blank\" rel=\"noopener\">io.google<\/a> starting May 22.<\/p>\n<\/div>\n<p>[ad_2]<br \/>\n<br \/><a href=\"http:\/\/android-developers.googleblog.com\/2025\/05\/androidify-how-androidify-leverages-gemini-firebase-ml-kit.html\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>[ad_1] Posted by Thomas Ezan \u2013 Developer Relations Engineer, Rebecca Franks \u2013 Developer Relations Engineer, and Avneet Singh \u2013 Product Manager We\u2019re bringing back Androidify<\/p>\n","protected":false},"author":1,"featured_media":278003,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[146],"tags":[],"_links":{"self":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/278002"}],"collection":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/comments?post=278002"}],"version-history":[{"count":0,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/posts\/278002\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media\/278003"}],"wp:attachment":[{"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/media?parent=278002"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/categories?post=278002"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/michigandigitalnews.com\/index.php\/wp-json\/wp\/v2\/tags?post=278002"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}