{"id":2676,"date":"2025-02-19T06:16:22","date_gmt":"2025-02-18T21:16:22","guid":{"rendered":"https:\/\/aireviewirush.com\/?p=2676"},"modified":"2025-02-19T06:16:23","modified_gmt":"2025-02-18T21:16:23","slug":"securing-deepseek-and-different-ai-methods-with-microsoft-safety","status":"publish","type":"post","link":"https:\/\/aireviewirush.com\/?p=2676","title":{"rendered":"Securing DeepSeek and different AI methods with Microsoft Safety"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p>A profitable AI transformation begins with a robust safety basis. With a speedy enhance in AI growth and adoption, organizations want visibility into their rising AI apps and instruments. Microsoft Safety gives risk safety, posture administration, information safety, compliance, and governance to safe AI purposes that you just construct and use. These capabilities will also be used to assist enterprises safe and govern AI apps constructed with the DeepSeek R1 mannequin and achieve visibility and management over using the seperate DeepSeek client app.\u00a0<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_53 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title \" >Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\" role=\"button\"><label for=\"item-69e67a22d206c\" ><span class=\"\"><span style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" 
xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/label><input aria-label=\"Toggle\" aria-label=\"item-69e67a22d206c\"  type=\"checkbox\" id=\"item-69e67a22d206c\"><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Safe_and_govern_AI_apps_constructed_with_the_DeepSeek_R1_mannequin_on_Azure_AI_Foundry_and_GitHub\" title=\"Safe and govern AI apps constructed with the DeepSeek R1 mannequin on Azure AI Foundry and GitHub\u00a0\">Safe and govern AI apps constructed with the DeepSeek R1 mannequin on Azure AI Foundry and GitHub\u00a0<\/a><ul class='ez-toc-list-level-3'><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Develop_with_reliable_AI\" title=\"Develop with reliable AI\u00a0\">Develop with reliable AI\u00a0<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Begin_with_Safety_Posture_Administration\" title=\"Begin with Safety Posture Administration\">Begin with Safety Posture Administration<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Safeguard_DeepSeek_R1_AI_workloads_with_cyberthreat_safety\" title=\"Safeguard DeepSeek R1 AI workloads with cyberthreat safety\">Safeguard DeepSeek R1 AI workloads with cyberthreat safety<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a 
class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Safe_and_govern_using_the_DeepSeek_app\" title=\"Safe and govern using the DeepSeek app\">Safe and govern using the DeepSeek app<\/a><ul class='ez-toc-list-level-3'><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Safe_and_achieve_visibility_into_DeepSeek_app_utilization\" title=\"Safe and achieve visibility into DeepSeek app utilization\u00a0\">Safe and achieve visibility into DeepSeek app utilization\u00a0<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Complete_information_safety\" title=\"Complete information safety\u00a0\">Complete information safety\u00a0<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Stop_delicate_information_leaks_and_exfiltration\" title=\"Stop delicate information leaks and exfiltration\u00a0\u00a0\">Stop delicate information leaks and exfiltration\u00a0\u00a0<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/aireviewirush.com\/?p=2676\/#Be_taught_extra_with_Microsoft_Safety\" title=\"Be taught extra with Microsoft Safety\">Be taught extra with Microsoft Safety<\/a><\/li><\/ul><\/nav><\/div>\n<h2 class=\"wp-block-heading\" id=\"secure-and-govern-ai-apps-built-with-the-deepseek-r1-model-on-azure-ai-foundry-and-github\"><span class=\"ez-toc-section\" id=\"Safe_and_govern_AI_apps_constructed_with_the_DeepSeek_R1_mannequin_on_Azure_AI_Foundry_and_GitHub\"><\/span>Safe and govern AI apps constructed with the DeepSeek R1 mannequin on Azure AI Foundry and GitHub\u00a0<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<h3 class=\"wp-block-heading\" id=\"develop-with-trustworthy-ai\"><span class=\"ez-toc-section\" 
id=\"Develop_with_reliable_AI\"><\/span>Develop with reliable AI\u00a0<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Final week, we introduced <a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github\/\" target=\"_blank\" rel=\"noreferrer noopener\">DeepSeek R1\u2019s availability on Azure AI Foundry and GitHub<\/a>, becoming a member of a various portfolio of greater than 1,800 fashions.\u00a0 \u00a0<\/p>\n<p>Prospects as we speak are constructing production-ready AI purposes with Azure AI Foundry, whereas accounting for his or her various safety, security, and privateness necessities. Much like different fashions supplied in Azure AI Foundry, DeepSeek R1 has undergone rigorous crimson teaming and security evaluations, together with automated assessments of mannequin habits and in depth safety opinions to mitigate potential dangers.\u00a0Microsoft\u2019s internet hosting safeguards for AI fashions are designed to maintain buyer information inside Azure\u2019s safe boundaries.\u00a0<\/p>\n<p>With Azure AI Content material Security, built-in content material filtering is out there by default to assist detect and block malicious, dangerous, or ungrounded content material, with opt-out choices for flexibility. Moreover, the security analysis system permits prospects to effectively take a look at their purposes earlier than deployment. 
These safeguards assist Azure AI Foundry present a safe, compliant, and accountable atmosphere for enterprises to confidently construct and deploy AI options.\u202fSee <a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github\/\" target=\"_blank\" rel=\"noreferrer noopener\">Azure AI Foundry and GitHub<\/a> for extra particulars.<\/p>\n<h3 class=\"wp-block-heading\" id=\"start-with-security-posture-management\"><span class=\"ez-toc-section\" id=\"Begin_with_Safety_Posture_Administration\"><\/span>Begin with Safety Posture Administration<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>AI workloads introduce new cyberattack surfaces and vulnerabilities, particularly when builders leverage open-source assets. Subsequently, it\u2019s essential to start out with safety posture administration, to find all AI inventories, corresponding to fashions, orchestrators, grounding information sources, and the direct and oblique dangers round these elements. 
When developers build AI workloads with DeepSeek R1 or other AI models, <a href=\"https:\/\/www.microsoft.com\/security\/business\/cloud-security\/microsoft-defender-cloud\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Microsoft Defender for Cloud<\/strong><\/a><strong>\u2019s AI security posture management capabilities<\/strong> can help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths that could be exploited by bad actors, and get recommendations to proactively strengthen their security posture against cyberthreats.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" alt=\"AI security posture management in Defender for Cloud identifies an attack path to a DeepSeek R1 workload, where an Azure virtual machine is exposed to the Internet.\" class=\"wp-image-137417 webp-format\" srcset=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final-1024x551.webp 1024w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final-300x161.webp 300w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final-768x413.webp 768w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final-1536x826.webp 1536w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final.webp 1907w\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/SPM-Blog-Image-final-1024x551.webp\"\/><figcaption class=\"wp-element-caption\"><em>Figure 1. 
AI security posture management in Defender for Cloud detects an attack path to a DeepSeek R1 workload<\/em>.<\/figcaption><\/figure>\n<p>By mapping out AI workloads and synthesizing security insights such as identity risks, sensitive data, and internet exposure, Defender for Cloud continuously surfaces contextualized security issues and risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure AI resource itself in the Azure portal. This gives developers or workload owners direct access to recommendations and helps them remediate cyberthreats faster.\u00a0<\/p>\n<h3 class=\"wp-block-heading\" id=\"safeguard-deepseek-r1-ai-workloads-with-cyberthreat-protection\"><span class=\"ez-toc-section\" id=\"Safeguard_DeepSeek_R1_AI_workloads_with_cyberthreat_safety\"><\/span>Safeguard DeepSeek R1 AI workloads with cyberthreat protection<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>While a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI also requires active monitoring at runtime. No AI model is exempt from malicious activity and can be vulnerable to prompt injection cyberattacks and other cyberthreats. Monitoring the latest models is critical to ensuring your AI applications are protected.<\/p>\n<p>Integrated with Azure AI Foundry, Defender for Cloud continuously monitors your DeepSeek AI applications for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides your security operations center (SOC) analysts with alerts on active cyberthreats such as jailbreak cyberattacks, credential theft, and sensitive data leaks. For example, when a prompt injection cyberattack occurs, Azure AI Content Safety prompt shields can block it in real time. 
The alert is then sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence,\u00a0helping SOC analysts understand user behaviors with visibility into supporting evidence, such as IP address, model deployment details, and the suspicious user prompts that triggered the alert.\u00a0<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" alt=\"When a prompt injection attack occurs, Azure AI Content Safety prompt shields can detect and block it. The signal is then enriched by Microsoft Threat Intelligence, enabling security teams to conduct holistic investigations into the incident.\" class=\"wp-image-137331 webp-format\" srcset=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture1-1-1024x470.webp 1024w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture1-1-300x138.webp 300w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture1-1-768x352.webp 768w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture1-1.webp 1414w\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture1-1-1024x470.webp\"\/><figcaption class=\"wp-element-caption\"><em>Figure 2. 
Microsoft Defender for Cloud integrates with Azure AI to detect and respond to prompt injection cyberattacks<\/em>.<\/figcaption><\/figure>\n<p>Additionally, these alerts integrate with <a href=\"https:\/\/www.microsoft.com\/security\/business\/siem-and-xdr\/microsoft-defender-xdr\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft Defender XDR<\/a>, allowing security teams to centralize AI workload alerts into correlated incidents to understand the full scope of a cyberattack, including malicious activities related to their generative AI applications.\u00a0<\/p>\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" alt=\"A jailbreak prompt injection attack on an Azure AI model deployment was flagged as an alert in Defender for Cloud. \" class=\"wp-image-137425 webp-format\" style=\"width:724px;height:auto\" srcset=\"\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Jailbreak-Blog-Final-1.webp\"\/><figcaption class=\"wp-element-caption\"><em>Figure 3. A security alert for a prompt injection attack is flagged in Defender for Cloud<\/em><\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\" id=\"secure-and-govern-the-use-of-the-deepseek-app\"><span class=\"ez-toc-section\" id=\"Safe_and_govern_using_the_DeepSeek_app\"><\/span>Secure and govern the use of the DeepSeek app<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In addition to the DeepSeek R1 model, DeepSeek also provides a consumer app hosted on its local servers, where data collection and cybersecurity practices may not align with your organizational requirements, as is often the case with consumer-focused apps. This underscores the risks organizations face if employees and partners introduce unsanctioned AI apps, leading to potential data leaks and policy violations. 
Microsoft Security provides capabilities to discover the use of third-party AI applications in your organization and offers controls for protecting and governing their use.<\/p>\n<h3 class=\"wp-block-heading\" id=\"secure-and-gain-visibility-into-deepseek-app-usage\"><span class=\"ez-toc-section\" id=\"Safe_and_achieve_visibility_into_DeepSeek_app_utilization\"><\/span>Secure and gain visibility into DeepSeek app usage\u00a0<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Microsoft Defender for Cloud Apps provides ready-to-use risk assessments for more than 850 Generative AI apps, and the list of apps is updated continuously as new ones become popular. This means that you can discover the use of these Generative AI apps in your organization, including the DeepSeek app, assess their security, compliance, and legal risks, and set up controls accordingly.\u202fFor example, for high-risk AI apps, security teams can tag them as unsanctioned apps and block users\u2019 access to the apps outright.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"853\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/DeepSeek-Blog-3.gif\" alt=\"Security teams can discover the usage of GenAI applications, assess risk factors, and tag high-risk apps as unsanctioned to block end users from accessing them.\" class=\"wp-image-137397\"\/><figcaption class=\"wp-element-caption\"><em>Figure 4. 
Discover usage and control access to Generative AI applications based on their risk factors in Defender for Cloud Apps<\/em>.<\/figcaption><\/figure>\n<h3 class=\"wp-block-heading\" id=\"comprehensive-data-security\"><span class=\"ez-toc-section\" id=\"Complete_information_safety\"><\/span>Comprehensive data security\u00a0<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>In addition, <a href=\"https:\/\/learn.microsoft.com\/en-us\/purview\/ai-microsoft-purview#data-security-posture-management-for-ai-provides-insights-policies-and-controls-for-ai-apps\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft Purview Data Security Posture Management (DSPM)<\/a> for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. For example, the reports in DSPM for AI can offer insights on the type of sensitive data being pasted into Generative AI consumer apps, including the DeepSeek consumer app, so data security teams can create and fine-tune their data security policies to protect that data and prevent data leaks.\u00a0<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" alt=\"In the report from Microsoft Purview Data Security Posture Management for AI, security teams can gain insights into sensitive data in user prompts and unethical use in AI interactions. 
These insights can be broken down by apps and departments.\" class=\"wp-image-137328 webp-format\" srcset=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture7-1024x663.webp 1024w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture7-300x194.webp 300w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture7-768x498.webp 768w, https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture7.webp 1102w\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/Picture7-1024x663.webp\"\/><figcaption class=\"wp-element-caption\"><em>Figure 5. Microsoft Purview Data Security Posture Management (DSPM) for AI enables security teams to gain visibility into data risks and get recommended actions to address them.<\/em> <\/figcaption><\/figure>\n<h3 class=\"wp-block-heading\" id=\"prevent-sensitive-data-leaks-and-exfiltration\"><span class=\"ez-toc-section\" id=\"Stop_delicate_information_leaks_and_exfiltration\"><\/span>Prevent sensitive data leaks and exfiltration\u00a0\u00a0<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>The leakage of organizational data is among the top concerns for security leaders regarding AI usage, highlighting the importance of implementing controls that prevent users from sharing sensitive information with external third-party AI applications.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/security\/business\/information-protection\/microsoft-purview-data-loss-prevention\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft Purview Data Loss Prevention (DLP)<\/a> enables you to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers. 
Your DLP policy can also adapt to insider risk levels, applying stronger restrictions to users who are categorized as \u2018elevated risk\u2019 and less stringent restrictions for those categorized as \u2018low risk\u2019. For example, elevated-risk users are restricted from pasting sensitive data into AI applications, while low-risk users can continue their productivity uninterrupted. By leveraging these capabilities, you can safeguard your sensitive data from the potential risks of using external third-party AI applications. Security admins can then investigate these data security risks and\u00a0perform insider risk investigations within Purview. These\u00a0same data security risks are surfaced in Defender XDR for holistic investigations.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"853\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/wp-content\/uploads\/2025\/02\/DeepSeek-DLP-2.gif\" alt=\" When a user attempts to copy and paste sensitive data into the DeepSeek consumer AI application, they are blocked by the endpoint DLP policy. \" class=\"wp-image-137396\"\/><figcaption class=\"wp-element-caption\"><em>Figure 6. 
<em>Data Loss Prevention policy can block sensitive data from being pasted into third-party AI applications in supported browsers.<\/em><\/em><\/figcaption><\/figure>\n<p>This is a quick overview of some of the capabilities that help you secure and govern AI apps that you build on Azure AI Foundry and GitHub,\u00a0as well as AI apps that users in your organization use.\u00a0We hope you find this helpful!<\/p>\n<p>To learn more and get started with securing your AI apps, check out the additional resources below:\u00a0\u00a0<\/p>\n<h2 class=\"wp-block-heading\" id=\"learn-more-with-microsoft-security\"><span class=\"ez-toc-section\" id=\"Be_taught_extra_with_Microsoft_Safety\"><\/span>Learn more with Microsoft Security<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>To learn more about Microsoft Security solutions, visit our\u202f<a href=\"https:\/\/www.microsoft.com\/en-us\/security\/business\" target=\"_blank\" rel=\"noreferrer noopener\">website<\/a>.\u202fBookmark the\u202f<a href=\"https:\/\/www.microsoft.com\/security\/blog\/\" target=\"_blank\" rel=\"noreferrer noopener\">Security blog<\/a>\u202fto keep up with our expert coverage on security matters. 
Also, follow us on LinkedIn (<a href=\"https:\/\/www.linkedin.com\/showcase\/microsoft-security\/\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft Security<\/a>) and X (<a href=\"https:\/\/twitter.com\/@MSFTSecurity\" target=\"_blank\" rel=\"noreferrer noopener\">@MSFTSecurity<\/a>)\u202ffor the latest news and updates on cybersecurity.\u00a0<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>A successful AI transformation starts with a strong security foundation. With a rapid increase in AI development and adoption, organizations need visibility into their growing AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications that you build and use. 
These capabilities will also [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2678,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":{"0":"post-2676","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-cloud-computing"},"_links":{"self":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/2676","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2676"}],"version-history":[{"count":1,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/2676\/revisions"}],"predecessor-version":[{"id":2677,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/2676\/revisions\/2677"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/media\/2678"}],"wp:attachment":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2676"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2676"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2676"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}