Copilot AI Hub and Data Loss Prevention

Reading time: 3 minutes

Microsoft Purview AI Hub

During Ignite 2023, Microsoft announced many different incarnations of Copilot. Perhaps you missed this, but that would have been hard to do. Alongside these new functions, Microsoft also introduced information on several security and compliance features. If you want to read up on these, here is the relevant blog:

https://techcommunity.microsoft.com/t5/microsoft-security-copilot-blog/securing-data-in-an-ai-first-world-with-microsoft-purview/ba-p/3981279

Two important aspects of this blog (in my opinion) were the introduction of the AI Hub and the addition of generative AI cloud platforms to Microsoft Defender for Cloud Apps. The latter will allow us to create policies to govern access to these platforms.

But there is another way to prevent users from copying and pasting sensitive information into these types of platforms: Endpoint DLP.

AI Hub

And this is also where the new AI Hub comes into play. This hub correlates all types of information about the use of Copilot and sensitive data within your environment. More information can be found in the blog article, and I will write more on this in the future.

But when looking at the public preview of the AI Hub, I noticed an option called Fortify your data security for AI. Well, that’s interesting πŸ™‚

I wanted to get to the bottom of this enticing promise. So what does this option do? First off, it’s part of the AI Hub Policy section. Secondly, it combines Endpoint DLP, Adaptive Protection, and Information Protection to improve your data security.

Is this new?

No. This is not new. The policy does not introduce any new functionality or features, but it does create some settings that you can use directly, provided you have the required E5 licenses, of course.

The setup includes an Endpoint DLP rule called Microsoft AI hub – Adaptive Protection in AI assistants, and this is interesting.

The rule includes all built-in sensitive information types in a 1 to Any configuration. You can modify this, if needed.
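To illustrate the 1-to-Any idea, here is a minimal Python sketch. This is not Purview’s actual detection engine; the patterns, type names, and threshold below are illustrative assumptions. The point is simply that the rule fires as soon as at least one instance of any configured sensitive information type is found:

```python
import re

# Illustrative stand-ins for built-in sensitive information types (SITs).
# Real Purview SITs use far more robust patterns, checksums, and confidence levels.
SENSITIVE_INFO_TYPES = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def matches_one_to_any(content: str, min_count: int = 1) -> list[str]:
    """Return the SIT names that match; '1 to Any' means the rule
    triggers when at least `min_count` instance of ANY type is found."""
    return [
        name for name, pattern in SENSITIVE_INFO_TYPES.items()
        if len(pattern.findall(content)) >= min_count
    ]

# A single credit-card-like number is enough to trigger the rule.
hits = matches_one_to_any("Card: 4111 1111 1111 1111")
```

Raising `min_count` mirrors tightening the instance count on a sensitive information type in the rule configuration.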

Next, the rule will Audit (although you can change this) any activity involving a restricted cloud service domain. The sensitive service domain group for this rule is called LLM Sites. Makes sense, right? πŸ™‚

Microsoft has already populated this group with 21 (at the time of writing) generative AI sites. When someone copies and pastes sensitive information to one of these sites, the activity is audited. And when connected to Insider Risk Management, these signals can be used in an Adaptive Protection policy.
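The flow above boils down to a simple decision: if the destination is in the LLM Sites domain group and the pasted content contains sensitive information, record the configured action. A hedged Python sketch follows; the domain entries and function name are my own illustrative assumptions, not the actual contents of the Microsoft-maintained group:

```python
# Illustrative subset of an "LLM Sites" style domain group; the real group
# ships with Microsoft-maintained entries and is updated over time.
LLM_SITES = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def dlp_outcome(destination_domain: str, contains_sensitive_info: bool,
                mode: str = "Audit") -> str:
    """Return the Endpoint DLP outcome for a paste to a cloud service domain.
    `mode` mirrors the rule's configurable action (Audit, Block, etc.)."""
    if destination_domain in LLM_SITES and contains_sensitive_info:
        return mode   # e.g. emit an audit signal for Insider Risk Management
    return "Allow"    # unmonitored domain or no sensitive content

result = dlp_outcome("claude.ai", contains_sensitive_info=True)
```

Switching `mode` to a blocking action is what an Adaptive Protection policy effectively does for riskier users.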

So, do you need the AI Hub for this? No, but it does make it easier and quicker to create such policies. When we also include Microsoft Defender for Cloud Apps, we can set specific policies to govern access to AI-type sites and services, based on real-time content inspection.
