Reading time: 10 minutes

At the time of writing, Ignite 2025 was a few weeks ago, so everyone has had some time to digest all the news. Let’s look at some of the announcements.
Microsoft Ignite 2025, our flagship conference, ended just a couple of weeks ago. And to be fair, it’s nice to see so much attention being given to data security & governance. Having been active in that line of work (and the community) for all these years, I’m glad these subjects are getting the credit they deserve, most of all because they have such a positive effect on an organization’s maturity level.
This maturity level for data security & governance affects organizational resilience, compliance posture and business agility. Stronger controls for data classification, lifecycle management, and insider risk mitigation reduce exposure to breaches and compliance violations. They also enable better and more secure collaboration, as well as AI-driven (and accelerated) innovation.
As Ignite 2025 progressed, the underlying theme or vision from Microsoft became clear: as (Gen)AI becomes integral to work, security and governance are crucial components for success. It’s a theme that is always top of mind during our customer engagements in the Microsoft Innovation Hub: security and trust go hand in hand with innovation.
For this reason, Microsoft’s AI solutions incorporate robust security, compliance, and governance features as standard. These measures enable customers to leverage the capabilities of AI to enhance the employee experience, reinvent customer engagement, reshape business processes, and bend the curve on innovation. The next wave of business innovation will depend on combining AI capabilities with strong data protection, governance, and responsible practices, since innovation can’t move forward without proper safeguards.
But many of us know this already and have been advocating for these subjects for many years 😊 So let’s get back to Ignite 2025 and the information from the sessions.
Microsoft Ignite 2025
The concepts of Work IQ, Fabric IQ and Foundry IQ are impressive. I won’t be going into much depth on them here, but the ability to make your data even more valuable is pure gold for AI applications. I also won’t cover every announcement from Ignite 2025; for that, please head over to the well-known Book of News. I do want to focus on some important components like Agent 365 and Microsoft Purview.
Let’s get started
I have noticed that some information tends to get lost among the bigger announcements. So here are a few smaller pieces of news that you might have missed.
Recall
Microsoft Copilot+ PCs now have the ability to use Recall. This function creates snapshots (screenshots) that can be used to quickly retrace your steps and find your way back to where you were previously. When Recall was first announced, there was (rightfully) feedback on data security and privacy. As these topics are paramount to Microsoft, the Recall function has many guardrails built in.

During Ignite an additional layer of protection was introduced: Data Loss Prevention (DLP) rules for safeguarding Recall on Copilot+ endpoints. These are managed centrally in Microsoft Purview and allow even more granular data security for the snapshots. For example: disabling snapshots when company-specific sensitive information is shown on the screen.

Microsoft Information Protection
An important change is coming to my personal favorite platform 😊 It’s related to the encryption function of sensitivity labels. Let’s say you have a label named Internal\FTE Only. This label enforces specific permissions, and files are encrypted when the label is applied.
When you label a file that is not an Office or PDF file, the file extension changes. For example, a .png file becomes .ppng. If that is not possible, the extension is changed to .pfile.

These file types (.ppng, .pfile, and so on) can only be opened by using the Purview Information Protection client on the endpoint. Native applications are not able to open these files directly. And this is a big blocker for productivity.
This will change with a concept known as advanced label-based protection. When you label a non-Office, non-PDF file with a label that has encryption assigned to it, the encryption is not enforced on the endpoint. The sensitivity label is applied, but the file extension does not change, which enables applications to still open the file natively.
Because of the integration with Purview Endpoint DLP, when the file moves off the endpoint (say, to a removable disk), the protected format (and encryption) is applied again, keeping the data secure. And because the sensitivity label is applied, we can also use other Purview components that rely on these labels as part of their functionality.
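To make this a bit more concrete, here is a minimal sketch of how a label such as Internal\FTE Only with encryption could be created in Security & Compliance PowerShell. The group address, usage rights and policy names are placeholder assumptions, and the advanced label-based protection behavior itself is handled by the clients and Endpoint DLP, not by these cmdlets.

# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# Create the "FTE Only" label with encryption; the group address and rights are placeholders
New-Label -Name "FTE-Only" `
    -DisplayName "FTE Only" `
    -Tooltip "Internal content, for employees (FTE) only" `
    -ContentType "File, Email" `
    -EncryptionEnabled $true `
    -EncryptionProtectionType "Template" `
    -EncryptionRightsDefinitions "allfte@contoso.com:VIEW,VIEWRIGHTSDATA,DOCEDIT,EDIT,PRINT,EXTRACT,REPLY,REPLYALL,FORWARD,OBJMODEL"

# Publish the label so users (and auto-labeling) can apply it
New-LabelPolicy -Name "Internal labels" -Labels "FTE-Only" -ExchangeLocation All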

IRM for Agents
I love the concept of agents working with their own identity in Entra ID. This way, we can treat these “digital co-workers” the same way we treat our non-digital co-workers: using Zero Trust concepts. More on this later in this article.
And because agents now have (or get) that identity, we can use Purview functions as well. Insider Risk Management (IRM) is one of these. IRM helps to spot risky behavior by both people and AI agents early on, so companies can act fast and set rules as needed. IRM covers tools like Microsoft 365 Copilot, Copilot Studio, and Foundry agents, helping handle new risks like unsupervised access and prompt injection.
More extensive information
In the remainder of this post I want to highlight how Microsoft is infusing security, compliance, and responsible AI practices into its latest AI tools and cloud services. One thing to keep in mind: AI needs to be secure, reliable and governable. As Microsoft, we offer a suite of solutions for any scenario that is multi-cloud and enterprise-grade.
However, it still requires the organization to determine the possible risks, levels of sensitivity and data ownership in order to use these solutions to their full potential. Having said that, here are my key takeaways, and why I feel they matter to our customers and partners.

Agent 365

During Ignite we introduced Agent 365, our platform for managing AI agents in an organization. It gives every agent an identity, tracks them in a central registry, and enforces security and compliance, much like how companies manage employees. It helps prevent “agent sprawl” and ensures agents work safely with your data and apps.

Agent 365 is accessible from the Microsoft 365 admin center. This makes sense, as we are treating the agents as co-workers that have an identity in Entra ID. With that identity in place, we can discover active agents, review what data they touch, and apply access policies.
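As a rough illustration of the identity side of this story, the sketch below lists workload identities in Entra ID with Microsoft Graph PowerShell. How agent identities are actually tagged or surfaced for Agent 365 is not covered here, so the filter on Tags is purely an assumption; the supported experience is the admin center itself.

# Minimal sketch: enumerate workload identities in Entra ID via Microsoft Graph PowerShell
Connect-MgGraph -Scopes "Application.Read.All"

# The Tags filter is an assumption for illustration only
Get-MgServicePrincipal -All |
    Where-Object { $_.Tags -match "Agent" } |
    Select-Object DisplayName, AppId, ServicePrincipalType, AccountEnabled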
Be aware that Agent 365 does not build or train AI models, does not create workflows or business logic and does not replace human oversight.
Data Security Posture Management
Microsoft Purview keeps getting more powerful with additional functions, but some functions are also being consolidated. This is very visible in the new Purview Data Security Posture Management (DSPM) platform. Purview already provided DSPM functionality as well as the separate Data Security Posture Management for AI (DSPM for AI) experience.
Both the (classic) DSPM and DSPM for AI will be available until June 2026, when the new Purview DSPM experience becomes the centralized solution. For this blog, I’ll focus on this new experience.
One of the functions of DSPM is providing an AI (agent) observability view within your environment. AI observability means keeping an eye on AI systems—like models, LLMs, and agents—to see how they’re performing and make sure they’re safe to use.
Whilst Microsoft Foundry (did you notice the name change from Azure AI Foundry, by the way?) has AI observability built in, Microsoft Purview did not have a dedicated section for this. DSPM will now show you an overview of all AI agents (regardless of where they are hosted), including specific risks and sensitive information.

Data Security Investigation
This is not a new function, as it was introduced into Microsoft Purview some time ago. It is used to help security and compliance teams rapidly find and investigate data leaks or inappropriate data use. The function is based on AI, and now offers a Data Security Posture agent, allowing teams to use natural language to find data security issues.
Data loss prevention
Even before the start of the AI wave, reducing data loss and combating insider risks was of paramount importance. AI and generative AI have only boosted this importance. Therefore, we’ve extended our DLP capabilities to more AI scenarios. For example, if someone tries to use sensitive information in a Copilot or chat agent prompt, Purview DLP will detect it and refuse to provide a response, preventing the information from leaking.
Our users are very creative. And one innovative way to circumvent DLP rules and Information Protection policies was to make use of the time it takes the system to scan for sensitive information. Let me explain.
The sensitivity of data is paramount for many functions in Microsoft Purview. We use it to auto-label documents and emails, DLP rules can be configured to act on it, and IRM can use it within risk policies. So it’s a very important part of the overall solution.
But what if I were to create or open a blank document, type in some sensitive information and save that document to removable media? Purview functions would not be able to act on this, because Microsoft Purview has not had the time or ability to classify that newly created document. With the new functionality, Microsoft Purview Endpoint DLP will inspect the in-memory content before the file is saved. This also works for printing such a document.

Microsoft Purview Endpoint DLP already had the option to scan documents that had not been classified. This functionality is called Just-In-Time (JIT) protection. But it lacked the ability to block the action I just described. At Ignite it was announced that Endpoint DLP can now block these actions even if the document is not saved. Users will see a notification that the information is being scanned before the action can be completed.
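For context, this is roughly what an endpoint DLP policy that blocks this kind of exfiltration looks like in Security & Compliance PowerShell. The policy and rule names and the sensitive information type are placeholders, the restriction setting names should be verified against the current documentation, and the JIT/in-memory scanning itself is platform behavior rather than something you switch on per rule.

# Minimal sketch: endpoint DLP policy that blocks sensitive content going to removable media or print
Connect-IPPSSession

New-DlpCompliancePolicy -Name "Endpoint - block sensitive exfiltration" `
    -EndpointDlpLocation All `
    -Mode Enable

New-DlpComplianceRule -Policy "Endpoint - block sensitive exfiltration" `
    -Name "Block copy to removable media and print" `
    -ContentContainsSensitiveInformation @{ Name = "Credit Card Number" } `
    -EndpointDlpRestrictions @( @{ Setting = "RemovableMedia"; Value = "Block" },
                                @{ Setting = "Print"; Value = "Block" } )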

SharePoint Online
On a side note: in SharePoint Online we can use a similar approach, and this option has been around for some time. By using this SharePoint Online PowerShell cmdlet:
Set-SPOTenant -MarkNewFilesSensitiveByDefault BlockExternalSharing
we basically tell SharePoint Online to treat every new file as “unscanned / unsafe” until DLP finishes scanning it. While a file is in this state, external users cannot access it. This is relevant for the same reason as the new JIT options: in SharePoint Online (and OneDrive), files need to be scanned when they are created or added, and this takes a bit of time. If you do not want to risk sensitive information being leaked because of this, you will need to activate this setting and inform your users.
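As a minimal sketch of the full flow (the admin URL below is a placeholder tenant):

# Connect to the SharePoint Online admin endpoint (contoso is a placeholder tenant)
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Block external access to new files until DLP has finished scanning them
Set-SPOTenant -MarkNewFilesSensitiveByDefault BlockExternalSharing

# Verify the current value
Get-SPOTenant | Select-Object MarkNewFilesSensitiveByDefault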
DLP for Fabric
On the structured data side, Fabric now fully supports Microsoft Purview data loss prevention policies. You can detect sensitive information in supported Fabric item types (e.g., lakehouse files, datasets, notebooks, pipelines) and trigger policy tips, alerts, email notifications, and access restrictions when violations occur.
Support for sensitivity labels and sensitive information types as conditions is also included. The Fabric Data Agent inherits and respects permissions & DLP settings when reasoning over enterprise data. I will probably do another article on the Fabric Data Agent and Purview in the near future, or you can see this earlier article.
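For illustration, here is a minimal sketch of such a policy in Security & Compliance PowerShell. I’m assuming here that the existing Power BI DLP location parameter is also how the new Fabric item coverage is scoped; the names are placeholders, so check the current documentation before relying on this.

# Minimal sketch: DLP policy scoped to Power BI / Fabric workspaces, in test mode
Connect-IPPSSession

New-DlpCompliancePolicy -Name "Fabric - detect sensitive data" `
    -PowerBIDlpLocation All `
    -Mode TestWithNotifications

New-DlpComplianceRule -Policy "Fabric - detect sensitive data" `
    -Name "Detect credit card numbers" `
    -ContentContainsSensitiveInformation @{ Name = "Credit Card Number" }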

Network data security
When we need to protect our sensitive information on unmanaged devices (where Endpoint DLP or the Purview add-ons are not present), we already have several options, including Edge for Business, DSPM for AI (now DSPM) and Defender for Cloud Apps. But sometimes we need to add yet another layer of security.
For this, we introduced network data security as a preview not long ago. This option allows you to integrate secure access service edge (SASE) solutions with Microsoft Purview, enabling classification of HTTP(S) traffic sent from an endpoint device to websites, cloud apps, and generative AI apps. The supported SASE solutions have now been extended to include Microsoft Entra Global Secure Access (GSA).

Security Copilot in E5
One of the bigger announcements during Ignite 2025 was the inclusion of Microsoft Security Copilot in the Microsoft 365 E5 license suite. This gives security teams built-in AI help for threat analysis and compliance. Security Copilot comes with new specialized AI agents that are embedded into tools like Defender, Entra ID, Intune, and Purview. These AI agents can auto-triage alerts, hunt for threats, assist with access reviews, and even remediate insider risks across your environment.
In the end
Microsoft Ignite 2025 emphasized the critical role of security, compliance, and governance in integrating AI into business, highlighting new tools like Agent 365 for AI agent management, enhanced Microsoft Purview features for data governance and AI observability, expanded Data Loss Prevention capabilities, and the inclusion of Microsoft Security Copilot in the Microsoft 365 E5 suite to support secure and responsible AI adoption.
This shows the commitment to security & governance in the era of AI. Being able to protect data in multi-cloud scenarios, across your entire data estate, and from less complex to highly complex AI scenarios is one of the strongest features of Microsoft Purview.
If you want to learn more, then follow these links: