Microsoft Purview with Unified SecOps – Powerful Combination?

The operating model is moving from a traditional SIEM that keeps everything hot to a blended approach. Microsoft Sentinel still runs your rules and workbooks in the analytics tier, while the Sentinel data lake gives you inexpensive long term storage and lake scale exploration. Connecting Sentinel to the Microsoft Defender portal unifies incidents, hunting and table management in one place. Microsoft has also published a retirement date for the Azure portal Sentinel experience and is steering new customers to the Defender portal now, which is a strong signal that the unified portal is the control plane to anchor on.


From SIEM to Sentinel data lake (Preview)

Think about detections as a two stage system. Hot analytics tables fuel near real time rules. The lake holds bulky or long lookback logs. You query the lake to find high value signals and then promote only the distilled results back into small analytics tables that rules can evaluate quickly. That pattern keeps costs predictable and performance snappy. Microsoft calls the promotion mechanism KQL jobs and search jobs. Summary rules are another built in way to aggregate noisy raw data into lean analytics tables.

Practical example

You land months of proxy or firewall events in the data lake. Create a scheduled KQL job that scans the lake each hour for suspect destinations and promotes only hits into the analytics tier.

// Lake query for detecting potentially harmful egress
// The threat feed URL is a placeholder; point externaldata at your own blocklist
CommonSecurityLog
| where TimeGenerated > ago(1h)
| where DeviceVendor =~ "Palo Alto Networks" or DeviceVendor =~ "Fortinet"
| where DestinationIP in ((externaldata(bad: string) [@"https://example-threat-feed/bad_ips.csv"] with (format="csv")))
| project TimeGenerated, SourceIP, DestinationIP, DestinationPort, DeviceAction

In the job settings choose a destination analytics workspace and a table name such as EgressHits_CL. Your analytics rule then runs quickly over EgressHits_CL without scanning the entire raw lake. The job based promotion workflow and the tradeoffs between the tiers are documented, and they are designed for exactly this flow.
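As a minimal sketch, the follow up rule could look like the query below. The table name EgressHits_CL, its columns and the threshold are assumptions carried over from the example above, not fixed names.

// Hypothetical scheduled rule over the promoted table, not the raw lake
EgressHits_CL
| where TimeGenerated > ago(1h)
| summarize Hits=count(), Destinations=make_set(DestinationIP, 10) by SourceIP
| where Hits >= 5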


Unified RBAC in practice

In the Defender portal you manage access with unified role based access control. You still respect Azure RBAC on the underlying Log Analytics workspaces, but the day to day permissions analysts actually work with live in the unified model. In practice most teams settle on three role groupings. Readers investigate. Responders take incident actions and run hunting. Administrators manage data sources, jobs and rules. Microsoft’s guidance covers prerequisites, migration of older roles and how Sentinel specific actions map into the unified set.

Real world example

Create a unified role that grants Data manage for Microsoft Sentinel collections and Incident responder for Defender XDR incidents. Assign it to the SOC on call group. Keep Log Analytics Contributor scoped only to the workspaces that actually need table or DCR changes. Review access with Privileged Identity Management for time bound elevation. That keeps day to day hunting fast in the portal without oversharing write permissions.


Onboarding Sentinel to the Defender portal

The onboarding wizard is short. In the Defender portal open System, then Settings, then Microsoft Sentinel, choose Connect a workspace and pick a primary workspace. After connection, incidents, Advanced hunting and table management show up in one place. New tenants are already being redirected to the Defender portal and Microsoft has dated the full move to that portal, so treat the Defender portal as the home for SOC processes.

A thirty day rollout that keeps momentum

Week one is about connection and confidence. Validate that incidents synchronize and that Advanced hunting can see your Sentinel workspace content. Week two inventories connectors and picks which tables belong in analytics and which move to the lake. Week three creates the first KQL job that promotes a small curated table for rules. Week four updates runbooks to the new navigation and tests an end to end scenario that crosses products. Microsoft’s integration doc details the incident sync and the no charge ingestion for alerts and incidents.
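For the week one validation, a quick sanity check like the following, run from Advanced hunting, confirms that Sentinel incidents are flowing into the unified queue. The one day window is an arbitrary choice for the check.

// Confirm incident sync after connecting the workspace
SecurityIncident
| where TimeGenerated > ago(1d)
| summarize Incidents=count() by ProviderName, Status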


Incident unification and hunting across the lake and Defender XDR

Once the workspace is connected, analysts can query Defender and Sentinel data together with Advanced hunting. Defender incidents synchronize with Sentinel and changes in status and owner flow both ways. That creates a single queue without taking away the ability to run deep KQL over your own Sentinel tables.

Scenario 1

A user downloads a large number of sensitive files from SharePoint, and then a Defender for Endpoint alert flags a suspicious browser extension. You correlate the two quickly by joining the Defender alert with the Microsoft 365 audit events you brought into Sentinel through the Office 365 data connector, which writes to the OfficeActivity table.

// Sentinel workspace query
// Flag users with heavy SharePoint or OneDrive file activity, then join to Defender alerts on the same user
let exfil_ops = OfficeActivity
| where TimeGenerated between (ago(24h) .. now())
| where OfficeWorkload in ("SharePoint","OneDrive")
| where Operation in ("FileDownloaded","FilePreviewed")
| summarize Downloads=count(), LastSeen=max(TimeGenerated) by UserId
| where Downloads > 500;
exfil_ops
| join kind=inner (
    SecurityAlert
    | where TimeGenerated between (ago(24h) .. now())
    | project AlertTime=TimeGenerated, CompromisedEntity, Title
) on $left.UserId == $right.CompromisedEntity
| project UserId, Downloads, LastSeen, Title, AlertTime

The OfficeActivity schema and the built in connector are both documented, so this flow rests on supported building blocks.

Scenario 2

Your lookback needs six months of DNS and proxy to see slow data staging. Query the data lake, then promote only the matches to a lean analytics table for triage and an alert.

// Lake investigation query
CommonSecurityLog
| where TimeGenerated > ago(180d)
| where DestinationDnsDomain endswith ".examplecloud.store"
| summarize FirstSeen=min(TimeGenerated), LastSeen=max(TimeGenerated), Hits=count(), UniqueSrc=dcount(SourceIP) by DestinationDnsDomain

If results show real risk, turn the query into a scheduled job that writes the summary into an analytics table, then point an analytics rule at the summary table.
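A sketch of that follow up rule, assuming the job writes to a table named DnsStagingSummary_CL with the same columns as the summarize above (both the table name and the thresholds are illustrative):

// Hypothetical rule over the promoted summary table
DnsStagingSummary_CL
| where TimeGenerated > ago(1h)
| where Hits > 100 or UniqueSrc > 5
| project DestinationDnsDomain, FirstSeen, LastSeen, Hits, UniqueSrc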


Cost aware retention for the data lake era

A practical split looks like this. Keep identity, endpoint, email security and control plane logs in analytics for rules and workbooks. Move high volume network and application telemetry to the data lake and query it on demand. When you need recurring detection on lake data, schedule a KQL job or a summary rule to publish just the few fields you need into an analytics table. Microsoft’s compare and manage guidance explains the behaviors and limits of each tier so you avoid surprises when you switch tables between tiers.
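As an illustration of the summary rule idea, a query like this could roll noisy flow logs into an hourly aggregate before publishing to an analytics table. The column choices are assumptions; adjust them to what your CEF source actually populates.

// Sketch of a summary rule query: hourly egress rollup per source and destination
CommonSecurityLog
| summarize Flows=count(), BytesOut=sum(SentBytes) by SourceIP, DestinationIP, bin(TimeGenerated, 1h)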

A small playbook for retention decisions

Start by writing down the maximum useful detection lookback for each source. Keep only that much in analytics. Push the rest to the lake. For example keep thirty days of identity and endpoint in analytics for rule speed, three to twelve months of firewall in the lake for forensics, and promote only the exceptions. Revisit monthly with cost and query stats to ensure your jobs are paying for themselves. The Sentinel data lake overview and what is new pages are good north stars when you tune this mix.
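The monthly review can lean on the built in Usage table to see which tables drive billable ingestion, which makes the analytics versus lake split easy to sanity check:

// Billable ingestion by table over the last thirty days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize TotalGB=round(sum(Quantity) / 1024, 2) by DataType
| order by TotalGB desc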


Where Microsoft Purview logging and auditing fits

Purview Audit is your record of user and admin activity across Microsoft 365. The default audit retention is now one hundred eighty days for Audit Standard, while Audit Premium lets you retain up to ten years through explicit audit log retention policies. Those policies are defined in the Purview portal and can target specific services, actions or users.

The easiest way to bring those records into your SOC view is the Microsoft 365 data connector in Sentinel which writes Microsoft 365 audit events into the OfficeActivity table. That table becomes your pivot for DLP actions, label changes, file access and SharePoint or Exchange admin activity in detections and investigations.
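A quick way to confirm the connector is populating the table is to count recent events by workload. This is a health check rather than a detection:

// Verify that audit events are arriving in OfficeActivity
OfficeActivity
| where TimeGenerated > ago(1d)
| summarize Events=count() by OfficeWorkload, RecordType
| order by Events desc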

Purview audit has been expanded to include Copilot interactions. If your governance program cares about who grounded a prompt with what data, make sure your audit retention policy includes the Copilot schema. You can then hunt for Copilot related activity either directly in Purview or after ingestion into Sentinel through the connector.

// Microsoft Copilot audit investigation in Sentinel
// Workload and operation names for Copilot records vary; validate the values against your tenant's audit schema
OfficeActivity
| where TimeGenerated > ago(7d)
| where OfficeWorkload in ("Exchange","SharePoint","OneDrive","MicrosoftCopilot")
| where Operation has_any ("Copilot","Prompt","Grounding","Retrieve")
| project TimeGenerated, UserId, OfficeWorkload, Operation, OfficeObjectId, ClientIP

Audit data also fills context during incident review. When a Defender incident involves exfiltration, add a Purview lens by checking for rapid changes in label or DLP policy hits for the same user and time window.

// Adding Purview label and DLP flavor to an investigation
// Case insensitive matches guard against casing differences in audit operation names
let windowStart = ago(2h);
let windowEnd = now();
let suspectUser = "user@contoso.com";
OfficeActivity
| where TimeGenerated between (windowStart .. windowEnd)
| where UserId =~ suspectUser
| where Operation in~ ("SensitivityLabelApplied","DlpRuleMatch","DlpRuleUndo")
| summarize Events=count(), First=min(TimeGenerated), Last=max(TimeGenerated) by Operation

The retention and search behavior of the audit log and the policy model for long term retention are covered in the official docs. Treat those policies as part of your cost plan, because long Premium retention carries licensing and storage cost but often replaces brittle custom exports.


Pulling it all together

On day one connect Sentinel to the Defender portal and verify that incidents sync and that Advanced hunting can query your workspace content. During the first month shift bulky sources to the data lake and stand up one KQL job that promotes a small summary into analytics for an alert. At the same time confirm that Purview Audit is enabled, set a retention policy that meets your regulatory profile and verify that the OfficeActivity table is filling in Sentinel. By the end of the month your analysts work in one portal, detections run fast on lean tables, the lake holds the rest for lookbacks, and Purview audit gives you the governance grade timeline when you need it.
