MMS 2025 According to ChatGPT
—What would happen if you let an LLM loose on 200+ conference sessions?
TL;DR
Hand ChatGPT the entire MMS 2025 SharePoint dump plus the “Deep Research Agent” prompt, and you’d get one unified, opinionated training course instead of 237 overlapping slide decks. Expect razor-focused modules on Intune, Entra, Defender, PowerShell, Microsoft Graph, APIs, DCR, Azure, AI/Copilot, Microsoft 365, and autonomous agents—complete with share-ready slides, speaker notes, and exec-level takeaways. Read on for the play-by-play.
The Final Course—Module Sneak Peek
Modern Device Management (Intune, Windows 365, AVD)
- Slide: Intune Best Practices for 2025 – Intune has evolved with cloud-first management, requiring new practices
- Slide: Application Deployment & Autopilot – Ensure app delivery is robust and Autopilot deployments are smooth
- Slide: Compliance & Configuration Policies – Drive security and stability through proper policy use
- Slide: Updates and Patching – Keep devices evergreen with modern tools
- Slide: Cloud PC & VDI Integration – Extend device management to Cloud PCs and virtual desktops
Identity and Access Management (Microsoft Entra ID & Zero Trust)
- Slide: Beyond Passwords – Modern Auth in 2025 – Passwords alone are outdated; modern authentication is essential
- Slide: Conditional Access & Zero Trust – Enforce who and what can access resources, based on trust signals
- Slide: Privileged Access and Administration – Secure the keys to the kingdom with least privilege and isolation
- Slide: Entra Innovations (2025) – New Entra ID services enhancing secure access
Endpoint Security and Zero Trust Protection
- Slide: Microsoft Defender – More Than Just AV – The Defender suite provides XDR (extended detection & response) across endpoints
- Slide: Hardening Endpoints – Best Practices – Apply layered defenses on each device
- Slide: Protecting Identities on Devices – Bridge endpoint and identity security for Zero Trust
- Slide: Security Operations & Response – Use tools and policies to respond quickly
Automation, Scripting, and Integration (PowerShell, Graph API, DevOps)
- Slide: When to Use Console vs Graph/API – Choosing the right tool for management tasks
- Slide: PowerShell + Graph – Tactics and Tools – Using PowerShell to automate Microsoft Graph tasks
- Slide: Beyond Scripts – Workflow Automation – Integrating Azure services for advanced automation
- Slide: Real-World Automation Wins – Examples of scripting and integration paying off
Cloud Monitoring and Analytics (DCR, Azure Monitor, Sentinel)
- Slide: Modernizing Log Collection (AMA & DCR) – Move to Azure Monitor Agent and Data Collection Rules
- Slide: Cost Optimization in Log Ingestion – Strategies to control and reduce Sentinel/Azure Monitor costs
- Slide: Alerting and Analytics – Set up actionable alerts and utilize baseline analytics
- Slide: From Monitoring to Action (DevOps mindset) – Use monitoring data to drive continuous improvement
AI and Copilot for IT Operations
- Slide: The Rise of AI in IT Management – AI is transforming IT work by providing intelligent assistance
- Slide: Copilot in Intune – Practical Uses – Intune’s Copilot (preview) can assist with device management tasks
- Slide: Security Copilot – AI for SecOps – Microsoft Security Copilot can augment our security operations center
- Slide: Preparing for AI-Driven IT – Upskill and adjust processes to fully exploit AI tools
Optional Executive Summary
Why This Beats Scrolling Through 237 PPTXs
- Time-to-value: One curated course > weeks of hunting and pecking through the decks.
- Consistency: Standardized diagrams and naming conventions (no more “Intune / MEM / Endpoint Manager” confusion).
Final Thoughts and Results
The result? Of all the presentations provided, only 16 were used in the research to produce the following course.
Ready to feed the bot? 🔥
Modern Device Management (Intune, Windows 365, AVD)
Slide: Intune Best Practices for 2025 – Intune has evolved with cloud-first management, requiring new practices:
- Organize with Entra ID groups: Use Azure AD (Entra ID) groups (dynamic where suitable) for targeting instead of on-prem collections. Develop clear naming conventions for policies, profiles, and groups to avoid confusion.
- User vs Device groups: Assign user-targeted configurations (settings follow the user across devices) to user groups, and device-centric configs to device groups. Device group assignments ensure settings “stick” to the device (great for shared or pre-provisioned devices).
- Leverage Intune Filters: Dynamic groups can be slow to update membership; Intune Filters can instantly refine assignment by device attributes, improving targeting speed and reducing reliance on complex group rules.
- Delegate with RBAC: Implement Intune RBAC roles (e.g. Help Desk Operator, App Manager) and custom scopes (e.g. by region or OS) to safely distribute admin tasks. Limit Global Admin use – use least privilege for Intune management.
Speaker Notes: As Intune is cloud-based, embracing Azure AD (“Entra ID”) grouping and cloud design is key. A common pitfall has been treating Intune like legacy tools – for example, expecting SCCM-style collections. Instead, use Azure AD groups (including dynamic membership) for deployment targets. However, dynamic group membership updates can lag, so Intune’s Filters are recommended for real-time device targeting when deploying apps or policies. For instance, you might use a filter to include only Windows 11 devices in an assignment, rather than waiting on a dynamic group. Also, establish clear naming standards for all objects – with hundreds of apps and policies, consistent names (e.g. prefix by platform or profile type) will save your sanity. On permissions, Intune now offers built-in and custom RBAC roles. Take advantage of these to create “safety boundaries” – e.g. helpdesk staff can restart devices or read policies but not modify them. This minimizes risk and aligns with least-privilege principles in a Zero Trust world.
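To make the grouping and naming advice concrete, here is a minimal sketch using the Microsoft Graph PowerShell SDK. The display name, mail nickname, and membership rule are illustrative assumptions, not a prescribed standard:

```powershell
# Minimal sketch: create a dynamic device group with a platform-prefixed name.
# Display name, nickname, and membership rule are illustrative placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "WIN-DEV-CorporateOwned-All" `
    -MailEnabled:$false -MailNickname "win-dev-corp-all" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(device.deviceOSType -eq "Windows") and (device.deviceOwnership -eq "Company")' `
    -MembershipRuleProcessingState "On"
```

Intune Filters use a similar rule syntax (e.g. device.osVersion -startsWith "10.0.2") but are evaluated at assignment time, which is why they react faster than dynamic group membership.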
Slide: Application Deployment & Autopilot – Ensure app delivery is robust and Autopilot deployments are smooth:
- Win32 App Packaging: Standardize Win32 app installs via Intune’s packaging – include proper detection rules and exit codes. Always test installs under SYSTEM context (e.g. use psexec.exe -s) to catch issues before Intune deployment.
- Store Apps & Updates: Utilize the new Microsoft Store integration (which replaces the legacy Store for Business) or Winget as appropriate. Intune’s Company Portal can distribute store apps seamlessly – plan migration from legacy Store for Business to the new experience.
- Dependency and Bitness Management: Define app dependencies in Intune (e.g. VC runtimes) and use assignment flexibility (assign 64-bit app as required, 32-bit as available if needed). This ensures the right version gets installed per device.
- Autopilot ESP considerations: In Windows Autopilot’s Enrollment Status Page (ESP), use the “blocking vs. non-blocking apps” wisely. Including too many required apps in the blocking phase can slow provisioning. Consider enabling “Only fail selected blocking apps in technician phase” so non-critical apps can install in user phase. This accelerates deployments by only blocking on essential apps.
Speaker Notes: Deploying applications through Intune requires a structured approach. Win32 apps (packaged as .intunewin) need careful preparation: we’ve seen issues in the field where detection scripts weren’t formatted correctly or were missing exit codes, causing Intune to mis-report installations. Always test your .intunewin apps under the LOCAL SYSTEM account – many installers behave differently when run as a user. Using PsExec in testing (with the -s flag) revealed failures that would occur in Intune’s context. Also pay attention to app architecture: Intune allows targeting 64-bit vs 32-bit variants. A best practice is to require 64-bit where possible (modern standard) and offer 32-bit as available install for legacy needs. Autopilot ESP: When imaging devices with Windows Autopilot, a common hang-up is the Enrollment Status Page (ESP) taking too long or failing due to app installations. A key tip (from real-world trials) is the setting “Only fail selected blocking apps in technician phase”. If set to No (the default), Autopilot pre-provisioning will attempt all required apps and fail if any required app fails – thorough but potentially slow. If set to Yes, it will only block on the apps you select as critical during provisioning and push the rest post-login. Choosing Yes can dramatically speed up provisioning for large app sets (with the trade-off that some apps install later for the user). The key is to designate truly essential apps as blocking and allow non-critical ones to roll out after the user starts working.
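On the detection-rule point: Intune treats a custom detection script as “detected” only when it exits 0 and writes something to STDOUT. A minimal sketch, with a hypothetical registry path and version:

```powershell
# Win32 app custom detection sketch. Intune's contract: exit code 0 plus
# output on STDOUT = detected; anything else = not detected.
# Registry key and version below are hypothetical placeholders.
$key        = 'HKLM:\SOFTWARE\Contoso\ExampleApp'
$minVersion = [version]'2.5.0'

$app = Get-ItemProperty -Path $key -ErrorAction SilentlyContinue
if ($app -and [version]$app.Version -ge $minVersion) {
    Write-Output "ExampleApp $($app.Version) detected"
    exit 0   # detected
}
exit 1       # not detected
```

To test in the same context Intune uses, open a SYSTEM shell first (psexec.exe -s -i powershell.exe) and run both the installer and the detection script from there.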
Slide: Compliance & Configuration Policies – Drive security and stability through proper policy use:
- Compliance as Gatekeeper: Define Intune Compliance Policies that mark devices as compliant only if they meet requirements (OS version, encryption, AV, etc.). This “health meter” feeds Conditional Access – only allow devices that are compliant (and thus healthy) to access corporate apps.
- Conditional Access Integration: Use Azure AD Conditional Access policies to enforce MFA and device compliance for cloud access. Avoid per-user MFA settings – use CA policies for a more flexible, targeted approach (e.g. exclude Autopilot enrollment from MFA to prevent setup issues).
- Configuration Profiles vs Baselines: Intune offers configuration profiles and security baselines. Baselines (pre-built templates of best-practice settings) are quick wins to harden Windows 10/11. However, note that not every security setting is in the baselines – you may need supplemental custom profiles. Review and tailor baselines (or use community tools like OpenIntuneBaseline for insights).
- Common Policy Pitfalls: Avoid conflicts – e.g., duplicate settings across multiple profiles can cause “last one wins” issues. Leverage Intune’s reporting and upcoming policy analytics to identify conflicting or redundant policies.
Speaker Notes: Device Compliance Policies are your first line of defense: they define what “healthy” means in your environment. For example, you can require a minimum OS version or that BitLocker is enabled. When a device fails compliance, Intune can mark it and even deliver remediation messages to the user. More importantly, through Conditional Access (CA), you can block that device from accessing sensitive cloud apps until it meets policy. This pairing of Intune compliance + Azure AD CA enforces Zero Trust: only trusted users on trusted (compliant) devices get in. Make sure to switch off old per-user MFA settings (legacy habit) and instead use CA policies for MFA – it’s more flexible and avoids MFA prompts in scenarios like device enrollment where they can be problematic. Intune provides Security Baselines which are Microsoft-recommended groups of settings (covering Windows security, Defender, Edge, etc.). They are great to get a quick security posture (our field experience shows many orgs see immediate score improvements by deploying baselines). But don’t assume the baselines cover everything – for instance, certain Defender or firewall settings might not be included. A common mistake is thinking “baseline applied = fully secure.” Always review baseline contents and use additional Configuration Profiles for any gaps. Community resources like OpenIntuneBaseline (a community-maintained set of Intune security baseline policies) can help.
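As a quick sanity check from the scripting side, this sketch lists the devices Intune currently marks non-compliant (filtering client-side to stay version-agnostic; read-only scope):

```powershell
# Sketch: report devices currently failing compliance (read-only permissions).
Connect-MgGraph -Scopes "DeviceManagementManagedDevices.Read.All"

Get-MgDeviceManagementManagedDevice -All |
    Where-Object { $_.ComplianceState -eq 'noncompliant' } |
    Select-Object DeviceName, UserPrincipalName, OperatingSystem, LastSyncDateTime |
    Sort-Object LastSyncDateTime |
    Format-Table
```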
Slide: Updates and Patching – Keep devices evergreen with modern tools:
- Windows Update for Business: Utilize Intune Update Rings to control Windows Update on endpoints. Stagger deployments (e.g. pilot and broad groups) and enforce deadlines. Monitor update compliance via Intune’s reports.
- Windows Autopatch (if available): Consider Windows Autopatch service for eligible licenses – it automates patch Tuesdays for Windows 10/11 and M365 Apps. Autopatch can reduce the ops burden, but requires device enrollment and adherence to its ring logic. Ensure proper onboarding (Azure AD > Device Registration, etc.).
- Third-Party Patching: Integrate Microsoft Defender Vulnerability Management or tools like Windows Package Manager for app updates. Leverage Intune’s integration with Winget sources to auto-update apps where possible.
- Stay Current with Windows: Plan for Windows 10 EOL (Oct 2025). Use Intune feature update policies to push Windows 11 upgrades. For cloud PCs or VDI, ensure images are updated or redeployed regularly.
Speaker Notes: Patching remains critical for security and stability. Intune’s built-in Update Rings let us enforce update cadence on our terms. We recommend defining at least 3 rings: a small early ring (IT or test devices) with minimal deferral, a mid-size “Pilot” with a slightly longer deferral, and the general population ring. This way, updates get tested before hitting everyone. Intune can auto-pause an update if failures are detected, but proactive ring design is key. For organizations with Microsoft 365 E5 or Windows Enterprise E3/E5, Windows Autopatch is a newer service that essentially outsources the Windows Update (and Office update) orchestration to Microsoft. It creates its own rings and watches deployment metrics. If your team is small or wants to ensure focus on other projects, Autopatch can be a boon – just note it needs Azure AD-joined or Hybrid Azure AD devices and up-to-date Intune enrollment. The MMS conference highlighted Autopatch onboarding steps and gotchas (like ensuring your Intune tenant has proper tenant attach settings). Also, don’t forget application patching. A lot of vulnerabilities come from third-party software. Intune doesn’t natively auto-update all apps, but you can use tools (Winget, Patch My PC, or Defender Vulnerability Management) to identify and remediate outdated apps. In summary, adopt a mindset of evergreen IT – with Windows as a service and cloud management, continuous updates are the norm. It’s better to have a streamlined update process than to fall behind and face a fire-drill update of hundreds of machines later (or, worse, run out-of-support software).
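On the third-party patching point, Winget gives a quick read on outdated apps on any test box before you decide how to automate remediation:

```powershell
# List installed apps that have newer versions available in configured sources.
winget upgrade

# Apply all available upgrades silently (accepting agreements non-interactively).
winget upgrade --all --silent --accept-source-agreements --accept-package-agreements
```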
Slide: Cloud PC & VDI Integration – Extend device management to Cloud PCs and virtual desktops:
- Windows 365 Cloud PCs: Treat Cloud PCs as just another endpoint – Intune manages them alongside physical PCs. Use cases include contractors, BYOD scenarios (secure, isolated cloud workstation), and disaster recovery. Cloud PCs provide a full Windows experience streamed from Azure, with Microsoft handling the infrastructure (SaaS model).
- Frontline and Shared Scenarios: Leverage Windows 365 Frontline licenses for shift workers or part-time staff – one license can cover 3 users by allowing up to 3 Cloud PCs that run non-concurrently. Shared mode Cloud PC can even be reset after each session for kiosks or labs. This optimizes cost and management for flexible workforce needs.
- Azure Virtual Desktop (AVD): For more complex virtualization (multi-session, custom networking, or non-persistent setups), AVD is PaaS. Intune can now co-manage AVD session hosts (if Azure AD-joined) for policy and app deployment. Use AVD when you need multi-user sessions or full control over the VM templates; use W365 when simplicity and per-user fixed desktops are desired (Microsoft manages most of the infra).
- Management Notes: Ensure network connectivity (VPN or ExpressRoute) if Cloud PCs need on-prem resource access, or consider Entra Private Access (Next module) to provide Zero Trust access instead of VPN. Use Microsoft’s recommended image gallery for Cloud PCs or customize images carefully to include Intune config agents. Monitor Cloud PC performance via Endpoint Analytics.
Speaker Notes: Cloud PCs (Windows 365) have emerged as a practical solution for many scenarios – the MMS event underlined that the future of Windows is “AI and Cloud”. With workers increasingly remote or using personal devices, Cloud PCs allow a secure, fully managed Windows desktop to be delivered from Azure. We can deploy and manage them in Intune just like regular PCs – meaning all our policies, apps, conditional access rules apply. This greatly simplifies hybrid work enablement: e.g., a contractor can be given a Cloud PC and all corporate data stays in that cloud environment (data centralization improves IP protection). We also get built-in continuity: if a user’s personal device fails, they can login to their Cloud PC from another device and resume work. Windows 365 Frontline was highlighted as a cost-saver for scenarios like shift workers who don’t all need a Cloud PC simultaneously. One license can back multiple Cloud PCs (up to 3 per license) as long as only one is used at a time. There’s even a new Shared mode where a Cloud PC isn’t personalized but is wiped clean between uses – great for training labs or transient access where you don’t want to pay for a dedicated VM per user. In contrast, Azure Virtual Desktop (AVD) offers more customization (e.g. multi-session host pools where many users share a server, or deeper network control). The trade-off is that with AVD, the customer manages more components (image optimization, host scaling, etc.) whereas Windows 365 is fully managed by Microsoft aside from your Cloud PC config. A handy slide at MMS showed a responsibility matrix comparing W365 (mostly Microsoft-managed) vs. AVD vs. on-prem VDI. The key takeaway: you have options depending on requirements, but both W365 and AVD integrate with our Intune/MEM and Azure AD ecosystem – so we can apply our modern management and security policies universally.
Identity and Access Management (Microsoft Entra ID & Zero Trust)
Slide: Beyond Passwords – Modern Auth in 2025 – Passwords alone are outdated; modern authentication is essential:
- Threat Landscape: Over 99% of identity attacks target passwords – from phishing to password spray. Microsoft reports blocking ~7,000 password attacks per second. Stolen or weak passwords contribute to 80%+ of breaches.
- MFA Everywhere: Enforce Multi-Factor Authentication for all users – this single step blocks ~99% of automated attacks. Use Azure AD Security Defaults or Conditional Access policies to require MFA on all logins, especially for privileged roles.
- Go Passwordless: Plan a move to passwordless methods like Windows Hello for Business, FIDO2 security keys, or phone sign-in. These are phish-resistant and tied to device or biometrics. Passwordless improves security and user experience (no more periodic password expiry per new NIST guidelines).
- Password Hygiene: Where passwords remain, adopt passphrases (15+ characters), eliminate mandatory rotation (unless compromise suspected), and promote password managers. But ultimately, aim to reduce password use altogether in favor of more secure credentials.
Speaker Notes: The data is clear: passwords are a liability. At MMS, we saw stark statistics – e.g., 73% of passwords are duplicates across accounts, and attackers know it. The majority of breaches still leverage weak or stolen passwords. Enabling MFA is the low-hanging fruit; Microsoft’s studies and our own experience show it foils the vast majority of opportunistic attacks. Yet many breaches show MFA wasn’t enabled universally. So, our directive is MFA everywhere. This can be through Conditional Access policies that apply to all users (with exclusions only for break-glass accounts, etc.). The next step is passwordless. MFA is great, but if the primary factor is still a password, phishers will try to capture it. Passwordless methods (like Windows Hello, which uses a local PIN/biometric tied to a TPM, or FIDO2 keys) remove passwords from the equation. They are inherently MFA (something you have + something you are/know). Deploying Windows Hello for Business is a priority for us: it means users log on with a PIN or biometric which never leaves the device. FIDO2 keys are another tool – e.g., for shared PC environments or admin accounts for high security (the key ensures login only with the physical token present). Microsoft Entra ID fully supports these, and adoption is growing. In 2024, NIST even updated guidance to encourage longer passphrases and doing away with forced periodic changes – recognizing that complexity rules weren’t really helping. We should follow suit by focusing on length over complexity and user-friendly but secure auth like passwordless.
Slide: Conditional Access & Zero Trust – Enforce who and what can access resources, based on trust signals:
- Conditional Access Policies: Use Entra ID Conditional Access to require MFA, device compliance, or location/network criteria before granting access to cloud apps. Example: require compliant device + MFA for Office 365, block access from risky countries by default. These policies operationalize Zero Trust (“never trust, always verify”) at login time.
- Device Compliance Integration: Tie CA policies with Intune device compliance state. Only devices meeting our health standards can pass CA. This ensures things like up-to-date patches and disk encryption are pre-requisites to accessing sensitive data.
- Avoid Common CA Pitfalls: Design CA in stages – start with report-only mode to gauge impact. Be cautious with rules that might lock out enrollment or updates (e.g. don’t require compliant device for Intune enrollment itself!). Azure AD has an “exclude policy administrators” option – use it to prevent accidentally blocking all admins.
- Phishing-Resistant MFA: Upgrade MFA methods in CA – prioritize app notification or FIDO2 over less secure SMS/voice. Also consider Authentication Strength policies in Entra (e.g. require phishing-resistant MFA for critical apps).
Speaker Notes: Conditional Access (CA) is the heart of our Zero Trust implementation. It’s not just about who you are (user identity) but also the state of your device, location, and context. For instance, we can enforce that only devices marked compliant by Intune can access Exchange Online. This means if someone tries to log in from an unmanaged or non-compliant laptop, they’ll be denied – even with correct credentials. We also mandate MFA in CA policies to catch any stolen passwords. The MMS sessions emphasized that CA done right is a big win, but CA done wrong can cause havoc. We need to test policies in report mode and have break-glass accounts exempted. One real-world example discussed: during Windows Autopilot enrollment, if a CA policy unintentionally requires a compliant device and the device isn’t marked compliant until after enrollment, you get stuck in a loop of sign-in prompts. The advice: don’t use per-user legacy MFA (set those to Disabled) and instead craft CA such that enrollment flows aren’t blocked. Use workload identities (like Enrollment ID) or temporary relaxation for that scenario. Another key aspect is MFA strength. Traditional MFA (text codes) can be phished via clever social engineering. Microsoft introduced features to define Authentication Strength, allowing us to say “for this highly sensitive app, basic MFA isn’t enough, require phishing-resistant methods (like AAD passwordless or FIDO)” – this aligns with agencies and high-security org recommendations. Our plan should include upgrading our MFA methods for admins and high-risk access.
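Here is what the report-only guidance can look like in code, as a sketch using Graph PowerShell; the excluded object ID stands in for a break-glass account and must be replaced:

```powershell
# Sketch: report-only CA policy requiring MFA + compliant device for all apps.
# The excluded user ID is a placeholder for a break-glass account.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "CA-ReportOnly-RequireMFA-CompliantDevice"
    state       = "enabledForReportingButNotEnforced"   # report-only mode
    conditions  = @{
        users        = @{
            includeUsers = @("All")
            excludeUsers = @("00000000-0000-0000-0000-000000000000")  # break-glass
        }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "AND"
        builtInControls = @("mfa", "compliantDevice")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Once the report-only sign-in logs look clean, flip state to "enabled".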
Slide: Privileged Access and Administration – Secure the keys to the kingdom with least privilege and isolation:
- Role-Based Access Control: Limit number of Global Administrators. Use granular admin roles (e.g. Exchange Admin, Intune Admin) so staff have only the rights needed. For extra security, implement Privileged Identity Management (PIM) – requiring on-demand activation (just-in-time) for high-privilege roles and approval workflows.
- Privileged Access Workstations (PAW): For highly privileged roles (Tier-0 admins), require using a secured workstation (physical or Cloud PC) that is hardened and not used for daily browsing/email. Virtual PAWs via Azure Virtual Desktop or Windows 365 can be an option – easier to deploy, though a dedicated physical PAW is the gold standard.
- Conditional Access for Admins: Implement CA policies that only allow admin role login from compliant, trusted devices (e.g., your PAWs) and require phishing-resistant MFA. Block admin access from personal devices.
- Administrative Units & Delegation: Use Entra Administrative Units to delegate admin control by scope (e.g., an AU per department or region). This prevents, say, a helpdesk admin in one unit from affecting users outside their scope. It’s an extra layer to contain what an admin can manage, aligning with the principle of least privilege.
Speaker Notes: Our administrators and privileged users are prime targets, so we must lock down administrative access. Best practice is to reduce standing admin access – no one should be a permanent Global Admin if it can be helped. Tools like Entra Privileged Identity Management (PIM) allow roles to be activated just when needed and then removed, greatly limiting exposure. Another point raised at MMS was using Privileged Access Workstations (PAWs). A PAW is a dedicated machine for sensitive admin tasks, fully locked down (think: no email, no web surfing, only tools to administer your environment). The “modern” approach could be a vPAW – a cloud-hosted privileged desktop via Windows 365 or AVD. This is great for external admins or scenarios where issuing hardware is difficult. Microsoft’s clean source principle was cited: all dependencies (OS, device, network) for admin tasks should be as secure as what they are protecting. So, if we manage M365, doing so from a hardened environment (with up-to-date patches, running Defender, minimal attack surface) is critical. We can enforce some of this with tech: e.g., a Conditional Access policy that says “if user is in an admin role, they can only access the Azure portal or admin apps from a device that is Azure AD joined and marked compliant”. We’d then ensure only PAWs meet those conditions. This, combined with requiring strong MFA (we might mandate FIDO key for admins), dramatically lowers risk of token theft or session hijacks. Also, Administrative Units in Entra ID let us segment administration – for example, a regional IT support person could be an Intune Admin for only their region’s devices/users, not the whole tenant. This is a way to technically enforce least privilege in multi-tenant or large org setups.
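PIM activation itself is scriptable, which helps build just-in-time elevation into the change workflow. A sketch of a self-activation request via Graph PowerShell (both IDs and the ticket reference are placeholders):

```powershell
# Sketch: self-activate an eligible PIM role for 8 hours. IDs are placeholders.
Connect-MgGraph -Scopes "RoleAssignmentSchedule.ReadWrite.Directory"

$request = @{
    action           = "selfActivate"
    principalId      = "<my-user-object-id>"
    roleDefinitionId = "<role-definition-id>"    # e.g. the Intune Administrator role
    directoryScopeId = "/"
    justification    = "Change #1234: update compliance policy"
    scheduleInfo     = @{
        startDateTime = (Get-Date).ToUniversalTime().ToString("o")
        expiration    = @{ type = "afterDuration"; duration = "PT8H" }
    }
}
New-MgRoleManagementDirectoryRoleAssignmentScheduleRequest -BodyParameter $request
```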
Slide: Entra Innovations (2025) – New Entra ID services enhancing secure access:
- Entra ID Governance: New features for access reviews, separation of duties, and workflow automation for joiner/mover/leaver. If licensed, implement these to regularly certify who has access to what.
- Entra Internet & Private Access: Microsoft’s foray into Security Service Edge. Entra Private Access provides Zero Trust network access to internal apps without VPN. It integrates with Entra ID for SSO and Conditional Access, routing traffic through Microsoft’s global network instead of broad network tunnels. This offers per-app connectivity with strong authentication.
- Passkeys Support: Entra ID now fully supports Passkeys (device-bound FIDO credentials) in preview. Plan pilot programs for passwordless login using Passkeys on mobile devices or Windows Hello as a passkey for web sign-in – this could eliminate passwords for many use cases.
- External Identities & Verified ID: If working with partners or customers, Entra’s External Identities allows inviting them securely with their own credentials. Entra Verified ID (digital credentials service) might see adoption for verifying identity attributes in B2B or even HR processes.
Speaker Notes: The Entra branding now encompasses more than just our “Azure AD” identity; Microsoft has introduced new cloud-based secure access solutions under Entra. One highlight is Entra Private Access, essentially a modern VPN replacement. Traditional VPNs grant network-level access, often too broad. Private Access instead brokers connections per application, using the cloud as the entry point. As noted in the session “Say Goodbye to VPNs, Hello Entra Private Access,” this service is a ZTNA (Zero Trust Network Access) solution integrated with Entra ID. Users authenticate via Entra ID and are allowed to reach specific internal apps through Microsoft’s global edge network, without exposing the entire network. This reduces lateral movement risk and improves user experience (no more full-tunnel VPN slowness). It’s something our network and security teams should evaluate, especially as we embrace Zero Trust. Additionally, Entra ID Governance capabilities are growing – things like periodic access reviews for groups and roles can now be automated. We should leverage that to ensure, for example, contractors’ accounts disable on schedule or privileged roles get re-approved every 90 days. On the cutting edge, Passkeys are coming – these are essentially passwordless logins that sync across devices (using platform biometric or PIN, backed by cloud sync of a FIDO credential). This could finally kill passwords for end-users if done right. Microsoft is pushing this as part of the passwordless future. We may start with a small pilot for our tech enthusiasts group to identify any integration issues. All these developments underscore that identity remains the cornerstone of our security, and it’s rapidly evolving with cloud intelligence and improved standards.
Endpoint Security and Zero Trust Protection
Slide: Microsoft Defender – More Than Just AV – The Defender suite provides XDR (extended detection & response) across endpoints:
- Defender for Endpoint (MDE): Not just antivirus – it offers behavior-based EDR detections, threat & vulnerability management, attack surface reduction (ASR) rules, firewall control, and automated investigation/remediation. Rich telemetry enables advanced hunting across events.
- Defender Vulnerability Management: Identify and remediate software vulnerabilities and misconfigurations in your device fleet. Prioritize patching based on real exposure and threat insights.
- Defender for Identity & Cloud Apps: Beyond devices, Defender extends to Identity (monitoring AD/Entra ID for compromised accounts, lateral movement) and Cloud Apps (CASB functionality – shadow IT discovery, OAuth app control, session monitoring). These integrate signals to give a unified incident view.
- Unified Portal & XDR: All Defender components feed the Microsoft 365 Defender portal. Incidents correlate alerts from email, identity, endpoints, and cloud apps into a single attack story. Automatic attack disruption can trigger across products – e.g., disable a user account (identity) when an endpoint threat is confirmed. This XDR approach breaks down silos so security ops can respond faster.
Speaker Notes: Microsoft Defender for Endpoint (MDE) has grown far beyond basic antivirus. In our training, it’s important everyone knows the range of capabilities we already own. MDE includes features like Web Content Filtering (to block malicious or unwanted URLs enterprise-wide), Attack Surface Reduction rules (like blocking Office from creating child processes – which stops many macro-based attacks), and even Device Control (USB device usage policies). The MMS session “Defender – More than just AV” highlighted a “Top 10” of MDE features and real-world benefits. For example, Tamper Protection was one – it locks down security settings from being changed by malware or even local admins. There’s also an interesting new capability: deception (creating fake honeytokens on endpoints to lure attackers – hinted as part of the feature set). Our team should ensure we’re leveraging these. Threat & Vulnerability Management (TVM) within Defender gives us a dashboard of software weaknesses in our devices. Instead of generic CVEs, it contextualizes which vulnerabilities are actually exploitable and active in our environment, so we can prioritize. For instance, if a critical vuln is detected on many devices, TVM might recommend a specific software update or configuration change. On the identity side, Defender for Identity (formerly Azure ATP) monitors domain controller logs and Entra ID signals for things like unusual login patterns or credential theft techniques. Meanwhile, Defender for Cloud Apps provides CASB capabilities – it can discover unsanctioned SaaS usage, monitor data within cloud services, and enforce policies (like blocking downloads of sensitive files). All these “Defender” components share information. If an endpoint is breached, Defender can alert you to related suspicious OAuth app consents or unusual AD queries in the same incident. Security operations benefit from this XDR approach, as it was stressed: a unified incident queue and even automated response across products (for example, isolating a machine, disabling an account, and blocking an email attachment all orchestrated as one response).
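The advanced-hunting telemetry mentioned above is queryable programmatically too. A sketch that runs a hunting query through Microsoft Graph (the KQL is only an example of the classic Office-spawns-a-shell pattern):

```powershell
# Sketch: run a Defender advanced hunting query via Microsoft Graph.
# Requires the ThreatHunting.Read.All permission; the KQL is illustrative.
Connect-MgGraph -Scopes "ThreatHunting.Read.All"

$kql = @"
DeviceProcessEvents
| where Timestamp > ago(1d)
| where InitiatingProcessFileName =~ "winword.exe"
| where FileName in~ ("powershell.exe", "cmd.exe")
| project Timestamp, DeviceName, FileName, ProcessCommandLine
"@

$result = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/security/runHuntingQuery" `
    -Body (@{ Query = $kql } | ConvertTo-Json)
$result.results | Format-Table
```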
Slide: Hardening Endpoints – Best Practices – Apply layered defenses on each device:
- Attack Surface Reduction (ASR): Enable ASR rules via Intune Endpoint Security policies or security baselines. These rules block high-risk behaviors (e.g. Office macros launching script interpreters, executable content in mail, credential theft tools). They can drastically cut down on malware entry points without needing signatures. Test in audit mode if needed, then enforce.
- Firewall and Network: Ensure Windows Defender Firewall is on and locked by policy (prevent users or apps from altering it). Use a single consistent firewall profile (prefer “Public” or “Private” with strict rules for all networks) and consider an MDM policy to automatically adjust profiles based on domain join or network attributes. Block all inbound traffic except required management ports (Zero Trust approach – assume every network is hostile).
- Local Admin Management: Deploy LAPS (Local Administrator Password Solution) for Windows 10/11 via Intune policy – this rotates and vaults the local admin password per device. Better yet, use Microsoft Intune Endpoint Privilege Management (EPM) in preview to allow standard users to elevate specific tasks without giving full admin rights. No user should routinely operate as local admin – “If they don’t have admin rights, they can’t break their computer”.
- Firmware and Device Security: Require UEFI Secure Boot and TPM on all devices (these should be compliance policy checks). Enable Memory Integrity / HVCI via Intune (to block kernel exploits) if hardware supports it. Use Defender’s device health attestation for additional trust (e.g., only allow devices with Secure Boot and without firmware issues to access certain resources).
Speaker Notes: Hardening endpoints is about layers of protection. We assume breaches will try to start at the device, so we cut off as many avenues as possible. ASR rules are one of the most effective tools here. They’ve been shown to block whole classes of attacks like ransomware and living-off-the-land techniques. For example, one rule blocks Win32 API calls from Office macros – which basically stops macro viruses from dropping payloads. Another prevents scripts like JavaScript or VBScript from launching downloaded executables. We should deploy the full recommended set of ASR rules. The only caution is potential false positives; Microsoft’s guidance and the MMS hardening session suggest running them in audit first to see if line-of-business apps would be impacted. But most organizations find they can enable all or most rules after a testing period. On firewall – it’s not as flashy as EDR, but a tightly controlled host firewall is essential. Intune’s Security Baseline already configures Windows Firewall on, for all profiles. One tip: avoid having users switch firewall profiles or creating multiple sets of rules for “domain” vs “public” networks if possible. With so many working remotely, treat all networks as untrusted (maybe use the Private profile universally and lock it down). This was humorously referenced as “If you reeeeeally need different rulesets…” then some advanced trick can be done, but generally simpler is better. Local Administrators: We absolutely want to eliminate users having local admin on their machines. The quote from Sami Laiho was spot on: users often claim they need admin to fix things, but in reality, without admin rights they also can’t wreck things. Microsoft LAPS is now built into Windows 10/11 and Intune can configure it – we should ensure it’s deployed so each device’s admin password is unique and available to IT in emergencies (stored in Entra ID). The new Intune Endpoint Privilege Management feature is even more fine-grained – it will let standard users run specific approved applications with admin rights via a workflow, so day-to-day they’re standard users. This is the future of least privilege on endpoints, and we should pilot it.
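For piloting ASR locally before rolling out Intune policy, Defender’s PowerShell cmdlets expose rule state directly. The GUID below is the published ID for “Block Win32 API calls from Office macros”; verify it against Microsoft’s current rule list:

```powershell
# Inspect configured ASR rules and their modes
# (0 = Disabled, 1 = Block, 2 = Audit, 6 = Warn).
$prefs = Get-MpPreference
$prefs.AttackSurfaceReductionRules_Ids
$prefs.AttackSurfaceReductionRules_Actions

# Put one rule into Audit mode for a pilot. The GUID is the documented ID for
# "Block Win32 API calls from Office macros"; confirm against Microsoft docs.
Add-MpPreference -AttackSurfaceReductionRules_Ids "92e97fa1-2edf-4476-bdd6-9dd0b4dddc7b" `
                 -AttackSurfaceReductionRules_Actions AuditMode
```

In production, set these through Intune Endpoint Security policy rather than per-machine.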
Slide: Protecting Identities on Devices – Bridge endpoint and identity security for Zero Trust:
- Credential Guard & LSASS Protection: Enable Credential Guard on capable Windows devices – it isolates login secrets to prevent theft of credentials (like pass-the-hash attacks). Similarly, run LSASS as a protected process (the RunAsPPL setting, configurable through Intune’s settings catalog) so malware can’t dump its memory.
- Account Lockout for Local Admin: New Windows 11 security features automatically lock out local administrator accounts after failed attempts (thwarting RDP brute force). Ensure this policy is set (Account Lockout Threshold = e.g. 10) on all endpoints – it’s included in recent baselines.
- Defender SmartScreen & Application Control: Use Defender SmartScreen to block access to known phishing or malware-hosting sites enterprise-wide. Additionally, consider Windows Application Control (AppLocker or WDAC) for high-risk environments to allow only trusted apps. This prevents unknown executables from ever running.
- Endpoint Identity Integration: Utilize device compliance as an identity factor (covered prior) and enable Device Authentication wherever supported (e.g., require domain-joined or AAD-joined device for certain legacy apps via certificate authentication). This ties a user’s identity to a physical device identity, raising the assurance level.
Speaker Notes: Endpoints are where user identities are often stolen. Techniques like Pass-the-Hash or dumping credentials from memory are still prevalent. By enabling Credential Guard, we use virtualization-based security to protect NTLM hashes and Kerberos tickets in memory – so even if malware runs, it can’t easily grab those secrets. The hardening top-10 list included things like Account Lockout policies. This refers to the recent Windows update where even local admin accounts can be configured to lock after several failed attempts. This setting (now default on new Windows builds) is a direct defense against RDP brute force attacks (a common ransomware entry). We need to verify this is active via our Intune security baseline or custom policy – it’s a simple but effective trick. Another intersection of endpoint and identity is using device state in identity decisions (Conditional Access we discussed). We are essentially treating the device as part of the user’s identity claim. That’s why ensuring devices are Entra ID joined or Hybrid joined and healthy is so important – Azure AD issues a device token that gets presented during auth, proving the device is known and managed. When enabled, a rogue device with stolen creds alone can’t access resources – it fails the device compliance check. This tight coupling (user + device both must be trusted) is a core Zero Trust tenet and one we are implementing fully.
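Verifying that Credential Guard is actually running (not just configured) is worth scripting, for example as an Intune remediation detection. A sketch:

```powershell
# SecurityServicesRunning codes: 1 = Credential Guard, 2 = HVCI (memory integrity).
$dg = Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard `
                      -ClassName Win32_DeviceGuard

if ($dg.SecurityServicesRunning -contains 1) { "Credential Guard: running" }
else                                         { "Credential Guard: NOT running" }

if ($dg.SecurityServicesRunning -contains 2) { "Memory integrity (HVCI): running" }
else                                         { "Memory integrity (HVCI): NOT running" }
```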
Slide: Security Operations & Response – Use tools and policies to respond quickly:
- Incident Response Playbooks: Configure automated playbooks in Microsoft 365 Defender or Sentinel – e.g., if a device has a high-severity alert, automatically isolate it and open a ticket. Leverage Defender’s built-in auto-investigation: it can often resolve threats without human input.
- Microsoft Sentinel Integration: Stream all Defender alerts and signals to Sentinel (if in use) for long-term retention, correlation with non-Microsoft sources, and advanced hunting. Use Sentinel’s UEBA (user and entity behavioral analytics) to spot anomalous patterns across identity and devices.
- Continuous Monitoring: Set up Azure Monitor alerts on important security events (e.g., sudden surge in antivirus detections, or audit failures). Leverage Azure Monitor Agent & DCR to collect Windows Security events at scale for analysis (with filters to manage volume).
- User Education & Policies: Lastly, an aware user base is part of endpoint security. Use tools like attack simulation training (Defender for O365’s phishing simulations) and enforce policy that any incident (lost device, suspicious email) is promptly reported. Combine tech with training for best results.
Speaker Notes: Even with all preventive measures, incidents will happen. The difference maker is how quickly we detect and respond. Microsoft Defender’s ecosystem provides automated investigation and remediation – for example, if a known malware is found, Defender can automatically take action to quarantine the file, kill processes, and even roll back changes (on Windows 11 with block-on-first-sight, some ransomware behaviors can be auto-mitigated). In one MMS case study, organizations highlighted how enabling automatic device isolation for high-risk threats contained incidents to a single machine. We should define Incident Response playbooks in our tools. Microsoft 365 Defender allows setting actions or sending notifications on certain alerts. If we use Sentinel, that’s even more powerful: Sentinel can run a Logic App playbook when an analytic triggers (like disabling a user account if impossible travel is detected, etc.). The integration of Defender with Sentinel was also a cost topic – note that certain Microsoft 365 E5 data (like Defender logs) can be ingested to Sentinel at no extra cost up to a limit. We should take advantage of that by connecting those data sources. This gives our SOC a single pane for all logs, and Sentinel’s correlation rules can combine, say, firewall logs with Defender alerts to see a bigger picture. Lastly, we can’t ignore the human element. Many attacks (like phishing that leads to endpoint malware) can be stopped by an observant user. Continue to educate users about not ignoring Defender warnings, reporting strange behavior, and practicing safe computing. Tie this into our endpoint management – for instance, if a user reports a potentially malicious USB, we should have a process to collect that device and analyze it. Our tech can isolate and remediate, but a vigilant team and user base multiplies our security posture.
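The “isolate on high-severity alert” step a playbook performs maps to a single Defender for Endpoint API call. A sketch, assuming $token already holds an app token granted the Machine.Isolate permission and the machine ID is a placeholder:

```powershell
# Sketch: isolate a device via the Defender for Endpoint API, as a playbook
# or runbook would. $token and $machineId are assumed/placeholder values.
$machineId = "<mde-machine-id>"
$body = @{
    Comment       = "Automated isolation: high-severity alert"
    IsolationType = "Full"   # "Selective" keeps Outlook/Teams usable
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/isolate" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```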
Automation, Scripting, and Integration (PowerShell, Graph API, DevOps)
Slide: When to Use Console vs Graph/API – Choosing the right tool for management tasks:
- Intune Admin Center (Console): Great for daily operations – quick one-off changes, visual reviews of config, or small-scale tasks. Ideal when you need convenience or when still learning a feature.
- Graph API / Scripting: Best for bulk actions (e.g., assign a policy to 500 groups), automation (scheduled tasks), custom reporting, or integration with other systems. Graph gives precision and scale – treat configurations as code (JSON) that can be templatized and re-used.
- Strategy – Blend Both: Use the GUI to prototype and for oversight (e.g., verifying settings), but use Graph/PowerShell when you need repeatability or to avoid UI limitations (like no bulk delete in console). Document scripts and share with the team to build collective tooling.
- Avoid Pitfalls: Console can have hidden delays (e.g., policy propagation) and doesn’t easily show JSON under the hood; Graph has a learning curve (auth and permissions, API versions). Use Graph Explorer or tools like Postman to practice calls, and always handle pagination and errors in scripts.
Speaker Notes: A big theme is using automation to maximize our efficiency in device management. The question often arises: when should I click in the portal vs. when should I script against Graph? The guidance from experts is clear – use both, to their strengths. The Intune UI is user-friendly and good for quick tasks or initial setup. But it can be slow for repetitive work. For example, if we needed to create 50 device configuration profiles with only slight differences, doing that by hand would be tedious and error-prone. Instead, scripting that via Graph API ensures consistency and saves hours. Conversely, if we have a one-time emergency change at 2 AM, it might be faster to just jump in the console for that single tweak. The MMS session “Console or Code?” emphasized thinking in terms of speed vs. control. The console is about immediate human speed and visual feedback; code is about controlling exactly what happens at scale. One takeaway: Start with console, scale with Graph. Do initial configurations in the portal if you like, then export those settings via Graph (Intune’s PowerShell module or Graph queries) so you have them as JSON. Then you can re-use or modify through code. Also, share scripts within the team – maybe maintain a repository of our common Graph scripts (for onboarding new devices, generating reports, cleaning up old objects, etc.). This builds our internal toolkit over time and helps onboard new team members faster with automation.
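Pagination is the classic first stumble the slide warns about: Graph returns results a page at a time, chained by @odata.nextLink. A minimal loop that handles it:

```powershell
# Minimal pagination pattern: follow @odata.nextLink until it is $null.
Connect-MgGraph -Scopes "DeviceManagementManagedDevices.Read.All"

$uri = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices?`$top=100"
$all = @()
do {
    $page = Invoke-MgGraphRequest -Method GET -Uri $uri
    $all += $page.value
    $uri  = $page.'@odata.nextLink'   # absent/$null on the last page
} while ($uri)

"Retrieved $($all.Count) devices"
```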
Slide: PowerShell + Graph – Tactics and Tools – Using PowerShell to automate Microsoft Graph tasks:
- Microsoft Graph PowerShell SDK: Use Microsoft’s official Graph PowerShell modules (e.g. Install-Module Microsoft.Graph). These cmdlets handle auth and let you call Graph with PowerShell syntax (e.g. Get-MgDeviceManagementManagedDevice to list devices). It’s convenient but ensure you update modules for latest API changes.
- Direct REST calls: For complex or preview features not in the SDK, use Invoke-RestMethod in PowerShell. Obtain an OAuth token (client credentials or user delegated) and call Graph endpoints directly. This requires understanding Graph’s REST format and permissions.
- Scripts for Reporting: Common use-case – pulling data (devices without a compliance policy, users missing a config, etc.) via Graph and outputting CSV or feeding into PowerBI. PowerShell can iterate through paginated results easily and aggregate data.
- Community Tools: Leverage community scripts like Intune Backup/Restore (Graph-based script to export all Intune configs as JSON) or Graph Explorer (web tool) for testing queries. The community dashboard from MSEndpointMgr for Intune Audit Logs is another example – it uses Graph and Log Analytics to visualize changes. Don’t reinvent the wheel if a script exists.
Speaker Notes: PowerShell remains our go-to automation tool, and it plays very nicely with the Graph API. Microsoft provides the Graph PowerShell SDK, which essentially wraps Graph calls into PowerShell cmdlets. For instance, to get Intune data you might run Connect-MgGraph -Scopes DeviceManagementManagedDevices.Read.All (to authenticate) and then Get-MgDeviceManagementManagedDevice to list devices. Under the hood it’s calling Graph, but we can use familiar PowerShell objects. This is great for most tasks. Just note the SDK may lag a bit behind new API features – sometimes a brand-new preview feature isn’t in the module yet, so you fall back to manual REST calls. Doing direct REST calls in PowerShell is absolutely possible – the session covered how to get an OAuth token via various methods (client credentials for app scripts vs. interactive). One tip: use Managed Identities if running scripts in Azure (like in an Azure Automation Runbook or Function) so you don’t even need to handle secrets – the script can request a token in its Azure context. We saw a script example Configure-ManagedIdentityPermission.ps1 that helps assign Graph API permissions to a Managed Identity. This is a best practice – it avoids storing usernames/passwords or app secrets. For reporting, PowerShell + Graph is extremely powerful. For example, we can script: “fetch all devices, then for each device call the compliance status endpoint, then output a list of devices not meeting X criteria.” This beats clicking through the Intune UI page by page. With over 200 sessions at MMS, many speakers shared scripts on GitHub; we should take advantage of those. An Intune audit log dashboard was mentioned – it uses log data to show who changed what in Intune. Such tools rely on pulling data via Graph or REST. Let’s incorporate the best of these community solutions into our practice.
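For the direct-REST route, the client-credentials token flow looks like this; tenant ID, app ID, and secret are placeholders, and a certificate or managed identity is preferable to a stored secret in production:

```powershell
# Sketch: app-only OAuth token via client credentials, then a raw Graph call.
# IDs and secret are placeholders; prefer certificates or managed identity.
$tenantId = "<tenant-id>"; $clientId = "<app-id>"; $secret = "<client-secret>"

$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $secret
        scope         = "https://graph.microsoft.com/.default"
    }).access_token

Invoke-RestMethod -Method Get `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices?`$top=5" `
    -Headers @{ Authorization = "Bearer $token" }
```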
Slide: Beyond Scripts – Workflow Automation – Integrating Azure services for advanced automation:
- Azure Logic Apps / Power Automate: Use these low-code services to create automation workflows triggered by events. E.g., a Logic App could watch an Intune webhook or Graph subscription (if a new device is enrolled) and then take action (send Teams alert, write to SharePoint, etc.). Logic Apps can also schedule recurring jobs without needing infrastructure.
- Azure Automation & Function Apps: For PowerShell-based tasks, Azure Automation Runbooks can run PowerShell on a schedule or in response to webhooks. Azure Functions can run small pieces of code (PowerShell or C#) and are great for responding to events or processing data. Use these for things like nightly compliance reports, auto-remediation (e.g., if a device is non-compliant for 30 days, trigger an email or remove it from AAD).
- Log Analytics & Monitor Integration: Send logs and data to Azure Monitor (Log Analytics) – e.g., Intune diagnostic logs, Azure AD sign-in logs – and use KQL queries to derive insights. You can hook alerts on these queries to automate responses (e.g., alert if many devices failed to sync). Data from multiple sources can be combined in Log Analytics for a “single source of truth” that automation can use.
- Securing Automation: When using cloud automation, prefer Managed Identities for auth (so no credentials are in code). Restrict what permissions these identities have – follow least privilege (e.g., an automation that only reads Intune data should not have write permissions). Monitor audit logs for automation actions to catch any runaway scripts.
Speaker Notes: Automation isn’t just running scripts manually – we can wire our systems together so they respond automatically to certain triggers. This is where Azure Logic Apps and Power Automate shine. For example, we could create a Logic App that triggers whenever a new Intune device is enrolled (Graph has the ability to create subscriptions to events). The Logic App might then log details to a database, send a welcome email to the user with device tips, or create a ticket for asset tracking. All of that can happen without human intervention. Logic Apps provide connectors to hundreds of services, making integration between Intune/Entra and, say, ServiceNow or Teams relatively straightforward with drag-drop design. For our more heavy-duty scripts (PowerShell that perhaps does complex data crunching or needs to run on a schedule), Azure Automation or Functions are a better fit. Azure Automation can host our PowerShell runbooks – think of things like a nightly job to clean up devices that haven’t checked in for 90 days, or to sync Intune data with a CMDB. We can set those runbooks to run at 2 AM, and Automation will take care of execution (with managed identity to authenticate to Graph). Azure Functions can be event-triggered (e.g., an HTTP request or a message on a queue triggers a PowerShell function to execute immediately). These are useful for near real-time reactions. One real-world example: a Function that runs whenever a user is added to a certain Azure AD group and automatically assigns them a particular Intune role or license – pure automation of user onboarding steps. Importantly, we must secure our automation. Grant the minimal Graph permissions needed – one slide at MMS humorously noted you must be Global Admin to consent Graph app permissions, but that doesn’t mean the app should have full GA rights. If an automation only needs to read device info, give it just Device.Read.All, not Device.ReadWrite.All. Managed Identities help since they are Azure AD principals we control and can scope. Also, maintain an inventory of these automations and regularly review their logs. They do what we program – which is powerful but could also cause harm if a bug or malicious change occurs. Regularly audit that our automations are running as expected and not overstepping (e.g., if one started modifying 10,000 objects due to a logic flaw, we’d want to catch that early via logs or alerts!).
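Here is the shape of that nightly-cleanup idea as an Azure Automation runbook sketch, reporting rather than deleting, and authenticating with the account’s managed identity so no secrets are stored (the identity still needs Graph app-role permissions granted, per the Configure-ManagedIdentityPermission.ps1 approach above):

```powershell
# Azure Automation runbook sketch: report devices with no Intune sync in 90+
# days. Connect-MgGraph -Identity uses the account's managed identity.
Connect-MgGraph -Identity

$cutoff = (Get-Date).AddDays(-90)
$stale  = Get-MgDeviceManagementManagedDevice -All |
    Where-Object { $_.LastSyncDateTime -lt $cutoff }

Write-Output "Found $($stale.Count) devices with no sync since $($cutoff.ToShortDateString())"
$stale | Select-Object DeviceName, UserPrincipalName, LastSyncDateTime | Write-Output
```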
Slide: Real-World Automation Wins – Examples of scripting and integration paying off:
- Bulk Policy Assignment: A PowerShell script using Graph that assigned a new compliance policy to all 5,000 devices in minutes, versus hours of clicking – ensured 100% coverage with a one-time run.
- Intune Configuration Backup: Using community script to export all Intune configs (policies, apps, compliance rules) to JSON (Infrastructure-as-Code mindset). This provided source control for Intune – enabling change tracking and quick recovery if someone mis-configures something.
- License Management Bot: An Automation runbook that checks for unassigned Microsoft 365 licenses nightly and reclaims them (or assigns to new joiners by reading HR data). Saved thousands of dollars by recycling licenses promptly.
- Graph + Teams for Alerts: Developed a small Azure Function that queries Graph for devices with high-risk alerts (e.g., multiple failed logins or malware found) and posts an adaptive card into a SecOps Teams channel with device details and remediation options. This reduced incident response time by integrating alerts into our daily chat workflow.
Speaker Notes: To make this concrete, let’s discuss a few success stories of automation (some inspired by MMS attendee stories and our own experience): – We had a scenario of deploying a new compliance policy requiring BitLocker. Instead of manually selecting groups and dealing with the Intune UI which can time out with very large selections, we wrote a quick PowerShell script to add all devices to the policy via Graph. It looped through devices from Graph (with proper paging) and called the assignment endpoint. This accomplished in ~10 minutes what might have taken an afternoon manually. Moreover, we kept that script – so next time we have a global change, we just adjust and re-run. – Intune doesn’t have a native “backup” or export function for all settings, but the community filled that gap. We used a public IntuneBackup script (which uses Graph under the hood) to pull down every policy, app configuration, etc., as JSON and PowerShell. We now run this monthly (could even automate it) and store in a git repository. Not only does this give us a safety net if someone accidentally deletes a policy, but we can track changes over time by diffing those JSON files. It’s like having version control for our device management configuration – a key DevOps principle. – Another win: License management bot – we noticed we were overspending on M365 licenses because accounts of departed users weren’t being freed. So we scripted it using Microsoft Graph and some HR database checks: nightly, for any user marked as inactive or left, the script removes their licenses and assigns to a pool (or flags for re-assignment). If it finds new unlicensed users, it assigns from the pool. This closed a costly gap and ensured new hires aren’t waiting for licenses. – Finally, tying Graph to Microsoft Teams: one team built a Teams bot that surfaces Intune/Defender alerts right into chat. For example, if a device is marked with “High Risk” by Azure AD Identity Protection or has a serious Defender alert, the bot posts in the IT channel with device name, user, and one-click buttons to trigger actions (like isolate machine, or open in the portal). This kind of integration speeds up our response; admins don’t have to watch multiple dashboards – the information comes to where we already communicate. All of this is powered by Graph API calls combined with Teams webhooks. This exemplifies using Microsoft’s cloud as a platform – stitching services together to work smarter, not harder.
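For reference, the bulk-assignment win above reduces to one assign action per policy. A sketch with placeholder IDs:

```powershell
# Sketch: assign a compliance policy to a group in a single Graph call.
# Policy and group IDs are placeholders.
Connect-MgGraph -Scopes "DeviceManagementConfiguration.ReadWrite.All"

$policyId = "<compliance-policy-id>"
$body = @{
    assignments = @(
        @{ target = @{
            "@odata.type" = "#microsoft.graph.groupAssignmentTarget"
            groupId       = "<target-group-id>"
        } }
    )
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicies/$policyId/assign" `
    -Body $body
```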
Cloud Monitoring and Analytics (DCR, Azure Monitor, Sentinel)
Slide: Modernizing Log Collection (AMA & DCR) – Move to Azure Monitor Agent and Data Collection Rules:
- Azure Monitor Agent (AMA): Replace legacy agents (Log Analytics agent/MMA, Dependency agent, etc.) with AMA on servers and clients. AMA is the unified telemetry collector for Azure Monitor, Defender, Sentinel, etc., and it’s managed via the cloud (no more MMA direct workspace links).
- Data Collection Rules (DCR): AMA uses DCRs to configure what data to collect, how to transform it, and where to send it. A DCR defines sources (Windows events, performance counters, syslogs, etc.) and can filter or modify data before ingestion. This allows a scalable, declarative way to manage logging across many machines.
- Benefits: DCRs enable granular control – e.g., collect System events but exclude noisy Event IDs, or capture specific performance counters at a certain interval. You can send data to multiple destinations (Log Analytics, storage, event hubs) from one agent by attaching multiple DCRs. Also, you scope DCRs by machine (via DCR Associations), making it easy to have different logging for different server roles.
- Migration Plan: Audit where legacy Microsoft Monitoring Agent is installed. Set up AMA and equivalent DCRs to mirror needed logs. Use Azure Policy to auto-deploy AMA and DCR association to VMs at scale. Run both agents in parallel for a short period if needed, then retire the old ones.
Speaker Notes: Our monitoring is shifting from old agents to the new Azure Monitor Agent (AMA). The AMA is one agent to rule them all – whether it’s feeding data to Azure Monitor Logs, Microsoft Sentinel, or other Azure services, this one agent handles it. Importantly, it’s configured not by the old method of workspace linking and config in the portal, but by Data Collection Rules (DCRs). Think of a DCR as a recipe for what the agent should do: for example, “collect Windows Application event logs that match X filter, and send them to Log Analytics workspace Y”. We attach that rule to a set of machines (via Azure Resource Manager scopes, like subscription, resource group, or a specific VM). This decouples configuration from the agent itself and allows applying different rules to different sets of machines easily. From the session “Demystifying DCRs”, we learned a DCR has sections for data sources (could be Windows Event log, Linux syslog, performance data, etc.), an optional transformation (a KQL-based filter/transform on incoming data), and destinations. The transformation piece is gold – it lets us drop noise before it’s ever ingested (why pay for logs you don’t need?). For instance, you might only ingest Windows Security Events of level “Error” or higher, or strip out certain columns from telemetry to reduce size. This was much harder or impossible with the old agent. Our plan: enable AMA on all our servers and any client devices where we need detailed logs. The session on Azure Monitor migration recommended using Azure Policy to deploy the AMA extension and DCR associations automatically. This ensures any new VM coming online gets the agent and correct DCR config without manual steps. We’ll gradually phase out the old agent (MMA) – Microsoft’s support for it is waning, and AMA+DCR is required for newer features like Microsoft Defender for Servers P2 benefits and advanced VM insights.
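To make the DCR idea tangible, here is a hedged sketch of creating one via the ARM REST API – resource names, region, and the XPath filter are placeholder assumptions, not values from the session:

```powershell
# Sketch: a DCR that collects only Critical/Error System events into one workspace.
$sub = "<subscription-id>"; $rg = "rg-monitoring"; $dcrName = "dcr-system-errors"
$dcr = @{
    location   = "eastus"
    properties = @{
        dataSources = @{
            windowsEventLogs = @(@{
                name         = "systemErrors"
                streams      = @("Microsoft-Event")
                xPathQueries = @("System!*[System[(Level=1 or Level=2)]]")  # Critical + Error only
            })
        }
        destinations = @{
            logAnalytics = @(@{
                name                = "centralWorkspace"
                workspaceResourceId = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.OperationalInsights/workspaces/law-central"
            })
        }
        dataFlows = @(@{ streams = @("Microsoft-Event"); destinations = @("centralWorkspace") })
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Insights/dataCollectionRules/$dcrName?api-version=2022-06-01" `
    -Payload $dcr
```

Associating the rule with machines (or doing both steps through Azure Policy, as the session recommended) is a separate call; the point is that the whole logging configuration becomes declarative, reviewable, and versionable.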
Slide: Cost Optimization in Log Ingestion – Strategies to control and reduce Sentinel/Azure Monitor costs:
- Leverage Ingestion Credits: Use your Microsoft licensing benefits. Defender for Servers P2 includes 500 MB/day per server of free security log ingestion (cumulative across all servers). M365 E5 gives 5 MB per user/day of free ingestion for Office 365 logs (Azure AD, etc.). Ensure these are utilized by sending those logs to Sentinel – effectively free up to the limits.
- Data Filtering via DCR: Apply DCR XPath filters on Windows Event logs and Syslog filters on Linux to drop high-volume, low-value data before ingestion. For example, exclude noisy success events or verbose application logs that aren’t actionable. Even a 10% reduction in data volume can translate to significant monthly savings.
- Retention and Archiving: Adjust Log Analytics retention – keep data hot (readily queryable) for the period you truly need (e.g., 30 or 90 days for security events), then let it auto-archive to cheaper storage. Use Basic Logs for high-volume data that you query rarely (e.g., VM performance logs) – it’s cheaper ingestion. For Sentinel, consider exporting older incidents or using the new Log Archival for long-term compliance storage.
- Right-Size Your Commitment: If using Sentinel heavily, choose an appropriate Commitment Tier – a fixed daily ingestion commitment with discounted rate. Monitor your usage; if you consistently ingest, say, 500 GB/day, committing to that tier will lower the per-GB cost. Re-evaluate periodically – you can increase tier anytime (restarts 31-day period) and decrease when the term ends.
Speaker Notes: Cloud log analytics can get expensive if unchecked. Thankfully, Microsoft provides some ingestion allowances with certain licenses. For instance, each server covered by Defender for Servers Plan 2 (part of Defender for Cloud) gets up to 500 MB/day of security logs ingested free. If you have 100 servers, that’s up to 50 GB/day at no cost – which is significant. Similarly, our Microsoft 365 E5 licenses grant some free ingestion of Office 365 audit logs. We must verify we’ve correctly connected those data sources so we’re not missing out on free allocations. The Sentinel Cost Management session emphasized these as “use what you’ve already paid for” tactics. The second big lever is filtering. With DCRs on AMA, we can do things like only collect events we care about. John Joyner and Morten (speakers) gave examples: they filtered out Windows event ID 4625 (failed logon) from some noisy test systems, and that alone cut GBs per day. Another trick: if you have Linux or networking devices sending syslog, use the syslog filtering in the DCR to drop, say, “info” level messages that aren’t needed. This is about being intentional with what we ingest. You can also filter at the source for things like IIS logs – maybe you don’t need every web request logged at full detail for every server. Retention policies: Azure Monitor lets us choose how long to keep data in the workspace before archiving or purging. If our compliance or investigations typically only look 90 days back, don’t keep 2 years of data in hot storage – that racks up costs. We can archive older data (still accessible, just requires a retrieval step and is slower/cheaper). Also use Basic Logs for data types that don’t need full analytics features – e.g., massive telemetry from IoT or debug logs. They cost much less per GB. Microsoft has been releasing features to that end because everyone’s dealing with the data tsunami. Lastly, watch your Commitment Tier with Sentinel. This is like a cell phone plan for data – commit to a certain amount per day. If we go over occasionally, we pay overage; if under, we still pay the committed amount. We should find the sweet spot – you don’t want to pay for 500 GB/day if you only use 300, but if you regularly use 300 and are on pay-as-you-go, you’re leaving volume discounts on the table. The session suggested reviewing usage quarterly and adjusting tiers (which you can do after each 31-day cycle). Overall, through filtering, smart retention, and using entitlements, some orgs saved 30% or more on their Sentinel bills – that’s real money we can channel to other investments.
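Before tuning anything, measure what you actually pay for. A quick sketch, run from PowerShell against the workspace (the workspace GUID is a placeholder; note that Quantity in the Usage table is reported in MB):

```powershell
# Sketch: rank billable ingestion by table over the last 30 days.
$kql = @"
Usage
| where TimeGenerated > ago(30d) and IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| order by IngestedGB desc
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql).Results |
    Format-Table DataType, IngestedGB
```

Tables that dominate this list are the first candidates for DCR filtering, Basic Logs, or shorter retention.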
Slide: Alerting and Analytics – Set up actionable alerts and utilize baseline analytics:
- Azure Monitor Alerts: Define alerts for critical conditions – e.g., CPU >90% sustained 10 min, low available memory, or a specific Event ID (service crash). Use action groups to notify teams or trigger remediation runbooks. Ensure alert rules are tuned (thresholds right-sized to avoid noise). Leverage dynamic thresholds for metrics when available so the system learns normal baseline.
- Azure Monitor Baseline Alerts (AMBA): Microsoft provides a set of pre-built alert rule templates for common scenarios (e.g., sustained high CPU, low disk space, unexpected reboots). Deploy these as a starting point for monitoring servers and services, then adjust as needed for your environment.
- Governance via Azure Policy: Use Azure Policy to enforce monitoring standards – e.g., every subscription must have a Log Analytics workspace, and every VM must enable certain log categories. Policy can even auto-enable VM Insights or AMA on new resources. Regularly review compliance reports to catch resources that drift (like an unmonitored VM).
- KQL for Insights: Teach the team Kusto Query Language (KQL) to query logs. This unlocks ad-hoc analysis: e.g., investigating a spike in CPU by correlating logs, or querying sign-in logs to see patterns. KQL is powerful for creating custom workbooks/dashboards that visualize our data across monitoring and security domains.
Speaker Notes: Good monitoring isn’t just about gathering data, it’s about getting useful alerts from that data. We should implement a set of core Azure Monitor Alerts for our infrastructure. This includes basic health metrics like CPU, memory, disk – but tuned to realistic thresholds (maybe 95% CPU if sustained 5 min, to avoid transient spikes causing alerts). Also specific event-based alerts: if a domain controller logs an error event about replication failure, that should alert immediately. We can have alerts trigger emails, SMS, or Teams messages through action groups. For really critical things, an action group could even trigger an auto-remediation script (for instance, if a service stops, run a Logic App to attempt to restart it and notify). Microsoft has published Azure Monitor Baseline Alerts (AMBA) – essentially a library of recommended alert rules. In the session, Brian Wren talked about how to deploy and adapt these. We should take advantage of that work: deploy the template alerts for Windows Server, for Azure AD, for VMs, etc., and then modify thresholds once we see how often they fire. It’s easier to tweak an existing rule than to think of all the alerts from scratch. Azure Policy for monitoring is something we might not have fully used yet. With Policy, we can require that certain diagnostics are enabled. For example, ensure every Azure SQL DB has auditing turned on to a Log Analytics workspace, or every new subscription automatically gets our standard log workspace and alert rules deployed. Policy can not only audit but also remediate (via deployIfNotExists effects) – meaning it can automatically configure things. A policy could detect a VM without AMA and then kick off the installation of the AMA extension and assign the appropriate DCR. This kind of governance-as-code ensures nothing falls through the cracks even in a large, dynamic cloud environment. Finally, empowering ourselves with KQL (Kusto Query Language) is a force-multiplier. All our logs in Log Analytics/Sentinel are queryable with KQL. With it, we can do things like join data from Intune and Azure AD to investigate an issue (e.g., cross-reference a device’s patch status with sign-in locations). It’s like SQL for logs. We should include some KQL training in this course, as it was mentioned across multiple sessions as a key skill for modern cloud IT pros. The better we can slice and dice our data, the more value we get from our monitoring investments – finding trends, identifying root causes, and demonstrating results (e.g., show via a query that our patch compliance improved 20% after implementing Autopatch – great for an executive report).
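As a small example of the kind of ad-hoc KQL this unlocks – Event ID 7034 is the Service Control Manager’s “service terminated unexpectedly” event, and the query assumes System event-log collection is already in place:

```powershell
# Sketch: find services crashing repeatedly across servers in the last 24 hours.
$kql = @"
Event
| where TimeGenerated > ago(24h)
| where EventLog == "System" and EventID == 7034
| summarize Crashes = count() by Computer, RenderedDescription
| order by Crashes desc
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql).Results
```

An investigation query that proves useful once can then back a scheduled alert rule, so the next occurrence pages us instead of waiting to be found.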
Slide: From Monitoring to Action (DevOps mindset) – Use monitoring data to drive continuous improvement:
- Feedback Loop: Treat monitoring outcomes as feedback for configuration. For example, if alerts show frequent threshold breaches, consider if the threshold is too low or the system needs resources. If many devices report a policy not applying, adjust the policy or target. Use insights to refine your configurations continually.
- Dashboards and Reporting: Create tailored dashboards for different audiences – an Ops dashboard with real-time system statuses, a Security dashboard with current threat alerts, and an Executive dashboard with KPIs (uptime, compliance percentage, mean time to resolve incidents). Leverage Azure Monitor Workbooks or Power BI for this, combining data as needed.
- Incident Post-Mortems: After major incidents (e.g., outage or security event), use log analytics to do root cause analysis. Document what monitoring missed or which alert was noisy, and improve the monitoring rules set. Possibly introduce new logs or alerts to catch it next time.
- Cloud AI Analytics: Keep an eye on new AI-driven monitoring features – e.g., anomaly detection in metrics, User and Entity Behavior Analytics (UEBA) in Sentinel, or even Copilot-like assistants that can sift logs. These can augment our abilities by spotting patterns we might miss.
Speaker Notes: The end goal of collecting all this data is to take action and improve. If monitoring is telling us something repeatedly, we shouldn’t just get alert-fatigue – we should either fix the underlying issue or improve the signal. For instance, if we get frequent high-CPU alerts on a VM that’s mission-critical, maybe the answer is to scale it up or optimize the application. Or if an alert is always a false alarm, tune its threshold or disable it. We want meaningful alerts only – quality over quantity. Building dashboards helps different stakeholders consume this information. Operations teams might need a wall monitor showing server statuses, while security teams want a map of incoming threats and their handling status. Executives likely care about trends and overall posture (e.g., “We have 98% of devices compliant with security policies this quarter, up from 90% last quarter, and here’s the ROI: 0 major incidents.”). Using Azure Monitor Workbooks, we can actually create pretty sophisticated dashboards with graphs and charts from our log data, and even share them in the Azure portal or as websites. Power BI is also an option for combining operational data with, say, business data to show impacts. A DevOps mindset means after every incident or change, learn from the data. If a cyber incident happened, did our monitoring catch it? If not, what log or alert could be added so it would next time? If a critical server went down and we only found out when users called, why didn’t our alerts notify us? Maybe we missed monitoring a particular system or the alert rule was mis-scoped. Each event is a chance to calibrate. Lastly, there’s a trend of AI in monitoring – some tools now will auto-detect anomalies (like a sudden spike in login failure rate might indicate a password spray attack, and an AI could flag that even if you didn’t set a specific threshold). Microsoft Sentinel has User and Entity Behavior Analytics (UEBA), which baselines normal behavior and alerts on deviations. And looking forward, we might see AI copilots that can summarize log storms or suggest which alerts are related. We should be prepared to experiment with these emerging features to stay ahead of the curve. They won’t replace our judgment, but they can crunch data faster than we can, giving us a head start in incident detection.
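While the copilot-style features mature, KQL already ships threshold-free anomaly detection we can experiment with today. A hedged sketch for the password-spray example (assumes Entra ID sign-in logs are flowing to the workspace; the 2.5 sensitivity value is arbitrary):

```powershell
# Sketch: flag hourly sign-in failure counts that deviate from the learned baseline.
$kql = @"
SigninLogs
| where TimeGenerated > ago(14d)
| make-series Failures = countif(ResultType != "0") on TimeGenerated from ago(14d) to now() step 1h
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(Failures, 2.5)
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql).Results
```

No fixed threshold appears anywhere – the function learns the series’ own baseline and seasonality, which is exactly the behavior the AI-driven features generalize.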
AI and Copilot for IT Operations
Slide: The Rise of AI in IT Management – AI is transforming IT work by providing intelligent assistance:
- AI’s Role: No longer just a buzzword – AI is becoming an intuitive helper in IT, turning data into decisions. It can correlate signals across systems, suggest actions, and even automate routine tasks. As Microsoft puts it, “AI makes your workspace intuitive and fundamentally changes how IT is done”.
- Copilot Vision: Microsoft’s “Copilot” systems are AI assistants embedded in products. They aim to meet admins “where they are” (in the flow of work), simplify the complex, and help focus on what’s important. Instead of trawling through logs or docs, an admin can ask the Copilot in plain language for insights or to perform tasks.
- Examples of AI Assistance: Natural language querying of device data (“Show me all devices missing yesterday’s patch”), automated summary of an outage’s root cause from multiple sources, or generating a PowerShell script snippet on the fly. These reduce toil and augment human capability.
- Responsible AI: All AI features are built with privacy and security in mind – Microsoft emphasizes your organizational data is not used to train the public models and is kept within compliance boundaries. We should still validate AI outputs, but we can trust the platform to handle data respectfully.
Speaker Notes: We’re at an inflection point where AI is becoming a co-worker in IT. At MMS, this was a hot theme – every other session touched on AI in some form. Microsoft’s stance is that AI will empower IT pros by taking over drudgery and making expertise more accessible. Think of AI as a very fast-reading, context-aware assistant who’s read all the docs and seen tons of configurations. For instance, an AI might notice “Hey, five devices failed to install an app due to the same error – here’s the likely fix” without you digging through logs for an hour. Or you could ask, “Copilot, summarize the compliance state of our Windows 11 laptops” and get a clear answer or even a chart. This lines up with Microsoft’s design principles for Copilot – meeting you in whatever console you’re using, helping you focus, and simplifying complex tasks. It’s important to note Microsoft’s messaging on Responsible AI – they know companies are worried about data leakage or AI going rogue. They assure that when you use, say, Intune Copilot, your data stays within that service and isn’t used to train the base AI model for others. Essentially, the AI is pre-trained (like GPT-4) and then it’s bounded by your enterprise data and policies when it serves you. They’ve built in guardrails aligned with fairness, privacy, security, etc. So while we always need to review AI outputs for accuracy, we shouldn’t fear that using these tools will expose sensitive info externally.
Slide: Copilot in Intune – Practical Uses – Intune’s Copilot (preview) can assist with device management tasks:
- Device Queries: Ask Copilot questions about devices or users and get instant answers. E.g., “Show devices with BitLocker off” or “Which devices have had installation errors in the last week?” – Copilot will query Intune data and present a summary or list (saving you from building complex filters or reports).
- Troubleshooting Assistance: Copilot can help diagnose device issues. For a given device, it can compare its metrics to fleet averages, explain error codes, or highlight recent changes/policies that might be relevant. It essentially triages the vast telemetry for you to pinpoint likely causes.
- Policy Insights and Creation: It can summarize what a policy does in plain language (great for understanding complex config profiles), and even help craft a new policy – e.g., you tell Copilot what you want (“restrict USB drives”) and it suggests the Intune settings or script to deploy.
- Integration & “Open” Chat: Initially, Intune Copilot scenarios are scoped (focused prompts with defined outputs). But it’s moving toward a more open chat interface where you can ask follow-up questions, with conversation history, to refine the output. Imagine debugging an app deployment by iteratively asking why a device failed, then narrowing down by software, all in one thread.
Speaker Notes: Let’s talk about Copilot in Intune, which was demoed in preview. This is like having a support engineer embedded in the Intune portal. For example, rather than navigating through four blades to compile a list of non-compliant devices, you literally ask, “Copilot, list devices that are non-compliant with reason and last check-in time.” The AI translates that into the right Graph queries and presents you an answer. If you need more detail, you ask a follow-up. This natural language interface can dramatically speed up obtaining information. It lowers the barrier for junior admins too – they might not know Intune’s UI deeply, but they can ask questions and learn from the results. In troubleshooting, one powerful feature described is comparing a problematic device to healthy ones. For instance, a laptop isn’t getting a config policy – you ask Copilot, and it might check that laptop’s logs, see that 95% of other devices got the policy successfully, and then spot what’s different (maybe it’s on an older OS build or in a different group). It can surface those insights whereas a human might take hours digging. Copilot can explain error codes in plain English (no more hunting through obscure documentation for that hex code). The Intune team even showed scenarios like summarizing a complex policy – e.g., given a multi-setting config profile, Copilot can summarize what it does. This helps avoid misconfigurations; you sanity-check the summary to ensure the policy aligns with intent. While early Copilot features were “scope-based” (specific prompts/buttons in the Intune UI that do one thing), Microsoft is transitioning to a conversational model. This means you’ll have a chat panel in Intune (as they already previewed) where Copilot retains context. You could ask: “Why did John’s device encryption fail?” It might respond with an analysis (policy X didn’t apply because the device is in group Y). Then you follow up: “Generate a remediation script for that.” And it gives a PowerShell script to push via remediation. This back-and-forth, with memory, is key – it’s like pair programming but for IT operations. Early adopters in preview gave feedback that they needed more open prompting, hence the shift.
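The “generate a remediation script” step still ends with a human review before anything deploys. A hypothetical example of the detection half of an Intune remediation pair – the kind of draft we’d expect Copilot to hand back for the encryption scenario:

```powershell
# Hypothetical Copilot-drafted detection script for an Intune remediation.
# Exit 0 = compliant (remediation skipped); exit 1 = remediation script runs.
try {
    $osVolume = Get-BitLockerVolume -MountPoint $env:SystemDrive
    if ($osVolume.ProtectionStatus -eq 'On') {
        Write-Output "BitLocker protection is on for $env:SystemDrive."
        exit 0
    }
    Write-Output "BitLocker protection is off for $env:SystemDrive."
    exit 1
}
catch {
    Write-Output "Unable to query BitLocker state: $_"
    exit 1   # treat an unknown state as non-compliant so remediation investigates
}
```

Per the validation guidance later in this module, even a script this small goes to a pilot group first.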
Slide: Security Copilot – AI for SecOps – Microsoft Security Copilot can augment our security operations center:
- Threat Analysis: Security Copilot (built on GPT-4 plus a security-specific model) can ingest an incident (from Sentinel or Defender) and summarize the attack – telling a coherent story of what happened across endpoints, identities, etc. It can identify the attack vectors and impacted assets in minutes, which analysts might take hours to correlate.
- Investigation Assistance: Analysts can ask follow-ups like “Which accounts did this malware attempt to use?” or “Is this IP address seen elsewhere in our logs?” and the Copilot will query the data. It’s like having a threat hunter on demand that knows your environment’s telemetry.
- Response Suggestions: Copilot can suggest remediation steps or even craft a PowerShell or KQL query to contain or further investigate the threat. For instance, it might output a script to isolate all machines that communicated with a malicious IP. Analysts can review and execute, accelerating the response.
- Continuous Learning: The system incorporates global threat intelligence and learns from user feedback. If it makes an incorrect assumption and the analyst corrects it, it adapts. Over time this can improve detection of what is truly malicious vs. benign in your context.
Speaker Notes: Security Copilot is an exciting development for our cybersecurity team. Imagine an AI that has digested billions of threat signals and can cross-reference that with our own logs. When a complex incident happens (say a multi-stage attack where a phishing email led to credential theft, then lateral movement, then data exfiltration), Copilot can connect the dots extremely fast. It generates an incident summary or attack timeline that we usually have to painstakingly piece together from various tool consoles. In a demo, Microsoft showed how an analyst, instead of manually querying, just asked Security Copilot, “Have we seen this suspected malware file elsewhere in the org?” and it came back with a list of machines and times it was detected (by tapping into Defender data). Normally, that task would require writing KQL queries or scripts. The AI can also go proactive: “What are notable anomalies in the last 24 hours?” It might surface things that weren’t caught by static rules, like a user downloading an unusual amount of data at 3 AM or a device suddenly authenticating from two countries. This is similar to how Sentinel’s UEBA works, but Copilot puts a natural language layer on top. One key point: Security Copilot is grounded in Microsoft’s security graph and our data – it’s not making random guesses. It uses a security-hardened model and won’t execute changes itself; it gives recommendations. We must still validate and approve actions, which is good – we stay in control. But it offloads the heavy “analysis paralysis” that large data volumes cause. By using it, we can respond faster and with more confidence because we didn’t overlook something buried in the noise. It’s like having a Tier-3 SOC analyst available 24/7 who never gets tired. This is a huge force multiplier given the talent shortage in security.
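As a sketch of the kind of containment action Copilot might propose – here the Defender for Endpoint machine-isolation API, with token acquisition and the machine-ID lookup omitted ($token and $machineId are assumed to exist already):

```powershell
# Sketch: isolate a single device via the Defender for Endpoint API.
# An analyst reviews and runs this; Copilot recommends, it does not execute.
$body = @{
    Comment       = "Isolated pending investigation of contact with malicious IP"
    IsolationType = "Full"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/isolate" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```

The “isolate every machine that talked to the IP” version is the same call inside a loop over an advanced-hunting result set.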
Slide: Preparing for AI-Driven IT – Upskill and adjust processes to fully exploit AI tools:
- Learn Prompting Skills: Just like crafting good search queries, learn to craft effective prompts for Copilots. Be specific about what you need (“in plain English” or “show steps to resolve X”). We might even develop internal “prompt playbooks” for common tasks (e.g., how to ask Copilot to generate a report vs. troubleshoot an issue).
- Validate and Verify: AI is not infallible. Always review suggestions or scripts it provides before applying. Use test devices or pilot groups to verify actions. Maintain a healthy skepticism – if something seems off, double-check with traditional methods.
- Security and Privacy Training: Ensure the team knows what data can and can’t be shared with AI. Even though Copilot is enterprise-ready, we avoid inputting ultra-sensitive info in prompts unnecessarily. For example, use anonymized identifiers or ID numbers instead of real names in queries when possible.
- Embrace Automation Ethos: Encourage the habit of, “If Copilot can do it, let it.” Free up human time for higher-level analysis and planning. Invest the time saved into strategic projects or learning. As AI handles routine tasks, our roles may shift more to oversight and design – which is a good thing.
Speaker Notes: To make the most of AI in our daily work, we need to adapt and learn new skills. One emerging skill is prompt engineering – basically, knowing how to ask AI the right way. In early trials, users found that how they phrased a question to Copilot affected the quality of answer. For example, saying “summarize the issue in non-technical terms for helpdesk” might yield a more useful output for communicating to end-users, whereas “provide a detailed technical analysis” would give a deep dive. We should share among the team what types of prompts get good results for Intune Copilot or Security Copilot. Maybe even create a cheat-sheet (some have joked it’s like a new coding language – not quite, but skillful prompting is key). We also must keep our professional judgment at the forefront. AI can and will make mistakes – maybe due to incomplete data or misinterpretation. Always verify critical things. If Copilot suggests a PowerShell script to remediate an issue, run it in a lab first or at least inspect the code line by line. This is similar to how we treat any new tool or junior admin’s work – trust but verify. Over time, as we gain confidence in patterns, we might streamline that, but initially we double-check everything. Another consideration: data handling. The AI might have access to a lot, but what we explicitly feed it in a prompt (like copying a chunk of a log or an error message) – we should ensure that’s okay under our policies. Microsoft says they’re not taking our prompts to train their models, which is comforting. But as a rule, we avoid putting secrets or PII in prompts. Instead of “User John Smith’s SSN 123-45-6789 had an issue,” we’d ask “User ID 1001 had an issue with payroll app” – keep it high-level. Lastly, mindset – we should embrace that AI will handle more grunt work. This is not about making us obsolete; it’s about elevating our work. If Copilot prepares a draft of a report, we’re freed to add analysis and context to it rather than crunching numbers. If it troubleshoots common issues, we can focus on architecture improvements to prevent those issues. Encourage the team to use these tools and not feel threatened by them. The organizations that thrive will be those where humans + AI work in tandem effectively. So we’ll incorporate AI tool training alongside our traditional training, ensuring everyone is comfortable and sees AI as part of the toolkit.
Optional Executive Summary:
The 2025 MMS conference revealed clear trends across Intune, security, cloud, and AI that underscore a new paradigm for IT management. Modern Device Management is firmly cloud-centric – with Intune best practices emphasizing Azure AD (Entra ID) integration, policy automation, and an “evergreen” mindset. Organizations are moving away from on-premises GPO/SCCM mentalities to leveraging Intune’s cloud speed and scale. A key takeaway is to avoid legacy pitfalls: for example, use Intune Filters and dynamic groups wisely to target devices, and regularly review your configurations to eliminate conflicts and outdated settings. The consensus is that keeping pace with Intune’s continuous improvements (monthly updates, new features) is now part of the job – “Don’t fall behind” as one session put it. Upskilling in Intune and cloud management isn’t optional; it’s necessary to manage Windows 10/11, Linux, and even macOS devices at scale with agility.
Endpoint security has equally evolved. The conference hammered home a holistic, Zero Trust approach where identity, endpoint, and cloud app security all interlock. Microsoft’s Defender suite provides an integrated XDR platform – and the message was that most companies are under-utilizing it. Enabling features like Attack Surface Reduction rules, Tamper Protection, and leveraging Defender’s threat intelligence can prevent breaches proactively (blocking common ransomware techniques and credential theft before they cause damage). Real-world case studies showed that basic cyber hygiene (MFA, up-to-date patching, device hardening) could mitigate 98% of attacks – a statistic that strongly justifies continued investment in security tools and training. Teams should be cross-training: endpoint admins learning conditional access, and identity admins learning device compliance, because the lines are blurring. A standout theme was cost-effective security – using what you already own (like E5 license features or Sentinel ingestion credits) to avoid unnecessary spend. In short, invest in fully deploying the security capabilities at hand and fine-tuning them, rather than buying point solutions that add complexity.
Automation and AI emerged as the twin pillars driving the future of IT operations. Nearly every session touched on the importance of automating repetitive tasks – whether via PowerShell scripts calling the Graph API or using Azure Logic Apps for workflow integration. The ROI of automation is concrete: we heard how organizations saved hundreds of hours and reduced errors by scripting bulk operations and establishing “infrastructure as code” for their configurations. This not only improves efficiency but also consistency and compliance. It’s clear that IT professionals need to sharpen their scripting and cloud integration skills – the ability to tie services together (Intune, Azure Monitor, Teams, etc.) with code or low-code tools is highly valued.
On top of that, AI is set to amplify every aspect of IT. Microsoft’s Copilots for Intune and Security are on the cutting edge – early adopters report significantly faster troubleshooting and more insightful analysis. An executive insight: AI won’t replace IT staff, but IT staff who harness AI will outpace those who don’t. The major trend is AI-assisted administration: mundane tasks and initial triage can be offloaded to digital assistants, allowing human experts to focus on strategy and complex problems. For example, Intune Copilot can draft a device compliance report or suggest a fix for a policy conflict in moments – tasks that might take an engineer hours. This means our team can manage a larger environment or new technologies without linear headcount growth. The actionable recommendation is to invest in AI training and pilot programs now. Develop internal guidelines for AI use, and identify high-impact use cases (like using Security Copilot in incident response to reduce MTTR – mean time to resolve).
In summary, MMS 2025’s overarching theme was one of integration and intelligence. Silos between endpoint management, security, and identity are dissolving – today’s IT admins need a T-shaped skillset, with deep expertise in their area but familiarity across adjacent domains. Every tool is becoming more connected (via Graph/APIs) and more intelligent (via AI/ML). Adopting a cloud-first, automation-first mindset is critical for keeping our IT operations agile, secure, and cost-effective. The content strongly justifies upskilling our team in areas like Azure AD/Entra (for Zero Trust identity), Microsoft Graph and PowerShell (for automation), Azure Monitor/Sentinel (for cloud-native monitoring), and AI Copilots. These investments will pay off in productivity gains and improved security posture. As one attendee noted, there were “so many takeaways I can apply at work” – the next step is turning these learnings into action by updating our internal practices, training our staff on the new tools, and embracing the cloud innovations at our fingertips. With the strategies and best practices distilled from MMS 2025, we can confidently evolve our IT landscape to be more resilient, efficient, and ready for the future.