
Building a Patch Management Ring Strategy from Scratch

You have 200 machines, a WSUS server you last looked at in February, and Microsoft just dropped 97 patches on Patch Tuesday. You approve everything at once, push to the whole fleet, and pray nothing breaks. Here's how to stop praying and start staging.

Tags: Patching, WSUS, Best Practices, PowerShell

Why ring-based patching matters

The "approve everything on Tuesday, push it all on Wednesday" approach works until it doesn't. And when it doesn't, it fails catastrophically. A bad driver update bricks half your laptops. A .NET patch breaks a line-of-business app. A firmware update causes blue screens on a specific hardware model. Suddenly you're reimaging 40 machines at 11 PM on a Wednesday, and your phone is ringing with every department head in the building.

If you needed a recent reminder of why staged rollouts matter, look no further than the CrowdStrike incident in July 2024. A single faulty channel file pushed to the entire install base simultaneously took down roughly 8.5 million Windows devices worldwide. Airlines grounded flights, hospitals reverted to paper, and some 911 systems went offline. The root cause wasn't just a bad update — it was the absence of staged deployment. Every machine got the same file at the same time, so every machine failed at the same time. There was no canary, no ring of early adopters to catch the problem before it reached critical infrastructure.

A ring-based strategy is the fix. Instead of treating your entire fleet as one giant target, you split it into groups that receive patches at staggered intervals. Problems surface in the early rings — a handful of IT-owned devices — before they ever touch a finance workstation or a production server. You get a built-in feedback loop: deploy, wait, verify, promote. The cost is a few days of delay. The benefit is that you never have to explain to the CFO why every machine in accounting is down.

For small IT teams, this feels like overhead. You barely have time to approve patches, let alone manage four deployment rings. But the tooling is straightforward, the setup is a one-time investment, and the alternative — a fleet-wide outage with zero rollback path — is far more expensive than a few hours of configuration.

The four-ring model

The rings aren't arbitrary groupings. Each one has a purpose, a specific device population, and a delay that reflects how much risk you're willing to absorb at that stage.

Ring 0 — Pilot

Devices: IT team laptops and workstations. The machines owned by the people who can diagnose and recover from a bad patch without filing a ticket.

Size: 5-10 machines (or ~3-5% of fleet).

Delay: Deploy immediately on Patch Tuesday, or within 24 hours.

Purpose: Catch obvious breakage — BSODs, app crashes, boot loops — before anyone outside IT is affected. If a patch kills Outlook on your machine, you pull it from the pipeline before it reaches 200 users.

Ring 1 — Early Adopters

Devices: Departments with tech-comfortable users who won't panic at a reboot prompt. Marketing, dev teams, or any group with a good working relationship with IT. Include a mix of hardware models to catch driver-specific issues.

Size: ~10-15% of the fleet.

Delay: 3 days after Ring 0.

Purpose: Broader hardware and software coverage. Ring 0 might be all ThinkPads running the same apps. Ring 1 introduces Dells, Surfaces, and that one team still running AutoCAD 2019. This is where compatibility issues surface.

Ring 2 — Broad Deployment

Devices: The general workforce. Standard desktops, laptops, conference room PCs, shared workstations. This is the bulk of your fleet.

Size: ~70% of the fleet.

Delay: 7 days after Ring 0.

Purpose: By this point, the patches have been running on real workloads for a week. Known-bad patches have been pulled. Any app-specific issues have been identified and mitigated. This is the "safe to deploy" wave.

Ring 3 — Critical / Protected

Devices: Production servers, domain controllers, finance workstations, executive devices, PCI-scoped machines, and anything where downtime has direct revenue or compliance impact.

Size: ~10-15% of the fleet.

Delay: 14 days after Ring 0.

Purpose: Maximum confidence before touching anything that matters. Two weeks of soak time across hundreds of machines. These devices get patched last because the cost of a bad patch here is highest — a downed SQL server or a BSOD on the CEO's laptop during a board presentation.

Ring                      Population                % of Fleet   Delay
Ring 0 — Pilot            IT team devices           3-5%         Day 0
Ring 1 — Early Adopters   Willing departments       10-15%       Day 3
Ring 2 — Broad            General workforce         ~70%         Day 7
Ring 3 — Critical         Servers, finance, execs   10-15%       Day 14
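Since every delay in the table is measured from Patch Tuesday (the second Tuesday of the month), it helps to turn those offsets into concrete dates each cycle. Here's a small helper — a sketch, not part of any WSUS tooling — that computes the target date for each ring:

```powershell
# Find this month's Patch Tuesday (second Tuesday), then offset per ring
function Get-PatchTuesday {
  param([datetime]$Month = (Get-Date))
  # Start at the 1st, walk forward to the first Tuesday, then add a week
  $day = Get-Date -Year $Month.Year -Month $Month.Month -Day 1
  while ($day.DayOfWeek -ne [DayOfWeek]::Tuesday) { $day = $day.AddDays(1) }
  $day.AddDays(7)
}

$PatchTuesday = Get-PatchTuesday
$Schedule = [ordered]@{
  'Ring0-Pilot'        = $PatchTuesday             # Day 0
  'Ring1-EarlyAdopter' = $PatchTuesday.AddDays(3)  # Day 3
  'Ring2-Broad'        = $PatchTuesday.AddDays(7)  # Day 7
  'Ring3-Critical'     = $PatchTuesday.AddDays(14) # Day 14
}
$Schedule.GetEnumerator() | ForEach-Object {
  '{0,-20} {1:yyyy-MM-dd}' -f $_.Key, $_.Value
}
```

Drop this at the top of your approval script and you never have to count days on a calendar again.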

Setting up rings in practice

The ring model is only useful if your tooling knows which devices belong to which ring. That means tagging devices in Active Directory so WSUS (or Intune, or SCCM) can target them. Here's the practical setup.

Tag devices with ring assignments in AD

Use extensionAttribute1 on computer objects to store the ring assignment. The extension attributes (1-15) exist in most domains — the Exchange schema extension adds them — they sync to Entra ID, and nothing else in Windows depends on them. If your schema doesn't include them, or your org already uses extension attributes for other purposes, fall back to the Description field.

# Assign ring values to computer objects in AD
# Ring values: "Ring0-Pilot", "Ring1-EarlyAdopter", "Ring2-Broad", "Ring3-Critical"

# Assign a single device to Ring 0
Set-ADComputer -Identity "IT-LAPTOP-01" -Replace @{
  extensionAttribute1 = "Ring0-Pilot"
}

# Bulk assign from a CSV file
$Assignments = Import-Csv -Path ".\RingAssignments.csv"
# CSV format: ComputerName,Ring

foreach ($Device in $Assignments) {
  try {
    Set-ADComputer -Identity $Device.ComputerName -Replace @{
      extensionAttribute1 = $Device.Ring
    }
    Write-Host "[OK] $($Device.ComputerName) → $($Device.Ring)" -ForegroundColor Green
  } catch {
    Write-Warning "[FAIL] $($Device.ComputerName): $_"
  }
}

Create WSUS computer groups matching rings

WSUS uses computer groups to control which machines get which approvals. Create a group for each ring, then switch WSUS to server-side targeting, so group membership is managed centrally in the WSUS console or by script — not by a registry value the client reports (that GPO-driven mode is client-side targeting).

# Create WSUS computer groups for each ring
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530

# Create the ring groups
$Rings = @(
  "Patch-Ring0-Pilot",
  "Patch-Ring1-EarlyAdopter",
  "Patch-Ring2-Broad",
  "Patch-Ring3-Critical"
)

foreach ($Ring in $Rings) {
  $WsusServer.CreateComputerTargetGroup($Ring)
  Write-Host "Created group: $Ring"
}

# Move computers to their ring group based on AD attribute
$AllGroups = $WsusServer.GetComputerTargetGroups()

foreach ($Ring in $Rings) {
  $RingTag = $Ring -replace "Patch-", ""
  $TargetGroup = $AllGroups | Where-Object { $_.Name -eq $Ring }

  # Get AD computers in this ring
  $ADComputers = Get-ADComputer -Filter {
    extensionAttribute1 -eq $RingTag
  } -Properties extensionAttribute1

  foreach ($PC in $ADComputers) {
    try {
      # WSUS identifies clients by FQDN, and GetComputerTargetByName
      # throws if the machine has never reported in
      $WsusComputer = $WsusServer.GetComputerTargetByName($PC.DNSHostName)
      $TargetGroup.AddComputerTarget($WsusComputer)
    } catch {
      Write-Warning "Not in WSUS (no check-in yet?): $($PC.Name)"
    }
  }
}
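Server-side targeting itself is a one-line server setting, and it's worth scripting so it survives a WSUS rebuild. A sketch against the same administration API used above:

```powershell
# Switch WSUS to server-side targeting so ring membership is controlled
# centrally (by the scripts above) rather than by what clients report
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530
$Config = $WsusServer.GetConfiguration()
$Config.TargetingMode = [Microsoft.UpdateServices.Administration.TargetingMode]::Server
$Config.Save()
Write-Host "Targeting mode is now: $($Config.TargetingMode)"
```

If you leave this in client-side mode, the group moves above will silently lose to whatever the clients' GPO says — a classic source of "why is this server in the pilot ring?" confusion.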

Query ring membership

Before you approve patches, you want to know exactly what's in each ring. This is also the report you'll show your manager when they ask "which machines are in the pilot group?"

# List all computers by ring assignment
$RingValues = @(
  "Ring0-Pilot",
  "Ring1-EarlyAdopter",
  "Ring2-Broad",
  "Ring3-Critical"
)

foreach ($Ring in $RingValues) {
  $Devices = Get-ADComputer -Filter {
    extensionAttribute1 -eq $Ring
  } -Properties extensionAttribute1, OperatingSystem, LastLogonDate |
    Select-Object Name, OperatingSystem, LastLogonDate

  Write-Host "`n=== $Ring === ($($Devices.Count) devices)" -ForegroundColor Cyan
  $Devices | Format-Table -AutoSize
}

# Find unassigned computers (need ring assignment)
$Unassigned = Get-ADComputer -Filter {
  extensionAttribute1 -notlike "Ring*"
} -Properties extensionAttribute1, LastLogonDate |
  Where-Object { $_.LastLogonDate -gt (Get-Date).AddDays(-30) }

Write-Host "`n=== UNASSIGNED (active in last 30 days) === ($($Unassigned.Count) devices)" -ForegroundColor Yellow
$Unassigned | Select-Object Name, LastLogonDate | Format-Table -AutoSize

Pro tip: Run the unassigned query weekly. New machines get imaged, join the domain, and sit in no ring. Those unassigned devices aren't getting patched on any schedule — they're a compliance gap that auditors will flag.
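With groups populated, approvals become per-ring operations. Here's a minimal Day-0 sketch using the UpdateServices cmdlets — the security-only classification filter is an example, not a policy recommendation; approve whatever your own triage selects:

```powershell
# Day 0: approve this month's unapproved security updates for the pilot
# ring only. Rings 1-3 get the same approvals on Day 3 / 7 / 14, minus
# anything the earlier rings flagged.
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530

Get-WsusUpdate -UpdateServer $WsusServer -Approval Unapproved -Classification Security |
  Approve-WsusUpdate -Action Install -TargetGroupName "Patch-Ring0-Pilot" -Verbose
```

Because approvals are per-group, a patch pulled after Ring 0 simply never gets approved for "Patch-Ring1-EarlyAdopter" and beyond — the later rings require no cleanup.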

Compliance tracking

Deploying patches is half the job. The other half is knowing whether they actually installed. A device can be "targeted" for a patch in WSUS and still not have it installed because it hasn't checked in, the user deferred the reboot, or the install failed silently. Without compliance tracking, you're flying blind.

You need three things: per-ring compliance percentages so you know which rings are healthy, a list of chronically non-compliant devices so you can investigate them individually, and a weekly report for management that shows the trend over time.

Per-ring compliance report

This script queries WSUS for each ring group and calculates the percentage of devices that have installed all approved patches. It also identifies the specific devices dragging the numbers down.

# Get patch compliance per ring from WSUS
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530
$UpdateScope = New-Object Microsoft.UpdateServices.Administration.UpdateScope
$UpdateScope.ApprovedStates = "LatestRevisionApproved"

$Results = @()
$RingGroups = @(
  "Patch-Ring0-Pilot",
  "Patch-Ring1-EarlyAdopter",
  "Patch-Ring2-Broad",
  "Patch-Ring3-Critical"
)

foreach ($GroupName in $RingGroups) {
  $Group = $WsusServer.GetComputerTargetGroups() |
    Where-Object { $_.Name -eq $GroupName }

  $Computers = $Group.GetComputerTargets()
  $Compliant = 0
  $NonCompliant = @()

  foreach ($Computer in $Computers) {
    # Scope the summary to approved updates — without $UpdateScope,
    # unapproved updates count against compliance
    $Status = $Computer.GetUpdateInstallationSummary($UpdateScope)

    # A patch that's only downloaded, or waiting on a reboot, isn't
    # protecting anything yet, so those states count as non-compliant too
    if ($Status.NotInstalledCount -eq 0 -and
        $Status.DownloadedCount -eq 0 -and
        $Status.FailedCount -eq 0 -and
        $Status.InstalledPendingRebootCount -eq 0) {
      $Compliant++
    } else {
      $NonCompliant += [PSCustomObject]@{
        Name          = $Computer.FullDomainName
        NotInstalled  = $Status.NotInstalledCount + $Status.DownloadedCount
        Failed        = $Status.FailedCount
        PendingReboot = $Status.InstalledPendingRebootCount
        LastContact   = $Computer.LastReportedStatusTime
      }
    }
  }

  $Total = $Computers.Count
  $Pct = if ($Total -gt 0) { [math]::Round(($Compliant / $Total) * 100, 1) } else { 0 }

  $Results += [PSCustomObject]@{
    Ring           = $GroupName
    TotalDevices   = $Total
    Compliant      = $Compliant
    NonCompliant   = $Total - $Compliant
    CompliancePct  = "$Pct%"
  }

  # Output non-compliant devices for this ring
  if ($NonCompliant.Count -gt 0) {
    Write-Host "`n  Non-compliant in ${GroupName}:" -ForegroundColor Yellow
    $NonCompliant | Format-Table -AutoSize
  }
}

# Summary table
Write-Host "`n=== PATCH COMPLIANCE SUMMARY ===" -ForegroundColor Cyan
$Results | Format-Table -AutoSize

# Export for the weekly management report
$Results | Export-Csv -Path ".\PatchCompliance-$(Get-Date -Format 'yyyy-MM-dd').csv" -NoTypeInformation

Identifying chronic offenders

Some devices show up on the non-compliant list every single week. These are your chronic offenders — machines that haven't successfully patched in multiple cycles. They're usually offline laptops, forgotten conference room PCs, or devices with corrupted Windows Update components. Track how many consecutive cycles a device has missed, because that drives your escalation workflow.

# Find devices that haven't reported to WSUS in 14+ days
$StaleThreshold = (Get-Date).AddDays(-14)
$StaleDevices = $WsusServer.GetComputerTargets() | Where-Object {
  $_.LastReportedStatusTime -lt $StaleThreshold
} | Select-Object FullDomainName, LastReportedStatusTime,
  @{N="DaysSinceContact";E={
    [math]::Round(((Get-Date) - $_.LastReportedStatusTime).TotalDays, 0)
  }} | Sort-Object DaysSinceContact -Descending

Write-Host "Devices not reporting for 14+ days: $($StaleDevices.Count)"
$StaleDevices | Format-Table -AutoSize
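WSUS won't track "consecutive cycles missed" for you, so persist the counter yourself between runs. Here's a minimal, self-contained sketch of the counting logic — store the resulting hashtable however you like (CSV, Export-Clixml) between cycles:

```powershell
# Increment a per-device "consecutive missed cycles" counter each cycle.
# $CurrentNonCompliant: device names flagged by this cycle's report.
function Update-MissedCycleCount {
  param(
    [string[]]$CurrentNonCompliant,
    [hashtable]$History = @{}   # device name -> consecutive cycles missed
  )
  $Updated = @{}
  foreach ($Device in $CurrentNonCompliant) {
    # Carry last cycle's count forward and add this cycle's miss;
    # [int]$null evaluates to 0 for devices seen for the first time
    $Updated[$Device] = [int]($History[$Device]) + 1
  }
  # Devices missing from the current list are compliant again; their
  # counter resets simply by not being carried over
  $Updated
}

# Example: two cycles of history
$Cycle1 = Update-MissedCycleCount -CurrentNonCompliant @('PC-01','PC-02')
$Cycle2 = Update-MissedCycleCount -CurrentNonCompliant @('PC-01','PC-03') -History $Cycle1
# PC-01 is now at 2 missed cycles, PC-03 at 1, and PC-02 has reset
$Cycle2.GetEnumerator() | Sort-Object Name | ForEach-Object { "$($_.Key): $($_.Value)" }
```

The counter value maps directly onto the escalation tiers below: 1 triggers a reminder, 2 goes to the manager, 3 means force-patch or quarantine.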

Escalation workflow

Identifying non-compliant devices is step one. Actually getting them patched is step two, and it requires a repeatable escalation process. Without one, the same devices sit on your non-compliant list for months, your compliance percentage stagnates, and your next audit finding writes itself.

Here's a three-tier escalation model that balances user autonomy with security requirements. The key is automation at every tier — if you're manually sending reminder emails, you'll stop doing it within two weeks.

Tier 1: Automated reminder (1 missed cycle)

When a device misses its patch window by more than 3 days, automatically email the device owner (pulled from AD's ManagedBy attribute or your CMDB). The email should be friendly, specific, and actionable: "Your laptop IT-LAPTOP-42 is missing 3 security patches. Please connect to the VPN and restart your machine before Friday." Include a self-service link to your patching KB article.
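The Tier 1 reminder is a few lines of automation. A sketch, assuming the owner lives in the computer object's ManagedBy attribute and that smtp.contoso.com stands in for your internal relay (Send-MailMessage is officially deprecated but still works fine against an internal SMTP relay):

```powershell
# Tier 1: email the owner of a non-compliant device.
# $DeviceName would come from your compliance report loop.
$DeviceName = "IT-LAPTOP-42"
$Computer   = Get-ADComputer -Identity $DeviceName -Properties ManagedBy
$Owner      = Get-ADUser -Identity $Computer.ManagedBy -Properties Mail

Send-MailMessage -SmtpServer "smtp.contoso.com" `
  -From "patching@contoso.com" `
  -To $Owner.Mail `
  -Subject "Action needed: $DeviceName is missing security patches" `
  -Body @"
Hi $($Owner.GivenName),

Your device $DeviceName is missing security patches. Please connect to the
VPN and restart your machine before Friday. Self-service guide: <KB link>.

- IT
"@
```

Guard this with a check that ManagedBy is actually populated — otherwise unowned devices route straight to Tier 2.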

Tier 2: Manager escalation (2 missed cycles)

If the device is still non-compliant after 2 patch cycles (roughly 4 weeks), escalate to the user's manager. The email to the manager includes the device name, the user, the number of missing patches, and how long the device has been out of compliance. Most managers will nudge their report to restart their laptop. This resolves ~80% of chronic cases without IT having to chase anyone down.

Tier 3: Force-patch or quarantine (3 missed cycles)

After 3 missed cycles (6+ weeks non-compliant), the device is a security liability. At this point, you have two options depending on your organization's policy: force-install patches with an automatic reboot during off-hours (with advance warning to the user), or quarantine the device via conditional access or network segmentation until it's patched. Document the action, notify the user and their manager, and log it for audit purposes. This is the stick that makes the carrot (Tier 1 reminders) credible.

Important: Whatever escalation model you choose, document it in a policy that management has signed off on. When you force-reboot the VP of Sales' laptop, you want a policy to point to — not a verbal agreement from three months ago.

Common mistakes to avoid

Putting too many devices in Ring 0

Ring 0 is supposed to be a canary, not a deployment wave. If you put 50 machines in Ring 0 because "we want to catch more issues," you've just created a scenario where a bad patch takes down your entire IT department simultaneously. Keep Ring 0 small — 5 to 10 machines, max. That's enough to test across a few hardware models and OS builds without crippling the team that's supposed to be responding to problems.

Not having a rollback plan

Rings give you time to catch bad patches, but only if you have a way to pull the emergency brake. Before you promote a patch from Ring 0 to Ring 1, define what "rollback" looks like: can you uninstall the update via wusa.exe /uninstall? Do you need to decline it in WSUS? For firmware and driver updates, rollback might mean reimaging. Know the answer before you need it. Have the WSUS decline action or the Intune profile removal ready to go.
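For Windows updates specifically, the emergency brake is two commands: decline in WSUS so no further ring receives the patch, then uninstall it where it already landed. A sketch — the KB number and hostname are placeholders:

```powershell
# 1) Decline the bad update so Rings 1-3 never receive it
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530
Get-WsusUpdate -UpdateServer $WsusServer -Approval Approved |
  Where-Object { $_.Update.Title -match "KB5000000" } |   # placeholder KB
  Deny-WsusUpdate

# 2) Pull it off machines that already installed it (Ring 0 in this scenario)
Invoke-Command -ComputerName "IT-LAPTOP-01" -ScriptBlock {
  wusa.exe /uninstall /kb:5000000 /quiet /norestart
}
```

Note that wusa.exe only handles MSU-delivered updates — for drivers and firmware your rollback path is Device Manager's "Roll Back Driver" at best and a reimage at worst, which is exactly why you decide this before promoting.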

Ignoring driver updates

Most ring strategies focus on Windows cumulative updates and security patches. Driver updates get approved with a rubber stamp or ignored entirely. But driver updates are the most common cause of hardware-specific breakage — a new Intel graphics driver that causes display flickering on specific models, or a Realtek audio driver that kills Teams call quality. Include driver updates in your ring pipeline, or at minimum, test them separately before broad approval.
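Before rubber-stamping, at least look at what driver updates are sitting in the pipeline. A sketch against the WSUS API — note that GetUpdates() without a scope enumerates everything, which can be slow on a large catalog:

```powershell
# List unapproved driver updates awaiting a decision
$WsusServer = Get-WsusServer -Name "wsus01.contoso.com" -PortNumber 8530
$WsusServer.GetUpdates() |
  Where-Object {
    $_.UpdateClassificationTitle -eq "Drivers" -and -not $_.IsApproved
  } |
  Select-Object Title, CreationDate |
  Sort-Object CreationDate -Descending |
  Format-Table -AutoSize
```

Even a weekly skim of this list catches the "new graphics driver for the exact laptop model half of sales uses" situation before it becomes a ticket queue.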

No executive visibility

If leadership doesn't know you're running a ring strategy, they can't support it. When a VP complains "my laptop hasn't gotten the latest update yet and everyone else has," you need your CISO or IT director to back you up with "that's by design — their machine is in the critical ring." Send a monthly one-page summary: overall compliance percentage, mean time to patch, and any incidents prevented by the ring model. Make the value visible.

Putting it all together

The approach above gives you the framework, but building it from scratch still means writing the ring assignment scripts, the compliance queries, the escalation automation, and the reporting templates. For a small team that's already stretched thin, that's a few weekends of work.

If you want to skip the assembly step and get straight to deploying, we've packaged all of this into a ready-to-run toolkit.

Want the complete ring strategy toolkit?

The Patch Management Ops Pack includes production-ready scripts for every step covered in this post — ring assignment, compliance reporting, automated escalation, and executive summaries. Drop them into your environment and have a working ring strategy by end of day.

  • Set-PatchRing.ps1 — Bulk ring assignment with CSV import and AD tagging
  • Get-PatchCompliance.ps1 — Per-ring compliance with stale device detection
  • New-RemediationTask.ps1 — Automated escalation with email notifications
  • Escalation workflow templates (Tier 1/2/3)
  • Executive summary report template (one-page PDF-ready)
View the Patch Management Ops Pack ($69, one-time purchase).