5 Reasons Configuration Manager and 1E Nomad Are Still Better Together

The recent release of Configuration Manager (1610) introduced Peer Cache as a Preview Feature. The Peer Cache concept first appeared a year ago in the first iteration of Configuration Manager Current Branch (1511), when it was known as Windows PE Peer Cache. It could only be used to share Package content during the Windows PE stage of an OS deployment Task Sequence.

Configuration Manager 1610 includes a Preview Feature that is effectively an update to the Windows PE Peer Cache feature. Renamed simply to Peer Cache (the updated version was first introduced in Technical Preview 1604), it no longer has the limitations of its predecessor – it can now share any content (Packages, Applications and Software Updates) either in Windows PE or in the full OS.

The introduction of Peer Cache has led people considering peer-to-peer content distribution options to question the need for 1E Nomad. So what benefits does Nomad provide that continue to make Configuration Manager and 1E Nomad better together?

1. Bandwidth management

Peer-to-peer content sharing can only occur if the content is available on one or more local peers, so someone has to get it from the DP across the WAN in the first place. When Peer Cache needs to get content from a DP, it will use the Background Intelligent Transfer Service (BITS). BITS is unable to manage bandwidth effectively because it is only aware of network activity on the local client network adapter – it is completely unaware of any network conditions beyond the local adapter or the cumulative effect of other users on the network. If the client getting the content has a 1 Gbps network adapter and the user is currently using half of that, BITS will assume it has 500 Mbps to play with.

In reality, of course, the uplink to the DP is much slower (perhaps 5Mbps) so in this scenario BITS will saturate the uplink. You can configure each client to limit the rate that BITS will use (so, for example, it will never use more than 50kbps), but this still doesn’t help if the uplink is already busy (especially if you get into a situation where several clients are simultaneously downloading content using BITS) and slows everything down unnecessarily when the uplink is not busy.
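The trade-off above can be made concrete with some back-of-the-envelope arithmetic. The figures below are the illustrative ones from this scenario (a 1 Gbps adapter, a 5 Mbps uplink, a 50 kbps static cap), not measurements from any real environment:

```python
# Illustrative arithmetic only: the adapter speed, uplink speed and BITS cap
# are the example figures from the scenario above, not real measurements.

adapter_mbps = 1000   # all that BITS can see: the local 1 Gbps adapter
uplink_mbps = 5       # the actual WAN uplink shared by the whole site
bits_cap_kbps = 50    # a typical static per-client BITS throttle

# Uncapped, BITS assumes half the adapter is free and offers far more
# traffic than the uplink can carry, so the uplink saturates.
assumed_free_mbps = adapter_mbps / 2
print(assumed_free_mbps > uplink_mbps)      # True

# Capped, several simultaneous clients still pile load onto a busy uplink...
ten_clients_mbps = 10 * bits_cap_kbps / 1000
print(ten_clients_mbps)                     # 0.5 (Mbps of extra load)

# ...while a single client on a quiet uplink is throttled to 1% of capacity.
print(bits_cap_kbps / 1000 / uplink_mbps)   # 0.01
```

Whichever way the static cap is tuned, it is wrong in one of the two situations: too permissive when the uplink is busy, too restrictive when it is idle.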

In contrast, Nomad was designed from the ground up to dynamically adjust to the end-to-end network conditions when downloading content over the WAN. It uses Reverse QoS – a constantly learning algorithm that takes into account the end-to-end turnaround time of each block of data and dynamically adjusts the rate of transfer according to the conditions. If the link is quiet, Nomad goes faster. As soon as traffic increases, Nomad rapidly backs off to give the other traffic priority, then ramps up again if bandwidth becomes available. Nomad will never saturate any network link.
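1E does not publish the Reverse QoS algorithm, so the following is only a toy sketch of the general idea the text describes: time the turnaround of each block, and back off when the link looks busy. The class name, constants and thresholds here are all invented for illustration:

```python
class AdaptiveSender:
    """Toy additive-increase/multiplicative-decrease rate controller,
    loosely inspired by the Reverse QoS idea described above. Nomad's
    real algorithm is proprietary; all constants here are invented."""

    def __init__(self, rate_kbps=64.0, max_rate_kbps=5000.0):
        self.rate_kbps = rate_kbps
        self.max_rate_kbps = max_rate_kbps
        self.baseline_rtt = None  # best (quiet-link) block turnaround seen

    def on_block_acked(self, rtt_ms):
        # Learn the quiet-link baseline from the fastest block so far.
        if self.baseline_rtt is None or rtt_ms < self.baseline_rtt:
            self.baseline_rtt = rtt_ms
        if rtt_ms > 2 * self.baseline_rtt:
            # Turnaround has blown out: other traffic appeared, back off hard.
            self.rate_kbps = max(8.0, self.rate_kbps / 2)
        else:
            # Link looks quiet: probe gently for more bandwidth.
            self.rate_kbps = min(self.max_rate_kbps, self.rate_kbps + 16.0)
        return self.rate_kbps

sender = AdaptiveSender()
for rtt in [40, 40, 40]:    # quiet link: rate ramps up additively
    sender.on_block_acked(rtt)
print(sender.rate_kbps)     # 112.0
sender.on_block_acked(400)  # congestion appears: rate halves immediately
print(sender.rate_kbps)     # 56.0
```

The key property the sketch shares with the description above is asymmetry: the rate climbs slowly while the link is quiet, but collapses quickly the moment other traffic shows up.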

2. Immediate content sharing

Using Peer Cache, the content must be fully downloaded on a peer and that peer must report back as a data source to CM (which can take up to 24 hours) before peers can get content from it. Consider the scenario where you need to get a patch out to all your workstations. If you simply create a deployment to all workstations, using Peer Cache they will all download from the DP (all using BITS so now your network gets saturated) because none of them have been given the opportunity to fully download the content and report back.

Instead, you must select one or two devices in each subnet or Boundary Group (see point 4 below), deploy the update to these devices and wait until they have each reported back that they have the content (up to 24 hours) before you can deploy the patch to everyone else. In effect, when using Peer Cache, you have to manage the initial distribution of any new content as you would to workstation-based DPs.

In contrast, Nomad clients cooperate with each other as soon as any one of them needs content. If no-one has it locally, one of them will be elected as a master and start to download it from the DP. Peers can immediately start downloading from the elected master even while it is downloading the content itself, so there is no need to wait for content to be fully downloaded on any client. You can deploy a patch to all workstations in a single deployment, confident that your network will not grind to a halt and that all target systems will be patched as quickly as possible.
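The difference this makes can be sketched with a toy model (this is not Nomad's actual protocol): the elected master pulls blocks from the DP one at a time, and peers copy each block as soon as the master has it, so nobody waits for a complete download and the content crosses the WAN only once.

```python
# Toy illustration of block-level sharing while the master is still
# downloading. Not Nomad's real protocol; names are invented.

def distribute(num_blocks, num_peers):
    master_cache = []              # blocks the elected master holds so far
    peer_progress = [0] * num_peers
    wan_transfers = 0              # blocks that crossed the WAN

    for block in range(num_blocks):
        master_cache.append(block)  # master pulls one block from the DP
        wan_transfers += 1
        # Peers immediately copy any block the master already has,
        # trailing just behind it on the LAN.
        for p in range(num_peers):
            peer_progress[p] = len(master_cache)

    assert all(p == num_blocks for p in peer_progress)
    return wan_transfers

# 100 blocks to 50 peers: the content crosses the WAN exactly once
# (100 blocks), instead of 50 full copies with independent BITS downloads.
print(distribute(num_blocks=100, num_peers=50))   # 100
```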

3. Dynamic election

When Peer Cache is in use, the CM client makes a standard content request to the Management Point and will be returned a list of content sources that will include available DPs and peers within the ‘local’ Boundary Group. The client will work its way through this list, using the first available source it finds, which potentially means waiting around as it attempts to connect to lots of unavailable clients before finding an available source. Once the download has started, if the selected source goes offline the client will eventually go back to the original source list to try to find an alternative source, but this process is not particularly dynamic at present.

In contrast, when a Nomad client requires content it will always start with an election broadcast to identify any clients on the local subnet that have (or are in the process of getting) the content. If there are several sources available, Nomad will elect the best device – it will select a server over a workstation and a workstation over a laptop, prefer wired to wireless, prefer devices that have been on longer (uptime) and even allow administrators to define their own weighting values to make some devices more or less likely to win these elections.

If there are no sources on the local subnet, Nomad can optionally look for alternative sources in adjacent subnets (the grouping of subnets into a location being defined by an administrator). If the elected master goes offline, the peers will immediately elect a new master using the same process (local subnet first, followed by lookup in adjacent subnets). The election process will always prefer the client that has the most of the requested content already cached, so if the newly elected master does not have all the requested content yet (see point 2 above), it will resume download from the remote DP as the remaining peers resume downloading from the new master. The whole process is dynamic, always prefers the most suitable device and doesn’t waste time attempting to connect to devices that may no longer be online.
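The stated preference order (most cached content first, then server over workstation over laptop, wired over wireless, longer uptime, plus an administrator-defined weight) can be expressed as a simple sort key. 1E does not publish Nomad's actual scoring, so the numbers and tie-breaking below are invented for illustration:

```python
# Toy election scoring. The preference order comes from the description
# above; the actual values and tie-breaking Nomad uses are proprietary.

CHASSIS_SCORE = {"server": 3, "workstation": 2, "laptop": 1}

def election_key(client):
    return (
        client["cached_bytes"],            # most of the content wins first
        CHASSIS_SCORE[client["chassis"]],  # server > workstation > laptop
        1 if client["wired"] else 0,       # wired beats wireless
        client["uptime_hours"],            # longer uptime preferred
        client["admin_weight"],            # administrator-defined bias
    )

candidates = [
    {"name": "LAPTOP01", "cached_bytes": 900, "chassis": "laptop",
     "wired": False, "uptime_hours": 2, "admin_weight": 0},
    {"name": "WKS07", "cached_bytes": 900, "chassis": "workstation",
     "wired": True, "uptime_hours": 40, "admin_weight": 0},
    {"name": "WKS12", "cached_bytes": 300, "chassis": "workstation",
     "wired": True, "uptime_hours": 200, "admin_weight": 0},
]

master = max(candidates, key=election_key)
print(master["name"])   # WKS07: same cache as LAPTOP01, but wired workstation
```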

4. Boundary Group dependence

Peer Cache uses Boundary Groups to determine which peers are ‘local’. A peer in the local subnet will not necessarily be used in preference to any other peer in the same Boundary Group, which could span several subnets or even locations. Note that Boundary Group configuration changed significantly in CM 1610, so you’ll need to familiarize yourself with the new model.

If a client is in a Boundary that is not included in a defined Boundary Group, it will only download from a DP assigned to the Default-Site-Boundary-Group. Assuming the client is within a Boundary that is included in a Boundary Group, it will be able to use other peers within the Boundary Group that have the content available. However, this process uses inventory information to determine which peers are within the same Boundary Group. If a laptop has registered content and moves from one Boundary Group to another but has not since sent inventory data, it will potentially be used as a content source by peers in the original Boundary Group.

This will result in multiple clients downloading across the WAN, all using BITS (cue more network congestion!).

As described in point 3, Nomad will always use an election process to identify local clients that have the content, before looking for content in neighboring subnets using Nomad’s optional Single Site Download (SSD) feature. When SSD is used, Nomad clients report their initial subnet when the agent starts and any changes while the agent is running as soon as they occur, so you can be confident that any selected master is currently ‘local’ to the peer that is requesting it.
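The contrast between inventory-lag and immediate reporting can be sketched as follows. The class and method names are invented for illustration; the point is simply that a registry updated the moment a client changes subnet never serves a roamed laptop as a ‘local’ source to its old neighbors:

```python
# Toy sketch of the difference described above: an agent that reports
# subnet changes as they happen always reflects the client's current
# location, unlike stale inventory data. All names here are invented.

class SubnetRegistry:
    def __init__(self):
        self.current_subnet = {}

    def report(self, client, subnet):
        # Called at agent start, and again the moment the subnet changes.
        self.current_subnet[client] = subnet

    def local_sources(self, subnet, clients_with_content):
        return [c for c in clients_with_content
                if self.current_subnet.get(c) == subnet]

registry = SubnetRegistry()
registry.report("LAPTOP01", "10.1.1.0")
# LAPTOP01 roams to another office; the agent reports the move immediately.
registry.report("LAPTOP01", "10.2.2.0")

# Peers in the original subnet no longer see the roamed laptop as local,
# so nobody tries to pull content from it across the WAN.
print(registry.local_sources("10.1.1.0", ["LAPTOP01"]))   # []
print(registry.local_sources("10.2.2.0", ["LAPTOP01"]))   # ['LAPTOP01']
```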

5. Additional OS Deployment features

If you haven’t already, you are probably considering using Configuration Manager to deploy Windows 10 in your environment. Nomad includes three features that are specifically designed to help you with deploying Windows 10:

  • BIOS to UEFI conversion. In CM 1610, Microsoft introduced Task Sequence steps to manage BIOS to UEFI conversion. In practice, this simply adds support for changing the boot mode during a running Task Sequence (prior to 1610 it was not possible to restart the computer in the Task Sequence if the device had changed from BIOS to UEFI since the Task Sequence started – unless, of course, you had Nomad). However, you still need to construct your own command lines, with all the necessary logic, to actually change the firmware configuration from BIOS emulation to UEFI. Nomad includes a comprehensive BIOS to UEFI solution, implemented through two simple Task Sequence steps. The first of these gets around the boot mode restriction in previous CM versions, and the second allows an administrator to select the configuration options (e.g. UEFI, Secure Boot, Enable PXE) they want applied. 1E has done the hard work of translating these options into the relevant OEM commands for Dell, Lenovo and HP systems, and performs all the necessary logic at runtime. No command-line steps, no logic to add to the TS – just add the steps to your Task Sequence, select the options you want, and the rest is taken care of.
  • Peer-based PXE. No doubt reducing the number of DPs is the key reason for investigating peer-to-peer solutions in the first place. But if you want to use PXE to build new computers (or rebuild existing PCs that refuse to boot into the existing OS), you’ll still need Distribution Points (with the PXE option enabled) – otherwise you’ll be booting over the WAN. Nomad includes PXE Everywhere, which can turn every client into a PXE server, so your OSD boot images are always obtained from the local subnet and you don’t need to worry about DHCP options or IP helpers on your routers.
  • Peer-based State Migration. If you want to migrate user data during a wipe-and-load (e.g. where the disk is repartitioned to support UEFI) or replace scenario, you’ll need a State Migration Point. Nomad includes the Peer Backup Assistant, which eliminates the need for State Migration Points by enabling available storage on peers to be used to temporarily hold that user data while the device is being migrated.

There is no doubt that peer-to-peer content distribution is in demand, enabling organizations to reduce their CM infrastructure while delivering the same level of service to end users without causing network congestion. I hope this blog provides some food for thought when considering the options and demonstrates the value that Nomad continues to add as Microsoft continues to develop our favorite systems management platform.
