The release of Configuration Manager 1610 introduced Peer Cache as a Preview Feature.

The Peer Cache concept was first introduced in the first iteration of Configuration Manager Current Branch (1511) as Windows PE Peer Cache, which could share only Package content during the Windows PE phase of an OS Deployment Task Sequence.
Configuration Manager 1610 includes a Preview Feature that is effectively an update to Windows PE Peer Cache. Renamed simply Peer Cache (it first appeared in Technical Preview 1604), it no longer has the limitations of its predecessor: it can now share any content (Packages, Applications, and Software Updates) either in Windows PE or in the full OS.
The introduction of Peer Cache has led people considering peer-to-peer content distribution options to question the need for 1E Nomad. Now that Configuration Manager is up to version 1810, what benefits do you get from Nomad? Why are Configuration Manager and 1E Nomad still better together? I'm glad you asked. (For even more information, you can rewatch the Configuration Manager State of the Nation 2018 here.)

1. Bandwidth management

Peer-to-peer (P2P) content sharing can only occur if the content is available on one or more local peers, so someone has to get it from the DP across the WAN in the first place. When a Peer Cache client needs to get content from a DP, it uses the Background Intelligent Transfer Service (BITS). BITS cannot manage bandwidth effectively because it is only aware of network activity on the local client's network adapter; it is completely unaware of conditions beyond that adapter and of the cumulative effect of other users on the network. If the client downloading the content has a 1Gbps network adapter and the user is currently using half of that, BITS will assume it has 500Mbps to play with.

In reality, of course, the uplink to the DP is much slower (perhaps 5Mbps), so in this scenario BITS will saturate the uplink.

You can configure each client to limit the rate that BITS will use (so, for example, it never uses more than 50kbps), but a static limit like this still doesn’t help if the uplink is already busy (especially when several clients are simultaneously downloading content using BITS), and it slows everything down unnecessarily when the uplink is quiet.
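
To put some purely illustrative numbers on the adapter-versus-uplink mismatch described above, here is a minimal sketch; every figure in it is hypothetical:

```python
# Hypothetical figures showing why a local-only view of bandwidth
# misjudges the real bottleneck (example values, not measurements).

adapter_capacity_mbps = 1000   # the client has a 1Gbps NIC
local_usage_mbps = 500         # traffic the client can see on its own NIC
uplink_capacity_mbps = 5       # the branch-office uplink to the DP

# BITS only sees the local adapter, so it assumes the spare NIC
# capacity is available for the download.
bits_assumed_mbps = adapter_capacity_mbps - local_usage_mbps   # 500

# The real limit is the slowest link on the end-to-end path.
actual_available_mbps = min(bits_assumed_mbps, uplink_capacity_mbps)  # 5

print(f"BITS thinks it can use {bits_assumed_mbps}Mbps")
print(f"The uplink can only carry {actual_available_mbps}Mbps, so it saturates")
```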
In contrast, Nomad was designed from the ground up to dynamically adjust to the end-to-end network conditions when downloading content over the WAN. It uses Reverse QoS, a constantly learning algorithm that takes into account the end-to-end turnaround time of each block of data and dynamically adjusts the rate of transfer accordingly. If the link is quiet, Nomad goes faster; as soon as traffic increases, Nomad rapidly backs off to give the other traffic priority, then ramps up again when bandwidth becomes available. Nomad will never saturate any network link.
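
1E has not published the Reverse QoS algorithm itself, but conceptually it behaves like an adaptive back-off loop driven by the turnaround time of each block. The sketch below illustrates that general idea only; the function name, thresholds, and back-off policy are invented for illustration and are not Nomad's actual implementation:

```python
import time

def adaptive_download(blocks, fetch_block, min_delay=0.0, max_delay=5.0):
    """Sketch of rate adaptation driven by per-block turnaround time.
    All thresholds and the back-off policy here are illustrative."""
    delay = min_delay    # pause inserted between block requests
    baseline = None      # fastest (quietest-network) turnaround seen so far
    for block in blocks:
        start = time.monotonic()
        fetch_block(block)                      # request one block over the WAN
        turnaround = time.monotonic() - start
        baseline = turnaround if baseline is None else min(baseline, turnaround)
        if turnaround > baseline * 2:
            # Blocks are taking much longer than the quiet-network baseline:
            # other traffic is competing, so back off quickly.
            delay = min(max_delay, max(delay * 2, 0.1))
        else:
            # The link looks quiet: ramp back up gradually.
            delay = max(min_delay, delay / 2)
        time.sleep(delay)
```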

2. Immediate content sharing

Using Peer Cache, the content must be fully downloaded on a peer, and that peer must report back to CM as a data source (which can take up to 24 hours), before other peers can get content from it. Consider the scenario where you need to get a patch out to all your workstations. If you simply create a deployment to all workstations, using Peer Cache they will all download from the DP (all using BITS, so now your network gets saturated) because none of them has had the opportunity to fully download the content and report back.
Instead, you must select one or two devices in each subnet or Boundary Group (see point 4 below), deploy the update to those devices, and wait until each has reported back that it has the content (up to 24 hours) before you can deploy the patch to everyone else. In effect, when using Peer Cache, you have to manage the initial distribution of any new content just as you would with workstation-based DPs.
In contrast, Nomad clients cooperate with each other as soon as any one of them needs content. If no one has it locally, one of them is elected as a master and starts to download it from the DP. Peers can immediately start downloading from the elected master even while it is still downloading the content itself, so there is no need to wait for content to be fully downloaded on any client. You can deploy a patch to all workstations in a single deployment, confident that your network will not grind to a halt and that all target systems will be patched as quickly as possible.
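
In other words, content is shared at block level rather than file level, so a peer never has to wait for a complete copy to exist anywhere. A rough sketch of that pipeline, with all names hypothetical:

```python
# Rough sketch of block-level pipelining (all names hypothetical): peers can
# fetch any block the elected master has already received, while the master
# is still pulling the remaining blocks from the DP over the WAN.

cache = {}  # block_id -> bytes, held on the elected master

def master_download(block_ids, fetch_from_dp):
    """Runs on the elected master: pull blocks from the DP one at a time."""
    for block_id in block_ids:
        cache[block_id] = fetch_from_dp(block_id)  # instantly visible to peers

def serve_peer_request(block_id):
    """Peers ask for blocks over the LAN as soon as the master has them,
    instead of waiting for the whole package to finish downloading."""
    return cache.get(block_id)  # None means "not yet"; the peer retries
```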

3. Dynamic election

When Peer Cache is in use, the CM client makes a standard content request to the Management Point, which returns a list of content sources that includes available DPs and peers within the ‘local’ Boundary Group. The client works its way through this list, using the first available source it finds, which can mean waiting around while it attempts to connect to a string of unavailable clients before finding one that responds. Once the download starts, if the selected source goes offline, the client will eventually go back to the original source list to try to find an alternative. This process is not particularly dynamic at present.
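
In rough pseudocode, working through a static source list looks something like the following; the timeout value and names are invented for illustration:

```python
import socket

def first_available_source(sources, timeout_s=10):
    """Sketch of walking a static source list: each offline peer burns a
    full connection timeout before the client moves on to the next entry."""
    for host, port in sources:
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                return host          # first reachable source wins
        except OSError:
            continue                 # offline peer: time wasted, try the next
    return None                      # exhausted the list; re-request sources
```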

In contrast, when a Nomad client requires content, it will always start with an election broadcast to identify any clients on the local subnet that have (or are in the process of getting) the content.

If several sources are available, Nomad elects the best device: it prefers a server over a workstation and a workstation over a laptop, prefers wired connections to wireless, and prefers devices that have been up longer. Nomad even allows administrators to define their own weighting values to make some devices more or less likely to win these elections.
When there are no sources on the local subnet, Nomad can optionally look for alternative sources in adjacent subnets (an administrator defines which subnets are grouped together into a location). If the elected master goes offline, the peers immediately elect a new master using the same process: local subnet first, then adjacent subnets. The election always prefers the client that already has the most of the requested content cached, so if the newly elected master does not yet have all the requested content (see point 2 above), it resumes the download from the remote DP while the remaining peers resume downloading from the new master.

The whole process is dynamic: it always prefers the most suitable device and doesn’t waste time attempting to connect to devices that may no longer be online.
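
The exact weighting formula isn't public, but conceptually the election amounts to a weighted comparison along these lines; every field name and weight below is invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    chassis: str           # "server", "workstation" or "laptop"
    wired: bool            # wired connections beat wireless
    uptime_hours: float    # longer uptime is preferred
    cached_percent: float  # share of the requested content already cached
    admin_weight: int = 0  # administrator-defined bias, up or down

# Illustrative weights only; Nomad's real values are proprietary.
CHASSIS_SCORE = {"server": 300, "workstation": 200, "laptop": 100}

def election_score(c: Candidate) -> float:
    score = CHASSIS_SCORE.get(c.chassis, 0)
    score += 50 if c.wired else 0
    score += min(c.uptime_hours, 24)   # cap the uptime contribution
    score += c.cached_percent          # most-complete cache wins close calls
    score += c.admin_weight
    return score

def elect_master(candidates):
    return max(candidates, key=election_score)
```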

4. Boundary Group dependence

Peer Cache uses Boundary Groups to determine which peers are ‘local’. A peer on the local subnet will not necessarily be used in preference to any other peer in the same Boundary Group, which could span several subnets or even locations. Note that Boundary Group configuration changed in CM 1610, so you’ll need to read up on that.
If a client is in a Boundary that is not included in a defined Boundary Group, it will only download from a DP assigned to the Default-Site-Boundary-Group. Assuming the client is within a Boundary that is included in a Boundary Group, it will be able to use other peers within the Boundary Group that have the content available. However, this process uses inventory information to determine which peers are within the same Boundary Group. If a laptop has registered content and moves from one Boundary Group to another but has not since sent inventory data, it will potentially be used as a content source by peers in the original Boundary Group.

This will result in multiple clients downloading across the WAN, all using BITS (cue more network congestion!).
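
To make that failure mode concrete, here is a toy illustration (all names invented) of a peer lookup keyed off the last reported Boundary Group rather than the device's actual location:

```python
# Toy illustration (names invented) of why stale inventory misleads the
# peer lookup: the site knows each peer's last *reported* Boundary Group,
# not where the device actually is right now.

last_reported_group = {"LAPTOP01": "BG-London"}  # from inventory
actual_group = {"LAPTOP01": "BG-NewYork"}        # moved; no inventory sent yet

def peers_offered_to(client_group):
    """Offer peers whose last-reported group matches the client's group."""
    return [dev for dev, grp in last_reported_group.items()
            if grp == client_group]

# Clients in London are still offered LAPTOP01 as a content source...
print(peers_offered_to("BG-London"))  # ['LAPTOP01']
# ...but reaching it now means downloading across the WAN from New York.
print(actual_group["LAPTOP01"])       # 'BG-NewYork'
```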

As described in point 3, Nomad will always use an election process to identify local clients that have the content before looking for content in neighboring subnets using Nomad’s optional Single Site Download (SSD) feature. When you use SSD, Nomad clients report their subnet when the agent starts, so the location information stays current rather than relying on potentially stale inventory.

5. Additional OS Deployment features

If you haven’t already deployed Windows 10, you are probably considering using Configuration Manager to do so. Nomad includes three features specifically designed to help you deploy Windows.

  • BIOS to UEFI conversion.

    In CM 1610, Microsoft introduced Task Sequence steps to manage the BIOS-to-UEFI transition. In practice, this simply adds support for changing the boot mode during a running Task Sequence; prior to 1610, it was not possible to restart the computer within the Task Sequence once the device had switched from BIOS to UEFI (unless, of course, you had Nomad). However, you still need to construct your own command lines and supply all the logic needed to actually change the firmware configuration from BIOS emulation to UEFI. Nomad includes a comprehensive BIOS to UEFI solution, implemented through two simple Task Sequence steps.
    The first of these gets around the boot-mode restriction in earlier CM versions. The second allows an administrator to select the configuration options (e.g. UEFI, Secure Boot, Enable PXE) they want applied. 1E has done the hard work of translating these options into the relevant OEM commands for Dell, Lenovo, and HP systems, and all the necessary logic runs at deployment time. No command-line steps, no logic to add to the TS: just add the steps to your Task Sequence and select the options you want.

  • Peer-based PXE.

    No doubt reducing the number of DPs is the key reason for investigating peer-to-peer solutions in the first place. You can use PXE to build new computers (or rebuild existing PCs that refuse to boot into the existing OS), but you still need Distribution Points with the PXE option enabled; otherwise, you’ll be booting over the WAN. PXE Everywhere turns every client into a PXE server, so your OSD boot images are always obtained from the local subnet and you don’t need to worry about DHCP options or IP helpers on your routers.

  • Peer-based State Migration.

    You can migrate user data during a wipe-and-load (e.g. where the disk is repartitioned to support UEFI), but you’ll need a State Migration Point. Nomad includes the Peer Backup Assistant, which eliminates the need for State Migration Points by using available storage on peers to hold user data temporarily during the migration.

There is no doubt that P2P content distribution is in demand: it enables organizations to reduce their CM infrastructure while giving end users the same level of service without causing network congestion. I hope this post provides some food for thought when considering the options available. To learn more about the value that Nomad continues to add as Microsoft’s platform evolves, follow us on social.