The recent release of Nomad 2012 v5.2 inspired me to update my Peer Backup Assistant (PBA) task sequences. If you’re using Nomad, you might like to do the same to get to know PBA better. And if not, let us know and we’ll be pleased to show it to you.
You may know that Peer Backup Assistant does essentially what it says, backing up data to peer clients. That’s especially useful for operating system deployments where you can’t keep the USMT data on the disk of the computer being built or upgraded and you don’t want to invest in State Migration Points. My favorite v5.2 enhancement is that we’ve expanded PBA by taking advantage of a popular Nomad v5.0 feature: Single Site Download (SSD). If you have a large location with multiple network subnets at the end of a slow WAN link, such as a building in a distant country, SSD can be very handy. Nomad does a single smart download of your ConfigMgr package to the building, and thanks to SSD that copy can then be shared amongst the multiple subnets. And the definition of that location (a “site”, not to be confused with a ConfigMgr site) is easily set, and thus easily maintained as the building’s network changes over time.
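To make the “site” idea concrete, here’s a toy sketch of how a location spanning several subnets might be modelled. This is purely conceptual; the site name, subnets, and function are mine, not Nomad’s (in practice you define sites in the product configuration, not in code):
```python
# Illustrative only: a toy model of the SSD "site" concept, not Nomad's
# actual implementation. The site and subnet values are made up.
from ipaddress import ip_address, ip_network

# One named location ("site") spanning several subnets behind a slow WAN link.
SITES = {
    "Paris-Building-7": [
        ip_network("10.20.1.0/24"),
        ip_network("10.20.2.0/24"),
        ip_network("10.20.3.0/24"),
    ],
}

def site_of(address: str) -> str | None:
    """Return the site whose subnets contain this client, if any."""
    addr = ip_address(address)
    for site, subnets in SITES.items():
        if any(addr in subnet for subnet in subnets):
            return site
    return None

# Two clients on different subnets still resolve to the same site,
# so one downloaded copy of a package can be shared between them.
assert site_of("10.20.1.15") == site_of("10.20.3.200") == "Paris-Building-7"
```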
With v5.2 we can now use the functionality underlying SSD with PBA. So when your OSD task sequence backs up the computer’s state and user data to peers, those peers can be on multiple subnets. You just specify the minimum and maximum number of peers you want the USMT data to go to; if you specify more than are available on the current subnet, the other subnets will be contacted. You don’t even have to worry about adding delay to your task sequence, because the first peer to back up your data is the one that pushes the extra copies to the other peers, so your OSD task sequence can continue while that happens.
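Here’s a minimal sketch of that flow. All the names are hypothetical, and in the real product the first peer pushes the extra copies itself; here a background thread simply stands in for that behaviour:
```python
# Toy sketch of the synchronous/asynchronous fan-out described above.
# All names are hypothetical; this is not Nomad's API.
import threading

def copy_to_peer(data_store: str, peer: str) -> None:
    print(f"copying {data_store} to {peer}")  # stand-in for the real transfer

def replicate(data_store: str, peers: list[str], sync_count: int) -> None:
    """Copy to the first sync_count peers before returning (the task
    sequence waits on these); push the remaining copies in the background
    so the task sequence can continue."""
    for peer in peers[:sync_count]:
        copy_to_peer(data_store, peer)
    for peer in peers[sync_count:]:
        threading.Thread(target=copy_to_peer, args=(data_store, peer)).start()

# Four target peers, two of them guaranteed before the task sequence moves on.
replicate("USMT-DataStore-01", ["PC-101", "PC-102", "PC-205", "PC-310"], sync_count=2)
```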
I should also note that while PBA could back up to two peers prior to v5.2, you can now back up to as many as seven. That should be enough redundancy for anyone!
So what were my task sequence updates? When I’m testing I try to keep things as simple as possible so that I can focus on the functionality I’m working on. With PBA that means I had one task sequence to do the backup and another to do the restore. Of course, I’m already using SSD in my main lab. So when I upgraded my Nomad clients to v5.2, I set a flag to let them use the single site functionality with PBA, and I just added one step to my backup task sequence to back up to multiple clients. Here's how I did it:

  1. Install the Nomad 5.2 tools on your ConfigMgr site server
  2. Install the Nomad 5.2 console extension on your ConfigMgr consoles
  3. Upgrade the clients to 5.2 with the SSPBAEnabled=1 flag set so that PBA with single site is enabled
    1. This step can also be used to enable single site and PBA on the clients if they aren’t enabled already
  4. Create a task sequence to do a Nomad PBA backup. That’s done using the ConfigMgr task sequence editor to add these steps:
    1. Provision Nomad PBA Data Store
    2. Capture User State (USMT)
    3. Close Nomad PBA Data Store
    4. Data Store High Availability
      1. This is the new step for v5.2
      2. In this step you specify the minimum count of backups (beyond the original one), the maximum desired (up to 6), and how many of the backups should be done while the task sequence waits (synchronously; the rest are done while the task sequence continues)
      3. If I want 3 synchronous backups, the minimum count of backups must be 3 or more
      4. If the synchronous backups can't be done, the task sequence will stop, leaving the machine as is, so the synchronous count should be your 'must have' number of backups. The asynchronous backups are bonus copies, just in case (see the sketch after this list)
  5. Restore task sequence details
    1. Set a Task Sequence Variable step that sets PBACOMPUTERNAME to %ComputerName%
    2. Locate Existing Nomad PBA Data Store
    3. Restore User State (USMT)
    4. Release Nomad PBA Data Store
  6. Reset task sequence details, for redoing the tests

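Since the three Data Store High Availability values interact, here’s a tiny sanity check of the rules described in step 4 above. The parameter names are mine, not Nomad’s; this just restates the constraints in code:
```python
# Illustrative only: the parameter names are mine, not Nomad's.
def check_ha_settings(min_backups: int, sync_backups: int, max_backups: int) -> None:
    """Check the relationships described above: all counts are in addition
    to the original backup, the maximum is 6, and you can't wait for more
    synchronous copies than the minimum count guarantees."""
    if not (0 <= sync_backups <= min_backups <= max_backups <= 6):
        raise ValueError("need 0 <= synchronous <= minimum <= maximum <= 6")

check_ha_settings(min_backups=3, sync_backups=3, max_backups=6)   # fine
# check_ha_settings(min_backups=2, sync_backups=3, max_backups=6) # would raise
```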
See the Nomad documentation for complete details, as always. But as you can see, there were just two simple changes for me to make, given that I had Nomad, PBA, and SSD working in my lab already. Even if you don’t have those, getting this working is not hard.
With those settings in place I was able to back up data from all kinds of clients to as many or as few peers as I liked, including peers on multiple subnets. Backing up to multiple clients synchronously takes a little extra time, but most of the backup time is USMT processing, which has to happen anyway. And with the async option, the extra backups don’t have to delay the OS build task sequence at all. Restoring the data was the same as before, but now there were multiple peers to choose from, sometimes on other subnets, in case any should happen to be unavailable.
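That fallback behaviour is easy to picture with a short sketch. Again, this is a toy model under my own assumptions, not how the Locate Existing Nomad PBA Data Store step actually works:
```python
# Toy model of restore-time fallback across the peers holding a copy.
# Not Nomad's implementation; the reachability probe is a crude stand-in.
import socket

def peer_is_reachable(peer: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Crude availability check against the SMB port."""
    try:
        with socket.create_connection((peer, port), timeout=timeout):
            return True
    except OSError:
        return False

def locate_data_store(computer_name: str, peers: list[str]) -> str:
    """Try each peer that holds a copy until one answers."""
    for peer in peers:
        if peer_is_reachable(peer):
            return peer
    raise RuntimeError(f"no peer holding {computer_name}'s data store is reachable")

# Example (needs real peers on your network):
# print(locate_data_store("LAB-PC-042", ["PC-101", "PC-205", "PC-310"]))
```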
One thing I did find important: when I tested a backup I should then do the restore, which is what you’ll do in production anyway. Repeating a bunch of backups in a row without restoring (or at least resetting) can cause problems, and isn’t a real-world process.
Enjoy!
Oh, and you might notice that this new version is v5.2 while the previous version was v5.0. So what happened to v5.1? It was a minor release between our quarterly release cycles, so we released it on a limited basis to those customers who needed its specific updates.