
Why should I use Nomad when I have a server at each site?


I was recently asked to explain how 1E’s Nomad could add value in an organization that had a large number of remote sites, each with an existing server that hosted other applications which could be used as a ConfigMgr Distribution Point (DP). In this case there were 50,000 systems at approximately 2000 sites with varying hardware and disk space available on the servers.
Even though they have existing servers at every site that can host the DP role, that many DPs presents its own set of challenges:
Many of the servers were older systems without much memory or disk space to spare. The limited memory could cause performance issues during distributions, and the limited disk space means the volumes have to be monitored and managed so they don't fill up.
Anyone who has dealt with ConfigMgr for any length of time knows that DPs can have issues receiving packages from the parent site, especially when they are hosted on servers running other non-ConfigMgr processes and functions. It isn't as simple as sending a package out to 2000 DPs; with that many, the reality is often sending the package, then checking which DPs actually received it and which had errors. Only once you resolve those issues (disk space, file transfer failures, etc.) are you ready to deploy your package.
To support 2000 traditional DPs, you would need a minimum of seven secondary sites under a single primary, since Microsoft recommends a limit of 250 DPs per primary or secondary site (eight sites at 250 DPs each covers 2000). To support the pull DP functionality introduced in ConfigMgr 2012 SP1, you would need a single primary with two or three traditional DPs in the datacenter and 2000 pull distribution points in the field.
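As a back-of-envelope check on that sizing, here is a minimal sketch of the arithmetic (the 250-DP limit and the 2000-site count come from the scenario above; the variable names are just for illustration):

```python
import math

TOTAL_DPS = 2000           # one DP per remote site in the scenario above
DPS_PER_SITE_LIMIT = 250   # Microsoft's recommended DP limit per primary/secondary site

# Number of primary/secondary sites needed to stay under the per-site limit.
sites_needed = math.ceil(TOTAL_DPS / DPS_PER_SITE_LIMIT)  # -> 8

# One of those sites is the primary itself; the rest must be secondaries.
secondaries_needed = sites_needed - 1                     # -> 7

print(f"{sites_needed} sites total: 1 primary + {secondaries_needed} secondaries")
```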
Using traditional DPs, a site is limited in how many DPs it can distribute content to at a time. With a high number of DPs, even if you raise the number of sender threads from the default of five (at the risk of bogging down the primary site server and the WAN link to it), it can still take a while to get a package out to all of them. Take a small package that averages 10 minutes to get across the wire to a DP. With 250 DPs per site and the threads raised to 15, the content moves in roughly 17 waves of 15 concurrent transfers, so it would take approximately three hours just to get the content to all of the DPs. Only then would you be ready to deploy the package to the end users.
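The three-hour figure falls out of simple batching arithmetic. A minimal sketch, assuming every transfer takes a uniform 10 minutes and the sender runs waves of 15 concurrent transfers:

```python
import math

dps_per_site = 250     # DPs served from one primary or secondary site
threads = 15           # concurrent sender threads (raised from the default of 5)
minutes_per_dp = 10    # assumed average time to push a small package to one DP

# The sender pushes to `threads` DPs at a time, so distribution proceeds in
# waves; total time is roughly the number of waves times the time per push.
waves = math.ceil(dps_per_site / threads)   # -> 17
total_minutes = waves * minutes_per_dp      # -> 170

print(f"~{total_minutes} minutes (~{total_minutes / 60:.1f} hours) to stage one site's DPs")
```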
Now consider what happens with Nomad and only two or three DPs in the data center. It would take less than 10 minutes to get the content to those DPs, since they are local to the data center. One client at each site would then start downloading the content and immediately begin sharing it with its peers. You would have 2000 clients coming back to the data center, downloading content using Nomad's Reverse QoS technology, without risking saturation of the WAN link to either the remote site or the data center. Because the download is shared with peers at each location as it occurs, the package is ready to execute on the clients as soon as the download finishes.

It is hard to put a number on the download time without knowing the size of the WAN links into the data center and to each site, so let's assume it takes considerably longer than getting the same content to a traditional DP, say two hours instead of 10 minutes. With Nomad, then, it takes less than 10 minutes to get the content to the DPs in the data center and two hours to get it out to all of the clients. You are doing the same job a bit faster, without all of the overhead and upkeep on 2000 DPs.
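Putting the two scenarios side by side with the same back-of-envelope numbers (again, the two-hour download is an assumption from the text, not a measurement):

```python
# Illustrative end-to-end comparison using the assumptions above.

# Traditional DPs: ~170 minutes to stage the package on one site's 250 DPs,
# after which clients still have to download it from their local DP.
traditional_staging_minutes = 170

# Nomad: under 10 minutes to the datacenter DPs, plus an assumed ~120 minutes
# for one client per site to download while sharing with its local peers.
nomad_total_minutes = 10 + 120

print(f"Traditional: ~{traditional_staging_minutes} min before clients can even start")
print(f"Nomad: ~{nomad_total_minutes} min and the content is already on every client")
```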
If you are using pull DPs, they act as normal ConfigMgr clients receiving content from the traditional DPs back in the datacenter. This means you have 2000 pull DP clients coming back to the data center, just as you would with Nomad, so the transfer time is at least as long as it would be with Nomad, if not longer. Pull DPs are regular clients accessing content from a remote DP, with only BITS controlling the bandwidth. Since BITS doesn't manage bandwidth as intelligently as Nomad does, you risk saturating the WAN links to all of your sites as well as the data center link. And on top of that, you still have the performance, disk space, upkeep, and maintenance issues on those 2000 DPs.
When it comes to planning your migration from ConfigMgr 2007 to 2012, Nomad not only simplifies your DP infrastructure requirements, it also accelerates the migration significantly. Even with the shared DP capability between ConfigMgr 2007 and 2012, you can reach the new hierarchy faster by consolidating your ConfigMgr 2012 infrastructure with Nomad: rather than carrying over your existing DP infrastructure, you can eliminate the majority of those DPs as part of the migration plan.
When you take a close look at things, Nomad clearly adds value and improves performance compared to using traditional or pull DPs.
