One of Nomad’s core premises is to enhance Microsoft System Center Configuration Manager (CM), and it does this by leveraging CM’s native infrastructure components to achieve the most efficient design for an organization’s requirements. Nomad is not designed simply to eliminate all Distribution Points from a CM design, but to enable a more efficient one. That said, our customers (ranging from 1,000 to 450,000 clients) see significant infrastructure reduction in their CM designs with Nomad.

Typically, a Nomad-integrated design reduces the number of servers required within a CM infrastructure by at least 95% compared with a native-only design. Servers can be reduced or eliminated at every level – Primary, Secondary, and Distribution Point servers – and the server requirement for PXE Service Points and State Migration Points can be removed entirely, while still supporting a robust and efficient zero-touch operating system deployment strategy. By eliminating all unnecessary servers, an organization with a hundred site locations, for example, can consolidate its Distribution Points down to two or three DPs in its central data center. (A minimum of two is advised, as this provides redundancy when one of the DPs requires a reboot, for example for security updates.)

Nomad has been developed over the past 10 years in close partnership with Microsoft and is designed to enhance, rather than replace, the core functionality of CM. Organizations can build on the well-known, fully supported component services of CM and the Microsoft technologies on which it is built. In addition to full professional support from Microsoft and 1E, a large and highly specialized community-based support network is also available.

Another of Nomad’s design goals is to simplify site boundary and boundary group management.

In a traditional CM environment, you would need to specify a CM boundary group for every Distribution Point server. You may also want to define individual office locations with their own boundary information, and there may be other scenarios where you want additional CM boundaries (such as incoming VPN connections). Boundary ranges keep piling up, and they must be kept accurate to prevent existing clients from becoming unmanageable or falling into a dead zone.

When following the recommendations to aggressively eliminate Distribution Points – down to only a few co-located at a core data center – creating boundaries and boundary groups for locating content is no longer required. The Nomad Add-on installed on all CM clients provides the endpoint intelligence needed to protect the WAN when downloading from the DPs at the core data center. Nomad’s Single Site Download (SSD) feature further simplifies content distribution because, per package and per site/location, only one CM/Nomad client will ever download from the central DPs.
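As a rough illustration of the SSD principle – one downloader per site per package – here is a minimal sketch. The dict-based registry and function names are assumptions for illustration; Nomad’s real election is performed over the network:

```python
# Illustrative sketch of the Single Site Download idea: at most one client
# per site downloads a given package from the central DPs. The dict-based
# registry is an assumption; Nomad's real election works over the network.

active_downloads = {}   # (site, package_id) -> name of the elected downloader

def request_download(site: str, package_id: str, client: str) -> str:
    """Elect one downloader per (site, package); others wait for the peer copy."""
    key = (site, package_id)
    downloader = active_downloads.setdefault(key, client)
    if downloader == client:
        return "download from central DP"
    return f"wait for peer copy from {downloader}"

print(request_download("Paris", "PKG0004A", "PC-01"))  # download from central DP
print(request_download("Paris", "PKG0004A", "PC-02"))  # wait for peer copy from PC-01
```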

During content distribution, the CM client factors in the boundary (or boundary-less) situation when it sends a Content Location request to a Management Point to determine which DP to download content from. After the CM client receives a list of DPs, its Content Transfer Manager passes that list on to Nomad. The boundary situation and its impact on locating DPs have therefore already been evaluated by the time Nomad is invoked. This is why Nomad does not require boundaries to be configured – the CM client has already selected the relevant DPs for the required content.

Because Nomad is integrated into CM using the approved Microsoft Alternate Content Provider (ACP) mechanism, normal CM fallback behavior still applies: if none of the Distribution Point servers provided to Nomad by the CM client are available, Nomad will obtain the required content from the Fallback Distribution Point (if enabled in the deployment options).

With Nomad integrated into your CM environment, existing DPs can be consolidated and centralized into your central – or regional – data center. When the CM client needs content, it instructs Nomad to locate and obtain the required content. Because Nomad is integrated into CM using the Alternate Content Provider mechanism, the CM client provides Nomad with the required content identification (i.e. Content ID, Package ID, etc.) together with a list of available Distribution Point servers within the CM client’s boundaries (i.e. IP range, IP subnet, or AD site). Nomad always checks whether it can locate the content in the local site first; if the content is not found locally, Nomad works through the list of Distribution Point servers provided by CM. The DPs used will depend on how the CM boundaries have been configured.
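The overall lookup order can be sketched as follows; the data shapes and names here are illustrative assumptions, not Nomad’s actual implementation:

```python
# Hypothetical sketch of Nomad's lookup order - not actual 1E code.

class ContentUnavailableError(Exception):
    pass

def locate_content(content_id, local_peers, dp_list, fallback_dp=None):
    """Return a source for the content: local peer, then DP list, then fallback."""
    # 1. Prefer a peer on the local site that already caches the content.
    for peer in local_peers:
        if content_id in peer["cache"]:
            return ("peer", peer["name"])
    # 2. Otherwise work through the DP list the CM client supplied via ACP.
    for dp in dp_list:
        if dp["reachable"]:
            return ("dp", dp["name"])
    # 3. Finally try the Fallback DP, if the deployment options enable one.
    if fallback_dp is not None:
        return ("fallback", fallback_dp)
    raise ContentUnavailableError(content_id)

# Example: a neighbor already caches the package, so no WAN transfer is needed.
peers = [{"name": "PC-17", "cache": {"PKG0004A"}}]
dps = [{"name": "DP-DC1", "reachable": True}]
print(locate_content("PKG0004A", peers, dps))   # ('peer', 'PC-17')
```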

One of Nomad’s main design principles is to ensure that CM bandwidth utilization never impacts business traffic. Nomad is a “gentleman” on the network, always yielding to other traffic as more important than itself.

Nomad uses a deterministic statistical algorithm we call Reverse QoS to continually analyze the available bandwidth across any end-to-end network route required for downloading content. The Reverse QoS algorithm analyzes throughput as Nomad downloads each block of a file or package; because this is a continuous process, it can rapidly adjust its throughput to account for any changes in network utilization by other, non-CM traffic. It calculates network speed and congestion irrespective of the network infrastructure configuration or the number of hops. (Note: Nomad detects congestion wherever it occurs along the route, across the multiple hops between client and server.)

Because this analysis continues throughout the entire download, Nomad can identify any increase in other network traffic and rapidly reduce its own throughput to compensate, ensuring it does not use the network when other systems require it. Conversely, if the algorithm identifies that more bandwidth has become available as other systems’ usage falls, Nomad can increase its throughput to use this spare capacity and accelerate the download.
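The exact Reverse QoS algorithm is proprietary, but the block-by-block idea can be sketched as follows – measure throughput per block, back off when it drops, reclaim capacity when it recovers. The thresholds and delays below are invented for illustration:

```python
import time

BLOCK_SIZE = 256 * 1024  # bytes per block (illustrative value)

def throttled_download(blocks, fetch_block):
    """Download blocks one at a time, pausing more when throughput drops."""
    delay = 0.0        # inter-block pause, grown under congestion (seconds)
    baseline = 0.0     # best throughput observed so far (bytes/second)
    for block in blocks:
        start = time.monotonic()
        fetch_block(block, BLOCK_SIZE)             # pull one block from the DP/peer
        elapsed = max(time.monotonic() - start, 1e-6)
        rate = BLOCK_SIZE / elapsed
        baseline = max(baseline, rate)
        if rate < 0.5 * baseline:
            delay = min(delay + 0.1, 2.0)   # congestion detected: yield to other traffic
        else:
            delay = max(delay - 0.05, 0.0)  # spare capacity: speed back up
        time.sleep(delay)

# Example with a dummy fetch that simulates a fast link.
throttled_download(range(4), lambda block, size: time.sleep(0.01))
```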

Nomad has been proven across many customers to detect, and automatically adjust for, all types of networks: from well-connected LANs to extremely low-bandwidth WAN connections, and even low-bandwidth, high-latency satellite links.

Nomad leverages Microsoft Remote Differential Compression (RDC) on CM Distribution Points to create what we call Nomad RDC. Nomad RDC uses the RDC APIs to create binary-level differences between package versions of each file. Binary deltas are far more efficient than byte-level differencing because the RDC algorithm is based on fingerprinting blocks within each file. Many types of file change cause the contents of a file to move (e.g. a small insertion or deletion at the beginning of a file misaligns the rest of the file relative to the original content), so the blocks used for comparison are not based on static, arbitrary file addresses but on addresses defined by the contents of each file segment. If part of a file changes in length, or blocks of content move to other parts of the file, the block boundaries for the parts that have not changed remain fixed relative to the contents. The series of fingerprints for those blocks therefore does not change either; the blocks merely change position.
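As a toy illustration of this content-defined chunking principle, the sketch below uses a simple rolling sum in place of RDC’s real signature algorithm – it is not the RDC implementation:

```python
import random

WINDOW = 16    # bytes in the rolling window
MASK = 0xFF    # boundary when (rolling sum & MASK) == 0 -> ~256-byte chunks

def chunk_boundaries(data: bytes):
    """Yield chunk end offsets chosen by file contents, not fixed addresses."""
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= WINDOW:
            rolling -= data[i - WINDOW]   # slide the window forward
        if (rolling & MASK) == 0:
            yield i + 1                   # content-defined boundary

random.seed(1)
original = bytes(random.randrange(256) for _ in range(4096))
shifted = b"X" + original                 # a one-byte insertion at the front

# Fixed-size blocks would all misalign after the insertion; content-defined
# boundaries re-synchronize, so most block fingerprints simply shift position.
a = set(chunk_boundaries(original))
b = {off - 1 for off in chunk_boundaries(shifted)}   # re-align offsets by one
print(f"{len(a & b)} of {len(a)} boundaries survive the insertion")
```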

Nomad RDC on the Distribution Points determines the binary-level differences and synchronizes only the changes – not the entire content package – with Nomad on the CM clients. Because Nomad RDC on the Distribution Point handles the binary differencing of the files in the content, the Microsoft Remote Differential Compression service is not required on the CM client computers. There are no component requirements on the client endpoints other than the Nomad Add-on itself.

Yes. Nomad uses a master-to-peer methodology to maximize the throughput and speed of sharing content from the downloading (or cached) master to other requesting peer clients, while remaining highly resilient to client power or network state changes and avoiding any single point of failure in the process.

A Nomad master is elected to this role based on a number of criteria – such as chassis type, uptime, resources, and OS – to ensure it is the best possible candidate. While downloading, it simultaneously shares the content with up to six peers at any one time. After a peer has transferred an amount of data from the master, it disconnects to allow additional peers to connect to the master and continue the process.
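An election of this kind might weigh the criteria roughly as in the sketch below; the criteria come from the description above, but the weights and data shapes are invented for illustration:

```python
# Illustrative election scoring: the criteria (chassis type, uptime,
# resources, OS) come from the text above; the weights are invented.

def election_score(client: dict) -> int:
    score = 0
    score += {"desktop": 30, "laptop": 10, "tablet": 0}.get(client["chassis"], 0)
    score += min(client["uptime_hours"], 24)          # cap uptime's influence
    score += client["free_disk_gb"] // 10             # reward cache headroom
    return score

candidates = [
    {"name": "PC-01", "chassis": "desktop", "uptime_hours": 40, "free_disk_gb": 120},
    {"name": "LT-07", "chassis": "laptop", "uptime_hours": 2, "free_disk_gb": 30},
]
master = max(candidates, key=election_score)
print(master["name"])   # PC-01 wins this election
```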

Because the Nomad master has the slowest link in the chain – the WAN connection over which it transfers data – it is the limiting factor for potential throughput in this process. Consequently, with peers connecting over a faster LAN, the connect/transfer/back-off/re-connect cycle of the requesting peer clients means that all clients obtain the complete package at almost the same time as the Nomad master itself. Nomad’s standard peer-to-peer capability has been proven delivering content to as many as 254 simultaneous clients in a Class C subnet.

With Nomad v5.0, we created the Fan-Out feature to provide even greater speed and performance when delivering content to many clients simultaneously. This feature was primarily designed to provide rapid transfer performance in subnets with more clients than a Class C subnet, such as a supernet, and when the download master is well connected to a Distribution Point, such as on a LAN or over a fast WAN connection. It is, however, equally relevant to a normal Class C subnet over a limited WAN, as it simply improves on Nomad’s original performance.

Fan-Out achieves this increase in peer-to-peer performance by adding extra layers of Fan-Out peer masters that feed directly from the main Nomad master and can each, in turn, serve content to a number of peer systems using the same peer-to-peer method described above.
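Assuming the six-concurrent-peer limit described above, a quick calculation shows how each additional tier of Fan-Out masters multiplies concurrent reach (the tier counts are purely illustrative):

```python
# How Fan-Out tiers multiply concurrent reach, assuming the six-peer
# limit described above (the tier counts are purely illustrative).

PEERS_PER_MASTER = 6

def reachable_clients(tiers: int) -> int:
    """Clients served at once through `tiers` layers of Fan-Out masters."""
    total, masters = 0, 1            # start from the single main Nomad master
    for _ in range(tiers):
        masters *= PEERS_PER_MASTER  # each master feeds six downstream systems
        total += masters
    return total

for tiers in range(1, 4):
    print(tiers, "tier(s):", reachable_clients(tiers), "concurrent clients")
# 1 tier(s): 6 | 2 tier(s): 42 | 3 tier(s): 258 concurrent clients
```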

Nomad includes CM-compatible PXE technology on all computers, wherever they are located – head office or remote sites. This is known as PXE Everywhere. The feature allows dynamic elections to take place at local sites, with peer systems determining the best system to host the PXE process and respond to PXE requests.

CM is used to populate the necessary boot image(s) on CM client computers; this process is straightforward and requires no configuration. When a computer performs a network boot (e.g. by pressing F12), the PXE Everywhere master first checks with CM to determine what should be provided to the PXE-booting computer. CM variables (such as the boot image and Task Sequence IDs) are used to automatically determine which boot image the client requesting PXE services should use.
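Conceptually, the decision reduces to a lookup from the Task Sequence CM returns to the boot image it requires. The mapping and IDs in this sketch are hypothetical, not PXE Everywhere’s actual logic:

```python
# Hypothetical sketch of the boot-image decision, not PXE Everywhere's
# actual logic: map the Task Sequence CM returns to the boot image it needs.
# All IDs below are invented for illustration.

TS_BOOT_IMAGES = {
    "TS00010": "BOOT00001",   # e.g. Windows 10 zero-touch build
    "TS00020": "BOOT00002",   # e.g. Windows 11 in-place refresh
}

def choose_boot_image(assigned_task_sequence):
    """Return the boot image for the TS assigned to the booting computer."""
    if assigned_task_sequence is None:
        return None           # no approved deployment -> no PXE response
    return TS_BOOT_IMAGES.get(assigned_task_sequence)

print(choose_boot_image("TS00010"))  # BOOT00001
print(choose_boot_image(None))       # None - the computer is not authorized
```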

PXE Everywhere is well integrated with CM and includes a web service that brokers this relationship. The PXE Everywhere web service adds a layer of security to the PXE boot process, seamlessly integrating with CM security (a minimal sketch of this flow follows the list below):

Accepts credentials from the PXE client – accepts the client’s MAC address and SMBIOS GUID, which are then forwarded to the CM site for authentication.

Determines whether the PXE client is authorized by CM – queries CM, asking whether there is an approved OS deployment assigned to the PXE-booting computer.
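A minimal sketch of that two-step flow, assuming a hypothetical function name and an in-memory dict standing in for the CM query – not the real web-service API:

```python
# Minimal sketch of the two-step flow above; the function name and the
# dict standing in for the CM query are assumptions, not the real API.

def authorize_pxe_client(mac: str, smbios_guid: str, cm_deployments: dict):
    """Forward the client's identity to CM and ask for an approved deployment."""
    # Step 1: the web service receives the PXE client's MAC and SMBIOS GUID.
    identity = (mac.lower(), smbios_guid.lower())
    # Step 2: query CM - is an approved OS deployment assigned to this machine?
    deployment = cm_deployments.get(identity)
    return deployment         # None means "not authorized"; otherwise boot

approved = {("00:1a:2b:3c:4d:5e", "4c4c4544-0042-3510-8000-000000000001"): "TS00010"}
print(authorize_pxe_client("00:1A:2B:3C:4D:5E",
                           "4C4C4544-0042-3510-8000-000000000001", approved))  # TS00010
```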

PXE Everywhere has always been an integral part of the Nomad solution and license, making it a key component of any effective, automated zero-touch operating system deployment capability. PXE Everywhere was the first enterprise-ready PXE service on the market to run on workstation-class operating systems, and it continues to build on that pedigree in enterprise systems management with its recent, patent-pending Dynamic PXE enhancement.

Additionally, Nomad was the first peer-to-peer software solution with full capabilities available within the Windows PE environment, where it can locate and obtain content locally through its peer-to-peer capabilities. All required package content – WIM (OS) image files, driver packages, software updates, and applications – is supported for a fully automated, end-to-end operating system deployment process.

All of these capabilities can be injected directly into the native Task Sequence build logic using Nomad-specific Task Sequence actions, all available within the native CM console.

The Nomad Dashboard presents a graphical snapshot of the current configuration, client health, and content delivery activity within the familiar CM console. The Dashboard provides all the information you need in a single console, without bypassing CM Role-Based Administration.

You can drill down into the panels to examine the data in finer detail, or to explore the status and statistics associated with other deployments, content, or Precache jobs. You can track whether a deployment or Precache job has completed or whether, for some reason such as an outage, the download has been impacted.

Nomad uses its own dynamic cache for storing downloaded CM content. It also uses Windows NTFS hard-links to mirror this cached content into the CM client cache folder, where the CM client itself requires the content to be present in order to execute it. Hard-linking is a Microsoft-designed and supported technology, part of the underlying Windows NTFS file system, that allows content to appear in two folder locations within the NTFS file structure while existing as only a single instance on the physical disk.
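On NTFS this mirroring can be sketched with one hard-link per file, as below; the paths are illustrative assumptions, and Nomad’s real cache layout may differ:

```python
import os

# Minimal sketch of mirroring cached files into a second folder with NTFS
# hard-links (os.link maps to CreateHardLink on Windows). The paths are
# illustrative; Nomad's real cache layout may differ.

nomad_cache = r"C:\ProgramData\1E\NomadBranch\PKG0004A"
ccm_cache = r"C:\Windows\ccmcache\abc"

os.makedirs(ccm_cache, exist_ok=True)
for name in os.listdir(nomad_cache):
    src = os.path.join(nomad_cache, name)
    dst = os.path.join(ccm_cache, name)
    if os.path.isfile(src) and not os.path.exists(dst):
        os.link(src, dst)   # same single copy on disk, visible in both folders
```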

To remove aged package content from the Nomad cache and free space for new content, Nomad invokes an automatic cache-cleaning operation. This operation follows an intelligent process to ensure the best and most efficient decisions are made about which previously downloaded packages to delete before the requested content is downloaded. CM administrators can additionally set a cache priority for each package in the Nomad properties tab of a CM package, directly within the CM console. When space is required in the Nomad cache for newly deployed packages, or when the maximum cache size is adjusted, Nomad automatically manages the cached content by prioritizing removal according to the cache priority settings of the previously deployed and installed packages. This automatic management ensures the most relevant packages are retained, and only redundant content is removed to make space for new deployments as required.
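Priority-driven eviction of this kind might look like the following sketch, assuming lower priority values are evicted first and age breaks ties – the exact rules are Nomad’s own:

```python
# Illustrative priority-based eviction, assuming lower priority values are
# evicted first and age breaks ties - the exact rules are Nomad's own.

def free_space(cache, bytes_needed):
    """Evict cached packages until `bytes_needed` bytes have been freed."""
    freed = 0
    # Sort eviction candidates: lowest priority first, then oldest first.
    for pkg in sorted(cache, key=lambda p: (p["priority"], p["last_used"])):
        if freed >= bytes_needed:
            break
        freed += pkg["size"]
        cache.remove(pkg)     # delete this package's content from the cache
    return freed

cache = [
    {"id": "PKG1", "priority": 1, "last_used": 10, "size": 500},
    {"id": "PKG2", "priority": 9, "last_used": 50, "size": 800},
    {"id": "PKG3", "priority": 1, "last_used": 30, "size": 300},
]
free_space(cache, 600)            # evicts PKG1 first, then PKG3
print([p["id"] for p in cache])   # ['PKG2'] - high priority content survives
```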

Only Nomad from 1E allows you to drastically reduce your infrastructure servers and components without straining client resources or risking a Blue Screen of Death. Nomad follows best engineering practice – it does not require a kernel-level driver – thereby avoiding issues where systems would need to be visited and fixed manually, or ultimately rebuilt entirely. Requiring a driver is not a benefit; it is a risk.

Nomad also requires no other prerequisites, such as a JRE: it is written in very tight C/C++ and has a tiny client footprint, while still giving the administrator optimal network and systems management. With Nomad, you do not need to worry about maintaining additional software and prerequisites through service packs and OS upgrades.

Nomad ensures it never impacts the user or the computer by always keeping at least 10% of the total disk space available for the user and the system. We included this feature in Nomad because CM can otherwise fill all available disk space and prevent the computer from operating correctly.
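The safeguard amounts to a simple check before caching new content; the function name and threshold handling here are assumptions for illustration:

```python
import shutil

# Minimal sketch of the 10% free-disk safeguard described above; the
# function name and threshold handling are assumptions for illustration.

MIN_FREE_FRACTION = 0.10   # always leave 10% of total disk for user/system

def can_download(path: str, download_bytes: int) -> bool:
    """Allow a download only if it leaves at least 10% of the disk free."""
    usage = shutil.disk_usage(path)
    free_after = usage.free - download_bytes
    return free_after >= usage.total * MIN_FREE_FRACTION

# Example: check whether a 2 GB package may be cached on the system drive.
print(can_download("C:\\", 2 * 1024**3))
```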

CM clients with Nomad only obtain content they actually require (i.e. content to be installed), which means you do not need to replicate your entire CM software repository to every site location – only the content those clients actually need is transferred.

Nomad fully supports streaming and download-and-execute modes for all App-V applications when integrated with CM. Nomad adds network efficiency by initially downloading only the shortcut icons and advertisement information, saving time and bandwidth. Nomad further optimizes the download of App-V content by enabling its normal behavior of obtaining content from peer computers, rather than forcing clients to obtain it from a centralized App-V streaming server or Distribution Point.

Nomad 6.3 and later include a pause/resume capability to stop the distribution of content, should you decide the content isn’t correct or wish to delay distribution. This uses 1E’s Tachyon technology to instruct every endpoint to stop the transfer process. With Tachyon, every endpoint receives the instruction immediately, without waiting for ConfigMgr. When ready, simply resume the job from the CM console.

Short Answer: No – Nomad does not use BITS or BranchCache, period. 1E invented an intelligent bandwidth-throttling technology (not rate-limited like BITS) and many-to-many sharing (like peer-to-peer, but with the scalability required by large organizations).

Long Answer: Early Nomad pre-dates BITS (and, obviously, BranchCache and Peer Cache too). Organizations using SMS (the predecessor of CM) needed to transfer content to remote computers and Distribution Points around the world, but poor WAN links meant the package distribution process kept failing. 1E created “SMSNomad” to help SMS clients download content from centralized Distribution Points (usually in the HQ or data center). It was the first SMS add-on to assess the network connection between the client computer and the remote DP, including all the switches and routers between the two points. From the early days, Nomad’s intelligent bandwidth throttling exceeded all our expectations – customers could transfer even 5GB OS images to remote locations, all without any changes whatsoever to the existing infrastructure.

Yes – of course. Nomad would not be an approved product if it did not integrate with CM as Microsoft formally requires. All Nomad functions, including the Nomad Dashboard, honor CM’s Role-Based Administration. This also means Nomad does not access the CM database directly or take any other route that bypasses CM security, unlike some other third-party options.
