I’ve had a few engagements in recent months where conversations about how App-V is delivered to endpoints via Configuration Manager were misunderstood a bit, and at first, I was among the misguided. These talks sent me digging into how CM manages App-V content and, subsequently, how that data is made available to the App-V client via a Distribution Point instead of an App-V Streaming Server.
I will address this topic from a base CM perspective and mostly leave 1E technologies out of it. I am also placing a good bit of humor in here so the read is hopefully informative and slightly entertaining, but make no mistake: this content has been researched, tested personally, and verified in active customer environments. Most importantly, though, I submit this information with the request that anyone who reads it will share their thoughts as good members of the systems management community. I also composed this because, if you are getting ready for App-V in your environment, you should focus on the stability this technology gives applications and not so much on streaming (echoing a great blog article from Tim Mangan in which he basically says what I am pointing out). Please challenge the content if you find a gap, or support the time spent by telling me if it helps.
Before you jump in, here is a list of what you will get from this read:
- How App-V is handled by CM management components
- Options for deploying App-V apps
- How the CM client manages streaming content vs locally cached content
- How the App-V client interacts with streaming vs locally cached content
- A prime example of when streaming is the right idea, and why
- Reasons why streaming in a non-virtual environment introduces more cons than pros
OK, let’s go… After software has been packaged in App-V and added into CM, the application files are loaded into the single instance file store and then placed on targeted Distribution Points. Think of it just like staging any other Application in CM. So, instead of thick installation files, what is stored is a packet of data the App-V client will leverage to present a virtualized instance of the software.
Now on to how it is doled out; that is where this gets interesting (well, as interesting as App-V gets anyway; to me, very interesting and, yes, I need to get out more). This is where I initially got foggy on how things work, what App-V is really doing, and what each option really gets you. I will explain using two scenarios, which are your options in CM for deploying, verbatim:
- Stream virtual application from distribution point – (this is described in line with a “Required” deployment; if a deployment were set to “Available”, the first two events below would happen right after a user triggers an install action in Software Center) –
- CM Deployment download action (event tied to the “Available Time” in a “Required” deployment) – CM provides the client with the virtual application’s manifest file, icons, and framework .osd files via BITS (this content is almost always tiny unless you are doing some crazy packaging), placing it into the CM cache.
- CM Deployment installation action (event tied to the “Deadline Time” in a “Required” deployment) – The abovementioned files are passed to the local App-V client, which then places the main program icon somewhere so an end user may launch the app like any other. The CM client is now done with this transaction and BITS is out of the picture. I would keep a mental note of this “BITS being out of the picture” thing too, BTW.
- End user launches the App-V application via the placed icon – The App-V client knows where to go to stream that app: its friendly neighborhood DP. After a small delay (this small delay happens the first time the app is launched whether it is streamed or locally cached, but the streaming version is usually a bit longer), the application opens for the end user and the content stream is initiated. The App-V client will stream prioritized content to align with the modules of said application which are in use by the user. No matter what portions of the app are in use, App-V streams down the entire content after this first execution and voilà, you have a fully installed virtualized app!
- Download content from distribution point and run locally – (same thing, explained with a “Required” deployment in mind) –
- CM Deployment download action (event tied to the “Available Time”) – CM provides the client with the virtual application’s full content via BITS into the CM cache.
- CM Deployment installation action (event tied to the “Deadline Time”) – The abovementioned files are passed to the local App-V client, which then places the main program icon somewhere so an end user may launch the app like any other.
- End user launches the App-V application via the placed icon – The App-V client streams (WHAT? STREAMS? Yes, it is still streaming) the application locally from the CM cache into the local App-V client, and after the same small delay, the application is open and running while being near-instantaneously streamed into App-V at the same time. Now you, my friend, have an installed App-V application; go tell all your friends.
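The key difference between the two options above is when the big transfer crosses the LAN. Here is a toy sketch of that contrast; the package size, event labels, and option names are my own illustrative assumptions, not a real CM or App-V API:

```python
# Toy comparison of WHEN the large transfer happens in each CM delivery
# option. All sizes and names are illustrative assumptions, not real APIs.

FULL_PACKAGE_MB = 100   # hypothetical App-V package size
METADATA_MB = 0.2       # manifest, icons, .osd files -- tiny

def timeline(option):
    """Return (event, MB moved over the LAN) tuples for a delivery option."""
    if option == "stream_from_dp":
        return [
            ("Available Time (BITS)", METADATA_MB),        # metadata only
            ("Deadline Time (publish icon)", 0),
            ("First launch (App-V streams, unthrottled)", FULL_PACKAGE_MB),
        ]
    if option == "download_and_run_locally":
        return [
            ("Available Time (BITS)", FULL_PACKAGE_MB),    # full content, schedulable
            ("Deadline Time (publish icon)", 0),
            ("First launch (streams from local CM cache)", 0),
        ]
    raise ValueError(option)

for opt in ("stream_from_dp", "download_and_run_locally"):
    launch_mb = timeline(opt)[-1][1]
    print(f"{opt}: {launch_mb} MB crosses the LAN at first launch")
```

Either way the user waits through a small first-launch delay; the difference is whether 100 MB (or whatever your package weighs) moves across the LAN at that moment, or moved earlier under BITS on your schedule.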
**This detail was composed so a complex scenario is more easily consumable; for fuller detail, please refer to Microsoft’s whitepaper on App-V and CM.
So now you may be coming to the same realization I did when I learned how this worked end to end: what am I really getting when streaming from a DP?
Well, a lot, if you are truly going virtual, and by that I mean you are streaming applications within a properly implemented virtual PC environment with non-dedicated OS instances (meaning when a user opens a virtual PC, it is a fresh OS and personal data is network hosted, not a virtual PC assigned to them personally; I managed this type of environment long enough to know you don’t want to do this if you can avoid it and the business can be aligned). Additionally, this virtual PC would have been provisioned with limited disk space and resources, and you would truly want portable applications, as this environment is purpose built and scaled. Guess what else you would have in this environment? That’s right, a DP sitting right next to it in the same datacenter and, subsequently, extremely high connectivity. So in this scenario, streaming is super cool and very applicable to the use case.
Now, let’s talk about what I have seen out in the wild when I hear about shops which have implemented App-V streaming (and also happen to talk to their users) in an office environment with a DP on the local LAN. Spoiler alert: it’s a bit quirky/risky and virtual app performance has some hiccups. Let’s go over why:
- The top point is that, essentially, these environments are setting the stage for a data transmission storm on their local LAN as the App-V client goes after streaming content during production hours, since this data acquisition is only triggered when the user first launches their app. Now, this is a highly connected LAN, so that should be fine, right? Well, don’t assume that all the other bandwidth needs rocking around that local area will perform as your SLAs would hope. In short, don’t risk hampering other daytime data needs when you can easily cache the App-V content locally to systems during maintenance windows in a controlled fashion, with the result of extremely highly available App-V streaming (meaning the App-V client streams from the local hard drive, because it doesn’t get faster than that, my friends).
- App-V clients do not throttle/manage bandwidth consumption, and even if this is on a local LAN, I would not recommend giving any process carte blanche when you have desktop-class PCs which should have ample disk space to handle the cached content in CM. Not to mention the fact that once this content is pushed into the App-V client, the CM client may do its thing and clean up in edge cases. While I’m at it, I will share that I had my hopes dashed: the App-V client does have registry settings which appear to relate to throttling. However, they do nothing, as they are obsolete leftovers from old versions of SoftGrid. Additionally, I’m not going to go so far as to say the native CM client and BITS throttle bandwidth either; however, you can at least rate limit the client. How inefficient that approach is, choking your ability to provide CM content in general, is a topic for another day though.
- Streaming App-V in this type of scenario is pretty much like deploying software using “Run from DP”, and that always works 100% reliably, right?? (Please note the sarcasm; some of you may have this working really well, but I never had much luck when looking for the most reliable option.) In this App-V scenario, you are also placing end-user usage of that application into that mix.
- You probably don’t have only desktops in play, and setting up separate deployments for laptops vs. desktops is a lot of work for no good reason, as far as I have been able to discern. I say this because you don’t want the instance where a laptop gets the manifest for a streaming app, then goes home and connects to VPN…
- Best case, the App-V application content was not stored on your VPN-supporting DP; this means the user is left in a ditch, but it’s better than the next scenario.
- Worst case, the content is on the VPN-supporting DP and you now have a crazy App-V client losing its mind over your WAN.
- The App-V client does not throttle/manage bandwidth at all; I just want to point that out again. Here is some math around that daytime application demand: 80 PCs launch a virtualized MS Word Viewer mostly all at 9 am; an app size of 100 MB means 8 GB moving around that LAN during the day, taking all the bandwidth it can, and that’s just a low-level example.
- Lastly, if this is being used within an enterprise PC fleet, stay away from the unneeded risk of application responsiveness issues, because they will exist. Even if they are not the majority of experiences, some users will not like it, and we all know how that perception of IT services works in those scenarios… “We have gotten reports from business leadership that your App-V solution is not working for users.” Don’t pretend you have not heard a response like this in the past concerning other things which you know relate to a small handful of issues.
- Bottom line, production bandwidth needs are best kept as predictable as possible when it comes to IT products and services. This means if you have the ability to push any consuming processes into a non-production time window, do it.
- A perceived pro with streaming is that applications are instantly updated when updated on the DP. This is not actually the case; all you really do here is initiate a stream of content as soon as the user launches the application, which is another opportunity for latency, both for the App-V application and for bandwidth consumption overall. The alternative here is to simply plan your application updates so they may be supplied to devices during off-peak hours. This also allows you to update in a phased manner; remember, just because you can update an application on all devices at the same time does not mean you should. It is probably the same reason you do not have an entire environment update an application at the same time: not all issues with software deployment are related to installation. Compatibility challenges and end-user questions are a good portion of it, and these would all happen at once.
- In case you didn’t notice, when streaming in CM, Feature Blocks are downloaded entirely upon first launch, so the seemingly appealing aspect of App-V only pulling the blocks it needs when streaming is not present when leveraging CM. Not that I think that is good anyway, since as-needed streaming means you have clients downloading content in an even more unpredictable manner.
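To make the 9 am demand point above concrete, here is the back-of-the-napkin math worked out as a quick sketch; the PC count and app size are the illustrative assumptions from the example, not measurements:

```python
# Worked version of the 9 am demand example; the numbers are the text's
# illustrative assumptions (80 PCs, a 100 MB package), not measurements.
pcs = 80                    # PCs launching the app around 9 am
app_size_mb = 100           # virtualized MS Word Viewer package size

total_mb = pcs * app_size_mb
total_gb = total_mb / 1000  # decimal GB, matching the "8 GB" in the text
print(f"{total_mb} MB (~{total_gb:.0f} GB) crossing the LAN at peak hours")
```

And that 8 GB arrives unthrottled, all at once, on top of whatever else your LAN is doing at 9 am; scale the same arithmetic up to a real application portfolio and the storm gets ugly fast.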
Now, I will end by saying that Nomad fully supports DP transmissions of App-V content to CM clients, which means everything I am talking about above comes with the added benefit of far exceeding the value BITS presents when supplying App-V applications to an entire estate: a single, reliable, and readily available service no matter where clients are sitting (that’s kind of our thing, just saying).