So how has that initial POC been going? After a year of production use, the general feedback from the end users has been very positive, with few calls to the customer's helpdesk and no real issues passed on to me. With users designing detailed multi-layer automotive diagrams from a country on the other side of the world, it's been a very successful deployment.
This year has led on to a couple of new installations. One local customer wanted to use AutoCAD on XenApp 7.8 servers running a shared desktop on Windows Server 2008 R2, plus some standalone virtual PCs running Windows 10. Provisioning Services 7.8 was used to spin up some 10 XenApp servers. This solution is for students, so the need for intense graphical rendering is somewhat less than what you would need in vehicle design or construction. Sharing an NVIDIA vGPU across some 15 users is a great way to introduce CAD beginners to the tools and apps they will use later in their careers, while the system itself is very capable of dealing with all they can throw at it.
The XenServer build was 6.5, completed just before v7 was released, on a pair of Dell R710 servers, each with 2 x NVIDIA GRID K2 cards. One thing to look out for is that Citrix XenServer does not officially support installation on SD cards, unlike other hypervisors. While it will install and work OK, the OS footprint of v6.5 and v7 is noticeably different. Stick to a local pair of hard disks in a RAID 1 mirror so there is plenty of room to upgrade to the new OS – and make sure the server comes with a PERC RAID card… and the right cable. Don't ask me about getting those later!
Another recent build, on a Dell Precision R7910 Workstation, was completed with XenServer 7, XenDesktop 7.8 and PVS 7.8. The Workstation's technical spec is actually better suited to multi-user CAD than the equivalent server. Again, the customer was looking to run Autodesk Revit on a pre-built Windows 7 virtual machine imported from Hyper-V. All possible! The big Dell itself had two NVIDIA GRID K2 cards, eight 400GB local SSD drives, eight NICs and loads of RAM.
With no students in sight, this time the solution was for serious users. Designing and constructing world-class buildings across the globe needs decent hardware and graphics – no XenApp this time.
The initial import of the virtual machine completed with no issues, but we then came across boot problems after uploading the local C: drive to the PVS server to create a vdisk for multiple devices to use. The VM blue-screened repeatedly, and initially this was suspected to be due to ghost network cards in Device Manager. True enough, there were a few extra adapters listed in there from the previous hypervisor build and from a Cisco VPN client. Once they were removed, booting was better – but there was still the occasional blue screen. Closer analysis of the error and some Googling turned up hints that antivirus software was interfering with the PVS TFTP streaming to the diskless clone. With some further digging we found that AV was being auto-deployed at boot from an inherited Group Policy. Once that was disabled, the boot issues were gone.
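For anyone chasing the same ghost-adapter problem: Device Manager hides devices that are no longer present, so the leftover adapters from the old hypervisor won't show up by default. The classic trick is to set an environment variable before launching Device Manager from the same elevated command prompt:

```
:: Make non-present (ghost) devices visible, then open Device Manager
:: from the SAME elevated prompt so it inherits the variable
set devmgr_show_nonpresent_devices=1
devmgmt.msc
```

Then tick View > Show hidden devices; the greyed-out entries under Network adapters can be right-clicked and uninstalled. The exact adapter names will vary depending on which hypervisor the VM came from.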
The size of the vdisk also posed some problems when scheduling reboots in the Studio Delivery Group. Because of its size, other devices were struggling to boot up while the first one or two PCs were still streaming. This was resolved with some help from Citrix support, who suggested a PowerShell script to increase the time between each machine's restart and boot. It worked a treat. Also very helpful was a hands-on customer keen to get to know all the components, who patiently worked through many of the issues after I had left site. Many thanks for that.
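I won't reproduce the exact script Citrix support provided, but the general shape is simple with the XenDesktop Broker PowerShell SDK: restart the machines in the Delivery Group one at a time, sleeping between power actions so each device finishes streaming its vdisk before the next one starts. A rough sketch, run from a Delivery Controller (the group name and delay below are placeholders you would tune to your own environment):

```powershell
# Load the XenDesktop Broker snap-in (available on a Controller
# or anywhere the Citrix PowerShell SDK is installed)
Add-PSSnapin Citrix.Broker.Admin.V2

$groupName = "CAD Desktops"   # placeholder - your Delivery Group name
$delaySecs = 300              # gap between restarts; size to your vdisk streaming time

# Restart each machine in the group, pausing between each one
# so the PVS stream from the previous boot can complete
Get-BrokerMachine -DesktopGroupName $groupName | ForEach-Object {
    New-BrokerHostingPowerAction -MachineName $_.MachineName -Action Restart
    Start-Sleep -Seconds $delaySecs
}
```

Scheduled as a task outside business hours, this replaces the all-at-once restart in Studio that was saturating the PVS stream.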
It may have been said before, but the feedback from this customer's end users was again: "Better than the physical PC!" Citrix and NVIDIA GRID – pretty awesome.
Don’t forget Part 1!