Forum Discussion

JWellner_291167
Jun 14, 2017

Asymmetric Hardware Requiring Asymmetric View Pool Entitlements

Hi Guys!

 

Need some thoughts on a View farm design. Givens:

  • Horizon 7 Advanced licensing
  • F5 BIG-IP x2 - full alphabet soup license
  • NVIDIA GRID profiles per page 4 of the vGPU User Guide

 

Pod A

 

  • 5x Cisco UCS B200
  • 2.0GHz, 2 sockets, 28 cores, 56 threads
  • 512GB RAM
  • NetApp SAS backed
  • 1x NVIDIA GRID M6 in vSGA Mode

Pod B

 

  • 5x Cisco UCS C240
  • 2.6GHz, 2 sockets, 24 cores, 48 threads
  • 512GB RAM
  • Micron PCIe Backed
  • 2x NVIDIA GRID M10 in vGPU Mode 1GB per VM

Pod C

 

  • 5x Cisco UCS C240
  • 2.1GHz, 2 sockets, 36 cores, 72 threads
  • 512GB RAM
  • Micron PCIe Backed
  • 2x NVIDIA GRID M10 in vGPU Mode 1GB per VM

Pod D

 

  • 5x Cisco UCS C240
  • 2.1GHz, 2 sockets, 36 cores, 72 threads
  • 512GB RAM
  • Micron PCIe Backed
  • 2x NVIDIA GRID M10 in vGPU Mode 1GB per VM

Pod X

 

  • 1x Cisco UCS B200
  • 2.0GHz, 2 sockets, 28 cores, 56 threads
  • 512GB RAM
  • NetApp SAS backed
  • 1x NVIDIA GRID M6 in vSGA Mode
  • 1x Cisco UCS C240
  • 2.1GHz, 2 sockets, 36 cores, 72 threads
  • 512GB RAM
  • Micron PCIe Backed
  • 2x NVIDIA GRID M10 in vGPU Mode 1GB per VM

Use Scenario:

 

We have 3 types of users.

 

General Purpose Students and Staff

 

Need some GPU to make Win10 experience better

 

Windows 10 Creators Update, vGPU-backed @ 512MB

 

Power User Students and Staff

 

Using apps that require GPU: Adobe apps, games, AutoCAD, etc.

 

Windows 10 Creators Update, vGPU-backed @ 1GB

 

Generic Accounts

 

These accounts are used for:

 

Kinder -> Second Grade

 

Library Search Kiosks

 

Academic Testing Kiosks (for SBA and MAP computer based testing)

 

School Board Meeting Kiosks

 

Probably fine using vSGA mode graphics

 

Problem:

 

We have a bottleneck with vSphere and View Composer where it takes an inordinate amount of time to prep pools during login/logout storms or events that cause the environment to go sideways.

 

Our thought is to break the environment into pods so that a View component failure can't take all four production pods down at the same time. A side benefit is that upgrades would only impact 25% of the environment at a time, and depending on the calendar it might even be possible to do maintenance during the production day.

 

Snag:

 

Pod A is built on vastly different hardware than Pods B, C and D. Pod X is set aside as a test bed for image development, upgrade testing and UCS firmware testing. Hosts from Pod X could be added to Pod A or B if needed to increase capacity.

 

The idea would be to put users with less need for GPU onto Pod A and run the NVIDIA GRID M6 cards in vSGA mode to make Windows 10 more bearable, since in vGPU mode the M6 is limited to 16 VMs at a 512MB frame buffer.

 

Users who need vGPU would then be routed to the C240 pods (B, C and D), which could run a mixture of 512MB and 1GB profiles on the same host, provided that like profiles map to the same card. That gives a theoretical max load of 64x 512MB VMs and 32x 1GB VMs per host, with around 5GB of RAM per VM if evenly distributed, while understanding that at that density a running configuration doesn't leave enough CPU threads per desktop.
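As a sanity check on that math, here is a quick back-of-the-envelope calculation, assuming one M10 dedicated to each profile type (each GRID M10 carries 4x 8GB GPUs, so 32GB of frame buffer per card):

```python
# Capacity sketch for one C240 host in Pods B/C/D.
# Assumption: one GRID M10 per profile type; each M10 = 4 GPUs x 8 GB = 32 GB.
M10_FRAMEBUFFER_GB = 32

vms_512mb = int(M10_FRAMEBUFFER_GB / 0.5)  # 64 VMs on the 512 MB card
vms_1gb = int(M10_FRAMEBUFFER_GB / 1.0)    # 32 VMs on the 1 GB card
total_vms = vms_512mb + vms_1gb            # 96 VMs per host

host_ram_gb = 512
ram_per_vm_gb = host_ram_gb / total_vms    # ~5.3 GB per VM, before host overhead

host_threads = 48                          # Pod B; Pods C and D have 72
threads_per_vm = host_threads / total_vms  # 0.5 thread per VM - the CPU pinch

print(vms_512mb, vms_1gb, total_vms, round(ram_per_vm_gb, 1), threads_per_vm)
```

So the frame buffer and RAM line up at full density, but the thread count is where it falls over, which matches the caveat above.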

 

So at the end of all this, the question is: what is the best method for having a user log in to an entitled pool that doesn't exist on all 5 deployed pods? Do you solve this with a BIG-IP config, Cloud Pod Architecture, or something less complex (i.e. load balance Pods B, C and D, and have a separate DNS name for Pods A and X)?
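To make the "less complex" option concrete, here is a minimal sketch of the split-namespace idea: one load-balanced DNS name in front of Pods B/C/D and separate names for Pods A and X, with clients steered by group membership. All hostnames and group names here are made up for illustration:

```python
# Hypothetical split-namespace routing: one BIG-IP VIP fronts the vGPU pods,
# while Pods A and X get their own DNS names. Nothing below is a real
# Horizon or BIG-IP API - it just models the user-to-namespace decision.

POD_DNS = {
    "vgpu": "view-gpu.district.example",   # BIG-IP VIP for Pods B, C, D
    "vsga": "view-gen.district.example",   # Pod A connection servers
    "test": "view-test.district.example",  # Pod X, IT staff only
}

def broker_for(user_groups):
    """Pick a connection-broker FQDN from a user's AD group membership."""
    if "IT-Staff" in user_groups:
        return POD_DNS["test"]
    if "Power-Users" in user_groups or "GP-Users" in user_groups:
        return POD_DNS["vgpu"]  # 512 MB and 1 GB vGPU pools live here
    return POD_DNS["vsga"]      # generic/kiosk accounts land on Pod A

print(broker_for({"Power-Users"}))  # -> view-gpu.district.example
```

The trade-off versus Cloud Pod Architecture would seem to be whether you need global entitlements and roaming between pods, or whether a static user-to-namespace split like this is acceptable.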