Choosing a storage solution
Today, as preparation for a meeting tomorrow, I did a "deep" dive into two storage solutions. I'm not a storage guy, let's put that up front; I'm more the overall-solution kind of guy, and I expect the storage guys to bring me the performance I need. I tell them how many IOPS I need and they just need to make sure those aren't snooped away by Exchange or SQL… Now it's different: I've got a customer that is having issues with their VDI environment, and they asked me to help them out.
First, let's take a look at the environment so that we're all on the same page. They have a VMware Horizon View 6 environment running Microsoft Windows 7. The storage solution they have now is an MSA iSCSI SAN, which they plan not to replace.
They use a mixture of 2D and 3D applications for engineering and landscaping.
They have 90 non-persistent VMs, and when I'm done they will have three ESX hosts (N+1).
The issue they ran into is that it's a bit like syrup running down a spoon… slow, in other words.
So I took a look at what possibilities we have within PQR; the following options are available:
- Classical SAN – add more disks to the SAN to achieve more IOPS
- Local storage, e.g. a FusionIO accelerator
- VMware VSAN
- PernixData FVP
The first two were skipped straight away. I don't want to add more disks or disk shelves, for that will not provide enough IOPS; we've tried that for too many years. Local storage alone is not an option either, for it deprives the customer of the ability to move desktops and do maintenance in a normal way.
So two options were left: VMware VSAN and PernixData FVP.
Two completely different solutions that, as a colleague pointed out, are apples and pears. I happen to like fruit, so let's compare anyway. I understand they are different, but in this situation I'm looking to accelerate the storage, and I'm looking at solutions, simplicity and costs. If this were a greenfield environment, perhaps the shortlist would look completely different.
I'm not going to write about how it works; there are literally thousands of blogs that will tell you that. Instead, I took a look at what you would need to speed up 90 virtual machines.
VMware VSAN needs at least three ESX hosts; four is advisable. I calculated with three, for that is the number we need to achieve the other goals.
VMware VSAN requires a local SSD and a local spinning disk per host; the spinning disk is used for capacity and the SSD for performance. Simple and effective.
There are two very good calculators to be found for VMware VSAN, one by Duncan Epping and one by VMware.
If you fill these in with the numbers that I presented in the beginning, you get the following result:
- You need 7 spinning disks per host (I selected a 900GB SAS disk)
- That's complemented with a 200GB SSD (0.11TB, so I rounded that up to 0.2TB)
So together you have 21 x 900GB SAS spinning disks (7 per host), 3 x 200GB SSDs and three hosts.
The VMware licensing is pretty sweet: they have a $50-per-concurrent-VM license for VSAN.
So with 100 VMs (easier to calculate), that brings the license cost to $5,000.
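To make the arithmetic explicit, here's a minimal sketch of the VSAN bill of materials, using only the figures above (the per-unit numbers come from this post, not from the official calculators, so treat it as a back-of-the-envelope check):

```python
# VSAN shopping list from the figures above: 3 hosts, 7 x 900GB SAS disks
# and one 200GB SSD per host, VSAN licensed at $50 per concurrent VM.
HOSTS = 3
DISKS_PER_HOST, DISK_GB = 7, 900
SSD_GB = 200
VMS, LICENSE_PER_VM = 100, 50          # 100 VMs for easy calculation

total_disks = HOSTS * DISKS_PER_HOST            # spinning disks to buy
raw_capacity_tb = total_disks * DISK_GB / 1000  # raw spinning capacity
flash_gb = HOSTS * SSD_GB                       # total SSD cache tier
license_cost = VMS * LICENSE_PER_VM             # license for 100 VMs

print(f"{total_disks} x {DISK_GB}GB SAS ({raw_capacity_tb:.1f}TB raw), "
      f"{flash_gb}GB flash, ${license_cost} license")
```

Run the real calculators for an actual design; this just shows where the numbers come from.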
The same goes for PernixData FVP as for VMware VSAN: many blogs have been written about it, so read those to learn how it works.
PernixData FVP needs two hosts (as mentioned by Sean), but we'll go for three hosts to have an N+1 solution; that's what we need, and it aligns nicely with VMware VSAN in the comparison.
PernixData FVP works with a local SSD only; it uses the current SAN for capacity, which is a huge difference from VMware VSAN.
>> Addition: with the 2.0 version you can also cache non-persistent VDI reads and writes in RAM, which makes it even faster.
There is no calculator available to size the local SSD, so we have to follow a rule of thumb for normal non-persistent linked clones: about 2GB of disk space needed per VM. The best practice, as I learned from Frank and Maikel, would be between 5% and 30%; it depends, so take a guess and measure.
So let's assume we have 50 VMs per host (yeah, I know we have three hosts, but I tend not to count the +1 in the picture; you need to be able to run without it). With 50 VMs per host and 2GB each, that would be 100GB, but with PernixData FVP we have to make sure we have enough room for the VMs running on the other hosts.
So if we calculate 2-4GB of disk space per VM for all 100 VMs, we're pretty safe; that would mean a 400GB SSD in each host.
So what we need in hardware is:
- 3 x 400GB SSDs.
Then, if we look at the licensing, we notice that PernixData has an Essentials Plus license that can be used with a maximum of 100 VMs. Ideal for this scenario; this license costs $9,900.
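As a sanity check on the FVP sizing, a similar sketch (the 2-4GB-per-VM figure is the rule of thumb quoted above; sizing each host's SSD for all 100 VMs is my reading of the failover reasoning, so it's an assumption):

```python
# FVP cache sizing per the rule of thumb above: 2-4GB of flash per
# non-persistent VM. Each host's SSD is sized for all 100 VMs so that
# cached desktops can land anywhere after a host failure (assumption).
HOSTS = 3
VMS = 100
CACHE_GB_PER_VM = 4                    # upper end of the 2-4GB rule of thumb

ssd_gb_per_host = VMS * CACHE_GB_PER_VM    # SSD needed in each host
license_cost = 9900                        # Essentials Plus, up to 100 VMs

print(f"{HOSTS} x {ssd_gb_per_host}GB SSD, ${license_cost} license")
```

With FVP 2.0 you could swap the SSDs for extra server RAM; the per-VM sizing logic stays the same.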
Putting it together
So let's put it all together:
VMware VSAN:
- 100 x $50 VSAN license (per concurrent VM)
- 21 x 900GB SAS disks (7 per host)
- 3 x 200GB SSDs
That comes to around $15K.
PernixData FVP:
- 1 x $9,900 Essentials Plus license for 100 VMs
- 3 x 400GB SSDs, or more server RAM
That comes to around $10K with the SSD option.
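For the totals, here's a rough comparison with assumed street prices. The per-disk dollar figures below are my guesses from a quick Google, not quotes, so swap in real prices before deciding:

```python
# Assumed unit prices (guesses, not quotes): adjust to your reseller.
SAS_900GB = 450    # $ per 900GB SAS spinning disk
SSD_200GB = 250    # $ per 200GB SSD
SSD_400GB = 450    # $ per 400GB SSD

# VSAN: 7 spinning disks per host across 3 hosts, 3 cache SSDs,
# plus the $50-per-concurrent-VM license for 100 VMs.
vsan_total = 7 * 3 * SAS_900GB + 3 * SSD_200GB + 100 * 50
# FVP: 3 cache SSDs plus the Essentials Plus license.
fvp_total = 3 * SSD_400GB + 9900

print(f"VSAN ~${vsan_total}, FVP ~${fvp_total}, "
      f"difference ~${vsan_total - fvp_total}")
```

With these assumed prices the gap lands in the $3K-$4K range mentioned below; the conclusion holds as long as spinning disks aren't free.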
I did a bit of googling to find prices, so that's list price plus Google margin, but I think VMware VSAN is about $3K to $4K more expensive. For this customer, who wants to keep his SAN, I think PernixData FVP would suit fine and VSAN would be the wrong choice, but bring your thoughts along.
Pros and cons
I learned that a VSAN might crash under heavy I/O load. Furthermore, VSAN needs a lot of space in the server; 7 spinning disks and an SSD won't fit in most servers I see.
With version 2 of PernixData FVP and the ability to work in RAM, the game has changed. When no local storage is needed, the performance is astounding, as with Citrix PVS and its cache-to-RAM option. I think I was one of the first to adopt that Citrix option in production (the number of people warning me when I did makes me think so, at least).
PernixData FVP has fewer requirements for the number of hosts; a VSAN needs at least three hosts and preferably more.
This is what I made of it; perhaps I'm miles off track… let me know and I'll learn.