Artur Bergman, founder of a CDN exclusively powered by super fast SSDs, has made many compelling cases over the years for using them. He was definitely ahead of the curve here, but he’s right: nowadays they’re denser, 100x faster, and competitively priced against hard disks in most server configurations.
At Etsy, we’ve been trying to get on this bandwagon for the last 5 years too. SSDs have become much better value for money in the last year, so we’ve gone from “dipping our toes in the water” to “ORDER EVERYTHING WITH SSDs!” pretty rapidly.
This isn’t a post about how great SSDs are though: seriously, they’re amazing. The new Dell R630 allows for 24x 960GB 1.8″ SSDs in a 1U chassis. That’s 19TB of usable, ludicrously fast, sub-millisecond-latency storage after RAID6 that will blow away anything you can get on spinning rust, uses less power, and is actually reasonably priced per GB.
So if this post isn’t “GO BUY ALL THE SSDs NOW”, what is it? Well, it’s a cautionary tale that it’s not all unicorns and IOPS.
The problem(s) with SSDs
When SSDs first started to come out, people were concerned that these drives could “only” handle a certain number of operations, or a certain amount of data written, during their lifetime, and that they’d be changing SSDs far more frequently than conventional spinning rust. That’s totally not the case, and we haven’t experienced it at all. We have thousands of SSDs, and we’ve lost maybe one or two to old age, and even that probably wasn’t wear related.
Spoiler alert: SSD firmware is buggy
When was the last time your hard disk failed because the firmware did something whacky? Well, Seagate had a pretty famous case back in 2009 where the drives might never power on again once you powered them off. Whoops.
But most of the time, the issue is the physical hardware: the infamous “spinning rust” inside the drive.
So, SSDs solve this forever, right? No moving parts. A measured mean time to failure of hundreds of years before the memory wears out? Perfect!
Here’s the rundown of the firmware issues we’ve had over 5 or so years:
Intel
Okay, bad start: we’ve actually had no issues with Intel. This seems to be common among the other companies we’ve spoken to. We started putting single 160GB drives in our web servers about 4 years ago, because they gave us low power, fast, reliable storage, and the space requirements for web servers and utility boxes were low anyway. No more waiting for the metal to seize up! We have SSDs that have long outlived the servers.
OCZ
Outside of the 160GB Intel drives, our search (Solr) stack was the first to benefit from denser, fast storage. Search indexes were getting big; too big for memory. In addition, getting them off disk and serving search results to users was limited by the random disk latency.
Rather than many expensive, relatively fast but low capacity spinning rust drives in a RAID array, we opted for OCZ Talos 960GB disks. These weren’t too bad; we had a spate of initial failures in what seemed like a bad batch, but we were able to learn from this and make the app more resilient to failures.
However, they exposed poor SMART info (effectively none), so predicting failures was hard.
Unfortunately, the company later went bankrupt, and Toshiba rescued them from the dead. They were unavailable for long enough that we simply ditched them and moved on.
HP SSDs
We briefly tried running third party SSDs on our older (HP) Graphite boxes… This was a quick, fairly cheap win as it got us a tonne of performance for relatively little money (back then we needed much less Graphite storage). This worked fine until the drives started to fail.
Unfortunately, HP have proprietary RAID controllers that don’t support SMART. Or rather, they refuse to talk to non-HP drives using off-the-shelf technology; they have their own methods.
Slot an unsupported disk or SSD into the controller, and you have no idea how that drive is performing or failing. We learnt this the hard way when, after running on these boxes for a while, performance randomly tanked. The SSDs underlying the RAID array seemed to be dying and slowing down, and we had no way of knowing which one (or ones), or how to fix it. Presumably the drives were not being issued TRIM commands either.
When we had to purchase a new box for our primary database, this left us with no choice: we had to pay HP for SSDs. 960GB SSDs direct from HP, properly supported, cost us around $7000 each. Yes, each. We had to buy 4 of them to get the storage we needed.
On the upside, they do have fancy detailed stats (like wear levelling) exposed via the controller and iLO, and none have failed yet almost 3 years on (in fact, they’re all showing 99% health). You get what you pay for, luckily.
Samsung
Samsung saved the day and picked up from OCZ with a ludicrously cheap 960GB offering, the 840 EVO. A consumer drive, so very limited warranty, but for the price (~$400-500) you got great IOPS and they were reliable. They had better SMART info, and seemed to play nicely with our hardware.
We have a lot of these drives:
[~/chef-repo (master)] $ knife search node block_device_sda_model:'Samsung' -a block_device.sda.model
117 items found
That’s 117 hosts with those drives; most of them have 6 each, and that doesn’t include hosts that have them behind RAID controllers (for example, our Graphite boxes). In particular, they’ve been awesome for our ELK logging cluster.
Then BB6Q happened…
I hinted that we used these for Graphite. They worked great! Who wouldn’t want thousands and thousands of IOPS for relatively little money? Buying SSDs from OEMs is still expensive, and they give you those darn fancy “enterprise” level drives. Pfft. Redundancy at the app level, right?
We had started buying Dell, who use rebranded LSI RAID controllers, so the controllers happily talked to the drives, including providing full SMART info. We had 16 of those Samsung drives behind the Dell controller, giving us 7.3TB of super fast storage.
Given the already proven pattern, we ordered the same spec box for a Ganglia hardware refresh. And it didn’t work. The RAID controller hung on startup trying to initialise the drives for so long that the boot ROM never loaded, making it impossible to boot from an array created using them.
What had changed?! A quick
"MegaCli -AdpAllInfo -a0"
on each of the two boxes, diffed, revealed the answer: the firmware on the drives had changed. (Shout out to those of you who know the MegaCli parameters by heart by now…)
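For anyone who hasn’t memorised those flags, here’s a minimal sketch of that comparison (hostnames and file names are placeholders; MegaCli’s -PDList is the listing that includes each physical drive’s firmware revision):

    # Dump adapter and physical drive info from the working box and the broken box
    ssh working-box 'MegaCli -AdpAllInfo -a0; MegaCli -PDList -a0' > working.txt
    ssh broken-box  'MegaCli -AdpAllInfo -a0; MegaCli -PDList -a0' > broken.txt
    # The interesting difference turns out to be the drives' firmware level
    diff working.txt broken.txt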
Weeks of debugging and back and forth with both Dell (who were very nice given these drives were unsupported) and Samsung revealed there were definitely firmware issues with this particular BB6Q release.
It soon came out publicly that not only did this new firmware accidentally break compatibility with Dell RAID controllers, but it also had a crippling performance bug… the drives got slower and slower over time, because Samsung had messed up the block allocation algorithm.
In the end, behind LSI controllers, it was the controller sending particular ATA commands to the drives that would make them hang and not respond, so the RAID controller would have to wait for them to time out.
Samsung put out a firmware updater and “fixer” tool for this, but it needed to move your data around, so it only ran on Windows with NTFS.
With hundreds of these drives in production, working but suffering a crippling performance issue, we had to figure out how to get them flashed. An awesome contractor for Samsung agreed that if we drove over batches of drives (luckily, they are incredibly close to our datacenter) they would flash them and return them the next day.
This story has a relatively happy ending, then: our drives are getting fixed, and we’re still buying Samsung drives; now the 960GB 850 PRO model, as it remains a great value-for-money, high-performance drive.
Talking with other companies, we’re not alone in having Samsung issues like this; even the 840 PRO has some issues that require hard power cycles to fix. But the price is hard to beat, especially now that the 850 range is looking more solid.
LiteOn
LiteOn were famously known for making CD writers back when CD writers were new and exciting.
But they’re also a chosen OEM partner of Dell’s for their official “value” SSDs. Value is a relative term here, but they’re infinitely cheaper than HP’s offerings, enterprise level, fully supported, and for all that, “only” twice the price of Samsung (~$940).
We decided to buy new SSD-based database boxes, because SSDs were too hard to resist for these use cases: crazy performance, and at 1TB capacity, not too much more expensive per GB than spinning rust. We would have had to buy many, many 15,000rpm drives to even get near the performance, and they were expensive at 300GB capacity. We could spend a bit more money, save power and rack space, and get more disk space and IOPS.
For similar reasons to HP, we thought it best to pay the premium for a fully supported solution, especially as Samsung had just caused us all that pain with their firmware.
With that in mind, we ordered some R630s hot off the production line with 960GB LiteOns, tested performance, and it was great: 30,000 random write IOPS across 4 SSDs in RAID6 (5.5TB usable space).
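(For reference, that kind of random write test can be approximated with fio; this is just a sketch, the device path, block size and queue depths are assumptions, and it destroys data on the target.)

    # 4K random writes with direct IO against the RAID6 virtual disk (destructive!)
    fio --name=randwrite --filename=/dev/sdb --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=300 --time_based --group_reporting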
We put them live, and they promptly blew up spectacularly. (Yes, we had a postmortem about this). The RAID controller claimed that two drives had died simultaneously, with another being reset by the adapter. Did we really get two disks to die at once?
This took months of working closely with Dell to figure out. We replaced drives, the backplane, and then the whole box, but the problem persisted. Just a few short hours of intense IO, especially on a box with only 4 SSDs, would cause them to flip out. And in the meantime, having tested so well initially, we’d ordered 50+ of these boxes with varying numbers of SSDs installed.
Eventually it transpired that, like most good problems, a combination of many factors caused these issues. The SSDs were having extended garbage collection periods, exacerbated by a smaller number of SSDs taking higher IO in RAID6. This caused the controller to kick drives out of the array… and unfortunately, because of the wear levelling across the drives, at least two of them were garbage collecting at the same time, destroying the array’s integrity.
The fix was no small deal; Dell and LiteOn together identified and fixed weaknesses in their RAID controller, the backplane and the SSD firmware. It was great to see the companies working together rather than just pointing fingers, and the fixes for all sizes except 960GB were out within a month.
The story here continues for us though; the 960GB drive remains unsolved, as it caused more issues, and we had almost exclusively purchased that size. For systems that weren’t fully loaded, Dell kindly provided us with 800GB replacements and extra drives to make up the space. For the rest, the stress spread across 22 drives means garbage collection isn’t as intense, so they remain in operation until a firmware fix arrives.
Summary
I’m hesitant to recommend any one particular brand, because I’m sure that, as with the hard disk phenomenon (the law whereby each person has a preferred brand they’ve never had issues with, but which has burned everyone else), people’s experiences will have varied.
We should probably collect some real data on this as an industry and share it around; I’ve always been of the mindset that we’re sometimes weirdly secretive about what hardware/software we use, but we should share. So if anyone wants to contribute, let me know.
But: you can probably continue to buy Intel and Samsung, depending on your use case/budget, and as usual, own your own availability and add resiliency to your apps and hardware, because things always fail in ways you can’t imagine.
We disqualify more SSDs at work than we qualify. Similar to BMCs, every vendor has issues, and we’ve seen reliability problems.
However, I will also add: combining RAID controllers and SSDs is a trip to Dante’s 9th circle of hell. Both sets of firmware are buggy, and RAID controllers still haven’t been well designed for SSDs.
Hey Laurie – thanks for the insightful thoughts and sorry you had so many bumps to work through. Have you ever come across our product (DataGravity)? I’d love to hear your thoughts on it… we’ve built a storage device that is a hybrid flash array. It’s not built to be lightning fast storage, as you mentioned before, but to be fast enough for your unstructured data and/or file shares and to give you insights into what is living on the array and within each file, share and/or VM. Always curious to hear what we can be doing better from people who are in the trenches – feel free to tell us it’s terrible if you think it is! Thoughts?
I’ve had issues with Dell RAID controllers (LSI) in a few different boxes with spinny disks – even in a JBOD configuration.
I switched to SuperMicro machines – they’re well engineered and seem to have a lot fewer bullshit problems than the Dell boxes I had, running any type of disk I wanted, including SSDs.
As far as RAID, I’ve mostly done software RAID (Linux mdadm and friends) the last few years – it just seems a lot less prone to catastrophic failure than hardware RAID.
I am also avoiding hardware RAID, the reason being that in case of a hardware failure of the controller I might need an exact replacement of the controller, including the firmware stepping.
Had exactly that issue with an HP controller which failed a replaced hard disk -> then the controller wouldn’t boot -> error message: RAID inaccessible, possible data loss.
A cold replacement controller did not work with the RAID, even with the new hard disk removed.
The problem with SSDs is that they do not have a reasonable maximum time to complete an IOP when they need to garbage collect. And then an SSD-unaware RAID firmware might deactivate the disk. And then the next disk… Off you go…
(In this instance we removed the new drive, used the old controller and then migrated the data -> RAID is no backup!)
Using software RAID you can, in the worst case, switch to another box and import the RAID there.
Now I have my live data with at minimum 2 copies on different sets of hardware, with regular scrubbing to detect data rot -> this still does not qualify as a backup.
But as the master chief experienced: you need to back up data you really do not want to lose.
Controller queue depth is something to factor in as well.
Dell uses an OEM version of LSI, we all know that.
But Dell apparently also changed the default queue depth of the controller at the firmware level.
For instance, the Dell H310 is a low-end controller based on the LSI 2008; the Dell version has a QD of 25 whilst the LSI version has a QD of 600!!!
Dell is not the only one: the HP Smart Array B120i has a QD of 31, as does the Intel C602 AHCI (Patsburg).
Sure, an SSD can empty a queue much faster than an HD, but an HD doesn’t have to worry about garbage collection, TRIM and wear issues…
Same here, we started buying Supermicro for our new rack builds about a year ago. We get a few more DOA nodes, but Supermicro takes care of them pretty quickly. We’re mostly using the “Fat Twin” models, which are much better designed mechanically than the Dell C-Series we used to order.
But to reiterate other people’s comments: stop using hardware RAID. The only thing we still use hardware RAID for is a legacy MySQL installation, and there we use RAID0 on SSDs simply to get a large enough single filesystem. We’re actively working to move things to Cassandra, where we JBOD all the SSDs.
We only bought SSDs from companies that make the NAND themselves (Intel, Crucial/Micron, SanDisk, Samsung) and have had no problems yet. I would only buy from these companies – although even that doesn’t save you, looking at Samsung’s problems with the 840 EVO series.
There is nothing special about an SSD; you could even make one yourself if you have the NAND and the controller.
All the magic is in the controller, and the controller is nothing without decent firmware telling it what to do. Hence the most important thing about an SSD isn’t the NAND manufacturer – most of them do a pretty good job – but the software telling the drive what to do when it dies or degrades.
Pick the company that writes the best software and has the best support. Looking at the list, it is not hard to see Intel being the best. The others don’t even come close.
You are close; the 4 foundries (at scale) of non-volatile memory stack up like this:
Intel-Micron
Samsung
Hynix
Sandisk-Toshiba
It’s a concentrated business, even though it is not fully mature by any means yet. Non-volatile memories are in fact still taking off, with new memory types appearing all the time. Just look at Newegg, how computers are sold there, and what the standard storage choice is for most people. People don’t realize how much data they really produce, and choose the hotter, more failure-prone mechanical drive. My gamer son still rocks hard on his 180GB Intel 530 drive, because he knows how to store his data.
In SSDs, maturity and stability of the design and manufacturing processes, and most of all the Zeus-like firmware gods within the best SSD makers, are what you need to look for. Don’t choose SSDs with Panas-like (flighty) firmware teams. There are companies that love to ship too many drives, all the time another new drive. But is this one really stable? Hmmm, the onus is again on me to figure it out.
This is probably most important. Daily broken (bricked) SSDs (because the firmware had to stop operations) make for a very bad time in IT operations, when they could be automating instead. Choose vendors that scale and have been building SSDs for decades.
Someone here said Dell produces SSDs; no, they rebrand SSDs. Big difference. You want to investigate what brand your OEM is providing you, or ask for them by name. Your storage is your most vital infrastructure asset. It always gets expensive, but not so much on the front-end.
I stopped buying Dell products several years ago; I never had a good experience with anything they make. Servers, desktops and laptops all seem vastly inferior and problem-ridden compared to what Lenovo/IBM and HP offer.
I have a completely different experience: Dell hardware, especially the servers, is some of the best on the market. I started with the Dell L400 laptop, used the XPS 15xx laptop series, then worked with almost every kind of server starting from the 9th generation. Yes, sometimes the software is terrible (like some iDRAC patches). But the quality is much, much better than HP and even IBM back when it was real IBM. I think Fujitsu/Sun can be compared in quality, but not in the same price segments.
>problem ridden compared to what Lenovo/IBM and HP offer.
Lenovo, really? I hated Lenovo before the whole Superfish debacle. Now I wouldn’t touch anything Lenovo if you paid me. I guess this is an example of the “hard disk phenomenon”, because I have always had very good luck with Dell (desktops and laptops, never used a Dell server).
http://www.pcworld.com/article/2887392/lenovo-hit-with-lawsuit-over-superfish-snafu.html
I haven’t been responsible for real iron for a while, but your R630 experience sounds a lot like what I experienced doing NAS and SAN evals in the mid-2000s. Vendors love to talk about stress-testing, exhaustive burn-ins, etc., and yet when we got a product from a new company, or even a new product from an established player, a day or two of basic testing (bonnie, fsx, etc.) would usually find deadlocks, data corruption, kernel panics, etc., because the QA departments apparently stopped testing as soon as they saw the IOPS target needed by the marketing team.
We currently have the exact same issue with 3 Dell R630s with 800GB Liteon SSDs. Any chance you could share the Dell support tickets/case numbers to help me push the Dell support team into replacing this “junk”?
Thanks!
Hey Chris,
For the 800GB AFAIK there is a firmware fix that resolves the issue. It’s just the 960GB that is still broken. Lemme know if that helps!
I was bitten so badly by crappy Dell RAID arrays about 12 years ago that I will NEVER buy anything from them again. We had bought 9 arrays and 7 failed all at once due to a firmware problem. When I called Dell under our 4 hour service agreement they said, “Sorry, can’t get replacements for 72 hours.” I called the IBM rep at 4AM on a Saturday morning and he had replacements to me in under an hour. The price premium for the drives was well worth it. Bye bye Dell.
To the story: we had similar problems to those described here with EDGE SSDs; the guys from EDGE patched the SSDs’ firmware for us. But if you’ve ever looked even deeper into MegaCli, then you will know that the last letter of LSI should be a D. I don’t wonder at all why we got this kind of trouble; I wonder how it still works this way)
Very interesting post from a company that needs to operate at scale.
I think that there may be another important lesson here: first order a sample, stress the hell out of it, maybe even run it in production, before you buy the actual product in volume – or did you do that, and did the issues only arise later on?
Although Dell and others involved seem to be very helpful, I really wonder why they were not able to find the issue you were able to reliably reproduce many times over.
Just stress it to max random IOPs on a RAID6 and let it run for a week? Would that have resulted in the same problems you faced?
If Intel gave you no problems why didn’t you just stick with them? They pioneered the SSD.
We only buy Intel SSDs, and it has all been Unicorns and IOPs. Every single article I read about SSDs ever makes me happy that we only buy Intel.
(except that our Dell machines don’t expose all the SMART data or let you set the on-drive encryption, so we have to use LUKS on top of the RAIDset rather than RAIDing on top of encrypted drives for those boxes)
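(A sketch of that layering, with the device and mapping names as placeholders:)

    # LUKS layered on top of the RAID controller's virtual disk, filesystem inside the mapping
    cryptsetup luksFormat /dev/sdb
    cryptsetup luksOpen /dev/sdb crypt-data
    mkfs.ext4 /dev/mapper/crypt-data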
Even Intel aren’t immune to faults. I just had a DC3500 replaced because it reported a capacity of 8MB and had trashed its internal SMART data table. It also kept dropping out of the array, but that may have been from the SMART query problems.
So… why not stick with Intel or Samsung? Other companies, especially OCZ have been known to have significant issues.
Using Toshiba Q Series Pro drives and Intel chipset RAID has been wonderful. TRIM works, and I’m able to see each individual drive’s SMART data.
At Aerospike – an in-memory NoSQL DB specifically designed for flash – we’ve done a lot of flash testing, and a lot of deployments, on a lot of flash. To get 1,000,000 KV operations per second out of a system like you’re describing – eminently achievable – you use the “RAID card” like a pure HBA and enable the LSI “fast path”.
I agree with another poster – using RAID6 or similar, and a garden-variety file system, is a circle of hell unto itself. Write directly to the device and work out your own defrag system – that’s what we did, and it’s all open source.
Not sure how feasible it’d be (based on what systems you’re running and whatnot), but given a desire to use off-the-shelf drives as opposed to OEM drives, I would’ve been strongly inclined to use ZFS instead of hardware RAID, to reduce the dependency on specific hardware controllers.
I’d be curious to know how much of this article would’ve differed, had you not needed to worry about interactions with hardware RAID arrays.
Ah man… I shoulda read the comments before I posted, heh. But don’t forget ReFS. It’s still new, but it works like a charm on a test 2012 box I play around with, and I’ve used it in Win8 for over a year now with regular HDDs.
Total server noob here, but isn’t it true that the common factor, other than SSDs, is that they were all used in a RAID setup? If true, I was wondering if ZFS or even ReFS is making inroads to replace RAID?
While I’m with you on deploying your own SSDs in production (we do so at Stack Overflow), there are some caveats that are not mentioned here. There is one critical difference between most of the Intels (and other “server” SSDs) and the consumer Samsungs: capacitors. This is hugely important, because having a RAID array does not solve the lack of them. Having multiple servers doesn’t either.
When you issue a write to the RAID controller, it’s acknowledged down the chain: the OS waits for the RAID controller to acknowledge that the write has been persisted, and the RAID controller waits on the drive(s) to do the same before it ACKs the OS. Most SSDs (including the Samsung EVO and Pro series), however, do not persist the data to NAND when this happens. They have only placed the write into their DRAM write cache, where it’s waiting to be persisted to the NAND. So while the OS thinks this data is hardened to persistent storage, it’s still susceptible to power loss.
If you lose power while writing to those Samsung SSDs, you’ll lose all the data in that write buffer the OS thinks has been written. This can and will happen across all the write targets in the RAID. Any RAID mirroring is powerless here, because approximately the same writes are in approximately the same place in the write pipeline when it happens. If you zoom out a bit, this is also true across multiple servers in any kind of quorum write scenario: the same, or close to the same, bits of data are being written everywhere at once, so even multiple machines, all with RAID, end up corrupted in roughly the same way.
This is why capacitors (often referred to as “supercapacitors”) are so important. They provide enough power for the drive to finish writing in the event of sudden power loss. The slightly more expensive Intel drives have this, as do all of the “enterprise” drives. It’s a critical difference from consumer drives that I strongly urge you to update the article to mention.
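(If you’re stuck with drives that lack power-loss protection, one hedged mitigation is to disable the volatile write cache entirely, trading write performance for safety; the device path below is a placeholder, and drives behind a RAID controller need this done through the controller’s tools instead.)

    # Check whether the drive's volatile write cache is currently enabled
    hdparm -W /dev/sdX
    # Disable it, so acknowledged writes aren't sitting only in DRAM when power is lost
    hdparm -W0 /dev/sdX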
Trust me, we lost a Cassandra cluster that was built on RAID 10s of Samsung 840 Pros here when a UPS failure caused a cascading power outage for exactly this reason.
For the curious: yes Samsung does make drives with capacitors as well, for example their 845DC PRO & 845DC EVO line.
Ran into some trouble a while ago with the 845DC EVOs and a Dell R730xd. It seemed as if those capacitors were drawing too much current during power-up. The system would detect a voltage drop and refuse to power on (we could power it up by unplugging drives, the number varying by how long the box had been switched off).
In the end the problem was traced to a handful of SSDs, when used together.
Once again – cheap and fast solution, but a couple of circles down.
Which FS and mount options were you using? If you had ext3 then “nobarrier” is enabled by default (IIRC) and this would prevent the kernel from flushing the SSD write cache on synchronous writes.
With xfs and ext4 “barrier” is enabled by default which will ensure synchronous writes also flush the SSD write cache.
Tip: If you’re using one of the capacitor-backed SSDs mentioned, then you want to specify the “nobarrier” option as the drive will always be able to flush the write cache in the event of a power failure.
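A sketch of what those options look like in fstab (device, mount point and filesystem are placeholders):

    # Default for ext4/xfs: barriers on, so synchronous writes also flush the drive's cache
    /dev/md0   /srv/data   xfs   defaults,barrier    0  2
    # Capacitor-backed (power-loss protected) drives only: skip the cache flushes
    # /dev/md0   /srv/data   xfs   defaults,nobarrier  0  2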
If you’re running a Linux OS then software RAID (mdadm) will offer better performance for small block random IO. (I just tested this in the R730xd with the H330 and H730p).
Also, using non-RAID mode on the drives and mdadm will let TRIM pass through to the devices whereas it’s unsupported when you’re doing RAID on the controller.
That can end up having a large impact on your performance as time progresses.
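A minimal sketch of that setup (device names, RAID level and mount point are placeholders; TRIM passthrough for md needs a reasonably recent kernel):

    # Software RAID10 across four SSDs exposed as plain (non-RAID) devices
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.xfs /dev/md0
    mount /dev/md0 /srv/data
    # Periodic TRIM from cron rather than the 'discard' mount option
    fstrim -v /srv/data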
Great write up Laurie. I am sure that this will serve as a guide for SSD implementation.
I use an LSI controller with 4 Samsung 850 Pros, and I’ve never had any issues with any of them. However, I have run into more than a few issues with my WD Black HDDs and an Intel controller. Very strange issues, in fact. When I tried to switch from RAID5 to RAID1, I had all sorts of strange issues with the controller reserving space and not giving it back, not properly recognizing the non-RAID drives, and recognizing one drive as two identical drives in Windows…
So, this quote: “Measured mean time to failure of hundreds of years before the memory wears out”. Are you talking about MTBF? I think you are confused about what MTBF means[0].
MTBF is a measure of reliability for a population of devices. So if you have a rack of 40 1U servers with 24x SSDs each, that’s 960 devices. If you have a 1.2 million hour MTBF device, divide by 960 and you get a MTBF of 1250 hours or 52 days for that rack. If you had 10 racks like this, you’re down to 125 hours between failures.
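In other words, roughly:

    fleet MTBF ≈ device MTBF / number of devices
               = 1,200,000 hours / (40 × 24)
               = 1,250 hours ≈ 52 days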
[0]: http://en.wikipedia.org/wiki/Mean_time_between_failures
Intel’s 3500/3600/3700/750 series are a huge leap when it comes to performance consistency. That should help.
I’ve been running the P3700 NVMe drives for PernixData FVP for some months now, and they’re insane. I also run S3500s on HP RAID controllers without any issues.
But I’ve seen some strange issues with older HP controllers (G5 era) and SSDs.
The best controllers I’ve used so far are the Areca 1880/1882/1883 series with SSDs. They seem very reliable and have state-of-the-art performance.
I wonder what kind of RAID controller was used for the SSDs? I’ve read a lot of incompatibility reports about the H710p / H730p and specifically the Samsung 840 EVO. Now with the 2TB EVO out, it becomes very tempting to try building in an R730xd SFF case to get 48-52TB pre-RAID per 2U.
Thanks!
I purchased two Fujitsu servers to use as storage and a virtual host. We recently purchased some Micron SSDs to increase IOPS, but they failed spectacularly. All four SSDs failed at the same time after running for two months. The first round I thought it might just be a manufacturing defect, but after the second failure I figured it had to be an SSD/RAID controller issue. I’m thinking about replacing the LSI RAID controller with a non-RAID controller and doing software RAID. My main concern is getting a controller that will interface with the hot-swap bay in the hardware. The enterprise-grade SSDs from Fujitsu were $5k each, which is why we were looking at 3rd party SSDs.
Bill, I wonder how many bytes written your failed Micron drives endured? What did the smartctl output look like as they degraded?
Flash your LSI controller into IT/HBA-only mode (or get another and flash it to IT mode), then get your softraid going. It’s the “standard” way of doing ZFS RAID, by and large.
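(A sketch of what follows once the controller is in IT mode and the disks show up as plain devices; the pool name, layout and device names are placeholders.)

    # Double-parity pool across six SSDs, roughly the software equivalent of RAID6
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    zpool status tank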
We had the exact same issue with the crap LiteON SSDs in Dell R730xd servers. 160 drives. After going back and forth with Dell for months, waiting for firmware upgrades, they finally agreed to replace all 160 drives with Intels. No problems since.
The key to using 840/850 Pros with LSI MegaRAID controllers is to over-provision them by 30% and reduce the rate for the consistency check and the other scan method to 1%. If not, buy Intel enterprise SSDs, which come with similar over-provisioning from the factory!
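(One hedged way to get that ~30% over-provisioning on a fresh or secure-erased drive is to only ever partition part of it and leave the rest untouched for the controller; the device and percentage below are placeholders.)

    # Leave ~30% of the drive unpartitioned as extra spare area for garbage collection
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart primary 0% 70%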
Also, RAID-10 doesn’t perform better than RAID-1; span the volumes in ESXi instead to get better random I/O performance, or spread your database files across RAID-1 volumes to self-load-balance!
Works solidly in HP G6/G7 boxes with LSI MegaRAID 9260/9271 controllers!