Category Archives: SSD

Solid State Storage: Enterprise State Of Affairs

Here In A Flash!

It's been a crazy few years in the flash storage space. Things really started taking off around 2006 when NAND flash and Moore's Law got together. By 2010 it was clear that flash storage was going to be a major part of your storage makeup in the future. It may not be NAND flash specifically, but it will be some kind of solid state memory and not spinning disks.

Breaking The Cost Barrier.

For the last few years, I've told people to price out the cost of IO, not the cost of storage. Flash storage was mainly a niche product solving niche problems, like speeding up random IO heavy tasks. Now that the cost of flash storage is at or below standard disk based SAN storage, with all the same connectivity and software features, I think it's time to put flash storage on the same playing field as our old stalwart SAN solutions.

Right now, at the end of 2012, you can get a large amount of flash storage, yet there is still a perception that it is too expensive and too risky to build out all flash storage arrays. I am here to show that cost, at least, isn't as limiting a factor as you may believe. Traditional SAN storage can run you from 5 dollars to 30 dollars a Gigabyte for spinning disks. You can easily get into an all flash array in that same range.

Here’s Looking At You Flash.

This is a short list of flash vendors currently on the market. I've thrown in a couple of non-SAN types and a couple of traditional SANs that have integrated flash storage. Please don't email me complaining that vendor X didn't make this list or that vendor Y has different pricing. All the pricing numbers were gathered from published sources on the internet: the vendors' own websites, published costs from TPC executive summaries, and official third party price listings. If you are a vendor and don't like the prices listed here, then publicly publish your price list.

There are two cost metrics I always look at: dollars per Gigabyte of raw capacity and dollars per Gigabyte of usable capacity. The first number is pretty straightforward. The second metric can get tricky in a hurry. On a disk based SAN it pretty much comes down to what RAID or protection scheme you use. Flash storage almost always introduces deduplication and compression, which can muddy the waters a bit.
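To make that concrete, here is a minimal sketch of how I run both numbers. The RAID overhead and the data reduction ratio are knobs you have to supply yourself; the figures below are assumed examples, not any vendor's published math.

```python
# Hedged sketch: dollars per Gigabyte, raw vs. usable.
# raid_overhead and reduction_ratio are assumptions you plug in.

def cost_per_gb(price_dollars, raw_gb, raid_overhead=0.5, reduction_ratio=1.0):
    """raid_overhead: fraction of raw capacity lost to protection (0.5 = RAID 10).
    reduction_ratio: claimed dedupe/compression multiplier (1.0 = none)."""
    raw_cost = price_dollars / raw_gb
    usable_gb = raw_gb * (1 - raid_overhead) * reduction_ratio
    return raw_cost, price_dollars / usable_gb

# Hypothetical 2,500 GB array at 25,000 dollars, RAID 10, no data reduction.
raw_cost, usable_cost = cost_per_gb(25_000, 2_500)
print(f"{raw_cost:.2f}/GB raw, {usable_cost:.2f}/GB usable")  # 10.00/GB raw, 20.00/GB usable
```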

Fibre Channel/iSCSI vendor list

Nimbus Data

Appearing on the scene in 2006, they have two products currently on the market: the S-Class storage array and the E-Class storage array.

The S-Class seems to be their lower end entry but does come with an impressive software suite, and it provides 10GbE and Fibre Channel connectivity. Looking around at the cost for the S-Class, I found a 2.5TB model for 25,000 dollars. That comes out to 9.7 dollars per Gigabyte of raw space. The E-Class is their super scalable and totally redundant unit. I found a couple of quotes that put it in at 10.00 dollars a Gigabyte of raw storage. Already we have a contender!

Pure Storage

In 2009 Pure Storage started selling their flash only storage solutions. They include deduplication and compression in all their arrays and factor that into the cost per Gigabyte. I personally find this a bit fishy since I always like to test with incompressible data as a worst case for any array, which would also drive up their cost. They claim between 5.00 and 10.00 dollars per usable Gigabyte, and I haven't found any solid source for public pricing on their arrays yet to dispute or confirm this number. They also have a generic "compare us" page on their website that is at best misleading and at worst plain lies. Since they don't call out any specific vendor on their comparison page it's hard to pin them for falsehoods, but you can read between the lines.

Violin Memory

Violin Memory started in earnest around 2005 selling not just flash based but memory based arrays, and very quickly transitioned to all flash arrays. They have two solutions on the market today. The 3000 series allows some basic SAN style setups but also supports direct attachment via external PCIe channels; it comes in at 10.50 dollars a Gigabyte raw and 12.00 dollars a Gigabyte usable. The 6000 series is their flagship product and the pricing reflects it: at 18.00 dollars per Gigabyte raw it is getting up there on the price scale. Again, not the cheapest, but they are well established and their arrays are resold by HP.

Texas Memory Systems/IBM

If you haven't heard, TMS was recently purchased by IBM. They are based in Houston, TX, so I've always had a soft spot for them, and they were the first non-disk based storage solution I ever used. The first time I put a RamSan in and got 200,000 IOs out of the little box I was sold. Of course it was only 64 Gigabytes of space and cost a small fortune. Today they have a solid flash based Fibre Channel and iSCSI attached lineup. I couldn't find any pricing on the current flagship RamSan 820, but the 620 has been used in TPC benchmarks and is still in circulation. It is a heavyweight at 33.30 dollars a Gigabyte of raw storage.

Skyera

A new entrant into this space, they are boasting some serious cost savings, claiming 3.00 dollars per usable Gigabyte on their currently shipping product. The unit also includes options for deduplication and compression which can drive the cost down even further. It is a half depth 1U solution with a built-in 10GbE switch. They are working on a fault tolerant unit due out in the second half of next year that will up the price a bit but add Fibre Channel connectivity. They have a solid pedigree, being made up of the people who brought the SandForce controllers to market. They aren't a proven company yet, and I haven't seen a unit or been granted access to one either. Still, I'd keep an eye on them. At those price points, and with that crazy small footprint, it may be worth taking a risk on them.

IBM

I'm putting the DS3524 in as a separate entry to give you some contrast. This is a traditional SAN frame that has been populated with all SSD drives. With 112 200GB drives and a total cost of 702,908.00 dollars, it comes in at 31.00 dollars a Gigabyte of raw storage. On the higher end, but still in the price range I generally look to stay in.

SUN/Oracle

I couldn't resist putting a Sun F5100 in the mix. At 3,099,000.00 dollars it is the most expensive array I found listed. It has 38.4 Terabytes of raw capacity, giving us an 80.00 dollar per Gigabyte price tag. Yikes!

Dell EqualLogic

Dell gobbled up EqualLogic, a SAN manufacturer focused on iSCSI solutions, a couple of years before the 3Par deal fell apart. This isn't a flash array; I wanted to add it as contrast to the rest of the list. I found a 5.4 Terabyte array with a 7.00 dollar per Gigabyte raw storage price tag. Not horrible, but still more expensive than some of our all flash solutions.

Fusion-io

What list would be complete without the current king of the PCIe flash hill, Fusion-io? I found a retail price listing for their 640 Gigabyte Duo card at 19,000 dollars, giving us 29.00 dollars per usable Gigabyte. The next card down, the 320 Gigabyte Duo at 7,495.00 dollars, ups the price to 32.20 dollars per usable Gigabyte. They are wicked fast though :)

So Now What?

Armed with a bit of knowledge you can go forth and convince your boss and storage team that a SAN array fully based on flash is totally doable from a cost perspective. It may mean taking a bit of a risk but the rewards can be huge.

 

Speaking at PASS Summit 2012

It’s Not A Repeat

Speaking at the PASS Summit last year was one of the highlights of my career. I had a single regular session initially and picked up an additional session due to a drop in the schedule. Both talks were fun and I got some solid feedback.

The Boy Did Good

I won't say great; there were some awesome sessions last year. But I did do well enough to get an invite to submit for the "invite only" sessions. I was stunned. I don't have any material put together for a half day or full day session yet, and the window to submit sessions was a lot smaller this year. I do have three new sessions, though, and all of them could easily be extended from 75 minutes to 90 minutes. So, I submitted for both regular sessions and spotlight sessions and got one of each! WOO HOO!

The Lineup

I’ll be covering two topics near and dear to my heart.

How I Learned to Stop Worrying and Love My SAN [DBA-213-S]
Session Category: Spotlight Session (90 minutes)
Session Track: Enterprise Database Administration & Deployment

SANs and NASs have their challenges, but they also open up a whole new set of tools for disaster recovery and high availability. In this session, we’ll cover several different technologies that can make up a Storage Area Network. From Fibre Channel to iSCSI, there are similar technologies that every vendor implements. We’ll talk about the basics that apply to most SANs and strategies for setting up your storage. We’ll also cover SAN pitfalls as well as SQL Server-specific configuration optimizations that you can discuss with your storage teams. Don’t miss your chance to ask specific questions about your SAN problems.

I’ve built a career working with SAN and System Administrators. The goal of this session is to get you and your SAN Administrator speaking the same language, and to give you tools that BOTH of you can use to measure the health and performance of your IO system.

 

Integrating Solid State Storage with SQL Server [DBA-209]
Session Category: Regular Session (75 minutes)
Session Track: Enterprise Database Administration & Deployment

As solid state becomes more mainstream, there is a huge potential for performance gains in your environment. In this session, we will cover the basics of solid state storage, then look at specific designs and implementations of solid state storage from various vendors. Finally, we will look at different strategies for integrating solid state drives (SSDs) in your environment, both in new deployments and upgrades of existing systems. We will even talk about when you might want to skip SSDs and stay with traditional disk drives.

I've spoken quite a bit on solid state storage fundamentals; this time around I'll be tackling how people like myself, and vendors, are starting to mix SSDs into the storage environment: where it makes sense and where it can be a huge and costly mistake.

Finally

I hope to see you at the Summit again this year! Always feel free to come say hi and chat a bit. Networking is as important as the sessions and you will build friendships that last a lifetime.

Building A New Storage Test Server

We’re Gonna Need A Bigger Boat

Not to sound too obvious, but I test IO systems. That means from time to time I have to refresh my environment if I want to test current hardware. Like you, I work for a living and can't afford something like a Dell R910. Heck, I can't afford to shell out for the stuff that Glenn Berry gets to play with these days. Yes, I work for the mighty Dell. No, they don't give me loads of free hardware to just play with. That doesn't mean I, or you, can't have a solid test system that is expandable and a good platform for testing SQL Server.

The hardware choices, inexpensive doesn’t mean cheap

Well, most of the time. Realize I'm not building what I would consider a truly production ready server. Things like ECC memory and redundant power supplies are a must if you are building a "fire and forget" server to rack up. A good test server, on the other hand, doesn't have the same uptime requirements.

Case

A couple of years ago I would have bought something like an Aerocool Masstige. It will take a full size motherboard and has ten 5.25″ bays. That lets me install three 3×5 mobile racks that each turn three 5.25″ bays into five 3.5″ bays, so I can put in 15 hard drives and still have one bay left over for something like a CD-ROM drive or another hard drive. The Aerocool Masstige has two internal hard drive bays as well, making for a total of 18 3.5″ drives in one case. The cost does add up though. The case has been discontinued but can still be found for around 110.00. The three drive cages will run you another 100.00 apiece. Oh, and you need a power supply; that's another 100.00. That brings the cost up to 510.00. Considering that a 3U Supermicro case with 15 bays will easily run you 700.00, that's not horrible for the number of drive bays, but there are better options now.

Norco RPC-4224 4U Server Case
This thing is big, I mean really big. It is deep and tall. It was designed to be a rack mount server but sits just fine on a shelf if you have clearance in the back. I was looking at another version of this same case that houses 20 drives, but the price difference made this one hard to pass up. This case isn't a Supermicro case and doesn't have that build quality; to be honest, I'm fine with that. What it does have is the ability to take a large range of ATX motherboards and a standard ATX power supply. Right now Newegg has this case for 400.00. With a power supply that brings the total up to 500.00, still cheaper than the Supermicro, with a ton of drive bays to boot.

If you have worked with servers and had to cable them up, you may notice that the RPC-4224 has a very different backplane layout. Every group of four drives has its own backplane and four lane SFF-8087 connector. Usually, backplanes have a single connector, or maybe two, for 8 lanes shared via an on-board SAS expander. Since this case doesn't have that feature it is actually easier to build for maximum speed: I can either buy a very large RAID controller with 24 SAS ports or buy my own SAS expanders. The only downside to the backplanes on this server is that they are SAS 3Gb/s and not the newer 6Gb/s. For spinning drives that isn't a big issue, but if you are planning on stacking some SSDs in those bays it can hurt you if the SSDs support the newer protocol.
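To put that backplane layout in perspective, here is a rough back-of-the-envelope sketch. The roughly 300 MB/sec of usable bandwidth per 3Gb/s lane is my own working number after encoding overhead, not a Norco spec.

```python
# Rough bandwidth math for the RPC-4224 backplane layout, assuming
# ~300 MB/s of usable bandwidth per SAS 3Gb/s lane after 8b/10b encoding.
LANES_PER_BACKPLANE = 4      # one SFF-8087 connector per backplane
DRIVES_PER_BACKPLANE = 4
BACKPLANES = 6               # 24 bays total
MB_PER_LANE = 300            # SAS 1.0 (3Gb/s), approximate usable rate

per_drive = LANES_PER_BACKPLANE * MB_PER_LANE / DRIVES_PER_BACKPLANE
total = BACKPLANES * LANES_PER_BACKPLANE * MB_PER_LANE

print(f"{per_drive:.0f} MB/s per drive slot")    # plenty for a spindle,
print(f"{total / 1000:.1f} GB/s across 24 bays") # a ceiling for 6Gb/s SSDs
```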

The one warning I'll make is that this thing is very front heavy. Oddly enough, having 24 drives stuffed in the front doesn't make for good weight distribution. Pro tip: don't put the hard drives in until the server is where you want it. It is a lot easier to move the case when it isn't as heavy as two car batteries.

CPU

Just like Glenn, I think the Core i7 2600k is a very good choice for this build. At 314.00 you are only paying a slight premium over the 2600 for a lot more flexibility, *cough*overclocking*cough*.

Motherboard

I thought long and hard on this one and settled on a GIGABYTE GA-Z68A-D3H-B3. This is a very reasonably priced motherboard at 129.00 with some nice features. First, it is based on the Intel Z68 chipset, which means I have video built into the system and don't have to give up a PCIe slot for it. Secondly, it has USB 3.0, which makes it easy to hook up an external USB 3.0 drive and get some livable speeds. Thirdly, it has native SATA III 6Gb/s ports; only two of the six ports run at that speed, but it does give me a few more drive options outside an add-on RAID controller. Lastly, the on-board PCIe slots are upgradeable to the new PCIe 3.0 standard, which means I don't have to change my motherboard to get a nice little bump in speed from newer PCIe RAID controllers or solid state cards.

Memory

Another perk of the Z68 chipset is that it will support up to 32GB of DDR3 RAM, when it becomes available that is. In the short to mid term I’ve got 16GB of Kingston HyperX 1600 DDR3 installed. That’s 115.00 in memory. I could have shaved a few dollars off but buying this as a four piece kit saves me from having to play the mix and match game with memory and hoping that it all works out.

IO System

This is where things get a little complicated. Since I need a lot of flexibility I need to have some additional hardware.

RAID Controller

I have an LSI MegaRAID 9260 6Gb/s card in the server now. At 530.00 it is a lot of card for the money. If you wanted to skip the SAS expanders and get a 24 port card you would be looking between 1100.00 to 1500.00. What’s worse, you really won’t see a huge jump in performance. Hard disks are a real limiting factor here.

SAS Expanders

SAS expanders are a must. There will be times when I power all 24 drives from a single RAID card that has 24 lanes. There will also be times when I have smaller controllers installed and need to aggregate those drives across one or two connectors on a RAID controller. There are a couple of choices available to you. I opted for the Intel RES2SV240 expander over the HP 468406-B21. The Intel expander supports the SAS 6Gb/s protocol and has one additional killer feature: it doesn't require a PCIe slot to run. It was designed to work in cases that support the MD2 form factor, which means it can be mounted on a chassis wall and fed with a standard Molex power connector. Why is that such a big deal? It means I can stack these in my case and keep my very valuable PCIe slots free for RAID controllers and SSD cards. Newegg has them at 279.00 but you can find them cheaper. The HP expander is listed at 379.00 and requires a PCIe slot for power.

Hard Drives

I opted for smaller 73GB 15,000 RPM Fujitsu drives. They aren't the fastest drives out there since they are a generation behind, but what they lack in speed they make up for in price. Normally, these drives cost 150.00 a pop new. But I'm a risk taker: you can find refurbished drives or pulls for as little as 22 bucks each. Make sure you are dealing with a seller that will take returns! I personally have had pretty good luck dealing with wholesale companies that specialize in buying older servers and reselling the parts. Almost all of them will offer at least a 30 day return. That means you need to do a little more work on your end and validate the drives during your return window. Now I have 24 15K drives for under 600.00 bucks.
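If you have never burned in a batch of used drives, this is roughly the kind of validation I mean: a full read pass over each raw device, counting unreadable regions, before the return window closes. The sketch below is only illustrative; the device path is an example, you need administrator rights to open a raw disk, and you would want a write/verify pass as well before trusting a drive with real data.

```python
# Illustrative burn-in sketch: sequentially read an entire raw device and
# count regions that fail to read. The device path is an example only.
import os

def read_scan(device="/dev/sdb", chunk_mb=8):
    chunk = chunk_mb * 1024 * 1024
    errors = 0
    total = 0
    with open(device, "rb", buffering=0) as disk:
        while True:
            try:
                data = disk.read(chunk)
            except OSError:
                errors += 1
                disk.seek(chunk, os.SEEK_CUR)  # skip past the bad region
                continue
            if not data:
                break
            total += len(data)
    print(f"read {total / 1e9:.1f} GB with {errors} failed chunks")
    return errors

if __name__ == "__main__":
    read_scan()
```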

I’m using a 2.5″ 7200RPM drive as my boot drive mounted inside the case.

SSD’s

You didn't think I'd put together a new system and not have some solid state in it, did you? I've got a few SSDs floating around but wanted to buy the latest in consumer grade drives and see if they have upped the game any. I opted for the Corsair Force GT 60GB drive, four of them. At 125.00 they are a solid buy for the performance you are getting. Based on the new SandForce SF-2200 series controller and able to deliver 85K IOPS and 500MB/sec in reads and writes, they are a mighty contender. The other thing that pushed me to this drive was the fact it uses ONFI synchronous flash. I won't hash out why it is better other than to say it produces more reliable results and is faster than its asynchronous or Toggle NAND brothers.

Again, the case is so big on the inside I mounted two 1×2 3.5″ to 2.5″ drive bays to house them. That was an extra 50.00 a pop.

Let's Recap

Case 400.00
Powersupply 100.00
Motherboard 130.00
CPU 314.00
Memory 115.00
RAID HBA 530.00
SAS Expanders 558.00
24 15K drives 558.00
4 SSD’s 500.00

Grand total: 3205.00

What does this buy me? A server that can do 2GB/s in reads or writes and 160K IOPS or more. I'll let you in on another little secret: shop around! Don't think you have to buy everything at once, and don't be afraid to wait a week for your parts if you get free shipping. By taking a month to put this machine together I paid about 2700.00, a huge discount over the listed prices, getting 30% or more off some items like the expanders, RAID controller, SSDs, case, and CPU.
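For what it's worth, here's how those headline numbers roughly pencil out. The per-device figures are my own assumptions for illustration, not vendor specs.

```python
# Back-of-the-envelope check on the build's headline numbers, using assumed
# per-device figures: ~85 MB/s sequential per 15K spindle and ~40K sustained
# random IOPS per consumer SSD.
SPINDLES, MB_PER_SPINDLE = 24, 85
SSDS, IOPS_PER_SSD = 4, 40_000

seq_gb_per_sec = SPINDLES * MB_PER_SPINDLE / 1024
random_iops = SSDS * IOPS_PER_SSD

print(f"~{seq_gb_per_sec:.1f} GB/s sequential from the spindles")  # ~2.0 GB/s
print(f"~{random_iops:,} random IOPS from the SSDs")               # 160,000
```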

Just in case you were wondering what it looks like:

With the bonnet off (early test setup):

The SAS Backplanes cabled up:

Pliant Technology, Enterprise Flash Drives For Your SQL Server: Part 2

Adding In Others For Contrast

In part 1 we introduced Pliant and the LS 300 drive. In part 2 we get down to the details. To give you a better idea of where you stand with the setup described last time, I'm throwing in two other storage setups. The first is a RAID 10 array made up of 12 500GB 7200 RPM drives attached via SATA II controllers; in a RAID 0 configuration I was able to get 800MB/sec of sequential throughput out of it, so it isn't horrible, just not "enterprise" worthy. The second is a Patriot Torqx 128GB based on the Indilinx Barefoot SSD controller, not the greatest SSD on the consumer market, but Indilinx was the king of the previous generation. I will be using the LSI controller just like I did for the Pliant LS 300.

Patriot Torqx Specifications:
Available in 64GB, 128GB and 256GB capacities
Interface: SATA I/II
Raid Support: 0, 1, 0+1
256GB and 128GB: Sequential Read: up to 260MB/s Sequential Write: up to 180MB/s
MTBF: >2,500,000 Hours
Data Retention: 5 years at 25°C
Data Reliability: Built in BCH 8, 12 and 16-bit ECC
10 Year Warranty

RAID support? I'm not sure what they are saying here other than don't put this drive in a RAID 5 or RAID 6 setup at all. Mean time between failures (MTBF) is a pretty useless number; I would rather have seen a maximum write life or writes per day metric. It has ECC error checking, which, since this is an MLC based drive, doesn't surprise me at all. A 10 year warranty, yep, 10 YEARS! That was one of the reasons I bought this drive. And I'm glad I did; it has already been replaced once.

The Setup

Since we are just testing storage systems I'm not as concerned with the host machine. It is more than up to the task of generating IOs. I used Iometer 2008.06.18-RC2 for testing along with my trusty Iometer SQL Server IO Patterns File. After the test runs I used my other tool, the Iometer output parser and importer, to process the results and import them into a SQL Server table. The tests consisted of two different patterns. These two patterns are close to what I've seen in the real world and are loosely based on the Intel database test pattern. I ran these tests at different queue depths with a single worker.
OLTP Heavy Read:
A mix of 8KB and 64KB sized requests with 90% of them being read requests and 10% write requests. This test is 100% random access.

OLTP Moderate Read:
A mix of 8KB and 64KB sized requests with 65% of them being read requests and 35% write requests. This test is 100% random access.
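As a quick reference, here is a minimal sketch of how I reason about these mixed-size patterns when translating IOPS into throughput. The 80/20 split between 8KB and 64KB requests below is only an assumed example; the real weights live in the Iometer patterns file.

```python
# Mixed-size workload math: weighted average request size and the throughput
# a given IOPS number implies. The 80/20 split is an assumed example.
def avg_request_kb(pct_8k=0.8):
    return pct_8k * 8 + (1 - pct_8k) * 64

def mb_per_sec(iops, pct_8k=0.8):
    return iops * avg_request_kb(pct_8k) / 1024

print(f"average request size: {avg_request_kb():.1f} KB")      # 19.2 KB
print(f"20,000 IOPS is roughly {mb_per_sec(20_000):.0f} MB/s") # ~375 MB/s
```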

Lots And Lots of Graphs

This first set is OLTP Heavy Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

Interesting to see the Torqx drive actually performing better than the Pliant drive. Since this is an extremely light load and mostly read only we can assume that the Torqx is tuned more towards that kind of workload. The hard disks put in a respectable showing, for hard disks.

OLTP Heavy Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

As soon as we put any kind of load on them, the Pliant drive just walks away from the other two. The Torqx is still five times faster than the RAID 10 setup.

OLTP Heavy Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Again, as the workload ramps up the Pliant really just ends up in a category all its own. We are still in a decent zone for the RAID setup but the single Torqx drive still is four to five times faster.

OLTP Heavy Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now we are pushing past the bounds of the SATA based Torqx and the SATA based RAID setup. The Pliant drive just keeps getting faster jumping from 13,000 IO/sec to 22,000 IO/sec. Response times are still very impressive as well.

OLTP Heavy Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

This is what we would call a "worst case scenario" for the RAID setup. With only 12 drives we are at a queue length of over 10 for each drive, and the response times show it, with the average being 110ms. Even the Torqx drive can't shed the IO load at this point, while the Pliant drive pushes past 26,000 IO/sec and inches up on 500MB/sec as well. That last statement is accurate: since this is a dual-port drive, even though it is a SAS 300 drive it is able to use both ports for reads and writes. I did run the test up to 256 outstanding IOs, but the Pliant drive was capped out and starting to add to the response time, and the RAID array and the Torqx drive were getting so slow that the Pliant drive was hard to see on the average response time graph.

This second set is OLTP Moderate Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

This workload is much more write intensive and the Pliant LS 300 jumps out in front very quickly. Even at 1 queue depth it is shaming the Torqx on write performance. The RAID array is performing pretty well with lower than expected response times.

OLTP Moderate Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

Quickly the Pliant drive starts to walk away with this contest. It clearly has much more capacity for write workloads than the Torqx or RAID array.

OLTP Moderate Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Here we are again at the end of the road for the RAID array. The Torqx drive is holding on but response times are getting long. It is only managing to pull a twofold increase in performance over the RAID array.

OLTP Moderate Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now things are just embarrassing for the RAID array and the Torqx drive. Both showing that write heavy workloads aren’t the best fit. Again, the Pliant drive is starting to get response times in the millisecond range but at 320MB/Sec and 18,000 IO/Sec I would have to call that a fair trade.

OLTP Moderate Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

At last we have hit a wall with the RAID array and the Torqx drive. With the Torqx drive posting numbers that are less than two times the RAID array, it is starting to show its real weaknesses. The Pliant drive, however, is pulling a solid 22,000 IO/Sec and creeping up on 430MB/Sec of throughput. All of this from a single SAS 3.5″ drive.

Final Thoughts

I've had the Pliant LS 300 in my lab for quite a while now, along with the Patriot Torqx and this particular RAID array setup. All three have been running hard during the last three months. The Pliant drive did show some signs of slowing down as it settled into the workloads. The RAID array lost three drives total, and as I stated earlier, the first Torqx drive I had gave up the ghost in the first month. I've said it before and I will say it again: if you need an enterprise drive, then buy an enterprise drive! Don't get a drive that has a SATA interface and is dressed up like it is ready for the big show. I can say without a doubt that the Pliant LS 300 is one of the finest solid state disks I've ever worked with.

Secrets Of SQL Server: SQL Server, Storage And You Part 3 Solid State Storage

The last in my series on storage and SQL Server is today, Wed, Jun 8, 2011, at 3:00 PM EDT (2:00 PM CST). You can register here if you want to take a deeper look into solid state storage. If you want a solid primer on flash based storage devices, this is an excellent way to get it. If you haven't seen the first part in this series, go watch it!

Looking forward to wrapping up this series and answering a TON of questions!

Pliant Technology, Enterprise Flash Drives For Your SQL Server: Part 1

Pliant Technology, New Kid On The Block

If you have been reading my storage series, and in particular my section on solid state storage, you know I have a pretty rigid standard for enterprise storage. Several months ago I contacted Pliant Technology about their Enterprise Flash Drives. It didn't surprise me when they made the recent announcement about being acquired by SanDisk. Between Pliant's enterprise-ready technology and SanDisk's track record at the consumer level, I think they will be a force to be reckoned with for sure. Pliant drives are already being sold by Dell and will now have much larger channel partnerships with the new acquisition. They are one of the very few offering a 2.5″, or even more rare 3.5″, form factor using a dual port SAS interface. I have been hammering on this drive for months now. It has taken everything I can throw at it and asked for more.

Enterprise Flash Drives

Pliant sent me a Lightning LS 3.5″ 300S in a nondescript box. What surprised me is how heavy the drive is. I was expecting a featherweight drive like all the rest of the 2.5″ SSDs I've worked with; this drive is very well made indeed. Another thing was the fins on top of the drive, something I'm used to seeing on 15,000 RPM drives but not on something with no moving parts. It never got hot to the touch, so I'm not sure if they are really needed. The bottom of the drive has all the details on a sticker.

If you look closely at the SAS connector you will see many more wires than visible pins. This is because it is a true dual port drive. If you could see the other side of the SAS connector you would see another set of little pins in the center divider for the second port.

Normally, this port is used as a redundant path to the drive so you can lose a host bus adapter and still function just fine. Technically, you could use Multi-Path IO (MPIO) to use both channels in a load balancing configuration, something I've never done on a traditional hard drive since you get zero benefit from the extra bandwidth. Solid state drives are a different beast though. A single drive can easily use the 300 megabytes per second available to a SAS 1.0 port. If you look at the specification sheet for this drive you will see it lists read speeds of 525 MB/Sec and write speeds of 320 MB/Sec, both above the 300 MB/Sec available to a single SAS port. MPIO load balancing makes the magic happen. Since this drive was finalized before the 600 MB/Sec SAS 2.0 standard was in wide production, it only makes sense to use both ports for reads and writes. Since it doesn't seem to be hitting more than 525 MB/Sec for reads, I don't know how much the drive would benefit from an upgrade to SAS 2.0.
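The arithmetic behind that claim fits in a few lines; the 300 MB/Sec per-port figure is my working number for SAS 1.0 after encoding overhead.

```python
# Why the second SAS port matters for the LS 300: the spec-sheet read speed
# exceeds what a single SAS 1.0 link can carry.
import math

SAS1_PORT_MBS = 300   # usable per SAS 1.0 port, approximate
READ_SPEC_MBS = 525
WRITE_SPEC_MBS = 320

print(f"read spec needs {READ_SPEC_MBS / SAS1_PORT_MBS:.2f}x a single port")
print(f"ports required for reads:  {math.ceil(READ_SPEC_MBS / SAS1_PORT_MBS)}")
print(f"ports required for writes: {math.ceil(WRITE_SPEC_MBS / SAS1_PORT_MBS)}")
```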

Meet The HBA Eater

The big problem isn't the MB/Sec throughput, it is the number of IOs this beast is capable of. According to the spec sheet, a single drive can generate 160,000 IO/Sec. That isn't a typo. Even the latest and best consumer grade SSDs aren't getting anywhere near that number; most top out in the 35,000 range with a few getting as high as 60,000. Lucky for us, LSI has released a new series of host bus adapters capable of coping. The SAS 9211-4i boasts four lanes of SAS 2.0 and a throughput of more than 290,000 IO/Sec, more than enough to test a single LS 300S.

That answers the IO question, but we still have to deal with the dual port issue if we wish to get every ounce out of the LS 300S. I tried several different approaches to get the second port to show up in Windows as a usable active port. The drive chassis I had claimed to support the feature, but all of them had issues. I even bought an additional drive cage that also reportedly supported dual port drives in an active/active configuration; alas, it had issues as well. I was beginning to think there might be something wrong with the drive Pliant sent me! I finally just bought a mini-SAS cable that supported dual port drives.

As you can see, this cable is different. The two yellow wires are each a single SAS channel; the other wires are for power. That means on my four port card I can hook up two dual port drives. Finally, Windows saw two drives and I was able to configure MPIO in an active/active configuration!

Until Next Time….

Now that we have all the hardware in place and configured we will take a look at the benchmarks and long term stress tests in the next article.

SQLSaturday #63, Great Event!

So,

I had an early morning session, gave my Solid State Storage talk, and had a great time. The audience was awesome, asked very smart questions, and I didn't run over time. The guys and gals here in Dallas have put on another great event and it isn't even lunchtime yet!

As promised, here is the slide deck from today's session. As always, if you have any questions please drop me a line.

Solid State Storage Deep Dive

Changing Directions

I See Dead Tech….

Knowing when a technology is dying is always a good skill to have. Like most of my generation, I wasn't the first on the computer scene but lived through several of its more painful transitions. As a college student I was forced to learn antiquated technologies and languages. I had to take a semester of COBOL. I also had to take two years of assembler for the IBM 390 mainframe and another year of assembler for the x86, focused on the i386 when the Pentium was already on the market. Again and again I've been forced to invest time in dying technologies. Well, not any more!

Hard drives are dead LONG LIVE SOLID STATE!

I set the data on a delicate rinse cycle

I'm done with spinning disks. Since IBM invented them in nineteen and fifty seven, they haven't improved much over the years. They got smaller and faster, yes, but they never got sexier than the original. I mean, my mom was born in the fifties; I don't want to be associated with something that old and way uncool. Wouldn't you much rather have something at least invented in the modern age in your state of the art server?

Don’t you want the new hotness?

I mean seriously, isn't this much cooler? I'm not building any new servers or desktop systems unless they are sporting flash drives. But don't think this will last. You must stay vigilant; NAND flash won't age like a fine wine either. There will be something new in a few years and you must be willing to spend whatever it takes to deploy the "solid state killer" when it comes out.

Tell Grandpa Relational is Soooo last century

The relational model was developed by Dr. E.F. Codd while at IBM in 1970, two years before I was born. Using some fancy math called tuple calculus, he proved that the relational model was better at seeking data on these new "hard drives" that IBM had laying around. That later turned into the relational algebra that is used today. Holy cow! I hated algebra AND calculus in high school, why would I want to work with that crap now?

NoSQL Is The Future!

PhD’s, all neck ties and crazy gray hair.

Internet Scale, web 2.0 has a much better haircut.

In this new fast paced world of web 2.0 and databases that have to go all the way to Internet scale, the old crusty relational databases just can't hang. Enter NoSQL! I know that NoSQL covers a lot of different technologies, but one of the core things they do very well is scale up to millions of users, and I need to scale that high. They do this by sidestepping things like relationships, transactions, and verified writes to disk. This makes them blazingly fast! Plus, I don't have to learn any SQL languages; I can stay with what I love best, JavaScript and JSON. Personally, I think MongoDB is the best of the bunch. They don't have a ton of fancy PhDs; they are getting it done in the real world! Hey, they have a Success Engineer for crying out loud!!! Plus, if you are using Ruby, Python, Erlang or any other real Web 2.0 language it just works out of the box. Don't flame me about your NoSQL solution and why it is better, I just don't care. I'm gearing up to hit all the major NoSQL conferences this year and canceling all my SQL Server related stuff. So long PASS Summit, no more hanging out with people obsessed with outdated skills.

Head in the CLOUD

Racks and Racks of Spaghetti photo by: Andrew McKaskill

Do you want to manage this?

Or this?

With all that said, I probably won't be building too many more servers anyway. There is a new way of getting your data and servers without the hassle of buying hardware and securing it: THE CLOUD!

“Cloud computing is computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid where end-users consume power resources without any necessary understanding of the component devices in the grid required to provide the service.” http://en.wikipedia.org/wiki/Cloud_computing

Now that’s what I’m talking about! I just plug in my code and out comes money. I don’t need to know how it all works on the back end. I’m all about convenient, on-demand network access to a shared pool of configurable computing resources. You know, kind of like when I was at college and sent my program to a sysadmin to get a time slice on the mainframe. I don’t need to know the details just run my program. Heck, I can even have a private cloud connected to other public and private clouds to make up The Intercloud(tm). Now that is sexy!

To these new ends I will be closing this blog and starting up NoSQLServerNoIOTheCloud.com to document my new journey. I'll only be posting once a year though, on April 1st.

See you next year!

Moore’s Law May Be The Death of NAND Flash

"It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so." -  Mark Twain

I try and keep this quote in my mind whenever I’m teaching about new technologies. You often hear the same things parroted over and over again long after they quit being true. This problem is compounded by fast moving technologies like NAND Flash.

If you have read my previous posts about Flash memory, you are already aware of NAND flash endurance and reliability concerns. Just like CPU manufacturing processes, flash gets a boost in capacity as you decrease the size of the transistors/gates used on the device. In CPUs you get increases in speed; in flash you get increases in size. The current generation of flash is manufactured on a 32nm process, which nets four gigabytes per die. Die size isn't the same as chip, or package, size. Flash dies are stacked inside the chip package, giving us sixteen gigabytes per package. With the new die shrink to 25nm we double those sizes to eight gigabytes and thirty-two gigabytes respectively. That sounds great, but there is a dark side to the ever shrinking die. As the size of the gate gets smaller it becomes more unreliable and has less endurance than the previous generation. MLC flash suffers the brunt of this, but SLC isn't completely immune.

Cycles And Errors

One of the things that always comes up when talking about flash is the fact that it wears out over time. The numbers that always get bandied about are that SLC is good for 100,000 writes to a single cell and MLC dies at 10,000 cycles. This is one of those things that just ain't so any more. For current mainstream MLC flash based on the 32nm process, write cycles are down to 5,000 or so. 25nm cuts that even further to 3,000, with higher error rates to boot.
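To see what those cycle counts mean in practice, here is a rough sketch that turns P/E cycles into a lifetime write budget. Write amplification is the big unknown; the 1.5 below is purely an assumed figure for illustration.

```python
# Rough lifetime write budget from P/E cycles. The write amplification factor
# is an assumption; real drives vary widely with workload and firmware.
def lifetime_writes_tb(capacity_gb, pe_cycles, write_amplification=1.5):
    return capacity_gb * pe_cycles / write_amplification / 1024

for process, cycles in (("32nm MLC", 5_000), ("25nm MLC", 3_000)):
    tb = lifetime_writes_tb(120, cycles)
    print(f"hypothetical 120GB drive on {process}: ~{tb:,.0f} TB of host writes")
```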

Several manufacturers have announced the transition to 25nm on their desktop drives, Intel and OCZ being two of the biggest. Intel is a partner with Micron, and together they are directly responsible for developing and manufacturing quite a bit of the NAND flash on the market; OCZ is a very large consumer of that product. So, what do you do to offset the issues with 25nm? Well, the same thing you did to offset that problem at 32nm: more spare area and more ECC. At 32nm it wasn't unusual to see 24 bits of ECC per 512 bytes. Now I've seen numbers as high as 55 bits per 512 bytes to give 25nm the same protection.

To give you an example here is OCZ’s lineup with raw and usable space listed.

Drive Model         Production Process   Raw Capacity (GB)   Usable Capacity (GB)
OCZSSD2-2VTXE60G    25nm                 64                  55
OCZSSD2-2VTX60G     32nm                 64                  60
OCZSSD2-2VTXE120G   25nm                 128                 118
OCZSSD2-2VTX120G    32nm                 128                 120
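Worked out from the table above, the spare area reserved on each model looks like this:

```python
# Spare area implied by the table above: (raw - usable) / raw.
drives = {
    "OCZSSD2-2VTXE60G (25nm)":  (64, 55),
    "OCZSSD2-2VTX60G (32nm)":   (64, 60),
    "OCZSSD2-2VTXE120G (25nm)": (128, 118),
    "OCZSSD2-2VTX120G (32nm)":  (128, 120),
}
for model, (raw, usable) in drives.items():
    spare = (raw - usable) / raw
    print(f"{model}: {spare:.1%} reserved as spare area")
```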

As you can clearly see, the usable space is significantly decreased. There is a second problem specific to the OCZ drives as well: since they are now using higher density modules, they are only using half as many of them. Most SSDs get their performance from multiple read/write channels, so cutting that number in half isn't a good thing.

SLC is less susceptible to this issue, but it is happening. At 32nm SLC was still in the 80,000 to 100,000 write cycle range, but the error rate was getting higher. At 25nm that trend continues, and we are starting to see some of the same techniques used in MLC coming to SLC as ECC creeps up from 1 bit per 512 bytes to 8 bits or more per 512 bytes. Of course the downside to SLC is that it has half the capacity of MLC. As die shrinks get smaller, SLC may be the only viable option in the enterprise space.

It’s Non-Volatile… Mostly

Another side effect of shrinking the floating gate size is the loss of charge due to voltage bleed off over time. When I say “over time” I’m talking weeks or months and not years or decades anymore. The data on these smaller and smaller chips will have to be refreshed every few weeks. We aren’t seeing this severe an issue at the 25nm level but it will be coming unless they figure out a way to change the floating gate to prevent it.

Smaller Faster Cheaper

If you look at trends in memory and CPUs, you see that every generation the die gets smaller, capacity or speed increases, and they become cheaper as you can fit double the chips on a single wafer. There are always technical issues to overcome with every technology, but NAND flash is the only one that gets so inherently unreliable at smaller and smaller die sizes. So, does this mean the end of flash? In the short term I don't think so. The fact is we will have to come up with new ways to reduce writes and add new kinds of protection and more advanced ECC. On the pricing front we are still in a position where demand is outstripping supply. That may change somewhat as 25nm manufacturing ramps up and more factories come online, but as of today I wouldn't expect a huge drop in price for flash in the near future. If it were just a case of SSDs consuming the supply of flash it would be a different matter; the fact is your cell phone, tablet and every other small portable device uses the exact same flash chips. Guess who is shipping more, SSDs or iPhones?

So, What Do I Do?

The easiest thing you can do is read the label. Check what manufacturing process the SSD is using; in some cases, like OCZ, that isn't a straightforward proposition, but in most cases the manufacturer prints raw and formatted capacities on the label. Check the life cycle and warranty of the drive. Is it rated for 50 gigabytes of writes a day or 5 terabytes of writes a day? Does it have a one year warranty or five years? These are indicators of how long the manufacturer expects the drive to last. Check the error rate! Usually the error rate will be expressed as unrecoverable read or write errors per bits transferred. Modern hard drives are in the 10^15 to 10^17 range. Some enterprise SSDs are quoted in the 10^30 range. That tells me they are doing more ECC than the flash manufacturer "recommends" to keep your data as safe as possible.
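To make those exponents tangible, here is a small sketch that converts a quoted error rate into how much data you would expect to read before hitting one unrecoverable error.

```python
# Convert an unrecoverable bit error rate of "1 error per 10^N bits" into
# terabytes read per expected error.
def tb_read_per_error(exponent):
    bits = 10 ** exponent
    return bits / 8 / 1e12   # bits -> bytes -> TB (decimal)

for exp in (15, 16, 17):
    print(f"1 error per 10^{exp} bits ~ one expected error every "
          f"{tb_read_per_error(exp):,.0f} TB read")
```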

#SQLRally is coming, Go vote!

 

We are in the final stages of selecting the speakers for the SQLRally, May 11th through the 13th in sunny Orlando, Florida. The program selection is a little different than what we have done with the Summit: the committee narrowed the number of selections and is putting the rest up to a public vote. This is your opportunity to voice your opinion on what you would like to hear at this inaugural event! I've been fortunate enough to have two of my sessions put up for a vote. If you follow my blog you know I have a passion for moving bits of data around as fast as possible, and both my sessions focus on storage. As much as I would love to have your votes to see my sessions at SQLRally, I would like it even more if you voted on what YOU want to learn about the most. Having served on the program committee for the Summit last year, I know just how hard it can be choosing what I think people would like to learn about. Having the opportunity to make your choice known directly is just awesome. I am very excited to see PASS expand and have training events that cover the gamut: starting with local user groups and SQL Saturdays, now growing with SQLRally, and finishing it off with the Summit, there is something for every budget.

With that said, here are my abstracts so you can get a better idea of what I’m speaking on. GO VOTE!

Title:
Solid State Storage Deep Dive
Speaker:
Wesley Brown
Category:
Storage
Level:
100

Abstract:
If you have ever wanted to know how SSDs and Flash memory work, this talk is for you. We will cover the fundamentals of Flash in detail. I will also highlight some of the specific vendor implementations and what makes a particular SSD enterprise-ready vs. consumer grade. We will also cover SQL Server usage patterns: what is a good fit for SSDs and when it may be better to go with hard disks. Solid State Storage isn't a cure-all for every situation; this presentation will give you the tools you need to make the right choice for your SQL Server environment.

Session Goals

  • Understand the fundamental building block of Flash memory.
  • Get a clear explanation of what makes some SSD’s robust enough for enterprise use.
  • Learn where SSD will and won’t make a real difference in your SQL Server environment.

Title:
Understanding Storage Systems and SQL Server
Speaker:
Wesley Brown
Category:
Storage
Level:
100

Abstract:
The most important part of your SQL Server is also the slowest: storage. This talk will take you through the fundamentals of your server's disk I/O system, from how hard drives work, through RAID configurations, to how to configure the file system. This session should give you a solid foundation in storage systems and help you understand why they are slow and how to overcome some of their limitations.

Session Goals

  • Understand the physical characteristics of IO hardware.
  • Understand the fundamentals of RAID.
  • Understand how to configure the file system.