Category Archives: Fusion-IO

Solid State Storage: Enterprise State Of Affairs

Here In A Flash!

It's been a crazy few years in the flash storage space. Things really started taking off around 2006, when NAND flash and Moore's Law got together. By 2010 it was clear that flash storage was going to be a major part of your storage makeup in the future. It may not be NAND flash specifically, though; it will be some kind of solid state memory, not spinning disks.

Breaking The Cost Barrier.

For the last few years I've told people to price out the cost of IO, not the cost of storage. Flash was mainly a niche product solving niche problems, like speeding up random-IO-heavy tasks. Now that flash storage is at or below the cost of standard disk-based SAN storage, with the same connectivity and software features, I think it's time to put flash on the same playing field as our old stalwart SAN solutions.

Right now, at the end of 2012, you can get a large amount of flash storage. There is still a perception that it is too expensive and too risky to build out all-flash storage arrays. I'm here to show that cost, at least, isn't as limiting a factor as you may believe. Traditional SAN storage on spinning disks can run you from 5 dollars a Gigabyte to 30 dollars a Gigabyte. You can easily get into an all-flash array in that same range.

Here’s Looking At You Flash.

This is a short list of flash vendors currently on the market. I've thrown in a couple of non-SAN types and a couple of traditional SANs that have integrated flash storage. Please don't email me complaining that vendor X didn't make the list or that vendor Y has different pricing. All the pricing numbers were gathered from published sources on the internet: the vendors' own websites, published costs from TPC executive summaries, and official third-party price listings. If you are a vendor and don't like the prices listed here, publicly publish your price list.

There are always two cost metrics I look at: dollars per Gigabyte of raw capacity and dollars per Gigabyte of usable capacity. The first number is pretty straightforward. The second metric can get tricky in a hurry. On a disk-based SAN it pretty much comes down to what RAID or protection scheme you use. Flash storage almost always introduces deduplication and compression, which can muddy the waters a bit.
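To make the two metrics concrete, here is a quick sketch of the math I do when comparing arrays. The prices and capacities below are illustrative, not quotes from any vendor, and the RAID overhead and data reduction ratios are assumptions you would plug in per array:

```python
# Rough cost-per-gigabyte math for comparing storage arrays.
# All numbers below are made-up illustrations, not vendor quotes.

def dollars_per_gb_raw(price, raw_gb):
    """Price divided by raw capacity -- the straightforward metric."""
    return price / raw_gb

def dollars_per_gb_usable(price, raw_gb, raid_factor=0.5, data_reduction=1.0):
    """Usable cost: RAID 10 keeps ~50% of raw space (raid_factor=0.5);
    data_reduction > 1.0 models vendor dedupe/compression claims."""
    usable_gb = raw_gb * raid_factor * data_reduction
    return price / usable_gb

# A hypothetical 2.5 TB (2,560 GB) flash array priced at $25,000:
print(round(dollars_per_gb_raw(25_000, 2_560), 2))              # raw $/GB
print(round(dollars_per_gb_usable(25_000, 2_560), 2))           # RAID 10, no dedupe
print(round(dollars_per_gb_usable(25_000, 2_560, 0.5, 2.0), 2)) # with a claimed 2:1 dedupe
```

This is also why I test with incompressible data: the third number collapses back to the second the moment the data reduction ratio drops to 1.0.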

Fibre Channel/iSCSI vendor list

Nimbus Data

Appearing on the scene in 2006, they have two products currently on the market: the S-Class storage array and the E-Class storage array.

The S-Class seems to be their lower-end entry but comes with an impressive software suite and provides 10GbE and Fibre Channel connectivity. Looking around at costs for the S-Class, I found a 2.5TB model for 25,000 dollars. That comes out to 9.7 dollars per Gigabyte of raw space. The E-Class is their super-scalable and fully redundant unit. I found a couple of quotes that put it at 10.00 dollars a Gigabyte of raw storage. Already we have a contender!

Pure Storage

In 2009 Pure Storage started selling their flash-only storage solutions. They include deduplication and compression in all their arrays and factor that into the cost per Gigabyte. I personally find this a bit fishy, since I always test with incompressible data as a worst case for any array, which would drive their cost back up. They claim between 5.00 and 10.00 dollars per usable Gigabyte, and I haven't found any solid source of public pricing on their arrays to dispute or confirm this number. They also have a generic "compare us" page on their website that is at best misleading and at worst plain lies. Since they don't call out any specific vendor on the comparison page, it's hard to pin them for falsehoods, but you can read between the lines.

Violin Memory

Violin Memory started in earnest around 2005, selling not just flash-based but memory-based arrays. Very quickly they transitioned to all-flash arrays. They have two solutions on the market today. The 3000 series allows some basic SAN-style setups but also supports direct attachment via external PCIe channels. It comes in at 10.50 dollars a Gigabyte raw and 12.00 dollars a Gigabyte usable. The 6000 series is their flagship product, and the pricing reflects it: at 18.00 dollars per Gigabyte raw it is getting up there on the price scale. Again, not the cheapest, but they are well established, and their arrays are used and resold by HP.

Texas Memory Systems/IBM

If you haven't heard, TMS was recently purchased by IBM. They are based in Houston, TX, so I've always had a soft spot for them. They were also the first non-disk-based storage solution I ever used. The first time I put a RamSan in and got 200,000 IOPS out of the little box, I was sold. Of course, it was only 64 Gigabytes of space and cost a small fortune. Today they have a solid flash-based Fibre Channel and iSCSI-attached lineup. I couldn't find any pricing on the current flagship RamSan 820, but the 620 has been used in TPC benchmarks and is still in circulation. It is a heavyweight at 33.30 dollars a Gigabyte of raw storage.

Skyera

A new entrant into this space, Skyera is boasting some serious cost savings. They claim 3.00 dollars per usable Gigabyte on their currently shipping product. The unit also includes options for deduplication and compression, which can drive the cost down even further. It is a half-depth 1U solution with a built-in 10GbE switch. They are working on a fault-tolerant unit, due out the second half of next year, that will up the price a bit but add Fibre Channel connectivity. They have a solid pedigree: they are made up of the guys who brought the SandForce controllers to market. They aren't a proven company yet, and I haven't seen a unit or been granted access to one either. Still, I'd keep an eye on them. At those price points, and with that crazy-small footprint, it may be worth taking a risk on them.

IBM

I'm putting the DS3524 in a separate entry to give you some contrast. This is a traditional SAN frame that has been populated with all SSD drives. With 112 200GB drives and a total cost of 702,908.00 dollars, it comes in at 31.00 dollars a Gigabyte of raw storage. On the higher end, but still in the price range I generally look to stay in.

SUN/Oracle

I couldn't resist putting a Sun F5100 in the mix. At 3,099,000.00 dollars it is the most expensive array I found listed. It has 38.4 Terabytes of raw capacity, giving us an 80.00 dollars per Gigabyte price tag. Yikes!

Dell EqualLogic

Dell gobbled up EqualLogic, a SAN manufacturer focused on iSCSI solutions, back in 2008. This isn't a flash array; I wanted to add it as contrast to the rest of the list. I found a 5.4 Terabyte array with a 7.00 dollar per Gigabyte raw storage price tag. Not horrible, but still more expensive than some of our all-flash solutions.

Fusion-io

What list would be complete without the current king of the PCIe flash hill, Fusion-io? I found a retail price listing for their 640 Gigabyte Duo card at 19,000 dollars, giving us 29.00 dollars per usable Gigabyte. Looking at the next card down, the 320 Gigabyte Duo at 7,495.00 dollars ups the price to 32.20 dollars per usable Gigabyte. They are wicked fast though :)

So Now What?

Armed with a bit of knowledge, you can go forth and convince your boss and storage team that a SAN array fully based on flash is totally doable from a cost perspective. It may mean taking a bit of a risk, but the rewards can be huge.

 

Changing Directions

I See Dead Tech….

Knowing when a technology is dying is always a good skill to have. Like most of my generation, we weren't the first on the computer scene but lived through several of its more painful transitions. As a college student I was forced to learn antiquated technologies and languages. I had to take a semester of COBOL. I also had to take two years of assembler for the IBM 390 mainframe and another year of x86 assembler focused on the i386, when the Pentium was already on the market. Again and again I've been forced to invest time in dying technologies. Well, not any more!

Hard drives are dead LONG LIVE SOLID STATE!

I set the data on a delicate rinse cycle

I'm done with spinning disks. Since IBM invented them in nineteen and fifty-seven they haven't improved much over the years. They got smaller and faster, yes, but they never got sexier than the original. I mean, my mom was born in the fifties; I don't want to be associated with something that old and way uncool. Wouldn't you much rather have something at least invented in the modern age in your state-of-the-art server?

Don’t you want the new hotness?

I mean seriously, isn't this much cooler? I'm not building any new servers or desktop systems unless they are sporting flash drives. But don't think this will last. You must stay vigilant; NAND flash won't age like a fine wine either. There will be something new in a few years, and you must be willing to spend whatever it takes to deploy the "solid state killer" when it comes out.

Tell Grandpa Relational is Soooo last century

The relational model was developed by Dr. E.F. Codd while at IBM in 1970, two years before I was born. Using some fancy math called tuple calculus, he proved that the relational model was better at seeking data on these new "hard drives" IBM had laying around. That later turned into the relational algebra that is used today. Holy cow! I hated algebra AND calculus in high school, why would I want to work with that crap now?

NoSQL Is The Future!

PhD’s, all neck ties and crazy gray hair.

Internet Scale, web 2.0 has a much better haircut.

In this new fast-paced world of web 2.0 and databases that have to go all the way to Internet scale, the old crusty relational databases just can't hang. Enter NoSQL! I know that NoSQL covers a lot of different technologies, but one of the core things they do very well is scale up to millions of users, and I need to scale that high. They do this by sidestepping things like relationships, transactions and verified writes to disk. This makes them blazingly fast! Plus, I don't have to learn any SQL languages; I can stay with what I love best, JavaScript and JSON. Personally, I think MongoDB is the best of the bunch; they don't have a ton of fancy PhD's, they are getting it done in the real world! Hey, they have a Success Engineer, for crying out loud!!! Plus, if you are using Ruby, Python, Erlang or any other real Web 2.0 language it just works out of the box. Don't flame me about your NoSQL solution and why it is better, I just don't care. I'm gearing up to hit all the major NoSQL conferences this year and canceling all my SQL Server related stuff. So long PASS Summit, no more hanging out with people obsessed with outdated skills.

Head in the CLOUD

Racks and Racks of Spaghetti photo by: Andrew McKaskill

Do you want to manage this?

Or this?

With all that said, I probably won't be building too many more servers anyway. There is a new way of getting your data and servers without the hassle of buying hardware and securing it: THE CLOUD!

“Cloud computing is computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid where end-users consume power resources without any necessary understanding of the component devices in the grid required to provide the service.” http://en.wikipedia.org/wiki/Cloud_computing

Now that's what I'm talking about! I just plug in my code and out comes money. I don't need to know how it all works on the back end. I'm all about convenient, on-demand network access to a shared pool of configurable computing resources. You know, kind of like when I was in college and sent my program to a sysadmin to get a time slice on the mainframe. I don't need to know the details, just run my program. Heck, I can even have a private cloud connected to other public and private clouds to make up The Intercloud(tm). Now that is sexy!

To that end, I will be closing this blog and starting up NoSQLServerNoIOTheCloud.com to document my new journey. I'll only be posting once a year though, on April 1st.

See you next year!

Fusion-io, Flash NAND All You Can Eat

Fusion-io has announced general availability of the new Octal. This card is the largest single flash-based device I've ever seen. The SLC version has 2.56 terabytes of raw storage and the MLC has a whopping 5.12 terabytes of raw storage. This thing is a behemoth. The throughput numbers are also impressive: both read at 6.2 Gigabytes a second using a 64KB block, you know, the same size as an extent in SQL Server. They also put up impressive write numbers, the SLC version doing 6 Gigabytes a second and the MLC clocking in at 4.4 Gigabytes a second.
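To put that read bandwidth in IOPS terms, here's the back-of-the-envelope conversion at a 64KB block size. I'm assuming the vendor means decimal Gigabytes (10^9 bytes); treat the result as a ballpark, not a spec:

```python
# Convert a sequential bandwidth figure into IOPS at a fixed block size.
# Assumes 1 GB = 10^9 bytes; vendors vary, so this is an estimate.

def iops(bandwidth_gb_per_s, block_size_bytes):
    return bandwidth_gb_per_s * 1_000_000_000 / block_size_bytes

# 6.2 GB/s of 64 KB reads (the SQL Server extent size):
print(round(iops(6.2, 64 * 1024)))  # roughly 95,000 64KB reads per second
```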

There is a market for these drives, but you really need to do your homework first. This is basically four ioDrive Duos, or eight ioDrives, in a single PCIe 2.0 x16 slot. It requires a lot of power, more than the PCIe slot can provide, so it needs additional power connectors: two 6-pin and one 8-pin. EDIT: According to John C., you only need to use either the two 6-pin OR the single 8-pin. These are pretty standard on ATX power supplies in high-end desktop machines but rarely available in your HP, Dell or IBM server, so check to see if you have any extra power leads in your box first.

Also, remember that you have to have a certain amount of free memory for the ioDrive to work. They have done a lot of work in the latest driver to reduce the memory footprint, but it can still be significant. I would highly recommend setting the drive up to use a 4K page instead of a 512-byte page. Even then, you will still need a minimum of 284 megabytes of RAM per 80 gigabytes of storage. On the MLC Octal that comes to about 18 gigabytes of RAM you need to have available per card. To be honest, if you are slotting one of these bad boys into a server it won't be a little dual-processor pizza box. The latest HP DL580 G7 can take as much as 512 gigabytes of RAM, so carving off 18 gigabytes of that isn't such a huge deal.
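The driver RAM math above is easy to sketch out. The 284 MB per 80 GB figure is the 4K-page number quoted above; the actual footprint depends on driver version and page size:

```python
# Host RAM the ioDrive driver needs, per the ~284 MB per 80 GB of
# capacity figure for 4K pages (512-byte pages need considerably more).

def driver_ram_gb(capacity_gb, mb_per_80gb=284):
    return capacity_gb / 80 * mb_per_80gb / 1024

print(round(driver_ram_gb(5_120), 1))  # MLC Octal (5.12 TB): about 18 GB of RAM
print(round(driver_ram_gb(2_560), 1))  # SLC Octal (2.56 TB): about 9 GB of RAM
```

Against the 512 GB ceiling of a DL580 G7 that's under 4% of system memory per MLC card, which is why it stops being scary in a big box.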

Lastly, you will actually see several drives on your system; each one will be a 640 gigabyte drive. If you want one monster drive you will have to stripe them at the OS level. The downside of that is losing TRIM support, which is a boon to the overall performance of the drive, but not a complete deal breaker. EDIT: John C is correct, you don't lose TRIM when striping with the default Windows RAID stripe on Windows Server 2008 R2. I'm waiting for confirmation from Symantec on whether that is also the case with Veritas Storage Foundation, since that is what I am using to get software RAID 10 on my servers.
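For reference, the default Windows stripe can be set up with a diskpart script along these lines. This is a sketch only: the disk numbers, drive letter and 64K allocation unit are placeholders for whatever your ioDrive volumes enumerate as and whatever your workload calls for, and converting a disk to dynamic is one-way, so test on a scratch box first.

```
rem diskpart sketch: stripe two of the exposed 640 GB drives into one
rem dynamic striped (RAID 0) volume. Disk numbers 1 and 2 are examples.
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume stripe disk=1,2
format fs=ntfs unit=64K quick
assign letter=F
```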

I don't have pricing information on these yet, but I'm guessing it's like a Ferrari: if you have to ask, you probably can't afford it.

SATA, SAS or Neither? SSD’s Get A Third Option

I recently wrote about solid state storage and its different form factors. Well, several major manufacturers have realized that solid state needs all the bandwidth it can get. Dell, IBM, EMC, Fujitsu and Intel have formed the SSD Form Factor Working Group, bringing PCIe 3.0 to the same form factor that SATA and SAS use, with the same connector types and a 2.5" drive housing. I'm not sure how quickly it will make its way into the enterprise space, but that is clearly its target. Reusing the physical form factor cuts down on manufacturing and R&D costs for all involved. They have an aggressive timescale for something like this. The specification hasn't been published yet; I'll take a deeper look when it becomes available. There are some key players missing, though. HP and Seagate are the two in the enterprise space that give me pause; both control a large segment of the storage market. On the controller side, LSI is also absent. This could be a direct threat to their current domination of the RAID controller chipset space if they aren't on the ball.

Fusion-io got that early on and took a different route, sticking with pure PCIe to bypass the limitations of SAS/SATA and intermediate controllers. By going that route they opened up a whole other level of performance.

I asked David Flynn what he thought about the new standard. Fusion-io is a contributor to the working group.

It is quite validating that folks would be routing PCIe to the drive bays. For us it's just another form factor that we can easily support.

Two things, though… First, I believe it's a hangover from the mechanical drive era to put such emphasis on form factors that allow easy servicing access. Solid state should not need to be serviced; it should be much more reliable than HDDs. But outside of Fusion-io, failure rates for solid state are actually much worse than for mechanical disk drives.

The second point is that form-factor and even PCIe attachment isn’t really the key thing to higher performing, more reliable solid state.  What makes the real difference is eliminating the embedded CPU bottleneck in the access path to the flash.

Fusion-io uses a memory controller approach to integrating flash. You don't find CPUs on DRAM modules. SSDs (SATA or PCIe) from everyone else use embedded CPUs and attach using storage controller methodologies.

In an upcoming post in my solid state storage series I will explore failure rates in detail. I do find it interesting that Fusion-io is one of the very few companies claiming significantly better error rates than a standard hard drive or other SSDs, even enterprise-branded SSDs. Fusion-io claims one detectable uncorrectable error per 10^20 bits read and one undetectable uncorrectable error per 10^30 bits read. I have yet to see any hard disk or SSD with a rate better than one per 10^17 bits. So I agree with David: you don't need a form factor built for ease of service if you build the device with enough error correction, which clearly you can with solid state.
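To get a feel for what those exponents mean in practice, here's a simplified calculation of how much data you could read before expecting a single uncorrectable bit error at a given rate. It assumes a uniform error rate, which real devices don't have, so it's only good for comparing orders of magnitude:

```python
# With an uncorrectable bit error rate of one error per 10**exponent
# bits read, how many petabytes can you read before expecting one
# bad bit? Simplified: assumes a uniform error rate.

def petabytes_per_error(exponent):
    bits = 10 ** exponent
    return bits / 8 / 10 ** 15  # bits -> bytes -> petabytes

print(petabytes_per_error(17))  # best disk/SSD rate I've seen: 12.5 PB per error
print(petabytes_per_error(20))  # claimed ioDrive detectable rate: 12,500 PB per error
```

Three orders of magnitude in the exponent is three orders of magnitude in data read between errors, which is the whole argument for skipping the hot-swap drive bay.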

Fusion-io, What It Takes To Be On The Cutting Edge

 

I recently had the privilege of talking with David Flynn, founder, former CTO and newly minted CEO of Fusion-io, about how Fusion-io was born, what they have built and the future of the company. Fusion-io is a newcomer to the enterprise storage space and has exited the gates in a flash. In the last two years they have shown up with some impressive hardware, managed to draw Steve Wozniak into the fold and shown some explosive growth, touting IBM, Dell and HP as adopters of the ioDrive.

Fusion-io is in its fourth year now, employing around 250 people. The first two years were spent in design and build mode. In their first year of revenue Fusion-io did well into the double-digit millions. They recently closed out their second year of sales at over 500% growth.

Wes Brown – “How did Fusion-io and the ioDrive come about?”

David Flynn – "The product came out of a hybrid of my work building large-scale, high-performance computing systems; at one point we had three of the fastest computers in the world, based on Linux commodity clustering. During that time, this was early 2000, I recognized that memory was the single most expensive part of these supercomputers. It was around that same time that DRAM density growth stalled, missed a whole cycle, and has been growing at a much slower rate since then. Memory had reached a power density limit: you can lithograph a smaller transistor but you can't cool them, so memory hit a capacity density barrier due to thermal limitations. Next, I went to another company and met Rick White, co-founder of Fusion-io. We built a tiny security device that ran Linux on a tiny CPU. The curious thing about this device was that we were using a new kind of memory for the storage: NAND flash. It was the darndest thing that this little CPU and system running Linux actually felt faster in many ways than those big supercomputers. It boiled down to the storage being on NAND flash. The idea for Fusion-io came out of that combination, and a realization that NAND flash, as a new type of memory, could offset memory and solve the problem of RAM density growth. So, while everybody else was thinking of NAND flash as a way of building faster disk drives, we said let's integrate NAND flash where it's so fast it can offset the need for putting in large-capacity memory; so not a faster disk drive, but a higher density, higher capacity memory device."

WB – “Why did you and Rick wait so long to bring these ideas to market?”

DF – “In 2006 Fusion-io was born. It wasn’t possible until that time frame. DRAM was the density king and the price king. You could get higher performance and capacity than you could from NAND flash before then.”

WB – “You have had several rounds of venture capital funding, is Fusion-io planning on another round or is the cash and sales pipeline good enough?”

DF – “We don’t expect to have to raise another round of financing.”

David and I talked about the role of CEO at Fusion-io and the previous people to hold that post. I was curious why a co-founder and very technical guy would assume the mantle of CEO at this point.

Don Basile, the first CEO at Fusion-io, led them through their A and B series funding rounds and went on to become CEO at Violin Memory. This left a vacuum, and David Bradford was promoted from within to fill the role, bringing in Steve Wozniak as Chief Scientist and overseeing the phenomenal growth of this last year. Flynn was recommended by Bradford after a stint as CTO, during which he managed quite a bit of the day-to-day operations of the company. David went on to say that Marc Andreessen, now an investor through Andreessen Horowitz, was one of the tipping points that led him to the CEO chair. David pointed out that part of Marc's investment model is backing founder-CEOs: he believes they have the moral authority, know where all the moving parts are, and are generally very good in that role.

We then talked about what was coming down the product pipeline from Fusion-io.

WB – “Last year double density was promised but delayed, what was the hold up in expanding the product line beyond the ioDrive Duo?”

DF – "It would have to be limited resources in the company; we were just overwhelmed with growing the company. We are at 250+ people today; this time last year we were at 70 people. We have made a large investment engaging OEMs like IBM and HP and partners like Dell."

WB – "So, how did Fusion-io get these major OEMs to include Fusion-io in their server lines?"

DF – "This is a good way to put it: performance was the way to get people's attention, and capacity is a good thing. But what seals the deal and makes it an enterprise product isn't the performance or the capacity, it is the reliability of the product. That it doesn't corrupt your data, doesn't fail and lose the data, and doesn't wear out too quickly. That is what allowed us to win the major OEM relationships."

WB – "Fusion-io did a big test with the Octal at the end of last year. Is this something that will see the light of day as a product?"

DF – "The ioDrive Octal is set to go into general production and availability soon. Last year we announced it as a science project because it was custom built for some specific applications, but we have decided to productize it. It will have five Terabytes of capacity, one million IOPS and the equivalent bandwidth of sixteen FC4 ports."

There is no pricing available yet on the ioDrive Octal, the new high-density ioDrive or the ioDrive Duo. There are servers on the market rated to handle up to four cards in a single server. If you need capacity and speed, I can't imagine a better way to get it.

WB – “Is Fusion-io planning to go public?”

DF – "We've been building the company to be a self-standing company. We believe in our go-to-market strategy: a direct enterprise sales force alongside OEMs; we do direct sales but fulfill through OEMs."

DF – "We view ourselves, just to give you the simplest way to describe what Fusion-io is, we are to flash chips what EMC is to disk drives. We aggregate flash chips to build infrastructure usable and valuable to enterprise customers. Because they are flash chips, it allows us to miniaturize it and go inside the box instead of into a whole rack of boxes. We are building a new subsystem: not a memory subsystem in the traditional sense and not a storage subsystem, but a fusion of the two. It is deployed through an OEM strategy because it has to be in the box to offer the best density metrics. At the end of the day our value is to take the cheapest flash chips and make them into the highest-value infrastructure for folks to build on. That's not just performance or capacity density, it's also the reliability and manageability of it."

WB – “With that said, is Fusion-io planning an IPO or not?”

– laughter from David and me –

DF – "We are here to build a successful company and won't speculate about an IPO at this time."

In the second part of the interview, David gets deep down and technical about the ioDrive: what it is and isn't, and how the magic is made.

Fusion-IO releases new 2.1 driver and firmware

And it is well worth the upgrade. I recently had the opportunity to interview David Flynn, CEO of Fusion-io, and that interview will be coming up soon. I have been beta testing the 2.0 driver for quite some time and have been very happy with the performance and the reduction in required system memory (by half!). The 2.1 driver is the official release of the 2.x series and shows gains even over the 2.0 driver I've been testing. I always do a little test run with HD Tach before diving into my more detailed tools, and right off the top the 2.1 driver is faster yet again than both the 1.27 and 2.0 drivers. The blue is the 2.0, the red is the 2.1. I don't know about you, but getting a performance bump from a firmware and driver upgrade is always a good thing!

 

[HD Tach comparison chart: 2.0 driver in blue, 2.1 driver in red]

It’s Beginning to Look A Lot Like Christmas……

 

We got something good in the mail last week!

 

[Photo: Fusion-io ioDrive Duo 640GB]

 

Some quick observations:

The build quality is outstanding. There is nothing cheap at all about this card; the engineering that went into it shows in every way.

It is made up of modules that are screwed down. You can see they really thought this through, so each revision of the card doesn't require all-new PCBs to be manufactured.

It does require an external source of power via a 4-pin Molex or SATA power connector, period. Make sure your server has one available; even though these are sold by HP, not all HP servers have the required connectors.

PCIe expander bays are few and far between. Most of them are built to expand desktops and laptops, or are used in non-critical applications, mostly AV or render farms.

http://www.magma.com/products/pciexpress/expressbox4-1u/index.html

This is a nice chassis, but they are currently being retooled and won't be available for a month or so. It is the only 1U option, and it has redundant power.

Each card exposes two drives to the OS. We will initially configure them two per machine in a RAID 10 array for redundancy.

 

More to come!

 

Wes