A Visit to BlueArc
Posted on July 25, 2006
Welcome to the 3rd age of Network Storage. NAS really got its start with Sun in 1984 when they introduced NFS. In that first era, servers with large amounts of disk acted as NFS servers. In 1992 NetApp showed the world what enterprise-grade NAS was really about, and it has remained at the forefront of enterprise-class NAS to this day. But in 1998 BlueArc was founded to push the limits of what we expect from NAS in the one place it’s really been lacking… performance. Other companies such as Isilon and Exanet are making huge pushes in this sector as well, and NetApp acquired Spinnaker Networks in 2003 to bring blazing speed to the stability and feature set of OnTap (dubbed OnTap GX), but BlueArc has the solution to beat, and its SPEC results prove it quarter after quarter. To give you an idea of just how sad the competition is, NetApp’s yet-to-be-released OnTap GX SPEC submission uses 96 cores in 24 nodes and produces a SPEC result only 5 times greater than BlueArc’s 2-node, 2-core active-active cluster.
Last week I had the honor and privilege of visiting BlueArc’s new office in San Jose. Louis Gray, Sr. Manager of Corporate Marketing at BlueArc, really did a wonderful job of making me feel welcome and showing me around. I got to see the new facility, play with their flagship product, enjoy a chat with Louis and two BlueArc engineers (over pizza in the lab, no less), and get a much better idea of who BlueArc is and what they’re about.
Of course… this is what I couldn’t wait to see:
BlueArc’s Titan 2000 is built for speed… and it delivers. When I was given a tour around the building I got to look at a wall of beautiful plaques, each bearing the logo of one of their customers. Sadly I can’t disclose the names on that wall, but it’s impressive. One thing that struck me is that almost all the companies on the list need to process huge quantities of streaming data (from sensors, telemetry, simulation output, etc.) in real time. These sorts of applications not only need to store the massive input streams but also read the data back elsewhere for analysis at the same time. That’s no easy task. Many storage solutions have really good read or write performance, but when you need to do heavy reads while massive writes are occurring you’ll spend a lot of time waiting. Clearly some of the most data-intensive companies in the world have found their solution in BlueArc.
Here’s the king of them all… an active-active BlueArc Titan 2200 configuration in a single rack:
Just based on the SPEC reports it’s clear that BlueArc can do a whole lot more with a whole lot less. High performance can, apparently, happen in a single rack. Just doing some little benchmarks via NFS on a Sun workstation, I hit gigabit line speed. That got my attention… I spent an hour in Hitachi’s lab once and couldn’t break 30MB/s.
What you see in the picture above is two Titan 2000s clustered using dual 10Gb Ethernet interconnects. The heads are attached via four 4Gb/s Fibre Channel connections to two Brocade switches (seen sandwiched between the heads and the disks), which connect to the disk arrays below. The BlueArc Titan is, believe it or not, an open storage platform able to utilize almost any back-end storage you choose, but the disk solution sold by BlueArc uses Engenio 2882 arrays. In the past BlueArc has also employed XIOtech arrays. Theoretically, you could take your existing Fibre Channel investment and use it as tiered storage under the control of your Titan, taking the crap you have and using it seamlessly alongside your new blazing-fast solution. Talk about ROI!
Between the heads and the disks above, you see the two Brocades, but you also see a 1U System Management Unit (SMU). The SMU handles all administrative duties and is accessible via either a CLI or a really nice web interface. While I personally prefer integrated management, large storage systems are almost all moving in this direction of including a dedicated 1U server for the purpose.
The disks above are Fibre Channel toward the top with some SATA at the bottom. BlueArc handles tiered storage for various levels of performance and cost. Like most other vendors, it does this by using different classes of disks and arrays and allocating whole disks, unlike Pillar Data, whose only claim to fame is zoning disks.
Now, the more interesting look… at the back:
Here you see a NetGear gigabit switch at the top of the rack, the two Titans, the SMU, the two Brocade switches, and then the disk arrays. Each Titan is modular, containing redundant power supplies and 4 modules:
- The first module is the Network Interface Module (NIM), which has 4 100Mb/s management ports, a serial console port, and 6 Gigabit Ethernet connections that can be aggregated using 802.3ad.
- The next two slots are used by the File System Modules (FSA and FSB). This is where the magic happens. The boards complement each other and are not redundant.
- The last module is the Storage Interface Module (SIM), which has 4 Fibre Channel ports for back-end storage arrays and 2 10Gb/s Ethernet ports used as a cluster interconnect.
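On the 802.3ad point: aggregation has to be configured on both ends of the link, not just the Titan. As a rough illustration, here’s how a Linux client might bond two of its own gigabit links toward a LACP-enabled switch (interface names and the address are made up for the example; this is a sketch, not BlueArc’s procedure):

```shell
# Create an 802.3ad (LACP) bond on a Linux client.
# eth0/eth1 and the address are hypothetical; the switch ports
# facing this host must also be configured for LACP.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that a single TCP stream (one NFS client connection) typically still rides one member link; aggregation buys you headroom across many clients rather than doubling one client’s throughput.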
One thing that should be clear is that a single unit is not fully redundant. If you want enterprise-grade redundancy you’ll need a clustered pair. This isn’t unusual in the NAS space, however. The back-end storage can be entirely redundant, making the only single point of failure the head itself… but if you’ve got a NetApp you already have this problem, so it’s not a knock on BlueArc.
Speaking of NetApp, the only thing you’ll really lose when moving from NetApp to BlueArc is the ability to provide FCP. BlueArc’s position is that while a lot of NetApp customers appreciate having the option, almost none of them use it. I’m firmly in that camp, I admit.
So then… can you afford BlueArc? It’s not cheap, that’s for sure… but I’ll say that if you’re considering a NetApp FAS6000 series you owe it to yourself to get a quote from BlueArc and see where they come in. Chances are that BlueArc will land in the same range with more than 4 times the performance.
I’m very thankful to everyone at BlueArc for having me over. Too many storage companies have EMCitis, with egos the size of the Hindenburg. These guys were no-nonsense, fun, easygoing guys who knew storage forwards and backwards.
If you guys are interested in knowing more about the Titan, let me know. I might be able to arrange an eval unit for a full write-up if folks are interested.