

A couple of corrections to the video

  1. I should have said that 40 GbE and 100 GbE are not widely implemented rather than not available – they are available. But even 100 GbE is only adequate for roughly 12 GB/s, and with Gen 3 PCIe SSDs reading at 3.0 GB/s or more, that will easily be overrun by newer servers that can utilize five or more of those cards (see the back-of-envelope sketch after this list). Ultimately, the solution to the network bottleneck probably involves aggregating enough network interface cards/links in parallel to match PCIe speeds, or extending PCIe itself as the network transport. See https://www.networkworld.com/news/tech/2012/030112-pci-express-256848.html
  2. The effective transfer rate is actually better than the 4 GB/s proposed when factoring in the effectiveness of database page/row compression. As pointed out in the blog entry, my sample database stores closer to 1/2 TB of logical data once the compression ratios achieved are considered. SQL Backup reads compressed data natively and backs it up as is, without decompressing, which increases the effective read throughput by the database compression ratio with no adverse impact on backup performance (a rough sketch of the arithmetic follows this list). It is important to understand the difference between backup compression and database compression: backup compression occurs while the database is being backed up and reduces the storage needed at the target destination at the cost of CPU, whereas data already compressed with page/row database compression is simply backed up natively.
  3. The title should probably end with “using HP IO Accelerators” rather than “Fusion-IO cards,” since I am using the HP-branded cards; my feeling, though, is that those new to the technology may not realize that the HP cards are actually just OEM versions of Fusion-IO cards.
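
The numbers in item 1 work out as follows; this is a minimal back-of-envelope sketch using the figures quoted above (100 GbE, 3.0 GB/s per SSD, five cards in one server):

```python
# Back-of-envelope check of the network-vs-storage bottleneck from item 1.
# The 100 GbE, 3.0 GB/s per SSD, and five-card figures all come from the text above.

BITS_PER_BYTE = 8

network_line_rate_gbit = 100                              # 100 GbE, in gigabits per second
network_gb_per_s = network_line_rate_gbit / BITS_PER_BYTE # ~12.5 GB/s, ignoring protocol overhead

ssd_read_gb_per_s = 3.0                                   # one Gen 3 PCIe SSD, sequential read
card_count = 5                                            # a newer server hosting five such cards
aggregate_read_gb_per_s = ssd_read_gb_per_s * card_count  # 15 GB/s of local read bandwidth

print(f"100 GbE line rate : ~{network_gb_per_s:.1f} GB/s")
print(f"5 x PCIe SSD reads: ~{aggregate_read_gb_per_s:.1f} GB/s")
print(f"Network saturated : {aggregate_read_gb_per_s > network_gb_per_s}")
```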

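Similarly for item 2, a rough sketch of the effective-throughput arithmetic; the 4 GB/s physical read rate is the figure from the blog entry, while the 2:1 compression ratio is a hypothetical round number chosen only to illustrate the calculation, not a measured value:

```python
# Rough illustration of item 2: effective (logical) backup throughput when the
# database already uses page/row compression. Compressed pages are backed up as-is,
# so every physical GB read represents compression_ratio GB of logical data.

physical_read_gb_per_s = 4.0  # GB/s actually read from the compressed database files
compression_ratio = 2.0       # hypothetical 2:1 ratio, e.g. ~1/2 TB logical in ~1/4 TB on disk

effective_logical_gb_per_s = physical_read_gb_per_s * compression_ratio

print(f"Effective logical throughput: ~{effective_logical_gb_per_s:.0f} GB/s")
```
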
There are probably a few other misstatements; feel free to let me know of any others.

Thanks.