Contoso Labs - Rack Design - Part 1

Contoso Labs Series - Table of Contents

Now that we have all of our equipment, we need to figure out what our deployment footprint looks like. It’s worth noting that while we’re discussing this as if it were a linear process, it was actually one of the many interdependencies we had to manage during planning, and it shaped our purchasing.

Racks of Unusual Size

The first bit of uniqueness we have to deal with is our datacenter. The location where this cloud will be deployed is one of our research and development datacenters. In other words, Microsoft’s public services like Office 365, Outlook.com, and Xbox Live do not live there. That’s not to say it’s not a serious datacenter, however. There are literally tens of thousands of servers running there, all serving the needs of our development and test teams.

One of the unique features of the datacenter is the special racks used throughout. For density purposes, we had custom 52U racks built for the entire facility. Needless to say, that makes for some interesting architecture choices when it comes to racking and wiring if you’re going to make maximum use of the space in each rack.

Maximizing density is important despite the datacenter’s huge size, because space is still at a premium. Luckily for us, our existing compute nodes were deployed as one large chunk several years ago and consumed 7+ racks of contiguous space. This fortunate twist meant we only needed one additional rack of space for our new gear, along with a well-considered re-racking plan.

Rack Pairs

The final rack deployment plan took shape around a rack pair design that would be replicated four times. You can see a representation of our racks to the right (click to enlarge). One rack will hold only compute nodes and their associated network equipment, while the other will hold a smaller number of compute nodes as well as one of the Scale-Out File Servers that will provide storage for the rack pair. Besides helping localize traffic, this design gives us the flexibility to remove a pair of racks from the fabric to test new ideas or new versions of Windows and System Center while leaving the rest of the fabric unaffected.

In the next post of this series, we’ll dive into the details of the rack layout, and then the logical wiring of the systems and racks.

Comments

  • Anonymous
    January 01, 2003
    Great blog postings so far. We're doing a similar project ourselves. Anxiously waiting to hear more about the configuration of the Storage and Hyper-V clusters.
  • Anonymous
    January 01, 2003
    @Henry - The BMC will share a port with one of the onboard NICs. Not ideal, but it still gives us control, and it saves us a wire and an entire switch. I'll admit I'm slightly nervous about this, but we had very few good choices available. It's certainly better than no BMC at all.

    @Andrew - Boot is from local disks. In their previous life, these systems used their local disks heavily. Each has eight 600GB 10K SAS disks right now. We're going to find good use for six of them elsewhere and continue to use two of them in a mirror as a boot drive. More than adequate.

    @RWillG - We'll get to the Spaces configuration soon enough. The clusters themselves might take a little bit longer. There's a lot of networking stuff that needs to be addressed first. :)
  • Anonymous
    January 01, 2003
    I really love how you are doing this. We're looking at redesigning our QA environment using Hyper-V, and this whole series of posts has been very helpful. I'm looking forward to more as you progress through your process.
  • Anonymous
    April 03, 2014
    What is the plan for Baseboard Management Controller network connectivity, and will you be using SCVMM for bare-metal deployments?
  • Anonymous
    April 04, 2014
    Very educational, thank you!
    Please elaborate on how the hosts are planned to boot: local disks, iSCSI?
  • Anonymous
    June 12, 2014
    What happened to this series? I thought we were going to learn best practices with Windows Server and the System Center suite.