Big Compute @ Microsoft

Customers and partners have been asking us what the recent Microsoft
organization changes mean for HPC. The answer: full steam ahead.

We call our workload “Big Compute.” Big Compute applications typically require
large amounts of compute power for hours or days at a time. Some of our customers
describe what they are doing as HPC, but others call it risk analysis,
rendering, transcoding, or digital design and manufacturing. 

We’re still working on HPC Pack for Windows Server clusters and on
enabling new Big Compute scenarios in the cloud with Windows Azure. You’ll
continue to see new features and new releases from us on a regular basis. The
team is really excited about the new capabilities we’ll be bringing to our
customers.

Microsoft’s vision for Big Compute in Windows Azure remains the same:
democratizing capabilities like performance and scale. We have demonstrated
world-class performance using Windows Server in the cloud with HPC application
benchmarks and Top 500 systems. We’ll continue to make clusters easy to manage
and to extend them to Windows Azure. Going forward, we want to make running
cluster applications in the cloud possible for users who don’t have, or can’t
get, access to clusters.

The economics of the cloud are fundamentally changing cluster computing:
compute power is available when you need it, and you pay only for what you
use. Our customers in research and industry get it. Users with workstations can
get access to clusters for projects without having to invest in infrastructure
that may otherwise sit idle. Our enterprise customers can keep their on-premises
servers busy while running peak loads in Windows Azure. And now developers can
cost-effectively test applications and models at scale. We are part of the
Cloud and Enterprise division at Microsoft for a reason.

Performance and scale remain at the heart of what we do. Because we are part of
the Windows Azure group, we are driving capabilities like low-latency RDMA
networking in the fabric. We make sure that customers can reliably provision
thousands of cores to run their compute jobs. Windows Azure is evolving to
provide a range of capabilities for Big Compute and Big Data. High-memory
instances help more applications run in the cloud, and we are now tuning our
RDMA and MPI stacks with partners.
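To give a sense of why interconnect latency matters for this class of workload, here is a minimal MPI ping-pong sketch in C. It is illustrative only, not a Microsoft benchmark and not tied to any particular Azure instance type: two ranks bounce a small message back and forth, and the average round-trip time is roughly what low-latency RDMA networking improves for tightly coupled MPI applications.

```c
/* Illustrative MPI ping-pong latency sketch (not a Microsoft benchmark).
 * Run with at least two ranks, e.g.: mpiexec -n 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "This sketch needs at least 2 ranks.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    char buf[8] = {0};              /* small message: latency-bound */

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("average round-trip latency: %.2f us\n",
               elapsed / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```

The message size and iteration count here are arbitrary; real interconnect benchmarks sweep message sizes and report both latency and bandwidth.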

The research community advances the leading edge of HPC. Our team and
Microsoft Research continue to work closely with partners in academia and
labs; we value the relationships and the feedback. Our mission is to
democratize these capabilities and make HPC-style computing broadly
available through services from Microsoft and from our partners.

We’ll continue to use this blog to update you on Big Compute and HPC at
Microsoft. 

 

Alex Sutton

Group Program Manager, Big Compute

Comments

  • Anonymous
    July 31, 2013
    Hi! Great to hear that HPC/Big Compute is still getting attention at MS.  One big request on that front is that GPU- and Xeon Phi-based nodes, along with software support (such as C++ AMP and OpenCL drivers), also be launched.

  • Anonymous
    August 30, 2013
    I am an ex-member of the Interactive Supercomputing team at the NERD center in Cambridge, MA.  The team I was in produced MS "Cloud Numerics".  I was wondering if you had any updates on MS's work on technical computing, or so-called "big data".  It's been a month already.....