.NET Services March 2009 CTP - The Service Bus Namespace

With the March CTP, the .NET Service Bus namespace root for each project is taking on a new form that we had already pre-announced in the PDC’08 documentation and that I’ve talked about at PDC and on other occasions. Any project’s root URI is now, depending on the chosen transport option, of the form

sb://project-name.servicebus.windows.net/ or
http://project-name.servicebus.windows.net/ or
https://project-name.servicebus.windows.net/
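
For illustration, here is a minimal sketch of how these root URIs can be composed from the project name and the chosen scheme, using the ServiceBusEnvironment helper from the SDK’s Microsoft.ServiceBus assembly (the exact API surface varied slightly between CTP releases, and “project-name” is just the placeholder used above):

using System;
using Microsoft.ServiceBus;

class RootUriSample
{
    static void Main()
    {
        // One project name, three transport schemes, same namespace root.
        Uri sbRoot    = ServiceBusEnvironment.CreateServiceUri("sb",    "project-name", string.Empty);
        Uri httpRoot  = ServiceBusEnvironment.CreateServiceUri("http",  "project-name", string.Empty);
        Uri httpsRoot = ServiceBusEnvironment.CreateServiceUri("https", "project-name", string.Empty);

        Console.WriteLine(sbRoot);    // sb://project-name.servicebus.windows.net/
        Console.WriteLine(httpRoot);  // http://project-name.servicebus.windows.net/
        Console.WriteLine(httpsRoot); // https://project-name.servicebus.windows.net/
    }
}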

The previous root URI for a project was sb:|http:|https://servicebus.windows.net/services/project-name/, which was clearly sub-optimal for a number of reasons – some technical, some more philosophical. Technical reasons first:

  • From the customer perspective, the old structure didn’t allow any project to host “magic” well-known files at the root of the project’s domain. There are unfortunately some protocols that require this, even though it’s broadly considered bad practice to bake assumptions about the location of resources into protocols. Amongst the “offenders” are robots.txt, favicon.ico and w3c/p3p.xml, but also Adobe Flash’s cross-domain policy crossdomain.xml and – taking a page out of Adobe’s playbook in that case – Microsoft’s Silverlight with ClientAccessPolicy.xml. The Service Bus was therefore inherently incompatible with Flash and Silverlight, unless you served up the apps from within the Service Bus itself. I’m quite sure that there are numerous proprietary protocols used with all sorts of applications that follow a similar pattern, and so far those applications could not be exposed through the .NET Service Bus, because there wasn’t any way to put anything at the root. Now there is, even though it requires a little WCF trick that I’ll explain in a separate post specifically addressing crossdomain.xml and ClientAccessPolicy.xml; a rough sketch of the idea follows right after this list.
  • The most important reason for us was that we need to be able to scale out the system into an “infinite” number of partitions (or “scale-units”) inside and across data-centers. Therefore, each project now gets its very own DNS record pointing to the scale-unit(s) where the project is hosted. The DNS system we’re leveraging is the one that’s used across most Windows Live properties and has a number of very interesting characteristics that will allow us – over time – to optimize the Service Bus traffic flow and to drive down end-to-end latency.
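
To make the first bullet a bit more concrete, here is a rough sketch of the kind of WCF endpoint that can serve one of those “magic” root-level files – ClientAccessPolicy.xml in this case. The contract name, the deliberately permissive policy document and the commented-out hosting lines are illustrative assumptions on my part; the actual trick is the subject of the separate post mentioned above.

using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

[ServiceContract]
public interface IRootPolicy
{
    // Silverlight probes GET /ClientAccessPolicy.xml at the root of the domain.
    [OperationContract, WebGet(UriTemplate = "/ClientAccessPolicy.xml")]
    Stream GetSilverlightPolicy();
}

public class RootPolicyService : IRootPolicy
{
    public Stream GetSilverlightPolicy()
    {
        // Example policy that allows any caller; tailor this for real use.
        const string policy =
            "<access-policy><cross-domain-access><policy>" +
            "<allow-from http-request-headers='*'><domain uri='*'/></allow-from>" +
            "<grant-to><resource path='/' include-subpaths='true'/></grant-to>" +
            "</policy></cross-domain-access></access-policy>";

        WebOperationContext.Current.OutgoingResponse.ContentType = "application/xml";
        return new MemoryStream(Encoding.UTF8.GetBytes(policy));
    }
}

// Hosting sketch: project the endpoint at the project root so that
// /ClientAccessPolicy.xml resolves at the root of the domain, e.g.
//   var host = new ServiceHost(typeof(RootPolicyService),
//       new Uri("https://project-name.servicebus.windows.net/"));
//   host.AddServiceEndpoint(typeof(IRootPolicy), new WebHttpRelayBinding(), "")
//       .Behaviors.Add(new WebHttpBehavior());
//   host.Open();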

What matters even more to us from an “aesthetics” and design perspective is that we really want a project’s namespace to be clean and isolated, and we want to avoid any artificial, system-imposed pollution in the namespace. The fact that there are sound technical reasons supporting that desire is even more helpful in creating consensus around this.

At its core, the Service Bus namespace is a federated, hierarchical service registry whose structure is dictated and owned by “the project”. The difference between the Service Bus namespace and a “classic” service registry system like DNS or UDDI or LDAP is that services or messaging primitives are (usually) not only referenced by the registry, but projected straight into it, so that you can interact with the registry and with the services or messaging primitives it contains using similar or identical programming interfaces and within the scope of a unified namespace. We intentionally blur the distinction.

The service registry’s “discovery” (or “browsing”) protocol is based on a simple, conceptually infinite, nested hierarchy of ATOM 1.0 feeds. (In case you are wondering: Yes, we’ve got work lined up to provide flattened, non-hierarchical discovery overlaid over the hierarchy.) If you project an ATOM 1.0 endpoint into any name in the name hierarchy and choose to make that endpoint discoverable, the transition from the discovery hierarchy across the Service Bus namespace into the discovery of the resource hierarchy provided by your endpoint is absolutely seamless. That’s a design point.
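
As a sketch of what such a discovery walk can look like from a client, the following assumes that the names under the project root are publicly discoverable and that child feeds are advertised as ATOM links on each entry; the feed conventions and the project name are my illustration, not a spec:

using System;
using System.ServiceModel.Syndication;
using System.Xml;

class RegistryWalker
{
    // Walks the nested discovery feeds, printing each entry and following
    // links that advertise another ATOM feed one level further down.
    static void Walk(Uri feedUri, int depth)
    {
        if (depth > 3) return; // keep the sketch bounded

        using (XmlReader reader = XmlReader.Create(feedUri.AbsoluteUri))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                Console.WriteLine(new string(' ', depth * 2) +
                    (item.Title != null ? item.Title.Text : "(unnamed)"));

                foreach (SyndicationLink link in item.Links)
                {
                    Uri child = link.GetAbsoluteUri();
                    if (child != null && link.MediaType == "application/atom+xml")
                        Walk(child, depth + 1);
                }
            }
        }
    }

    static void Main()
    {
        Walk(new Uri("https://project-name.servicebus.windows.net/"), 0);
    }
}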

What makes the namespace “federated” is that services or messaging primitives can be projected into the shared namespace from “anywhere”. Typically, the path portion of a URI represents a relatively tightly collocated set of resources residing across a web farm or a database cluster, with the authority portion identifying (directly or indirectly) the target cluster.

.NET Service Bus URIs obviously look exactly like that, but they are quite different.

Imagine you have a distributed setup with 3 different “order processing” systems: one for the U.S. near Seattle, one for the EU near Düsseldorf, and one for the SEA region in Singapore. Let’s also assume that I’m not talking about a multi-national mega-corporation here, but about a trading company with some 40 people that happens to have these three offices. Let’s also assume that they are using a customized ISV application that has been adapted for them by a (plug!) Microsoft Certified Partner and that the application is primarily designed to run on local servers. Let’s also assume that it would be difficult (or prohibitively expensive) for them to get a static IP address and a corresponding, secure network setup that would allow them to wire up the application at the respective sites to the outside world. If you are helping customers with business apps, you might find that this scenario resonates with your experience.

The .NET Service Bus can help with the connectivity issues by letting you project the application’s endpoints into the Service Bus namespace. That means that the application’s endpoints listen for messages on and inside the .NET Service Bus instead of on some local network listener on-site. There is no need to open any inbound firewall port, no need to touch the NAT setup, and no need to do anything with DNS. Clients talk to those endpoints. The Service Bus namespace helps with organizing those applications in a way that lets you look at the resulting distributed system as “one”, even though it spans sites:

https://littletradingcorp.servicebus.windows.net/orders/seattle/
https://littletradingcorp.servicebus.windows.net/orders/dusseldorf/
https://littletradingcorp.servicebus.windows.net/orders/singapore/
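
As a rough sketch of how one of the sites projects its listener into that namespace, here is what the Seattle “orders” endpoint might look like, using the TCP relay binding from the SDK’s Microsoft.ServiceBus assembly and the sb: form of the same name. The contract is made up for illustration, and the credential setup, which changed between CTPs, is omitted:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public void SubmitOrder(string orderXml)
    {
        Console.WriteLine("Order received at the Seattle site.");
    }
}

class SeattleHost
{
    static void Main()
    {
        // The listener runs on a machine behind NAT and firewall in the Seattle
        // office; the address below is where it is reachable on the Service Bus.
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "littletradingcorp", "orders/seattle");

        var host = new ServiceHost(typeof(OrderService), address);
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpRelayBinding(), address);
        // ...attach the relay credentials (e.g. a TransportClientEndpointBehavior)
        // to the endpoint here before opening...
        host.Open();

        Console.WriteLine("Listening at " + address);
        Console.ReadLine();
        host.Close();
    }
}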

In combination with the .NET Access Control service, you can now overlay a single set of access control rules over the base scope https://littletradingcorp.servicebus.windows.net/orders/ which yields a centrally manageable access control overlay over these three services, even though the actual servers and endpoints are spread around the world.

What makes the naming system very different from DNS is that the .NET Service Bus naming system names endpoints and not hosts. Let’s say that each site also hosts a local “human resources” application; at that company size that may very well be an application that runs on the respective branch manager’s desktop machine or on a small server. That system is quite naturally distinct from the order processing system, and it’s reasonably safe to assume that the company wouldn’t want to collocate it with the order processing system. Let’s project these into the namespace as well, and we’ll certainly assume they have different Access Control rules that apply to the respective root scope:

https://littletradingcorp.servicebus.windows.net/hr/seattle/
https://littletradingcorp.servicebus.windows.net/hr/dusseldorf/
https://littletradingcorp.servicebus.windows.net/hr/singapore/

If we were trying to provide direct access to the “orders” and “hr” endpoints using HTTP with a “normal” setup, we would either need – especially with HTTPS – two public, static IP addresses for each site, mapped to the respective machines; or some gateway server that dispatches the requests locally based on a host header (requiring distinct DNS entries) or on the path prefix; or we’d have to resort to non-standard ports – and we’d open up the floodgates for arbitrary and potentially malicious inbound traffic on the opened ports. And once we had that, we would have to map those IP addresses into some DNS naming structure. That’s quite a bit of networking work. Not necessarily complicated for someone who is well versed in these matters, but at the very least it’s a significant cost point. The resulting complication grows with each endpoint, since DNS identifies the public IP gateway and not the endpoint, so things get trickier and trickier. If we want to help small and medium businesses “go digital” and intercommunicate more efficiently over the web (fax is still king in very many places and businesses), all those networking acrobatics don’t scale well.

Mind that I argue that not all business owners happily accept the notion of putting all their data and apps into someone else’s data center or “into the cloud”. Mind also that I don’t think I’m contradicting myself here. The .NET Service Bus is the “+” in what Microsoft calls “Software+Services”. It facilitates communication between places, each of which can be in the cloud or at a customer-controlled site; it’s not the place where you “put data”.

So much for this post – in the next we’re going to look at the new March 2009 CTP Routers and how they interact and integrate with the namespace structure. 
