Open Source and Interoperability

Open. Source. Is. Not. Interoperability.

Ted Neward is an entertaining and talented writer, not to mention a stand-up guy, and a first-rate technogeek. 

But that doesn't mean he is always right. I was just reading an old post of his, one I missed during my unscheduled sabbatical, in which he writes:

They [Microsoft] need to have an interoperability story that developers can believe in, which means some kind of open-source-friendly play,

I still, still totally do not get why interop and open source are used so interchangeably, by so many people, often people whose viewpoints I respect, such as the esteemed Mr. Neward. Or, if not used interchangeably, why they are viewed as so closely related. I do not see these two things as mutually dependent. They are completely independent factors.

Interop is not open source.  Open source does not guarantee interop. Period.

I am not making quality judgments on either open source or interop.  I am not saying that one of them is good and one is bad.  I am saying they are two different things.  They are "silicon-based computers" and "big-endian bit ordering".  They are related, they are neighbors, they are acquaintances, but they are not interchangeable.  Nor does one imply the other.

Repeat after me:

  1. Interop is not open source.
  2. Interop does not require open source implementations.
  3. Open source does not guarantee interop.

People will disagree with this; I expect to hear feedback saying, "With open source I can see the code, and therefore it is easier to build something that interconnects with it."  I'm not buying that.  That is lazy thinking, bad engineering, or both.  I can imagine that open source would imply interop if you limit the definition of interoperability to mean "connect with previously written code in the same executable image."  In other words, merging distinct code bases.  In that case, just linking to a shared library can work, and sometimes seeing the code offers something better.

But this limited definition of interoperability is definitely not the mainstream one that companies and organizations are dealing with today.  They want to connect large Lego blocks together: connect System A and System B in a business process flow that makes sense, and which may evolve over time.  Large-grained interop.  This is not merging two distinct code bases.  Oftentimes the "code base" is not available at all; it is a pre-packaged app.  Maybe it is Oracle, or a Salesforce.com app.  Maybe the other end is a SharePoint portal, or a medical records management system from Lawson.  Call it system-to-system interop or app-to-app interop if you like; what matters is network protocols, not compile-time constants.  Data interfaces, not code classes or interfaces.
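To make "data interfaces, not code interfaces" concrete, here is a minimal sketch. The purchase-order format is invented for the example; the point is that the consuming system reads a published document format using nothing but the standard Java XML APIs, and never sees, links against, or compiles the producing system's code.

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class DataInterfaceSketch {
        public static void main(String[] args) throws Exception {
            // A purchase order as System A might emit it. The document format is
            // hypothetical; what matters is that it is published and stable.
            String po =
                "<purchaseOrder id='PO-1001'>" +
                "  <customer>Contoso</customer>" +
                "  <total currency='USD'>149.95</total>" +
                "</purchaseOrder>";

            // System B parses the document with the standard XML APIs only.
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(po.getBytes(StandardCharsets.UTF_8)));
            Element root = doc.getDocumentElement();
            System.out.println("Order " + root.getAttribute("id")
                    + " for " + root.getElementsByTagName("customer").item(0).getTextContent()
                    + ", total " + root.getElementsByTagName("total").item(0).getTextContent());
        }
    }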

In that situation, which I am arguing is THE mainstream challenge architects and devs confront when they use the word "interop", looking at source code is not helpful; I will go further and argue it is counterproductive.  Yeah, you read me right: it actually is harmful to look at the code if you want to connect two big apps together.

What is necessary to enable interop in these cases is PROTOCOLS, people.  Standard protocols would be nice to have, but don't misunderstand: standardized protocols are not a requirement for interop.  The requirement is for PUBLISHED protocols, not necessarily standard ones.  PUBLISHED NETWORK PROTOCOLS ALLOW INTEROP.  This is why a Java or .NET app can connect to an IBM transaction processing system, even though the on-the-wire protocols are completely closed and proprietary to IBM.  The protocols are documented.  They are closed yet published.  And because IBM's DTP protocols are published (not publicly per se, but published to those who license them), anyone can implement the client side of the exchange.

The same holds true for, say, Microsoft SQL Server.  There is an on-the-wire protocol known as TDS, or Tabular Data Stream.  Microsoft publishes TDS under license to partners, but I believe some have reverse-engineered it.  The fact that the protocol is stable and published means that, for example, DataDirect can build an OLE DB driver or a type-4 JDBC driver for SQL Server.
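Here is a minimal sketch of what that looks like from the Java side, assuming a hypothetical host, database, and login, and assuming a TDS-speaking JDBC driver is on the classpath. The example uses the open-source jTDS driver, but DataDirect's commercial driver or Microsoft's own would work the same way; the interop comes from the protocol, not the license.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TdsInteropSketch {
        public static void main(String[] args) throws Exception {
            // Register the jTDS driver (assumed to be on the classpath). Any driver
            // that speaks TDS could be substituted; the wire protocol is what matters.
            Class.forName("net.sourceforge.jtds.jdbc.Driver");

            // Hypothetical server, database, and credentials.
            String url = "jdbc:jtds:sqlserver://dbhost:1433/Northwind";
            try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT CustomerID, CompanyName FROM Customers")) {
                while (rs.next()) {
                    // The client never sees SQL Server's source code; it only needs
                    // a driver that implements the published TDS protocol.
                    System.out.println(rs.getString(1) + " : " + rs.getString(2));
                }
            }
        }
    }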

It's the Protocols, Silly!

Standardized protocols are nice to have because the standard, if it is based on freely-available IP (or at the very least, IP available under RAND terms), encourages wider adoption and thus the virtuous cycle inherent in Metcalfe's Law.  Standardized protocols are essential if you want broad interoperability, which of course is really important, and is what most of us are after anyway.  Above I said that standards are not required, and I stand by that statement.  But practically speaking, standards are almost a sine qua non of meaningful interop.

Web Services and XML are just common protocols, and their widespread adoption (owing in large part to the fact that they are standards) is the true source of their value to companies and organizations (again, see Metcalfe's Law).  It is not required that web service endpoints, either client or server, be implemented with an open-source web services stack in order to get good interoperability.  Instead it is essential that the endpoints conform to the standard protocol definitions.  And the corollary is that the protocol definitions must be sufficiently clear, complete, relatively simple to implement, and relatively simple to test, so that faithful implementations of the protocols can be validated easily and will interconnect transparently.

This is why .NET WCF clients can easily interconnect with web services endpoints running under IBM WebSphere Application Server, even though neither the .NET Framework nor the WebSphere web services libraries are open source.
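For a wire-level view of that kind of exchange, here is a minimal sketch: a plain Java client POSTs a SOAP 1.1 envelope over HTTP. The endpoint URL, operation name, and SOAPAction are invented for the example; the point is that any client stack that emits a conformant envelope, open source or not, will interoperate with any conformant server.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class SoapWireSketch {
        public static void main(String[] args) throws Exception {
            // A SOAP 1.1 envelope for a hypothetical getQuote operation.
            String envelope =
                "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
                "  <soap:Body>" +
                "    <getQuote xmlns='http://example.com/quotes'>" +
                "      <symbol>IBM</symbol>" +
                "    </getQuote>" +
                "  </soap:Body>" +
                "</soap:Envelope>";

            // Hypothetical endpoint; in the scenario above it might be hosted on WebSphere.
            URL url = new URL("http://appserver.example.com:9080/quotes/QuoteService");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            con.setRequestProperty("SOAPAction", "\"http://example.com/quotes/getQuote\"");
            try (OutputStream out = con.getOutputStream()) {
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
            }

            // Whatever stack produced the request, a conformant server replies with a
            // SOAP envelope over HTTP. That published contract is the whole interop story.
            System.out.println("HTTP status: " + con.getResponseCode());
        }
    }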

Open Source Is Not Interoperability.
