How to Integrate Two Software Systems?
Many people believe that if an integration works, it was built the right way. In reality, an integration working correctly is no guarantee that it was built using the best available methods, nor that it will still work correctly five years down the road.
Technologies change, products get upgraded, colleagues come and go, data models are modified, and of course software has an irritating way of getting “brittle.” Brittle in the sense that something well-thought-out and robust when first developed becomes far too easy to break, with unintentionally obfuscated or undocumented code that few people remember how it was supposed to work. To an engineer, brittle is very easy to understand. It means it breaks like an old piece of yard sale pottery when handled, and when you try to repair it with Super Glue, other pieces may easily come undone in the process. (To say nothing of the old, perhaps chintzy style of the piece.)
That’ll never happen to me, right? Well, it won’t if the integration is designed well from the beginning. That generally means two things: integrating at the API layer and using open standards. A third best practice, REST, also deserves mention.
An API layer simply means that the underlying database code has been “abstracted” to a conceptual data model through an API and service layer. So instead of creating a user account directly in the Medidata Rave database, for example, you create a user object through a service layer that “knows” how to create that account. The point is that you’re not writing directly to the data layer but to a higher level that stays constant, and you’re writing to something that everybody at your business understands: in this case, an “account.”
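As a rough sketch of what that separation looks like in code (all names here are hypothetical, not any vendor’s actual API), the caller deals only in the business-level “account” concept, while the service layer is the only place that knows how accounts are stored:

```python
from dataclasses import dataclass


@dataclass
class Account:
    """The business-level concept everyone at the company understands."""
    username: str
    email: str


class AccountService:
    """Hides storage details behind a stable, business-level API.

    If the underlying database schema changes, only this class
    changes; every integration that calls it is unaffected.
    """

    def __init__(self):
        # Stand-in for the real data layer (a database in practice).
        self._store = {}

    def create_account(self, username: str, email: str) -> Account:
        # Validation and persistence details live here, not in callers.
        account = Account(username=username, email=email)
        self._store[username] = account
        return account

    def get_account(self, username: str) -> Account:
        return self._store[username]


service = AccountService()
created = service.create_account("jdoe", "jdoe@example.com")
```

The integration code never touches `_store` directly; it only ever asks the service for an “account,” which is exactly the stability the API layer is meant to buy you.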
Using open standards means that instead of writing to a Medidata-specific or Apple-specific data model, you write to an industry-standard data model or perhaps an Internet standard. This is great: as long as the application you’re integrating with supports that standard, you can reuse your integration easily. It isn’t always perfect, but you can be certain that most of the work is already done. You also don’t have to worry about your vendor changing their API interfaces, because they cannot do that and still claim to support the standard.
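Sticking with the user-account example, one such industry standard is SCIM (System for Cross-domain Identity Management, RFC 7643), which defines a common schema for provisioning users. A minimal sketch of building a SCIM 2.0 user payload that any SCIM-compliant system should accept (the user details are made up for illustration):

```python
import json

# A user in the SCIM 2.0 core schema (RFC 7643). The "schemas" URN
# identifies the standard data model, not any one vendor's model.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
}

# Serialized form you would send to any SCIM-compliant endpoint.
payload = json.dumps(scim_user)
```

Because the shape of the payload is dictated by the standard rather than by a vendor, the same code can provision users into any system that advertises SCIM support.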
REST, or REpresentational State Transfer, is the software architecture for distributed systems that governs the World Wide Web itself. Using REST means that application integrations follow Internet standards for HTTP and, more generally, the concepts of web client and web server. Practically, it means your integrations talk to each other the same way your web browser talks to a web server. It stands in contrast to the Simple Object Access Protocol (SOAP), which allows the two sides to communicate in any number of ways. REST is often better because it boils integrations down to a very simple layer that works just like a web browser, so everything else (servers, gateways, proxies, firewalls) works just as easily. For distributed systems and integrations, simpler is often better. For maintaining an application integration over a long period of time, simpler can be absolutely essential.
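Concretely, “talking like a web browser” means mapping each operation on a resource to a standard HTTP method and URL. A sketch using Python’s standard library (the host and paths below are hypothetical; the requests are constructed but not sent):

```python
from urllib.request import Request

# In a REST integration, the "account" resource lives at a URL, and
# create/read/update/delete map onto the standard HTTP verbs.
BASE = "https://api.example.com/accounts"

create = Request(
    BASE,
    data=b'{"userName": "jdoe"}',
    method="POST",
    headers={"Content-Type": "application/json"},
)
read = Request(f"{BASE}/jdoe", method="GET")
update = Request(
    f"{BASE}/jdoe",
    data=b'{"email": "new@example.com"}',
    method="PUT",
    headers={"Content-Type": "application/json"},
)
delete = Request(f"{BASE}/jdoe", method="DELETE")

# Because these are ordinary HTTP requests, any intermediary that
# understands the web (proxy, cache, gateway, firewall) can handle
# them with no special configuration.
```

This is the simplicity argument in miniature: there is nothing here a web server, proxy, or firewall hasn’t already seen a billion times.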
There are many other things that keep your integration from getting dried out and brittle with age. The way you and your vendor test makes a big difference, which I will cover in another post. Good documentation helps, of course, as does a careful scheme of versioning and deprecation. Either way, an integration is like any other software project: it’s well and good to get it working, but you cannot forget about maintaining it over the long term. Not as much fun, perhaps, but just as important to the business owner and the CTO.