IBM recently bought Cast Iron Systems to accelerate its Cloud efforts. According to @monkchip’s download from WebSphere GM Craig Hayman, IBM was specifically interested in the “specific application integration patterns,” née adapters (“Cast Iron generates adapters that run on existing integration tools such as WebSphere MQ, or JPA.”), as well as Cast Iron’s capability to manage API proliferation in the enterprise. As IBM sits squarely at the heart of enterprise Cloud Computing, the move validates the idea that data integration is a key to getting Cloud right. IBM wouldn’t have made it if the issue weren’t coming up with many customers.
But data silo’d within applications has been a problem for quite some time, so why is IBM moving on this now? Based on some work I’ve done with sales GTM organizations and their implementations of Salesforce, I would posit that Salesforce has reached the tipping point as a major system of record when it comes to customer data. I have worked with one client where there is an unmistakable tension between the incumbent SOR, Oracle, which they use to report to the SEC, and Salesforce, the SOR that the sales organization depends on. In an age of Sarbanes-Oxley, keeping both datasets consistent is clearly a priority for executives who must certify their public filings.
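To make that tension concrete, here is a minimal sketch of the kind of reconciliation check such an organization might run between its two systems of record. The record shapes, field names, and customer IDs are entirely hypothetical, not any vendor's actual API; real integrations would pull these records over each system's interfaces.

```python
# Hypothetical sketch: flagging divergence between two systems of record,
# e.g. an incumbent ERP and a SaaS CRM. All names and shapes are illustrative.

def diff_records(incumbent, saas, fields=("name", "annual_revenue")):
    """Return per-customer mismatches between the two datasets.

    Each argument maps a shared customer ID to a record dict. A customer
    present in only one system is reported as missing from the other.
    """
    mismatches = {}
    for cust_id in incumbent.keys() | saas.keys():
        a, b = incumbent.get(cust_id), saas.get(cust_id)
        if a is None or b is None:
            mismatches[cust_id] = "missing from " + ("incumbent" if a is None else "SaaS")
            continue
        diffs = {f: (a.get(f), b.get(f)) for f in fields if a.get(f) != b.get(f)}
        if diffs:
            mismatches[cust_id] = diffs
    return mismatches

# Illustrative data: the two SORs have drifted apart.
oracle_side = {
    "C001": {"name": "Acme Corp", "annual_revenue": 1_200_000},
    "C002": {"name": "Globex", "annual_revenue": 800_000},
}
salesforce_side = {
    "C001": {"name": "Acme Corp", "annual_revenue": 1_500_000},  # stale figure
    "C003": {"name": "Initech", "annual_revenue": 300_000},      # never synced back
}

report = diff_records(oracle_side, salesforce_side)
```

The interesting part is less the diff itself than the governance question it raises: once a mismatch is found, which system wins? That answer is exactly what integration products like Cast Iron's adapters are meant to encode.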
Salesforce is by far the largest SaaS vendor, but there are many others out there growing rapidly as well. While long term I am a believer in unified data pools for the enterprise, the next 3-5 years look likely to be dominated by loosely coupled data pools that are increasingly “networked” through API proliferation and services enablement. In the long term, I expect the market to start asking “What does this do to my data pools?” instead of “What does this do to my application?”.