Originally written for and posted on TheAtlantic.com.

IBM estimates that 90% of the world’s data has been created in the last two years. The inherent benefits of cloud, such as responsiveness and flexibility, will be essential to how organizations manage and apply that data. Gartner, for example, estimates that by 2016, 50% of data will be stored in the cloud.

A wave of new managed services is hitting the market. Cloud, in essence, is the provision of IT ‘as a service’. In response, we are seeing a proliferation of new services and service providers. Organizations use a variety of models to determine what to source externally and what to do in-house.

One of these models is Geoffrey Moore’s well-known core/context analysis framework. Let’s use this model to determine how organizations can best address the data explosion: where do data and predictive analytics fit in the model?

A short recap: the core/context model divides an organization’s output, such as services or products, into “core” and “context”. “Core” represents everything that makes the organization unique, its unique selling points. “Context”, on the other hand, is everything else. Used correctly, the core/context model gives the organization insight into where to invest and what to externalize. “Core”, in general, needs investment to strengthen the organization’s competitive position. “Context” can most likely be externalized, sourced from external service providers.

Now the interesting question: where does the data explosion fit in? The answer consists of two parts: data storage and predictive analytics. Data storage can be considered “context” for most organizations because it adds no competitive advantage by itself. Data storage is therefore a good candidate to externalize to, for instance, a cloud storage provider.

On the other hand, using the data to perform predictive analytics can deliver a competitive advantage. Insight plus hindsight equals foresight. This is the second part of the data explosion: predictive analytics. But it is not the predictive analytics itself that makes this “core”; it is the outcome of the analytics that counts. So both data storage and predictive analytics can be considered “context”. But then, what would be the best way to externalize Big Data?

Cloud to the rescue? Definitely for data storage, as long as the cloud storage service is fit for purpose given the classification of the data. No organization wants to find its data where it does not belong, such as with the competition. But when that data is used for predictive analytics, it is almost inevitable that the data and the analytics application be located close together, to reduce unneeded network traffic and remove the performance bottleneck at the connection between data and application.

The optimal solution therefore combines low-cost but reliable data storage with predictive analytics software, ‘as a service’. The data classification, along with the difference in total cost of ownership between public and private cloud services, will determine which cloud deployment model an organization turns to. Ideally, Big Data ‘as a service’ will be consumed as software as a service, sourcing as much ‘context’ as possible.

