Cloud Economics

A company that owns or leases datacenter space pays an annual hardware cost, per server, that consists of depreciation plus power, space, and cooling. Depreciation is the purchase price of the hardware divided by the accounting department's view of how many years that machine will be useful.

A server price of $6,000, depreciated over 3 years for a cost of $2,000 per year, plus power, space, and cooling at 40% of that ($800/year), for a total of $2,800 per year, would not be atypical for many workloads.
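To make the arithmetic concrete, here is a minimal sketch of that per-server cost model. The 3-year depreciation schedule and the 40% overhead rate are just the figures from the example above, not universal constants.

```python
# Per-server annual cost: depreciation plus power, space and cooling.
# All figures are the illustrative numbers from the example, not real quotes.

server_price = 6000          # purchase price, USD
depreciation_years = 3       # accounting department's useful life
overhead_rate = 0.40         # power, space and cooling, as a fraction of depreciation

depreciation = server_price / depreciation_years       # $2,000 per year
power_space_cooling = depreciation * overhead_rate     # $800 per year
annual_cost_per_server = depreciation + power_space_cooling

print(f"Annual cost per server: ${annual_cost_per_server:,.0f}")  # $2,800
```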

The fundamental economic assumption made at this layer by infrastructure cloud providers is that economies of scale can reduce these numbers, and that the savings can be shared with customers. Large cloud providers buying identical servers in bulk can negotiate better rates, and large blocks of servers that conform in size, power, setup, etc. lead to efficiencies that extend to power, space, cooling, and networking costs. Even customized hardware (e.g. render farms, crypto) benefits from scale.

All things being equal, customers should assume that a large, well-run cloud infrastructure provider will be less expensive, on a per-server basis, than buying and running servers themselves.

Per-server rates, however, are often not the largest cost consideration. Many workloads have seasonal (or daily, weekly, etc.) cycles in total demand that can vary by as much as 10X. Underbuying for the peaks means the system simply isn't available for at least some set of customers. Overbuying avoids lost business due to bad capacity planning, but at scale it can result in the large expense of unused servers.

In the total cost equation, overbuying is often the dominant variable.
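A rough sketch of why that is, assuming the 10X peak-to-average ratio mentioned above and the ~$2,800/year per-server cost from the earlier example. The fleet sizes are illustrative, not taken from any real system.

```python
# Cost of provisioning for the peak when the average load is much lower.
# The 10X ratio comes from the text; the fleet size is a made-up illustration.

annual_cost_per_server = 2800    # depreciation + power, space and cooling
average_load_servers = 100       # servers needed for the average workload
peak_to_average = 10             # seasonal/daily peak vs. average demand

peak_servers = average_load_servers * peak_to_average          # buy for the peak
total_cost = peak_servers * annual_cost_per_server             # $2.8M per year
used_cost = average_load_servers * annual_cost_per_server      # $280K of useful work

print(f"Provisioned for peak:  {peak_servers} servers, ${total_cost:,} per year")
print(f"Average utilization:   {average_load_servers / peak_servers:.0%}")
print(f"Cost of idle capacity: ${total_cost - used_cost:,} per year")
```

With numbers like these, the idle capacity dwarfs both the per-server rate and the power, space, and cooling overhead.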

The elasticity of cloud infrastructure, assuming your application can benefit from it, is where the potential for real savings and higher availability lies. Consumption billing allows these savings to be realized at a fine level of granularity: you pay only for the resources you use (at the expense of more complex budgeting).
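A hedged comparison of the two models, using the same illustrative load profile as above. The $0.50 per server-hour rate is a hypothetical cloud price, chosen only to show the shape of the trade-off; real pricing varies widely by provider and instance type.

```python
# Owning for the peak vs. consumption billing for the average load.
# The hourly rate is hypothetical; the load profile matches the sketch above.

hours_per_year = 24 * 365
peak_servers = 1000
average_servers = 100                  # 10X peak-to-average, as above
hourly_rate = 0.50                     # hypothetical per-server-hour price

owned_cost = peak_servers * 2800       # fixed: buy and run enough for the peak
elastic_cost = average_servers * hours_per_year * hourly_rate  # pay only for use

print(f"Owned (sized for peak): ${owned_cost:,} per year")
print(f"Elastic (consumption):  ${elastic_cost:,.0f} per year")
```

Note that the effective per-server rate in the elastic case is higher than the owned one; the savings come entirely from not paying for the peak capacity that sits idle most of the year.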

In the roughly 25-year client/server wave of computing, Microsoft and Oracle were large suppliers of technology largely due to SQL and Exchange. Storage systems are immensely complex and difficult to replicate. At the beginning of the wave, Oracle provided scale and power, at high cost. Microsoft's initial position was value, at more limited scale. But as time progressed, both systems provided scale and value. Application developers, almost without exception, chose one of those two SQL systems as the core of their application, because they were immensely powerful, economical, and getting better in both dimensions over time as the personal computer continued to bend the cost curve down.

It’s worth noting that during this wave, SAP established itself as a critical application in most large companies.

In the current wave, storage technology has migrated from two providers of SQL to many providers in the form of open source.

Neither of the two largest cloud vendors has yet been able to establish a high value-add (and high margin), critical and broadly accepted application technology. Both companies offer a combination of proprietary storage technology and hosting popular open source systems "as a service", at higher value and margin than hosting them yourself on IaaS. Google is banking on its proprietary storage, app model, and AI technology.

To re-establish the last-wave margins and customer loyalty (and lock-in), it is critical that cloud providers move up the technology value chain.

Both Salesforce and Office 365 blur the lines between applications and platforms. Both are examples of high-value cloud-wave offerings. SAP is working on the same formula, as is everyone with an on-premises ERP, CRM, and/or CPQ system.

A large unfilled need for most large companies remains help with their significant footprint of "legacy" (i.e. the ones working today) systems. Migrating applications to the cloud does not make them cheaper to run, or magically elastic, if they require significant human investment (e.g. DBAs) to keep them running and were not designed to scale up. This is likely the largest class of applications in use today.

It's possible that in this wave the "top" of the technology stack is not components that make application development easier, but completely finished solutions. The next 25 years might be a shakeout of the best 2 or 3 applications in each of many hundreds of verticals, each with its own extensibility story. The winners and losers in such a model are not obvious, at least to me.

Mike.
