To the Cloud?! – The CIO’s Decision

*Originally posted on Business 2 Community*

Every CIO eventually faces the decision of whether to build a data center or move to the cloud. This is especially true for companies with infrastructures of a certain size, where conventional wisdom holds that once a platform grows past 1,000 servers, it becomes cheaper to build than to rent. Historically, the argument has been that building your own infrastructure is cheaper because renting essentially means letting someone else do the work and take on the cost, then charging you a margin to use it. While this will always be true to a certain extent, the cost of cloud computing continues to drop to the point where the cost benefit of building, even at scale, is no longer so clear.

Consider that it costs, on average, $250 a month to house, operate and depreciate a physical server, compared to $300 a month for a comparable AWS Reserved Instance, and the cost advantage of building seems clear. However, Google now offers a similarly sized instance for $254 a month on its GCE platform, and other providers are likely to follow with comparable price reductions to stay in the conversation. Competition in this space is fierce and intensifying, which ultimately means better pricing for customers.
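To make the arithmetic concrete, here is a quick back-of-the-envelope comparison using the figures above; the 1,000-server fleet and three-year horizon are illustrative assumptions, not figures from the providers:

```python
# Back-of-the-envelope cost comparison using the per-server monthly
# figures quoted above. Fleet size and horizon are assumptions.

FLEET_SIZE = 1_000   # servers, the scale where "build" conventionally wins
MONTHS = 36          # a typical depreciation window for owned hardware

monthly_cost = {
    "owned data center": 250,  # house, operate and depreciate a server
    "aws reserved":      300,  # comparable AWS Reserved Instance
    "google gce":        254,  # similarly sized GCE instance
}

for option, per_server in monthly_cost.items():
    total = per_server * FLEET_SIZE * MONTHS
    print(f"{option:18s} ${total:,} over {MONTHS} months")

# The gap between owning and GCE works out to $4 per server per month,
# or $144,000 over three years for a 1,000-server fleet -- small enough
# that the staff, real estate and lead-time costs of ownership, which
# never hit this math, can easily erase it.
```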

There are also other advantages a cloud platform brings that don't necessarily show up on an accounting balance sheet.

Speed to market

Most organizations require a certain number of approvals to execute a hardware order. Before you even have that ROI conversation with the CFO to justify the purchase, there are a number of other considerations. Do you have enough rack space for the new capacity? Do you need additional rack switches? How many more network ports do you need? If you need SAN access, how many fiber runs are required? That simple server expansion project is now looking much more complex and expensive.

After the decision has been made, it takes another 2-3 weeks to take physical delivery of a server order, and once the servers arrive, you still need to rack, cable and configure them. By the time all this is done, you are looking at 4-5 weeks at best to get a server into production; on average, it's more likely to be 2 months. In a cloud environment, the entire process can be condensed to minutes. A pretty dramatic difference, actually.
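To illustrate just how little ceremony the cloud path involves, the sketch below provisions four servers with a single API call; the AMI ID, instance type and key pair are placeholders, not recommendations:

```python
import boto3

# Hypothetical parameters -- substitute your own image, type and key pair.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=4,
    MaxCount=4,
    KeyName="prod-keypair",           # placeholder key pair
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])

# Instances are typically reachable within a couple of minutes --
# versus weeks for a physical order to be racked, cabled and configured.
```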

Ability to experiment with new products at a relatively low cost

Your CPO just came up with a great product idea, the team is excited and the product has the potential to move your company to another level. However, your CFO is not so sure he wants to invest a cool $1M into building the capacity to handle the projected business volume. What if you build it and they don't come? This is a huge problem for most organizations; they simply do not have the luxury of getting it wrong, at least not too many times. By the time this has been debated and the team has come to a consensus, someone else might have beaten them to market. Or worse, your organization becomes paralyzed by fear of decision making and nothing happens.

With a cloud platform you can afford to get it wrong, because it allows you to fail FAST and fail many times without necessarily betting the company. This is why nearly all startups build their products in cloud-based environments, and I would argue that larger organizations will have to adopt the same approach to stay relevant.

Peak, valley and seasonal traffic

For most platforms, the traffic pattern is never constant; there are daily, weekly, monthly and seasonal peaks and valleys. If you own your own data center, you need the capacity to handle those peak traffic levels (assuming you did a really good job of anticipating the traffic patterns). This means a certain percentage of your assets will sit idle or underutilized during non-peak periods. It is similar to airlines, which need more planes to handle the summer travel season and fewer during the winter months; just as planes are expensive for airlines when they sit idle, so too are data center assets.

In a cloud-based environment, you can simply adjust your resource consumption to align with your traffic volume, buying only as many resources as you need. There are even tools that let you achieve this in an automated fashion; for example, AWS Auto Scaling gives you the ability to add VM capacity based on pre-defined conditions. Conversely, rules can be set up to spin down resources when other conditions are met. It can even detect when servers fall into a bad state and replace the affected instances as needed.
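As a concrete illustration, here is a minimal sketch of such a setup using boto3; the group name, launch configuration, capacity bounds and 60% CPU target are all illustrative assumptions rather than recommendations:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep between 4 and 40 instances, replacing any that fail EC2 health checks.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",        # hypothetical group name
    LaunchConfigurationName="web-tier-lc",  # assumed to exist already
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=8,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)

# Track average CPU: scale out above the target, scale back in below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```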

Keeping up with latest technologies

Let's say Intel has just released the latest version of its server-on-a-chip, which increases processing power while cutting power consumption. This just rendered your shiny new purchase completely obsolete. Now you are stuck with what you have until your next tech refresh, which may or may not happen by the time the asset is fully depreciated. Granted, your cloud provider might not necessarily upgrade its infrastructure with the new technology immediately, but given their scale and purchasing power, cloud providers are on a much faster refresh cycle than most companies.

Buy only the components you need

While there are many benefits to operating in a cloud environment, many companies have sunk costs and contractual obligations tied to existing physical infrastructure that make the jump difficult or expensive. In these situations, it is entirely possible to consider a hybrid approach. This allows you to take advantage of the benefits that cloud platforms offer without incurring the cost of decommissioning your current infrastructure.

The idea is to use the cloud as an extension of your existing platform; any component that can be decoupled is a candidate to move. For example, if you need a backup or archive solution, it is relatively quick and easy to replicate your data to AWS S3/Glacier or GCS. These are relatively low-cost solutions with strong availability and durability guarantees. The cloud can also serve as an effective DR site; while it takes some effort, you can replicate the entire tech stack and keep the cost low until it's needed, which is far more cost effective than any DR platform you could build on your own.
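As one possible sketch of the archive case, an S3 lifecycle rule can tier aging backups down to Glacier automatically; the bucket name, prefix and day counts below are assumptions for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Move backups to Glacier after 30 days; expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```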

Extending pieces of applications to the cloud is also possible. The most common implementation is to scale out the frontend (web) tier to handle additional traffic, migrating all or part of it to the cloud and adjusting capacity as needed while keeping the backend infrastructure in house.
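One common way to split frontend traffic in such a hybrid setup is weighted DNS. The sketch below uses Route 53 to send roughly 70% of requests to a hypothetical in-house pool and 30% to a cloud pool; the zone ID, hostname and addresses are all placeholders:

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, ip, weight):
    """Build one weighted A record pointing at a frontend pool."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",  # placeholder hostname
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,           # relative share of traffic
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",     # placeholder zone ID
    ChangeBatch={
        "Changes": [
            weighted_record("on-prem", "203.0.113.10", 70),
            weighted_record("cloud", "198.51.100.20", 30),
        ]
    },
)
```

Shifting the weights over time is also a low-risk way to migrate the tier gradually rather than in one cutover.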

There are some limitations to this approach, however. Unless your application is insensitive to latency, your cloud provider will have to be relatively close to your current data center, and finding one may be difficult if your location is not near one of the major fiber hubs. You will also need a solid VPN solution if there is a large bandwidth requirement between the two sites; this can be expensive, and if you are not careful, you may end up spending so much on the VPN that the cost outweighs the benefit.

Cloud providers now offer full suites of products, from simple processing power and data storage to more complex components such as data warehouses and UMBs. You can buy the entire stack or pick only the components you need and integrate them with what you have.

Increasingly, using a cloud platform is no longer just an IT infrastructure decision; many organizations now recognize it as a competitive advantage in the marketplace. Putting the buzzword factor aside for a moment, moving to the cloud means that companies can price their own services on the same usage-based cost model their infrastructure now follows.

The Viant Ad Cloud is a perfect example of this; being in the cloud means that we are able to align our infrastructure size and cost with our business. In turn, this allows us to offer the same advantages to our clients; they now have the same flexibility in terms of their ad tech spending. From the Identity Management Platform (IMP) to the Media Execution Platform (MEP) to the Data Analytics Platform (DAP), our clients can pick and choose the components they want and buy as much as they need without the heavy upfront cost of building and managing their own platform.

As Chief Information Officer for Viant, Linh oversees all technology initiatives across the company to ensure security, data protection and ongoing connectivity for all Viant properties, clients and platforms.

Linh took the helm as CIO after seven years as senior vice president of technical operations at Myspace. While in that role, he oversaw a production infrastructure of over 12,000 servers at the largest social network of its time.

Prior to joining Myspace in 2007, Linh served in a variety of engineering and technical management roles. Most recently, Linh was a senior manager at Standard Chartered Bank, a technical director at Cable & Wireless and an advisory software engineer at Candle Corporation.

Linh graduated from the University of Manitoba with a BS in Computer Engineering, and resides in the Los Angeles area.