Times have changed. Nowadays, no less than 95 percent of IT leaders say they’re concerned about vendor lock-in, and they’re fighting it with a tool near and dear to our hearts: open cloud. That’s what we learned from a recent survey of 75 CIOs, CEOs, and other IT decision-makers across a range of industries. The results show that today’s IT leaders value interoperability, portability, and the freedom to mix and match platforms as they see fit. In other words, they’re determined to control their own destinies with help from open-cloud technology.
Multi-cloud and hybrid are the models of the future
Cloud adoption and migration are accelerating, and our survey results reflect this trend: after traditional IT services, cloud solutions ranked second among the top expenses in respondents’ 2018 technology budgets. Respondents are also investing heavily in cloud-related activities such as data migration and data cleansing.
I am impressed by how IT leaders are prioritizing flexibility, autonomy, and freedom from vendor lock-in. Nearly 90 percent of respondents said they’ve already adopted or are currently moving toward a multi-cloud strategy, using two or more private or public clouds. An equal proportion said they’ve already adopted or are currently moving toward a hybrid strategy, using a combination of cloud-based and on-premises resources to build their infrastructure.
Compared to on-premises and single-cloud models, hybrid and multi-cloud deployments empower organizations to mix and match solutions that meet the unique requirements of each project or workload. They also reduce the lock-in risk of relying exclusively on one technology provider by making applications more portable and flexible.
In addition to openness, another high priority may help explain many respondents’ enthusiasm for these approaches: reliability, cited by 41 percent of survey participants as the most important factor when selecting an open-cloud platform. Using multiple, distributed infrastructures can help businesses increase redundancy, avoid downtime, and ensure high availability.
It’s worth noting that the perceived importance of reliability varied from one industry to another. Healthcare respondents were most likely to name it the top priority, at a rate of 71 percent.
At Google, we run our global cloud infrastructure with a discipline known as Site Reliability Engineering (SRE), a practice we documented in an O’Reilly book to share with the world. In this discipline, reliability is treated as the most important feature of our services, and our engineers work continuously to measure and improve it. That means the infrastructure must be designed to adapt dynamically to failures. In fact, Google’s cloud automatically migrates workloads off hosts that exhibit problems, so you don’t have to move your own workloads on short notice. This capability is built transparently into our system design, so you don’t even need to think about it. Achieving industry-leading availability means running your infrastructure scientifically: measuring and iterating on the design to continually make it as resilient as you need.
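The "measure and iterate" half of this discipline starts with a number: how much unreliability an availability target actually permits. As a minimal sketch (the SLO figures below are illustrative, not Google's actual targets), an error-budget calculation looks like this:

```python
# Illustrative error-budget calculation in the spirit of SRE:
# given an availability SLO, how many minutes of downtime can a
# service absorb per 30-day window before the budget is exhausted?

def downtime_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

for slo in (0.99, 0.999, 0.9999):
    print(f"{slo:.2%} availability -> "
          f"{downtime_budget_minutes(slo):.1f} min / 30 days")
```

The jump from "two nines" to "four nines" shrinks the budget from roughly seven hours a month to under five minutes, which is why automated failure handling, rather than manual intervention, becomes mandatory at high availability targets.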
Interoperability is key
Interoperability — the ability to coordinate diverse systems and applications, regardless of where they reside or what technology they use — is a defining characteristic of the open-cloud approach. It’s also an important defense against vendor lock-in and a prerequisite for building successful hybrid and multi-cloud infrastructures.
It’s not surprising, then, that nine out of 10 respondents consider at least some degree of interoperability to be important. Thirty-three percent said they only care about interoperability between on-premises and cloud applications, while another 31 percent said that all applications need to be interoperable. Fewer than 10 percent of respondents said that interoperability does not matter to them at all.
Containers are helping businesses achieve portability
Cloud portability is the ability to write and deploy the same code on premises and in public clouds. In recent years, containerization has emerged as the prevailing strategy among developers for ensuring that software will run reliably when transferred from one computing environment to another, regardless of the underlying infrastructure. By breaking applications up into lightweight, highly portable packages of code, developers can move more quickly, deploy software more efficiently, and operate at scale.
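That packaging step is typically expressed as a container image definition. As a minimal sketch (the base image, file names, and entry point below are illustrative), a Dockerfile bundles an application with its dependencies into a single artifact that runs identically everywhere:

```dockerfile
# Illustrative Dockerfile: package an app and its dependencies
# into one portable image. Paths and names are hypothetical.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# The resulting image runs unchanged on a laptop, on-premises,
# or in any public cloud with a container runtime.
CMD ["python", "app.py"]
```

Built once with `docker build`, the image can be pushed to a registry and pulled into any environment that runs containers, which is precisely the portability property the survey respondents are after.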
Kubernetes, the container orchestration platform open-sourced by Google, helps automate container deployment across multiple systems, making hybrid or multi-cloud infrastructure management even easier. Google has also introduced a service mesh technology named Istio to solve a range of additional challenges you may face when adopting microservices. It provides zero-trust service domains in which traffic is automatically encrypted with TLS, not only from client to service but from service to service, and it provides the visibility and insight that make it practical to measure and debug distributed applications. Perhaps the most compelling benefit of a service mesh is that it decouples the development and operations of your microservice systems.
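To make the orchestration idea concrete, here is a minimal Kubernetes Deployment manifest (the names and image are illustrative): it declares a desired state of three replicas, and Kubernetes keeps that state true, rescheduling containers if a node fails:

```yaml
# Illustrative Deployment: Kubernetes maintains 3 running replicas
# of this container, restarting or rescheduling them on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25     # any container image works here
        ports:
        - containerPort: 80
```

The same manifest can be applied with `kubectl apply -f` to an on-premises cluster or to a managed Kubernetes service in any public cloud, which is what makes Kubernetes such a natural fit for hybrid and multi-cloud strategies.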
A vast majority of survey respondents — nearly 70 percent — already use containers in their organizations. Another 24 percent said they’re planning to do so.
A number of years back, there was considerable buzz among cloud companies about open standards for cloud API compatibility as a path to interoperability. That idea never quite took hold, for a variety of reasons, but we now have new tools that achieve the same outcome: applications that behave identically regardless of which cloud hosts them. It makes sense, then, that interest in containers has skyrocketed.
I expect that containers will quickly overtake virtual machines as the preferred mechanism for deploying software, whether in a public cloud, a private cloud, or traditional IT infrastructure. Containers make it easier to adopt a microservices architecture, practice continuous integration and continuous deployment, and eliminate the consistency gap between test/staging environments and production. CIOs love containers because they make more efficient use of hardware and cloud resources, allowing denser workload consolidation than virtualization.
Security remains a barrier to open adoption
Research has shown that leading public clouds are as secure as or more secure than enterprise data centers. Nevertheless, some technology leaders — particularly those who have yet to move data and workloads to the cloud — still feel unsure about cloud security. Twenty-seven percent of respondents said that security concerns are the primary obstacle standing in their way of open-cloud adoption.
I can appreciate the fear of losing immediate physical control of one’s systems and data; you may wonder how they will be protected once you place them in a cloud. But the days of relying on a perimeter firewall and static encryption of data at rest as a defense strategy have passed. Internal and external threats grow more numerous and more formidable every day, and it’s not practical to expect that you can hire the number and quality of security engineers needed to defend against all of them constantly. Cloud providers must make those investments, because trust is core to their business; they simply can’t afford to gloss over security details. The truth is that consuming cloud services and fully leveraging their resources, security features, and best practices actually makes you safer. If you take the time to understand how cloud provider security features work, you can see this for yourself.
After security concerns, the second most-cited obstacle to open-cloud adoption was fear of vendor lock-in. This finding suggests that providers of open-cloud solutions need to better communicate how their offerings, and their adoption of open technologies like containers, Kubernetes, and Istio, help businesses avoid lock-in, which this research has shown to be a significant issue for today’s IT leaders.
Enterprises are quickly turning to machine learning as a way to reason about their large and growing data sets. Unstructured data is growing at an unprecedented rate and accounts for the vast majority of data in existence, and vendor lock-in fears are surfacing for AI/ML services as well. Google released another open-source project, TensorFlow, that lets you create, train, and run your own machine learning models: neural networks that can help you extract valuable insights from your own unstructured data. Your TensorFlow models can be used anywhere, and can leverage your CPUs, GPUs, and even Google Cloud’s TPUs. You can explore ML on your own infrastructure, or even in a hybrid cloud environment.
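As a minimal sketch of what "create, train, and run your own model" means in practice (assuming TensorFlow 2.x is installed; the data here is synthetic and the model is deliberately trivial), the same few lines run unchanged on a laptop CPU, a GPU server, or a cloud accelerator:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is available

# Tiny synthetic regression problem: learn y = 2x + 1.
x = np.linspace(-1.0, 1.0, 200).astype("float32").reshape(-1, 1)
y = 2.0 * x + 1.0

# A one-neuron model; real workloads would use deeper networks.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, verbose=0)

# The trained model runs wherever TensorFlow does: CPU, GPU, or TPU,
# on your own hardware or in any cloud.
pred = float(model.predict(np.array([[0.5]], dtype="float32"),
                           verbose=0)[0, 0])
print(f"prediction at x=0.5: {pred:.2f}")
```

Because the model format and runtime are open source, nothing in this workflow ties you to a single provider: the same trained model can be exported and served on-premises or on any cloud that runs TensorFlow.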
Given the strong agreement among survey respondents that vendor lock-in remains a cause for concern in the current IT landscape, it’s not surprising that participants showed interest in containers, interoperability, and multi-cloud and hybrid infrastructures. These are the fundamental building blocks of the open-cloud approach, which is diametrically opposed to the proprietary strategies of previous decades. Through open-cloud technology, IT leaders are gaining autonomy from providers and reclaiming their right to choose the ideal combination of solutions to meet their needs. As longtime proponents of open cloud for businesses everywhere, we’re glad to see how quickly this model is winning support.