In the age of multi-cloud, when enterprises commonly use a mix of public cloud, private cloud, and on-premises infrastructure to run their services, applications, and workloads, it shouldn't be surprising that the major public cloud platforms are moving into private data centers to expand their reach and deepen customer relationships. In this article I'll explore some of the ways the public cloud is expanding into private data centers, further blurring the lines between the various types of infrastructure in enterprise IT architecture.
Amazon Web Services (AWS) introduced its Snowball appliance in 2015 and the more capable Snowball Edge in 2016, providing on-premises storage and limited compute for enterprise cloud users. The original use case for the Edge devices – and the AWS Snowmobile truck – was to help enterprises transfer large amounts of data into the AWS cloud.
In July 2018, AWS announced EC2 compute for the Snowball Edge device, bringing real AWS cloud power to the edge. Customers can now run virtualized applications in local EC2 instances anywhere there is electricity, even without an internet connection. Snowball Edge also runs AWS Lambda for serverless computing, and AWS Greengrass connects the device to the AWS cloud for data processing, including machine learning. Along with new support for Amazon S3 storage, these enhancements have turned the Snowball Edge from a simple data vehicle into a powerful computing node.
Just a few days after the AWS announcement, Google Cloud Platform (GCP) announced it would extend Google Kubernetes Engine (GKE) – its core service for managing containers – to customer data centers, supporting on-premises container deployment. As long-time Googler Urs Hölzle stated about the launch, Google intends to end “the false dichotomy between on-premise and the cloud.” GKE On-Prem will appear as another availability zone in the Google Cloud dashboard, bringing consistent management and monitoring of infrastructure across both on-premises environments and the public cloud.
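The consistency Hölzle describes means a single Kubernetes manifest can define a workload regardless of whether it runs on GKE in Google's cloud or on an on-premises cluster. A minimal, hypothetical Deployment (the image name, labels, and replica count here are purely illustrative) might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                 # same scaling policy in either environment
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.com/web-frontend:1.0   # illustrative image
        ports:
        - containerPort: 8080
```

The same manifest applies unchanged with `kubectl apply -f` against either cluster's context, which is what keeps the tooling, interfaces, and skills identical across environments.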
What is a reverse hybrid cloud?
Reverse hybrid cloud describes an architecture in which an enterprise runs public cloud software and services in its own private data center. By definition, such an enterprise also uses traditional public cloud services and maintains the on-premises infrastructure and skills to manage hardware as well as software.
Typically, “hybrid cloud” has referred to two or more discrete clouds that work together through a common or proprietary technology – often a hybrid integration platform. Legacy enterprises just starting to use the cloud, and digital-native companies moving away from the cloud for a variety of reasons, often find themselves with a hybrid cloud. The movement of GCP and AWS into private data centers, though, is a novel play by the public cloud giants that can help enterprises on their journey to a right-fit, transformed IT.
What are the benefits of a reverse hybrid cloud?
Here are a few of the ways enterprises can benefit from using a reverse hybrid cloud architecture, bringing the power of the public cloud in-house.
- Instead of forcing teams to work on and maintain multiple environments, this setup encourages enterprises to standardize on technologies that can be deployed in any location. With Google Kubernetes Engine, it doesn’t matter if the workloads are processed on-premises or on Google’s machines – the interface, the applications, the technology, and the requisite skills remain the same.
- Greater control and a wider range of options for large enterprises that want to take advantage of containers and microservices but, for any number of reasons, need to operate core hardware and infrastructure internally.
- Using common technologies across environments, such as by standardizing on Kubernetes, gives IT decision makers improved policy enforcement and compliance. Running cloud-native technology on-premises enables this type of standardization.
- Gives IT organizations the opportunity to test and develop cloud-based services entirely in-house before pushing them out to the public cloud.
- For existing cloud-based workloads that require minimal latency, bringing the cloud on-premises can improve performance and potentially reduce WAN costs.
- For enormous data transfers, it can save money to load data on edge devices and ship the devices to a cloud provider for hosting.
- Running EC2 on the edge lets enterprises filter and transform their data locally before sending it to AWS.
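The idea behind that last point, reducing raw data locally before it crosses the WAN, can be sketched in plain Python. This is an illustrative pre-processing function running on an edge node, not an AWS API; the data shape and threshold are assumptions for the example:

```python
def preprocess(readings, threshold=0.5):
    """Keep only readings above a noise threshold and build a small
    summary, so far less raw data has to be shipped to the cloud."""
    useful = [r for r in readings if r["value"] > threshold]
    summary = {
        "count": len(useful),
        "max": max((r["value"] for r in useful), default=None),
    }
    return useful, summary

# Simulated sensor data collected on the edge device
raw = [
    {"sensor": "a", "value": 0.2},
    {"sensor": "b", "value": 0.9},
    {"sensor": "c", "value": 0.7},
]
kept, summary = preprocess(raw)
print(summary)  # {'count': 2, 'max': 0.9}
```

In a real deployment, only `kept` and `summary` would be uploaded, rather than the full raw stream.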
As enterprises find the right balance of private and public cloud, both on-premises and off, for their workloads, data, and applications, it's time to consider bringing the public cloud to the edge. Google and Amazon are keenly aware that IT leaders are moving to the cloud more slowly than originally expected, and by bringing the cloud to private data centers, the public cloud leaders may have found a new way to accelerate adoption of their platforms.
As Google made clear in July, it doesn’t matter where the cloud is – teams simply need access to the right tools and methods to operate efficiently.
By Stephen Watts, Contributor, CIO | AUG 3, 2018 9:36 AM PDT