Google Kubernetes Engine (GKE)

A simple way to automatically deploy, scale, and manage Kubernetes.

  • Run your apps on the most automated and scalable managed Kubernetes platform
  • Start quickly with single-click clusters and scale up to 15,000 nodes
  • Leverage a high-availability control plane including multi-zonal and regional clusters
  • Eliminate operational overhead with industry-first four-way autoscaling
  • Secure by default, including vulnerability scanning of container images and data encryption

BENEFITS

Speed up app development without sacrificing security

Develop a wide variety of apps with support for stateful and serverless workloads, as well as application accelerators. Use Kubernetes-native CI/CD tooling to secure and speed up each stage of the build-and-deploy life cycle.

Streamline operations with release channels

Choose the channel that fits your business needs. The Rapid, Regular, and Stable release channels have different node-upgrade cadences and offer support levels aligned with each channel's nature.
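
As a rough illustration, the release channel can be set when a cluster is created through the google-cloud-container Python client. This is a minimal sketch, assuming that client library's field names; the project, location, and cluster names are placeholders, not values from this page.

from google.cloud import container_v1

# Sketch: creating a cluster enrolled in the Regular release channel.
# "my-project", "us-central1", and "channel-example" are placeholders.
client = container_v1.ClusterManagerClient()
client.create_cluster(
    parent="projects/my-project/locations/us-central1",
    cluster=container_v1.Cluster(
        name="channel-example",
        initial_node_count=3,
        release_channel=container_v1.ReleaseChannel(
            channel=container_v1.ReleaseChannel.Channel.REGULAR
        ),
    ),
)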

Reduce Day 2 ops with help from Google SREs

Get back time to focus on your applications with help from Google Site Reliability Engineers (SREs). Our SREs constantly monitor your cluster and its computing, networking, and storage resources.

KEY FEATURES

Two modes of operation, one GKE

GKE now offers two modes of operation: Standard and Autopilot. Standard is the experience we’ve been building since the launch of GKE, giving you full control over the nodes with the ability to fine-tune and run custom administrative workloads. The all-new Autopilot mode is a hands-off, fully managed solution that manages your cluster’s entire infrastructure, so you don’t have to worry about configuring or monitoring it. And with per-pod billing, Autopilot ensures you pay only for your running pods, not system components, operating system overhead, or unallocated capacity.
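
The same client library can be used to request Autopilot mode at creation time by enabling the cluster's autopilot field; leaving it unset yields a Standard-mode cluster. Again a hedged sketch, assuming the google-cloud-container field names, with placeholder names and location.

from google.cloud import container_v1

# Sketch: requesting an Autopilot-mode cluster. Omitting the autopilot
# field gives a Standard cluster. Names and location are placeholders.
client = container_v1.ClusterManagerClient()
client.create_cluster(
    parent="projects/my-project/locations/us-central1",
    cluster=container_v1.Cluster(
        name="autopilot-example",
        autopilot=container_v1.Autopilot(enabled=True),
    ),
)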

Pod and cluster autoscaling

GKE is the industry’s first fully managed Kubernetes service that implements the full Kubernetes API, four-way autoscaling, release channels, and multi-cluster support. Horizontal pod autoscaling can be based on CPU utilization or custom metrics. Cluster autoscaling works on a per-node-pool basis, and vertical pod autoscaling continuously analyzes the CPU and memory usage of pods, automatically adjusting CPU and memory requests.
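
As an illustrative sketch, CPU-based horizontal pod autoscaling is declared with a standard Kubernetes HorizontalPodAutoscaler object. The manifest below is built as a plain Python dict and printed as YAML; the Deployment name, replica bounds, and the 60% utilization target are placeholder values.

import yaml  # PyYAML, used here only to print the manifest

# Sketch: autoscaling/v2 HorizontalPodAutoscaler for a Deployment named "web".
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-hpa"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "web",  # placeholder workload name
        },
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [
            {
                "type": "Resource",
                "resource": {
                    "name": "cpu",
                    # Add replicas when average CPU utilization exceeds 60%.
                    "target": {"type": "Utilization", "averageUtilization": 60},
                },
            }
        ],
    },
}

print(yaml.safe_dump(hpa, sort_keys=False))

Cluster autoscaling, by contrast, is configured on node pools rather than on an individual workload object.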

Prebuilt Kubernetes applications and templates

Get access to enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images, but open source, Google-built, and commercial applications that increase developer productivity. Click to deploy on-premises or in third-party clouds from Google Cloud Marketplace.

Container native networking and security

GKE Sandbox provides a second layer of defense between containerized workloads on GKE for enhanced workload security. GKE clusters natively support Kubernetes Network Policy to restrict traffic with pod-level firewall rules. Private clusters in GKE can be restricted to a private endpoint or a public endpoint that only certain address ranges can access.
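
As a sketch of pod-level traffic restriction, the standard Kubernetes NetworkPolicy below allows ingress to pods labeled app=backend only from pods labeled app=frontend on TCP port 8080. All names, labels, and the port number are placeholders.

import yaml  # PyYAML, used here only to print the manifest

# Sketch: NetworkPolicy restricting which pods may reach the backend pods.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-to-backend"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "backend"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }
        ],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))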

Migrate traditional workloads to GKE containers with ease

Migrate for Anthos and GKE makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. Our unique automated approach extracts the critical application elements from the VM so you can easily insert those elements into containers in Google Kubernetes Engine or Anthos clusters without the VM layers (like Guest OS) that become unnecessary with containers. This product also works with GKE Autopilot.

Information Source – https://cloud.google.com/kubernetes-engine
