Deploy Kubernetes Load balancer service with Terraform on GCP

Abhishek Sharma
Sep 8, 2020

πŸ’« Welcome to my project article based on Google Cloud Platform πŸ’«

About GCP:

Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, file storage, and YouTube. Alongside a set of management tools, it provides a series of modular cloud services including computing, data storage, data analytics and machine learning. Registration requires a credit card or bank account details.

Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments.

About VPC:

Virtual Private Cloud (VPC) provides networking functionality to Compute Engine virtual machine (VM) instances, Google Kubernetes Engine (GKE) clusters, and the App Engine flexible environment. VPC provides networking for your cloud-based resources and services that is global, scalable, and flexible.

About Subnets:

If the VPC is the building, subnets are the rooms inside it. In the cloud world, a subnet is a segmented IP range within the VPC, and when we create one we have to provide its network range (CIDR block) and region.

πŸ”° PROJECT DESCRIPTION:

1. Create two projects, one for dev and one for prod

2. Create a VPC network for the dev project

3. Create a VPC network for the prod project

4. Connect both VPC networks with VPC peering

5. Create a Kubernetes cluster in the dev project and launch a WordPress/Joomla application behind a load balancer

6. Create a SQL server in the prod project and create a database

7. Connect the SQL database to the application launched in the K8s cluster

Project Begins:
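Before any resources, Terraform needs a configured Google provider. The article doesn't show this block, so here is a minimal sketch, assuming a service-account key file with access to both projects (the key path, region, and zone below are placeholders, not from the original write-up):

provider "google" {
  # Placeholder path to a service-account key that can manage both projects
  credentials = file("gcp-key.json")
  region      = "us-west1"
  zone        = "us-west1-c"
}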

Step 1: Creating the two projects, one for the Developer team and another for the Production team.

# Note: in most organizations a google_project also needs an org_id (or folder_id)
# and a billing_account association; they are omitted here.
resource "google_project" "project1" {
  name       = "Developer project"
  project_id = "dev-project-817103"
}

resource "google_project" "project2" {
  name       = "Production project"
  project_id = "driven-strength-888789"
}
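The later resources refer to var.developer_project and var.production_project, which the article never defines. A minimal variables sketch, assuming they simply hold the two project IDs created above:

variable "developer_project" {
  default = "dev-project-817103"
}

variable "production_project" {
  default = "driven-strength-888789"
}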

Step 2: Creating VPC and Subnets for both projects.

resource "google_compute_network" "vpc1" {
  name                    = "abhi-vpc-1"
  project                 = var.production_project
  routing_mode            = "GLOBAL"
  auto_create_subnetworks = false
}

resource "google_compute_network" "vpc2" {
  name                    = "abhi-vpc-2"
  project                 = var.developer_project
  routing_mode            = "GLOBAL"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet1" {
  name          = "abhi-subnet-1"
  ip_cidr_range = "10.10.11.0/24"
  network       = google_compute_network.vpc1.name
  project       = var.production_project
  region        = "us-west1"
}

resource "google_compute_subnetwork" "subnet2" {
  name          = "abhi-subnet-2"
  ip_cidr_range = "10.10.12.0/24"
  network       = google_compute_network.vpc2.name
  project       = var.developer_project
  region        = "us-west1"
}
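After terraform apply, the networks and subnets can be verified with gcloud (a quick check, not part of the original write-up):

gcloud compute networks list --project dev-project-817103
gcloud compute networks subnets list --project dev-project-817103 --regions us-west1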

Step 3: Creating firewall rules for both projects.

When you create a VPC firewall rule, you specify a VPC network and a set of components that define what the rule does. The components enable you to target certain types of traffic, based on the traffic’s protocol, ports, sources, and destinations.

resource "google_compute_firewall" "default" {
  name    = "abhi-firewall"
  network = google_compute_network.vpc1.name
  project = var.production_project

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["80", "8080", "1000-2000", "22"]
  }

  source_tags = ["web"]

  # 0.0.0.0/0 opens these ports to the entire internet; fine for a demo, tighten for production
  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_firewall" "default1" {
  name    = "abhi-firewall"
  network = google_compute_network.vpc2.name
  project = var.developer_project

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["80", "8080", "1000-2000", "22"]
  }

  source_tags   = ["web"]
  source_ranges = ["0.0.0.0/0"]
}

Step 4: Creating VPC Peering

Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization.

# Peering must be configured from both networks before traffic can flow
resource "google_compute_network_peering" "peering1" {
  name         = "peering-test"
  network      = google_compute_network.vpc1.id
  peer_network = google_compute_network.vpc2.id
}

resource "google_compute_network_peering" "peering2" {
  name         = "peering-test"
  network      = google_compute_network.vpc2.id
  peer_network = google_compute_network.vpc1.id
}
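Once both sides are applied, each peering should show as ACTIVE; this can be checked with gcloud (a verification step, not from the original article):

gcloud compute networks peerings list --network abhi-vpc-1 --project driven-strength-888789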

Step 5: Creating Instances in both Projects

An instance is a virtual machine (VM) hosted on Google’s infrastructure. You can create an instance by using the Google Cloud Console, the gcloud command-line tool, or the Compute Engine API.

Instance for Developer Project:-

resource "google_compute_instance" "default2" {
  name         = "myos1"
  machine_type = "n1-standard-1"
  zone         = "us-west1-c"
  project      = "dev-project-817103"
  tags         = ["foo", "bar"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network            = google_compute_network.vpc2.name
    subnetwork         = google_compute_subnetwork.subnet2.name
    subnetwork_project = "dev-project-817103"

    access_config {
      # An empty block requests an ephemeral external IP
    }
  }
}

Instance for Production Project:-

resource "google_compute_instance" "default22" {
  name         = "myos1"
  machine_type = "n1-standard-1"
  zone         = "us-west1-c"
  project      = var.production_project
  tags         = ["foo", "bar"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network            = google_compute_network.vpc1.name
    subnetwork         = google_compute_subnetwork.subnet1.name
    subnetwork_project = "driven-strength-888789"

    access_config {
    }
  }
}
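To print the external IPs of the two test instances after apply, an output sketch can be added (the attribute path follows the google provider's network_interface and access_config schema; the output names are illustrative):

output "dev_instance_ip" {
  value = google_compute_instance.default2.network_interface.0.access_config.0.nat_ip
}

output "prod_instance_ip" {
  value = google_compute_instance.default22.network_interface.0.access_config.0.nat_ip
}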

Step 6: Creating Kubernetes cluster using GKE

Kubernetes: It is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

GKE: Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped together to form a cluster.

For this step, kubectl must be installed on your system: https://kubernetes.io/docs/tasks/tools/install-kubectl/

resource "google_container_cluster" "primary" {
  name               = "abhi-cluster"
  location           = "us-central1-a"
  initial_node_count = 3
  project            = var.developer_project

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

    labels = {
      app = "wordpress"
    }

    tags = ["website", "wordpress"]
  }

  timeouts {
    create = "30m"
    update = "40m"
  }
}

resource "null_resource" "nullremote1" {
  depends_on = [google_container_cluster.primary]

  # Pull kubeconfig credentials so kubectl (and the kubernetes provider) can talk to the new cluster
  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.primary.name} --zone ${google_container_cluster.primary.location} --project ${google_container_cluster.primary.project}"
  }
}
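The kubernetes_service and kubernetes_pod resources in the next step need a configured kubernetes provider, which the article doesn't show. A minimal sketch, assuming the kubeconfig written by the gcloud command above lives at the default path:

provider "kubernetes" {
  # Reuse the kubeconfig generated by "gcloud container clusters get-credentials"
  config_path = "~/.kube/config"
}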

Step 7: Creating the WordPress pod and the LoadBalancer service.

A Kubernetes Service of type LoadBalancer on GKE provisions a Google Cloud network load balancer with an external IP and forwards traffic to the pods matched by the Service's selector. Under the hood, GCP load balancing is built on Andromeda, Google's software-defined network virtualization platform.

resource "kubernetes_service" "example" {
  depends_on = [null_resource.nullremote1]

  metadata {
    name = "terra-example"
  }

  spec {
    selector = {
      app = kubernetes_pod.example.metadata.0.labels.app
    }

    session_affinity = "ClientIP"

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

resource "kubernetes_pod" "example" {
  depends_on = [null_resource.nullremote1]

  metadata {
    name = "myword"

    labels = {
      app = "MyApp"
    }
  }

  spec {
    container {
      image = "wordpress"
      name  = "example"
    }
  }
}

output "wordpressip" {
  value = kubernetes_service.example.load_balancer_ingress
}

Step 8: Creating the SQL database

Cloud SQL is a fully-managed database service that helps you set up, maintain, manage, and administer your relational databases on Google Cloud Platform. You can use Cloud SQL with MySQL, PostgreSQL, or SQL Server.

resource "google_sql_database" "database" {
  name     = "abhi-database"
  instance = google_sql_database_instance.master.name
  project  = var.production_project
}

resource "google_sql_database_instance" "master" {
  name             = "instance15"
  database_version = "MYSQL_5_7"
  region           = "us-central1"
  project          = var.production_project

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled = true

      # 0.0.0.0/0 lets any host connect; acceptable for a demo, not for production
      authorized_networks {
        name  = "public network"
        value = "0.0.0.0/0"
      }
    }
  }
}

resource "google_sql_user" "user" {
  name     = "abhi"
  instance = google_sql_database_instance.master.name
  project  = var.production_project
  password = "redhat"
}
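The last item in the project description is to connect this database to the WordPress application, which the article doesn't wire up explicitly. One possible sketch is to pass the Cloud SQL instance's public IP and the credentials above to the official wordpress image through its standard environment variables; the env blocks below are an assumption, not part of the original code, and this resource would replace the kubernetes_pod "example" from Step 7:

resource "kubernetes_pod" "example" {
  depends_on = [null_resource.nullremote1]

  metadata {
    name = "myword"

    labels = {
      app = "MyApp"
    }
  }

  spec {
    container {
      image = "wordpress"
      name  = "example"

      # Point WordPress at the Cloud SQL instance created in the production project
      env {
        name  = "WORDPRESS_DB_HOST"
        value = google_sql_database_instance.master.public_ip_address
      }
      env {
        name  = "WORDPRESS_DB_USER"
        value = google_sql_user.user.name
      }
      env {
        name  = "WORDPRESS_DB_PASSWORD"
        value = google_sql_user.user.password
      }
      env {
        name  = "WORDPRESS_DB_NAME"
        value = google_sql_database.database.name
      }
    }
  }
}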

To deploy the complete infrastructure:

In order to build everything, we first have to initialize Terraform:

Step 1: # terraform init

To check the configuration for errors:

Step 2: # terraform validate

It's always good to review what we are about to build before applying the actual code, so first check the plan:

Step 3: # terraform plan

Step 4: # terraform apply --auto-approve

And to destroy the whole setup:

# terraform destroy --auto-approve

Now use the load balancer IP from the wordpressip output to open WordPress in the browser.
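If the ingress list in the output is empty at first, the external IP may still be provisioning; it can also be checked directly with kubectl (a quick verification, not part of the original write-up):

kubectl get service terra-example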

That's all about my task.

Thank you!
