<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: Akshay Rao</title>
    <description>The latest articles on The Ops Community ⚙️ by Akshay Rao (@akshay_rao).</description>
    <link>https://community.ops.io/akshay_rao</link>
    <image>
      <url>https://community.ops.io/images/nsngORBxqmuIZI3qHaF0eV1XNmqVbmPniARGARP103E/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS8xODI1L2I4/NzE5MmZlLTcxZjUt/NDkyMC1iOWE2LWY2/OGQxYjI4YzBkYS5q/cGc</url>
      <title>The Ops Community ⚙️: Akshay Rao</title>
      <link>https://community.ops.io/akshay_rao</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/akshay_rao"/>
    <language>en</language>
    <item>
      <title>Build an end-to-end DevSecOps pipeline for a Node.js project</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:59:45 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/build-a-end-to-end-devsecops-pipeline-for-nodejs-project-341j</link>
      <guid>https://community.ops.io/akshay_rao/build-a-end-to-end-devsecops-pipeline-for-nodejs-project-341j</guid>
      <description>&lt;p&gt;Hi this Akshay Rao, I tried to create the whole devops pipeline including some security scans. These security scans are very important as the vulnerability is found before the Application is the production, because if the vulnerability are found in the production the cost of rectifying is very high.&lt;/p&gt;

&lt;p&gt;Lets start by understanding the pipeline&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers commit the application code to a remote repository such as GitHub or Bitbucket.&lt;/li&gt;
&lt;li&gt;The code is built, and the unit tests must run and pass.&lt;/li&gt;
&lt;li&gt;The whole codebase is scanned for vulnerabilities. For that we conduct SAST (Static Application Security Testing), SCA (Software Composition Analysis), and DAST (Dynamic Application Security Testing).&lt;/li&gt;
&lt;li&gt;SAST is a methodology for finding security vulnerabilities in the application source code. I have used SonarCloud to perform SAST in this pipeline.&lt;/li&gt;
&lt;li&gt;SCA evaluates security, license compliance, vulnerable or deprecated imported packages, and code quality. I have used the Snyk tool in the pipeline.&lt;/li&gt;
&lt;li&gt;DAST is similar to SAST, but the scan runs against the application while it is up and running. I have used the OWASP ZAP tool in the pipeline.&lt;/li&gt;
&lt;li&gt;After the scans finish, reports and issues are generated. Any vulnerability found can be rectified immediately or communicated to the developers.&lt;/li&gt;
&lt;/ul&gt;
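
&lt;p&gt;Each of these scan stages can also be tried locally before wiring it into CI. A rough sketch, assuming the Snyk CLI is installed, Docker is available, and the app is already running on localhost:8080 (that URL is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SCA: npm's built-in dependency audit
npm audit --audit-level=high

# SCA: Snyk CLI (needs SNYK_TOKEN exported in the environment)
snyk test

# DAST: OWASP ZAP baseline scan against a running instance
docker run -t owasp/zap2docker-stable zap-baseline.py -t http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;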

&lt;p&gt;&lt;a href="https://community.ops.io/images/RLu3XIqL1OvAnK6mHoOiBD3CRHUcfzhGjcwSo_vcspw/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbjRh/cmZxeGdmajFqZWpq/aGQxajcucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/RLu3XIqL1OvAnK6mHoOiBD3CRHUcfzhGjcwSo_vcspw/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbjRh/cmZxeGdmajFqZWpq/aGQxajcucG5n" alt="Image pipeline" width="800" height="429"&gt;&lt;/a&gt;&lt;br&gt;
I have taken a Node.js project on GitHub and written a workflow.yml.&lt;br&gt;
In this YAML file I have created: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Three jobs (build, security and zap_scan)&lt;/li&gt;
&lt;li&gt;In the build job, I build the application and perform the SAST scan with SonarCloud.&lt;/li&gt;
&lt;li&gt;In the security job, I run the SCA scan with the Snyk tool.&lt;/li&gt;
&lt;li&gt;In zap_scan, I perform the DAST scan with the OWASP ZAP tool. In the target key we can put the URL of the application.
I had to generate tokens from Snyk and SonarCloud and store them as secrets (SNYK_TOKENS and SONAR_TOKEN) in the GitHub repository settings.
Then commit the workflow, and the scans will start running in the Actions tab on GitHub.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build code, run unit test, run SAST, SCA, DAST security scan for NodeJs App
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    name: Run unit tests and SAST scan on the source code 
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: 16
        cache: npm
    - run: npm install
    - name: SonarCloud Scan
      uses: sonarsource/sonarcloud-github-action@master
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      with:
        args: &amp;gt;
          -Dsonar.organization=&amp;lt;PUT YOUR ORGANIZATION NAME&amp;gt;
          -Dsonar.projectKey=&amp;lt; PUT YOUR PROJECT KEY NAME&amp;gt;
  security:
    runs-on: ubuntu-latest
    needs: build
    name: Run the SCA scan on the source code
    steps:
      - uses: actions/checkout@master
      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        continue-on-error: true
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKENS }}
  zap_scan:
    runs-on: ubuntu-latest
    needs: security
    name: Run DAST scan on the web application
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: master
      - name: ZAP Scan
        uses: zaproxy/action-baseline@v0.6.1
        with:
          docker_name: 'owasp/zap2docker-stable'
          target: 'http://example.com/'
          rules_file_name: '.zap/rules.tsv'
          cmd_options: '-a'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reports are generated as artifacts, in the Actions run by clicking on the scan names, or through the dashboard URL mentioned in the logs.&lt;br&gt;
SAST Report&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/l4mpCoYDrnijKZwVszGcraXouH6USaUSBn9vBliK86o/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMXps/c3QxZzI2bGM2bzhv/ZDVibXAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/l4mpCoYDrnijKZwVszGcraXouH6USaUSBn9vBliK86o/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMXps/c3QxZzI2bGM2bzhv/ZDVibXAucG5n" alt="SAST Image" width="800" height="297"&gt;&lt;/a&gt;&lt;br&gt;
SCA Report&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/266X5nWKQZ_XE1C55zgLsMZbu9zStAL4aQPCj3AeZVc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvem90/dTliamEzMjFwNjJk/bndoZ3QucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/266X5nWKQZ_XE1C55zgLsMZbu9zStAL4aQPCj3AeZVc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvem90/dTliamEzMjFwNjJk/bndoZ3QucG5n" alt="SCA Image" width="800" height="441"&gt;&lt;/a&gt;&lt;br&gt;
DAST Report&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/c_dbWJ_U_4RBq5cmxdf20TLDHbEuxhwulahdUORC3pU/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvcGNp/NmNyaGtpYzRibXJ1/NDB3eDUucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/c_dbWJ_U_4RBq5cmxdf20TLDHbEuxhwulahdUORC3pU/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvcGNp/NmNyaGtpYzRibXJ1/NDB3eDUucG5n" alt="DAST Image" width="800" height="441"&gt;&lt;/a&gt;&lt;br&gt;
The GitHub repo: &lt;a href="https://github.com/asecurityguru/devsecops-with-github-actions-end-to-end-nodejs-project"&gt;https://github.com/asecurityguru/devsecops-with-github-actions-end-to-end-nodejs-project&lt;/a&gt;&lt;br&gt;
I hope this helps you find solutions to your problems.&lt;br&gt;
Thank you &lt;/p&gt;

</description>
      <category>secops</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Terraform: Resource dependencies</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:53:08 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/terraform-resource-dependencies-a03</link>
      <guid>https://community.ops.io/akshay_rao/terraform-resource-dependencies-a03</guid>
      <description>&lt;p&gt;Hi, this Akshay Rao. Lets us look how does Terraform handle resource dependencies and provisioning order.&lt;/p&gt;

&lt;p&gt;Terraform uses an implicit and explicit dependency paradigm to manage resource dependencies and provisioning order. This model enables Terraform to comprehend the connections between resources and guarantees that they are provisioned in the proper sequence to meet these requirements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Implicit Dependency Model&lt;/strong&gt;: Based on the configuration code, Terraform automatically ascertains the interdependence between resources. When one resource refers to the properties of another, Terraform creates an implicit dependency between them. For instance, Terraform will automatically recognize the dependency between an EC2 instance and a security group if you create an AWS EC2 instance and assign it to a certain security group. As a result, Terraform will make sure the security group is established first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicit Dependency Model&lt;/strong&gt;: There are occasions when you must declare dependencies that Terraform cannot deduce from the configuration on its own. The &lt;strong&gt;depends_on&lt;/strong&gt; argument can be used in this situation: it specifies that two resources are interdependent even if there are no direct references between them in the configuration. This is useful when resources lack direct attribute references to one another yet are logically dependent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An illustration of how resource dependencies operate in a Terraform configuration is given below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "sg" {
  name_prefix = "sg_"
}

resource "aws_instance" "instance" {
  ami           = "ami-400Odg3r354efd"
  instance_type = "t2.micro"
  security_groups = [
    aws_security_group.sg.id,
  ]

}

resource "aws_s3_bucket" "tf_bucket" {
  bucket = "my-tf-bucket"
  acl    = "private"

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the aws_instance resource is dependent on the aws_security_group resource because the security group ID is referenced in the security_groups parameter. Terraform understands that the security group must be created before the EC2 instance.&lt;/p&gt;

&lt;p&gt;However, because the aws_s3_bucket resource contains no references to other resources, it has no implicit dependencies. If you need to construct the S3 bucket after the EC2 instance and security group are created, you can use the depends_on argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "tf_bucket" {
  bucket = "my-tf-bucket"
  acl    = "private"

  depends_on = [
    aws_instance.instance,
    aws_security_group.sg,
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the &lt;strong&gt;depends_on&lt;/strong&gt; parameter, you explicitly specify that the aws_s3_bucket resource is dependent on both the aws_instance.instance and the aws_security_group.sg resources, ensuring proper provisioning sequence.&lt;/p&gt;

&lt;p&gt;Keep in mind that, while depends_on aids in ordering, it does not impose rigid resource sequencing. Terraform prioritizes creating resources concurrently and resolving dependencies over waiting for each resource to be built sequentially.&lt;/p&gt;
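
&lt;p&gt;To inspect the order Terraform has inferred, and to constrain concurrency when needed, the stock CLI offers two helpers. A quick sketch (both the command and the flag are standard Terraform):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Emit the resource dependency graph in DOT format
terraform graph

# Limit concurrent resource operations (default is 10); 1 forces fully sequential applies
terraform apply -parallelism=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;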

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Terraform: Nginx in GCE</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:47:48 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/terraform-resource-dependencies-3db7</link>
      <guid>https://community.ops.io/akshay_rao/terraform-resource-dependencies-3db7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hi, I am Akshay Rao.&lt;br&gt;
This blog is about installing Nginx on a GCP VM through Terraform.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisite
&lt;/h2&gt;

&lt;p&gt;A GCP account (a $300 free-trial account works)&lt;br&gt;
Terraform installed&lt;br&gt;
VS Code&lt;/p&gt;
&lt;h2&gt;
  
  
  Let's start
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a project in the GCP console.&lt;/li&gt;
&lt;li&gt;Click the project selector beside the Google Cloud logo and click &lt;strong&gt;New project&lt;/strong&gt;, then give it a name; I have named it terraform-gcp.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/c4awXKAC10_JRG8Pye3yf-PWUZs6ujsjxzoXtQGVHdc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbHNr/azJqamF3bnE5Z3Zm/bDdzbnIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/c4awXKAC10_JRG8Pye3yf-PWUZs6ujsjxzoXtQGVHdc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbHNr/azJqamF3bnE5Z3Zm/bDdzbnIucG5n" alt="Image 1" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a service account: search for "service account" in the search bar, click Create service account, give it a name, and add the Editor role under Basic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a key for the service account and download the JSON file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make a directory in which the Terraform scripts will be stored and move the downloaded JSON file into it:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv ~/Downloads/&amp;lt;file name&amp;gt; credentials.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Create a file named main.tf.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provider block
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project = "&amp;lt;your project-id&amp;gt;"
  credentials = file("credentials.json")
  region = "us-west1"
  zone = "us-west1-a"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now execute &lt;code&gt;terraform init&lt;/code&gt;&lt;br&gt;
This is done so that Terraform can connect to GCP.&lt;br&gt;
&lt;code&gt;.terraform.lock.hcl&lt;br&gt;
.terraform&lt;/code&gt;&lt;br&gt;
will be created automatically, and the provider will be registered.&lt;br&gt;
&lt;a href="https://community.ops.io/images/pldc7bS-LNkK-gcF3zzlDEplA9x-maI2yZJDWeYr-is/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvajky/bHZ4aHMzbG5zZjZr/bWtvazgucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/pldc7bS-LNkK-gcF3zzlDEplA9x-maI2yZJDWeYr-is/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvajky/bHZ4aHMzbG5zZjZr/bWtvazgucG5n" alt="Image 2" width="593" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resources block:
create a network, one subnetwork, and the VM instance in which Nginx will run.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_network" "vpc_network" {
  name                    = "my-custom-network"
  auto_create_subnetworks = false
  mtu                     = 1460
}

resource "google_compute_subnetwork" "default" {
  name          = "us-west-a"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-west1"
  network       = google_compute_network.vpc_network.id
}

resource "google_compute_instance" "nginx-instance" {
    name = "nginx-intsance"
    machine_type = "f1-micro"
    tags = ["ssh"]                                              
    zone = "us-west1-a"
    allow_stopping_for_update = true

    boot_disk {
      initialize_params {
        image = "debian-cloud/debian-11"
      }
    }
    metadata_startup_script =  "sudo apt-get update; sudo apt-get install -y nginx; sudo systemctl start nginx"      // inline startup script
    network_interface {
      subnetwork = google_compute_subnetwork.default.id
      subnetwork_project = "&amp;lt;your project id&amp;gt;"
      access_config {
         // is included so that the vm gets external ip address
      }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;mtu - maximum transmission unit.&lt;/li&gt;
&lt;li&gt;The boot disk uses the Debian 11 image.&lt;/li&gt;
&lt;li&gt;In metadata_startup_script I have passed the installation commands for Nginx.&lt;/li&gt;
&lt;/ul&gt;
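&lt;p&gt;The plan output below references &lt;code&gt;data.template_file.nginx_installation&lt;/code&gt;. As an alternative to the inline command string, the startup script can be rendered from a template file; a sketch, assuming a script file named nginx-startup.sh next to main.tf (the file name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "template_file" "nginx_installation" {
  template = file("nginx-startup.sh")
}

resource "google_compute_instance" "nginx-instance" {
  # ...same arguments as above...
  metadata_startup_script = data.template_file.nginx_installation.rendered
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;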
&lt;h2&gt;
  
  
  Deployment and Verify
&lt;/h2&gt;

&lt;p&gt;Save the file.&lt;br&gt;
Now run the command &lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
With the plan command Terraform works out what resources to create, and it will create them in order when applied.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[akshay.rao terraform-gcp]$ terraform plan
data.template_file.nginx_installation: Reading...
data.template_file.nginx_installation: Read complete after 0s [id=cfbeb1ad70856f85403aaec9edaed46cdcd3ab215367abd5b1049f5a69a24fc1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_instance.nginx-instance will be created
  + resource "google_compute_instance" "nginx-instance" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "f1-micro"
      + metadata_fingerprint      = (known after apply)
      + metadata_startup_script   = &amp;lt;&amp;lt;-EOT
            #!/bin/bash/
            set -e
            echo "** installing nginx **"
            sudo apt-get update
            sudo apt-get install -y nginx
            sudo systemctl enable nginx
            sudo systemctl restart nginx

            echo "**   Installation Complteted!!   **"

            echo "Welcome to Nginx which is deployed using Terraform!!!" &amp;gt; /var/www/html

            echo "**   Startup script completes!!   **"
        EOT
      + min_cpu_platform          = (known after apply)
      + name                      = "nginx-intsance"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "proxy",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-west1-a"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "debian-cloud/debian-11"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = "default"
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = (known after apply)
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;terraform apply&lt;/code&gt; and confirm with yes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[akshay.rao terraform-gcp ]$ terraform apply
data.template_file.nginx_installation: Reading...
data.template_file.nginx_installation: Read complete after 0s [id=cfbeb1ad70856f85403aaec9edaed46cdcd3ab215367abd5b1049f5a69a24fc1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_instance.nginx-instance will be created
  + resource "google_compute_instance" "nginx-instance" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "f1-micro"
      + metadata_fingerprint      = (known after apply)
      + metadata_startup_script   = &amp;lt;&amp;lt;-EOT
            #!/bin/bash/
            set -e
            echo "** installing nginx **"
            sudo apt-get update
            sudo apt-get install -y nginx
            sudo systemctl enable nginx
            sudo systemctl restart nginx

            echo "**   Installation Complteted!!   **"

            echo "Welcome to Nginx which is deployed using Terraform!!!" &amp;gt; /var/www/html

            echo "**   Startup script completes!!   **"
        EOT
      + min_cpu_platform          = (known after apply)
      + name                      = "nginx-intsance"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "proxy",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-west1-a"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "debian-cloud/debian-11"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = "default"
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = (known after apply)
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_compute_instance.nginx-instance: Creating...
google_compute_instance.nginx-instance: Still creating... [10s elapsed]
google_compute_instance.nginx-instance: Still creating... [20s elapsed]
google_compute_instance.nginx-instance: Creation complete after 22s [id=projects/terraform-gcp-388209/zones/us-west1-a/instances/nginx-intsance]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now go to the console and search for VM instances; you will be able to see the VM running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/C19D6hZaAH4kSOIGOaRnWf6y3p6CHN7vQMjqDigQdB4/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvejF5/djQydTU2MWFlMGhq/eWczZzgucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/C19D6hZaAH4kSOIGOaRnWf6y3p6CHN7vQMjqDigQdB4/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvejF5/djQydTU2MWFlMGhq/eWczZzgucG5n" alt="Image 3" width="569" height="145"&gt;&lt;/a&gt;&lt;br&gt;
Click on SSH and choose "Open in browser window".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/zRtpLmi9xp0-Pc5T6ZF0ZZASvlrZKe3fIqABXEhM7Jc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvM2kw/Z2gzeGw5NnY5MzBv/ZW5udmQucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/zRtpLmi9xp0-Pc5T6ZF0ZZASvlrZKe3fIqABXEhM7Jc/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvM2kw/Z2gzeGw5NnY5MzBv/ZW5udmQucG5n" alt="Image 4" width="462" height="262"&gt;&lt;/a&gt;&lt;br&gt;
The terminal will open.&lt;br&gt;
Run the command &lt;code&gt;systemctl status nginx&lt;/code&gt;&lt;br&gt;
and you will get:&lt;br&gt;
&lt;a href="https://community.ops.io/images/a8I7P7ziAYssYNJ9NbtuchCNJoN-3WpTYpKTKQnJa-I/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvcW1q/MHZpMWFybTZmNDVl/azA3YTIucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/a8I7P7ziAYssYNJ9NbtuchCNJoN-3WpTYpKTKQnJa-I/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvcW1q/MHZpMWFybTZmNDVl/azA3YTIucG5n" alt="Image 5" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
We have installed Nginx through Terraform.&lt;br&gt;
Thank you&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>nginx</category>
      <category>gcp</category>
    </item>
    <item>
      <title>Docker Container manipulation Part-II</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:29:50 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/docker-container-manipulation-part-ii-1db2</link>
      <guid>https://community.ops.io/akshay_rao/docker-container-manipulation-part-ii-1db2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Hi, I am Akshay Rao this is the part II of  container manipulations. I had written the blog on basic manipulation. &lt;br&gt;
I will leave the link of the previous in the conclusion.&lt;br&gt;
I have used  Go app for demonstration purposes.&lt;br&gt;
&lt;strong&gt;How to Restart a Container&lt;/strong&gt; &lt;br&gt;
Here there can be two situations &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restarting a stopped or killed container&lt;/li&gt;
&lt;li&gt;Rebooting a running container
To restart a stopped or killed container we can use the container start command:
&lt;code&gt;docker container start "identifier"&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container start akshay_rao
akshay_rao
akshayrao@HL00802 ~ % docker container ls -a
CONTAINER ID   IMAGE                                             COMMAND          CREATED        STATUS                    PORTS                    NAMES
6360b3c15f34   aksrao1998/first-go-project                       "/app/project"   28 hours ago   Up 14 seconds             0.0.0.0:8080-&amp;gt;8080/tcp   akshay_rao
52fa8c04c200   aksrao1998/first-go-project                       "/app/project"   29 hours ago   Exited (2) 28 hours ago                            elated_colden
b2f071967c1a   aksrao1998/first-go-project                       "/app/project"   29 hours ago   Exited (2) 29 hours ago                            goofy_haslett
5db127b7f005   aksrao1998/first-go-repository:first-go-project   "/app/project"   29 hours ago   Exited (2) 29 hours ago                            nostalgic_margulis
eab55402a418   aksrao1998/first-go-repository:first-go-project   "/app/project"   29 hours ago   Exited (2) 29 hours ago                            blissful_sutherland
b0de1c496db1   aksrao1998/first-go-project                       "/app/project"   29 hours ago   Exited (2) 29 hours ago                            peaceful_moore
7c72eda7950c   aksrao1998/first-go-project                       "/app/project"   4 weeks ago    Exited (2) 4 weeks ago                             zealous_clarke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Rebooting a container:&lt;br&gt;
 &lt;code&gt;docker container restart "identifier"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container restart akshay_rao
akshay_rao
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Create a Container Without Running&lt;/strong&gt;&lt;br&gt;
 &lt;code&gt;docker container create "image-name"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container create  -p 8080:8080 aksrao1998/first-go-project
6360b3c15f34b8dc605079cfd1c58f9e4ea9c900d4307bab5fafe766c4623451
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Remove Dangling Containers&lt;/strong&gt;&lt;br&gt;
The container rm command is used to remove unnecessary containers.&lt;br&gt;
 &lt;code&gt;docker container rm "container-identifier"&lt;/code&gt;&lt;br&gt;
There is also the --rm option for the container run and container start commands, which removes the container as soon as it is stopped.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run —rm -p 8080:8080 —name remove_it  aksrao1998/first-go-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Run a Container in Interactive Mode&lt;/strong&gt;&lt;br&gt;
Sometimes we have to execute commands inside the container; the -it option is used to attach to the container's standard input with a terminal.&lt;br&gt;
Any interactive program inside a container can be interacted with using the -it option. This option is actually the combination of two distinct options.&lt;/p&gt;
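
&lt;p&gt;For example, a minimal sketch (assuming Docker can pull the public ubuntu image; any image that ships a shell works the same way):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# open an interactive bash session in a throwaway container
docker container run -it --rm ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;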

&lt;p&gt;You can send inputs to bash by connecting to the container's input stream using the -i or —interactive option.&lt;br&gt;
By allocating a pseudo-tty, the -t or —tty option ensures that you receive good formatting and a native terminal-like experience.&lt;br&gt;
&lt;a href="https://community.ops.io/images/l4XDy9lQNZuicOfsXm9hrUmhQJc44S3wsGk-UlAsUBM/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvZm9v/aWN6dHJ1ZnpjZnN3/bWlsc2MucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/l4XDy9lQNZuicOfsXm9hrUmhQJc44S3wsGk-UlAsUBM/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvZm9v/aWN6dHJ1ZnpjZnN3/bWlsc2MucG5n" alt="Image 1" width="800" height="521"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;How to Work With Executable Images&lt;/strong&gt;&lt;br&gt;
The program running inside the container is isolated from the local host file system, so to grant direct access we use bind mounts.&lt;br&gt;
By using a bind mount, you can create a two-way data connection between a directory on your local file system (the source) and a directory inside a container (the destination). Any modification made to the destination directory is reflected in the source directory, and vice versa.&lt;br&gt;
&lt;code&gt;--volume or -v "local file system directory absolute path":"container file system directory absolute path":"read write access"&lt;/code&gt;&lt;br&gt;
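&lt;/p&gt;

&lt;p&gt;A sketch of that syntax (the host path and busybox image here are assumptions, not from the original):&lt;/p&gt;

```shell
# bind-mount the current working directory to /data inside the container,
# with read-write access; files created under /data appear on the host too
docker container run --rm -v "$(pwd):/data:rw" busybox ls /data
```

&lt;p&gt;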
&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
I have covered most of the common container manipulation commands.&lt;br&gt;
The previous blog:- &lt;a href="https://dev.to/aksrao1998/docker-container-manipulation-part-i-1pbe"&gt;https://dev.to/aksrao1998/docker-container-manipulation-part-i-1pbe&lt;/a&gt;&lt;br&gt;
Thank you&lt;/p&gt;

</description>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Container Manipulation Part-I</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:27:53 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/docker-container-manipulation-part-i-4lpl</link>
      <guid>https://community.ops.io/akshay_rao/docker-container-manipulation-part-i-4lpl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I am Akshay Rao, working at Annotation.Inc. In this blog I try out different container manipulations.&lt;br&gt;
I have used a Go app for demonstration purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Run a Container&lt;/strong&gt;&lt;br&gt;
The "docker run" command is used to start a container from an image.&lt;br&gt;
The docker run "image name" form is a shorthand; the generic syntax is&lt;br&gt;
docker "object" "command" "options"&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Object - describes which docker object will be manipulated&lt;/li&gt;
&lt;li&gt;Command - task that the docker daemon will be assigned&lt;/li&gt;
&lt;li&gt;Options - parameters that override the default behaviour of the command.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;docker container run --publish 8080:8080 aksrao1998/first-go-project&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Publish a Port&lt;/strong&gt;&lt;br&gt;
Syntax:- --publish "host port":"container port"&lt;br&gt;
To publish a port I used the --publish or -p option, binding container port 8080 to host port 8080.&lt;br&gt;
The app can then be accessed at localhost:8080.&lt;br&gt;
To stop the container, close the terminal window or press &lt;code&gt;ctrl+c&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Use Detached Mode&lt;/strong&gt;&lt;br&gt;
The container stops when the terminal window is closed. This is because, by default, containers run in the foreground are attached to the terminal.&lt;br&gt;
So, in order to run the container separately from the terminal, the detach option is used.&lt;br&gt;
This is a very commonly used option: --detach or -d&lt;/p&gt;

&lt;p&gt;Example:-  &lt;code&gt;docker container run --detach --publish 8080:8080 aksrao1998/first-go-project&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8080:8080 aksrao1998/first-go-project
6360b3c15f34b8dc605079cfd1c58f9e4ea9c900d4307bab5fafe766c4623451
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The order of the options provided doesn't matter, but make sure that the options are written before the image name; anything written after the image name is treated as an argument.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to List Containers&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container ls

CONTAINER ID   IMAGE                            COMMAND          CREATED         STATUS           PORTS                       NAMES
6360b3c15f34   aksrao1998/first-go-project   "/app/project"   5 minutes ago   Up 5 minutes   0.0.0.0:8080-&amp;gt;8080/tcp   affectionate_hertz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container ID is 64 characters long, but for display only the first 12 characters are shown.&lt;br&gt;
The name is automatically assigned by the daemon; we can also manually assign a name to the container for better observability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/OYU_LRJ0ZiYtZ53k0f5J2dd7oPr1tsKk60Pvma2ZXlQ/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMnVi/c2dzOHpzenk3MTF6/aHlzc20ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/OYU_LRJ0ZiYtZ53k0f5J2dd7oPr1tsKk60Pvma2ZXlQ/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMnVi/c2dzOHpzenk3MTF6/aHlzc20ucG5n" alt="Image 1" width="800" height="81"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;How to Name or Rename a Container&lt;/strong&gt;&lt;br&gt;
Every container has two identifiers&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Container ID - a random 64-character-long string&lt;/li&gt;
&lt;li&gt;Name - a combination of two random words, joined with an underscore&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;--name&lt;/code&gt; option can be used while running a container.&lt;/p&gt;
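&lt;p&gt;For example, a sketch of naming a container at run time (reusing the image from the earlier examples; the name hello_web is an assumption):&lt;/p&gt;

```shell
# run detached and give the container an explicit name instead of a generated one
docker container run --detach --name hello_web -p 8080:8080 aksrao1998/first-go-project
```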

&lt;p&gt;To rename an existing container, use container rename.&lt;br&gt;
Syntax:- docker container rename "container identifier" "new name"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rename affectionate_hertz akshay_rao
docker container ls

CONTAINER ID   IMAGE                          COMMAND          CREATED          STATUS            PORTS                  NAMES
6360b3c15f34   aksrao1998/first-go-project   "/app/project"   19 minutes ago   Up 6 seconds   0.0.0.0:8080-&amp;gt;8080/tcp   akshay_rao
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Stop or Kill a Running Container&lt;/strong&gt;&lt;br&gt;
A foreground container can be stopped by closing the terminal window or pressing ctrl+c.&lt;br&gt;
But for containers running in detached mode, the container stop command is needed.&lt;br&gt;
Syntax:- docker container stop "identifier"&lt;br&gt;
Example:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container stop akshay_rao
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sends a SIGTERM signal to shut down the container properly; if the container doesn't stop within a certain time, a SIGKILL signal is sent to shut it down immediately.&lt;br&gt;
To kill the container directly, the kill command can be used.&lt;br&gt;
Syntax:- docker container kill "identifier"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container kill akshay_rao
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
I have introduced basic manipulations such as running, publishing, detaching, renaming and terminating containers.&lt;br&gt;
I hope this helps you manage docker containers; stay tuned for Part-II.&lt;br&gt;
What do you think of the blog?&lt;br&gt;
Please comment and share.&lt;br&gt;
Thank you&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
    <item>
      <title>How does a pod get launched when a deployment is applied</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:24:49 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/how-does-a-pod-get-launched-when-deployment-is-applied-140e</link>
      <guid>https://community.ops.io/akshay_rao/how-does-a-pod-get-launched-when-deployment-is-applied-140e</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Hi, I am Akshay Rao.&lt;br&gt;
While working with Kubernetes, I had a question: how does a deployment create a pod? Even though the kind we mention in the YAML file is not Pod, a pod still gets created. I researched and understood how it actually works.&lt;br&gt;
&lt;strong&gt;Let's Start&lt;/strong&gt;&lt;br&gt;
To understand this flow, we need to understand the controllers in k8s.&lt;br&gt;
Every controller has two components:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Informers&lt;/strong&gt; keep an eye on the desired state of resources in a scalable and sustainable manner. They also have a resync mechanism, which enforces periodic reconciliation and is frequently used to ensure that the cluster state and the assumed state cached in memory do not drift (due to faults or network issues).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work queue&lt;/strong&gt; is essentially a component that may be utilized by the event handler to handle the queuing of state changes and aid in the implementation of retries. This feature is accessible in client-go via the work queue package. Resources can be requeued if there are errors when updating the world state or publishing the status, or if we need to re-evaluate the resource after a period of time for various reasons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have understood the controller components, let's look at the pod creation flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/4y8hg2zrLGs99VWGygVtBQ1hF1gSGfHSH0h1OCJWCNk/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNGNl/eTF6NzJvaHhwam5k/emM4aHgucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/4y8hg2zrLGs99VWGygVtBQ1hF1gSGfHSH0h1OCJWCNk/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNGNl/eTF6NzJvaHhwam5k/emM4aHgucG5n" alt="Image flowchart" width="720" height="970"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The deployment controller (located within kube-controller-manager) detects (through a deployment informer) that the user has created a deployment. In its business logic, it generates a replica set.&lt;/li&gt;
&lt;li&gt;The replica set controller (again, inside kube-controllermanager) observes the new replica set (through a replica set informer) and executes its business logic, which generates a pod object.&lt;/li&gt;
&lt;li&gt;The scheduler (within the kube-scheduler binary), which is also a controller, observes the pod with an empty spec.nodeName field (through a pod informer). Its business logic queues the pod for scheduling.&lt;/li&gt;
&lt;li&gt;Meanwhile, another controller, the kubelet, observes the new pod (via its pod informer). However, the new pod's spec.nodeName field is empty, so it does not match the kubelet's node name. It ignores the pod and returns to sleep until the next event is triggered.&lt;/li&gt;
&lt;li&gt;The scheduler removes the pod from the work queue and assigns it to a node with appropriate spare resources by modifying the pod's spec.nodeName field and writing it to the API server.&lt;/li&gt;
&lt;li&gt;The kubelet re-wakes as a result of the pod update event. It compares the spec.nodeName to its own node name once again. Because the names match, the kubelet launches the pod's containers and informs back to the API server that the containers have been launched by writing this information into the pod status.&lt;/li&gt;
&lt;li&gt;The replica set controller observes the modified pod but is powerless to intervene.&lt;/li&gt;
&lt;li&gt;The pod eventually comes to an end. The kubelet will detect this, retrieve the pod object from the API server, set the "terminated" condition in the pod's status, and return it to the API server.&lt;/li&gt;
&lt;li&gt;When the replica set controller observes the terminated pod, it determines that the pod must be replaced. It removes the terminated pod from the API server and replaces it with a fresh one.&lt;/li&gt;
&lt;li&gt;And so on.&lt;/li&gt;
&lt;/ol&gt;
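&lt;p&gt;The cascade above can be observed from the command line. A minimal sketch (the deployment name web is an assumption for illustration):&lt;/p&gt;

```shell
# create a deployment; its controller creates a replica set,
# whose controller in turn creates the pod
kubectl create deployment web --image=nginx --replicas=1

# list all three objects produced by the cascade
kubectl get deployment,replicaset,pod -l app=web
```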

&lt;p&gt;Thus, this is how pods are created via deployments.&lt;br&gt;
I hope this has brought some clarity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes: Core Concepts</title>
      <dc:creator>Akshay Rao</dc:creator>
      <pubDate>Sun, 29 Oct 2023 17:23:01 +0000</pubDate>
      <link>https://community.ops.io/akshay_rao/kubernetes-core-concepts-1in3</link>
      <guid>https://community.ops.io/akshay_rao/kubernetes-core-concepts-1in3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Hi, I am Akshay Rao; I will be starting an exercise series on k8s.&lt;br&gt;
This blog contains only problems and solutions, with no explanations. If you want explanations, have a look at this series:-&lt;br&gt;
&lt;a href="https://dev.to/aksrao1998/series/24887"&gt;https://dev.to/aksrao1998/series/24887&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-requisite&lt;/strong&gt;&lt;br&gt;
Have minikube or kind running on your local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;:- k is an alias for kubectl.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's Start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 1&lt;/strong&gt;&lt;br&gt;
Create a namespace called 'mynamespace' and a pod with image nginx called nginx on this namespace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[k8s-ckad (⎈|minikube:default)]$ k get ns
NAME              STATUS   AGE
default           Active   9d
hands-on          Active   9d
kube-node-lease   Active   9d
kube-public       Active   9d
kube-system       Active   9d

[k8s-ckad (⎈|minikube:default)]$ k create namespace mynamespace
namespace/mynamespace created
[k8s-ckad (⎈|minikube:default)]$ k config set-context --current --namespace=mynamespace
Context "minikube" modified.

# deploy a pod
[k8s-ckad (⎈|minikube:mynamespace)]$ kubectl run nginx --image=nginx --dry-run=client -o yaml &amp;gt; pods.yaml

# edit the pods.yaml
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: mynamespace 

# verify
[k8s-ckad (⎈|minikube:mynamespace)]$ k create -f pods.yaml 
pod/nginx created
[k8s-ckad (⎈|minikube:mynamespace)]$ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          6s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 2&lt;/strong&gt;&lt;br&gt;
Create a busybox pod (using kubectl command) that runs the command "env". Run it and see the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[k8s-ckad (⎈|minikube:mynamespace)]$ k run soln2 --image=busybox --command --restart=Never -it --rm -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=soln2
TERM=xterm
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
HOME=/root
pod "soln2" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 3&lt;/strong&gt;&lt;br&gt;
Create the YAML for a new ResourceQuota called 'myrq' with hard limits of 1 CPU, 1G memory and 2 pods without creating it&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[k8s-ckad (⎈|minikube:mynamespace)]$ k create quota myrq  --hard=cpu=1,memory=1G,pods=2 --dry-run=client -o y
aml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: null
  name: myrq
spec:
  hard:
    cpu: "1"
    memory: 1G
    pods: "2"
status: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 4&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a pod with image nginx:1.25.1 called nginx and expose traffic on port 80&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[k8s-ckad (⎈|minikube:mynamespace)]$ k run  mypod --image=nginx:1.25.1 --port=80
pod/mypod created

#Verify

[k8s-ckad (⎈|minikube:mynamespace)]$ k get po
NAME    READY   STATUS              RESTARTS   AGE
mypod   0/1     ContainerCreating   0          4s
nginx   1/1     Running             0          20m

[k8s-ckad (⎈|minikube:mynamespace)]$ k exec -it mypod -- nginx -v
nginx version: nginx/1.25.1

[k8s-ckad (⎈|minikube:mynamespace)]$ k describe pods mypod
Name:             mypod
Namespace:        mynamespace
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Thu, 12 Oct 2023 17:33:46 +0900
Labels:           run=mypod
Annotations:      &amp;lt;none&amp;gt;
Status:           Running
IP:               172.17.0.4
IPs:
  IP:  172.17.0.4
Containers:
  mypod:
    Container ID:   docker://147821ce3a12c57d9fef21026a57fcd0cee71360b411275db391a3dcccc25270
    Image:          nginx:1.25.1
    Image ID:       docker-pullable://nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
    Port:           80/TCP
    Host Port:      0/TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope these exercises have helped you.&lt;br&gt;
&lt;strong&gt;Thank you&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorials</category>
      <category>beginner</category>
    </item>
  </channel>
</rss>
