<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: Patrick Londa</title>
    <description>The latest articles on The Ops Community ⚙️ by Patrick Londa (@patrick_londa).</description>
    <link>https://community.ops.io/patrick_londa</link>
    <image>
      <url>https://community.ops.io/images/KerBCTj2z7dQck4HSIfBwBNislFxPAz4QFxm70CFnZs/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS83LzY1NmMy/ZjZiLWQ5N2QtNGNh/NC04ODc2LWI2NGRj/MWQwZTM2Zi5qcGc</url>
      <title>The Ops Community ⚙️: Patrick Londa</title>
      <link>https://community.ops.io/patrick_londa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/patrick_londa"/>
    <language>en</language>
    <item>
      <title>Aligning Your AWS Account with the FFIEC Cybersecurity Standards</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Wed, 05 Jul 2023 20:45:07 +0000</pubDate>
      <link>https://community.ops.io/blinkops/aligning-your-aws-account-with-the-ffiec-cybersecurity-standards-2goj</link>
      <guid>https://community.ops.io/blinkops/aligning-your-aws-account-with-the-ffiec-cybersecurity-standards-2goj</guid>
      <description>&lt;p&gt;Companies in the banking and finance industry must adhere to high security standards since they are high-value targets for bad actors. &lt;/p&gt;

&lt;p&gt;Industry-specific organizations like the Federal Financial Institutions Examination Council (FFIEC) have established guidelines to help companies ensure compliance with applicable laws and regulations.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to check if your AWS account adheres to the cybersecurity standards set forth by the FFIEC using automations in Blink.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the FFIEC Cybersecurity Standards
&lt;/h2&gt;

&lt;p&gt;Established in 1979, the &lt;a href="https://www.ffiec.gov/about.htm"&gt;Federal Financial Institutions Examination Council (FFIEC)&lt;/a&gt; is a U.S. government interagency body of five organizations working together to ensure the safety and soundness of the banking system.&lt;/p&gt;

&lt;p&gt;The FFIEC coordinates common standards for banks and develops uniform guidelines and examinations for all financial institutions. It also releases tooling, like the &lt;a href="https://www.ffiec.gov/cyberassessmenttool.htm"&gt;Cybersecurity Assessment Tool (CAT)&lt;/a&gt;, to help financial institutions evaluate their cybersecurity risk and develop appropriate controls. The CAT is a document that provides a framework and guidance, but it does not interactively assess an AWS account for compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  FFIEC Cybersecurity Guidance for AWS
&lt;/h2&gt;

&lt;p&gt;An audit of an organization's AWS environment is a critical part of FFIEC compliance requirements. AWS provides the tools and services necessary for financial institutions to adhere to FFIEC regulations, but each organization must ensure that its environment meets the specific requirements of the FFIEC.&lt;/p&gt;

&lt;p&gt;AWS provides &lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/operational-best-practices-for-ffiec.html"&gt;operational best practices&lt;/a&gt; for FFIEC compliance, including a list of control IDs, AWS configuration rules, and guidance.&lt;/p&gt;

&lt;p&gt;Here are some examples of controls that organizations using AWS must follow to meet the FFIEC guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An inventory of organizational assets is maintained.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An information security and business continuity risk management function exists within the institution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The risk assessment identifies internet-based systems and high-risk transactions that warrant additional authentication controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Information security threats are gathered and shared with applicable internal employees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit log records and other security event logs are reviewed and retained in a secure manner.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each of these controls, there are anywhere from a few to &lt;a href="https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-FFIEC.yaml"&gt;several configuration rules&lt;/a&gt; in AWS that could apply to your organization, depending on the guidance.&lt;/p&gt;
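&lt;p&gt;As a rough illustration, here is how you might count the Config rules defined in a local copy of that conformance pack. The YAML excerpt below is hypothetical and stands in for the real file; the rule names are examples only:&lt;/p&gt;

```shell
# Hypothetical excerpt standing in for the FFIEC conformance pack YAML.
cat > /tmp/ffiec-pack.yaml <<'EOF'
Resources:
  Ec2EbsEncryptionByDefault:
    Type: AWS::Config::ConfigRule
  IamPasswordPolicy:
    Type: AWS::Config::ConfigRule
EOF

# Count how many AWS Config rules the pack defines:
grep -c 'AWS::Config::ConfigRule' /tmp/ffiec-pack.yaml   # → 2
```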

&lt;p&gt;Manually checking whether your EC2 volumes are all encrypted, your IP addresses are all private, or you have the right password policy in place could take days or weeks.&lt;/p&gt;

&lt;p&gt;If you want to check your AWS environment for compliance quickly, you can use automation to get a comprehensive report based on these controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating FFIEC Compliance for AWS with Blink
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;one automation&lt;/a&gt; in Blink, you could quickly scan your AWS environment to check your FFIEC compliance against the controls and generate reports with the findings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/XKv1EI010izL5Z0Xsl1PS8mdk4lfn8Dg_sBuvvkTNkU/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2RqeDJv/dDliMDZkZXN4bHNi/MGowLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/XKv1EI010izL5Z0Xsl1PS8mdk4lfn8Dg_sBuvvkTNkU/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2RqeDJv/dDliMDZkZXN4bHNi/MGowLnBuZw" alt="Blink Automation: Federal Financial Institutions Examination Council Compliance Report for AWS" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Blink Automation: &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;Federal Financial Institutions Examination Council Compliance Report for AWS&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;this automation&lt;/a&gt; runs, it executes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generates a Cyber Risk Management and Oversight Report.&lt;/li&gt;
&lt;li&gt;Generates a Threat Intelligence and Collaboration Report.&lt;/li&gt;
&lt;li&gt;Generates a Cybersecurity Controls Report.&lt;/li&gt;
&lt;li&gt;Generates an External Dependency Management Report.&lt;/li&gt;
&lt;li&gt;Generates a Cyber Incident Management and Resilience Report.&lt;/li&gt;
&lt;li&gt;Sends Report results to a specified email.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You could set this automation to run weekly, monthly, or quarterly so you can validate that you are maintaining your compliance over time.&lt;/p&gt;

&lt;p&gt;You may also have other compliance checks you need to run beyond the Federal Financial Institutions Examination Council guidelines. What about SOC, ISO, or PCI compliance?&lt;/p&gt;

&lt;p&gt;There are over 7K pre-built automations in the &lt;a href="https://library.blinkops.com/automations"&gt;Blink Library&lt;/a&gt; that make it easy to gauge your environments against industry standards.&lt;/p&gt;

&lt;p&gt;To start streamlining your compliance and security checks today, sign up for a &lt;a href="https://app.blinkops.com/signup"&gt;free trial&lt;/a&gt; or &lt;a href="https://www.blinkops.com/schedule-time"&gt;guided demo&lt;/a&gt; of Blink.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Introducing Blink Copilot: Generative AI for Security Workflows</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Wed, 31 May 2023 19:27:58 +0000</pubDate>
      <link>https://community.ops.io/blinkops/introducing-blink-copilot-generative-ai-for-security-workflows-5jp</link>
      <guid>https://community.ops.io/blinkops/introducing-blink-copilot-generative-ai-for-security-workflows-5jp</guid>
      <description>&lt;p&gt;Today, we’re announcing Blink Copilot, the first ever generative AI for automating security and IT operations workflows. This innovative tool makes it possible for any security operator to generate no-code workflows using a simple prompt.&lt;/p&gt;

&lt;p&gt;When &lt;a href="https://www.blinkops.com"&gt;Blink&lt;/a&gt; was founded in 2021, we knew security teams needed a better way to automate internal workflows. Low-code platforms had already transformed business operations for CRM and marketing, and it was only a matter of time before low-code solutions enabled security teams to automate their own workflows across their stacks.&lt;/p&gt;

&lt;p&gt;Recent advancements in large language models (LLM) and generative AI have supercharged our mission, finally making true no-code automation possible. That’s why today, we’re excited to announce Blink Copilot, the first ever generative AI for security workflow automation. &lt;/p&gt;

&lt;p&gt;Blink Copilot enables security teams to generate any simple or complex workflow, instantly. Generative AI makes it possible to build workflows without writing code or needing to be an expert in target applications.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/exEzdWmUpNQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The Critical Need for Security Automation
&lt;/h2&gt;

&lt;p&gt;According to the &lt;a href="https://www.isc2.org/Research/Workforce-Study"&gt;2022 (ISC)² Cybersecurity Workforce Study&lt;/a&gt;, there are over 3.4 million open security jobs and too few skilled security operators to fill them. Meanwhile, cybersecurity breaches in 2022 were &lt;a href="https://www.forbes.com/sites/forbesbusinesscouncil/2023/03/30/lessons-learned-from-the-data-breaches-of-2022/?sh=34e33e2b42e4"&gt;more costly than ever before&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;With such high stakes and a massive skills gap, security automation is necessary for organizations to defend against the massive volume of attacks enterprises face on a daily basis. &lt;/p&gt;

&lt;p&gt;Blink Copilot empowers teams of any size to build any no-code, low-code, or code security automation using generative AI. With Blink Copilot, security and IT operators of all skill levels can now leverage AI to increase productivity and deliver workflows to help protect their organizations.&lt;/p&gt;

&lt;p&gt;Blink Copilot unlocks unparalleled automation capabilities for security teams. Security automation projects that once took months are now built in seconds using Blink Copilot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate Security Beyond the SOC
&lt;/h2&gt;

&lt;p&gt;Blink Copilot empowers security teams to automate any operations workflow, effortlessly. Security practitioners, regardless of skill set, can automate any operational task or security workflow using a simple text prompt.&lt;/p&gt;

&lt;p&gt;Blink Copilot can help security teams automate workflows for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SOC &amp;amp; Incident Response&lt;/li&gt;
&lt;li&gt;Cloud Security&lt;/li&gt;
&lt;li&gt;IT &amp;amp; SaaS Security&lt;/li&gt;
&lt;li&gt;Identity &amp;amp; Access Management&lt;/li&gt;
&lt;li&gt;Governance, Risk &amp;amp; Compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Blink Copilot, security teams can finally automate workflows beyond the SOC. Blink customers get out-of-the-box automations for popular security use cases and powerful automation capabilities backed by the robust Blink platform.&lt;/p&gt;

&lt;p&gt;Our internal team at Blink Ops has already generated over 7,000 workflow automations. Now with Blink Copilot, we’re empowering any team to build no-code, low-code, or code security workflow automations using generative AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Powerful No-Code Security Automation
&lt;/h2&gt;

&lt;p&gt;Blink Copilot is backed by the most powerful no-code security automation platform ever built.&lt;/p&gt;

&lt;p&gt;Key features of the Blink platform include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blink Copilot&lt;/strong&gt;: The first ever generative AI for automating security and IT operations workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Code Editor&lt;/strong&gt;: Drag-and-drop user interface that makes it easy for security teams to customize workflow automations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation Library&lt;/strong&gt;: 7,000+ security workflow automations built by the Blink community&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Service Portal&lt;/strong&gt;: Shift-left service requests by making automated workflows available in a self-service portal or interactive Slack/Microsoft Teams app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Blink was designed for enterprise security teams, offering robust platform features like secure workspaces, on-prem runners, and multi-tenant support for MSPs and MSSPs. &lt;/p&gt;

&lt;p&gt;Blink is committed to upholding the highest grade of industry security standards, including SOC 2 Type 2, GDPR, and ISO 27001 compliance. Blink can also help teams leverage no-code automation to achieve compliance across their own organizations.&lt;/p&gt;

&lt;p&gt;Learn more about the Blink platform and technical capabilities in the &lt;a href="https://www.docs.blinkops.com/docs/Documentation"&gt;Blink documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started with Blink Copilot
&lt;/h2&gt;

&lt;p&gt;Blink Copilot completely transforms how security teams will automate simple and complex workflows.&lt;/p&gt;

&lt;p&gt;Security teams can take advantage of Blink Copilot today to shift-left internal workflows. Along with 7,000 out-of-the-box automations for common security use cases and powerful automation capabilities backed by the robust Blink platform, enterprise security teams can finally achieve operational excellence across their entire stack.&lt;/p&gt;

&lt;p&gt;For more information on Blink Copilot, visit the &lt;a href="https://www.blinkops.com/"&gt;Blink Ops website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>secops</category>
      <category>ai</category>
      <category>nocode</category>
    </item>
    <item>
      <title>How to Find and Update Public EC2 AMIs in AWS</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Fri, 05 May 2023 19:30:54 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-find-and-update-public-ec2-amis-in-aws-mb0</link>
      <guid>https://community.ops.io/blinkops/how-to-find-and-update-public-ec2-amis-in-aws-mb0</guid>
      <description>&lt;p&gt;Resources in AWS should only be accessible to users who require them to complete tasks. If this principle of least privilege isn't followed, you increase the risk for data leakages or unauthorized access.&lt;/p&gt;

&lt;p&gt;If you have EC2 AMIs that are publicly shared, for example, they are available to any AWS account and may contain sensitive data about your organization, such as passwords, SSH keys, and configuration details.&lt;/p&gt;

&lt;p&gt;In this guide, we will explain how to find and restrict publicly shared AWS EC2 AMIs so you can ensure that sensitive information about your applications is not exposed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AMIs for AWS EC2
&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"&gt;Amazon Machine Image&lt;/a&gt; (AMI) is an image that enables you to easily launch &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"&gt;Amazon Elastic Compute Cloud&lt;/a&gt; (EC2) instances with the necessary requirements preconfigured.&lt;/p&gt;

&lt;p&gt;Each AMI includes instructions for block device mapping, launch permissions, and a root device volume, either backed by the EC2 instance store or at least one Amazon Elastic Block Store (EBS) snapshot.&lt;/p&gt;

&lt;p&gt;You can use an AMI to launch single instances or multiple servers in a cluster. For example, if you are launching a web server that handles a large volume of traffic, you can use an AMI to deploy it quickly. You can also launch multiple instances in the same cluster and configure them to work together.&lt;/p&gt;

&lt;p&gt;These AMIs can be shared with specific users, giving them access to the same configurations and settings. If they are publicly shared, AMIs can reveal sensitive information about how your instances function.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Find Any Public AMIs in AWS
&lt;/h2&gt;

&lt;p&gt;Since an AMI can be made public with a single setting change, a publicly shared image is easy to miss if you haven't taken the time to check your settings. Luckily, there are two ways to check whether your AMIs are publicly shared: the Amazon EC2 Console or the AWS CLI, depending on your AWS environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the AWS Console:
&lt;/h3&gt;

&lt;p&gt;This method is the most straightforward way to check if your AMIs are publicly shared. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS Management Console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;EC2&lt;/strong&gt; from the list of services.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;AMIs&lt;/strong&gt; in the left sidebar menu under the &lt;strong&gt;IMAGES&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;To see all the public AMIs, use the first filter in the AMI list and choose &lt;strong&gt;Public Images&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;To check individual AMIs, select one and scroll down to the &lt;strong&gt;Permissions&lt;/strong&gt; section and check the current permissions. If the selected AMI is publicly shared, the EC2 dashboard will display the following message, "This image is currently Public" in the &lt;strong&gt;Permissions&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the AWS CLI:
&lt;/h3&gt;

&lt;p&gt;You can also use the AWS Command Line Interface (CLI) to check whether your AMIs are publicly shared. This process is a bit more complex, but here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html"&gt;describe-images&lt;/a&gt; command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-images --executable-users all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command will return a list of all the publicly shared AMIs in your account. If the command returns images with the &lt;strong&gt;Public&lt;/strong&gt; parameter set to &lt;strong&gt;true&lt;/strong&gt;, those AMIs are publicly accessible.&lt;/p&gt;

&lt;p&gt;For instance, the AMI in question is publicly shared if the output is something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"Images": [{"ImageId": "ami-1234abcd", "Public": true}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
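&lt;p&gt;If you prefer to filter that output in a script, a minimal sketch might look like the following. It uses a saved sample of the JSON rather than a live AWS call, so the image IDs are hypothetical:&lt;/p&gt;

```shell
# Sample describe-images output saved locally; in practice you would
# pipe in the output of "aws ec2 describe-images --executable-users all".
cat > /tmp/images.json <<'EOF'
{"Images": [{"ImageId": "ami-1234abcd", "Public": true},
            {"ImageId": "ami-5678efgh", "Public": false}]}
EOF

# Print only the image IDs that are publicly shared:
python3 - /tmp/images.json <<'PY'
import json, sys

with open(sys.argv[1]) as f:
    data = json.load(f)

for image in data["Images"]:
    if image.get("Public"):
        print(image["ImageId"])
PY
```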



&lt;p&gt;Now that you have identified public AMIs, next you can update them to make them more secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Changing Public AMIs to Private AMIs in AWS
&lt;/h2&gt;

&lt;p&gt;You can update public AMIs to restrict access using either the AWS Console or the AWS CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the AWS Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS Management Console&lt;/a&gt; again and select EC2 from the services list.&lt;/li&gt;
&lt;li&gt;Select the AMI you want to make private from the list of AMIs in the &lt;strong&gt;IMAGES&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;Head to the Permissions tab from the bottom bar menu on your dashboard. Click on &lt;strong&gt;Edit AMI Permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Then select &lt;strong&gt;Private&lt;/strong&gt; and &lt;strong&gt;Save changes&lt;/strong&gt; to make the AMI private. The Permissions tab will now display the following message, "This AMI is not shared with any other accounts."&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the AWS CLI:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Run the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-image-attribute.html"&gt;modify-image-attribute&lt;/a&gt; command to make an AMI private:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 modify-image-attribute \
    --image-id ami-1234abcd \
    --launch-permission "{\"Remove\":[{\"Group\":\"all\"}]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will immediately make the AMI private while preserving its other launch permissions.&lt;/p&gt;
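&lt;p&gt;The backslash-escaped &lt;strong&gt;--launch-permission&lt;/strong&gt; argument is easy to get wrong. One way to avoid quoting mistakes is to keep the payload in a single-quoted shell variable and sanity-check it locally before passing it to the CLI:&lt;/p&gt;

```shell
# The same launch-permission payload, without backslash escaping:
payload='{"Remove":[{"Group":"all"}]}'

# Confirm it parses as valid JSON before using it with the AWS CLI,
# e.g. aws ec2 modify-image-attribute ... --launch-permission "$payload"
printf '%s' "$payload" | python3 -m json.tool
```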

&lt;p&gt;To double-check, you can list the attributes of the image by running the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html"&gt;describe-images&lt;/a&gt; command for the AMI with the id “ami-1234abcd”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-images --image-ids ami-1234abcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should now display the Public parameter as &lt;strong&gt;false&lt;/strong&gt; as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"Images": [{"ImageId": "ami-1234abcd", "Public": false}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you have successfully secured your non-compliant AMI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating AWS Permission Checks for EC2 AMIs with Blink
&lt;/h2&gt;

&lt;p&gt;It isn’t hard to find and secure public AMIs, but it can be time-consuming if you have many that need to be addressed or you want to check this on a regular basis.&lt;/p&gt;

&lt;p&gt;How do you ensure that you don’t have overly-permissive AMIs at any given time? If you are only doing this task with CLI commands or steps in the console, you will need to context-switch to oversee each step.&lt;/p&gt;

&lt;p&gt;With an automation platform like &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can run &lt;a href="https://library.blinkops.com/automations/ensure-ec2-amis-are-not-shared-publicly-in-aws"&gt;this automation&lt;/a&gt; to identify and notify your team of new public AMIs so you can maintain consistent compliance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/TjyAolmqa4nOH9wd37w3XQ2EpQsUzlhV7n8NA4Oq0OI/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3dnZHpz/ZDYxeXVmcnduNjk2/MmR1LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/TjyAolmqa4nOH9wd37w3XQ2EpQsUzlhV7n8NA4Oq0OI/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3dnZHpz/ZDYxeXVmcnduNjk2/MmR1LnBuZw" alt="Blink Automation: Ensure EC2 AMIs are Not Shared Publicly in AWS" width="800" height="500"&gt;&lt;/a&gt;&lt;em&gt;Blink Automation: &lt;a href="https://library.blinkops.com/automations/ensure-ec2-amis-are-not-shared-publicly-in-aws"&gt;Ensure EC2 AMIs are Not Shared Publicly in AWS&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This automation in the &lt;a href="https://library.blinkops.com/automations"&gt;Blink library&lt;/a&gt; executes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checks if there are any publicly shared AWS EC2 AMIs.&lt;/li&gt;
&lt;li&gt;Sends a report to a specified email address.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a simple automation, which makes it easy to customize for your organization’s needs. For example, you can add actions to update the settings of non-compliant AMIs with just an approval via Slack.&lt;/p&gt;

&lt;p&gt;This automation and 5K more are available for you to use right away from the Blink library. Instead of manually implementing best practices with the AWS console or CLI, these ready-to-use automations enable you to set and enforce policies across your organization or teams. &lt;/p&gt;

&lt;p&gt;You can also build custom automations to match your organization’s unique use cases, whether you want to schedule a task, run steps based on an event in a certain tool, or share a self-service application for your coworkers to use.&lt;/p&gt;

&lt;p&gt;Start a &lt;a href="https://app.blinkops.com/signup"&gt;free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time with us&lt;/a&gt; to see more.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secops</category>
    </item>
    <item>
      <title>How to Search for an IOC Across Devices in CrowdStrike</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Sun, 09 Apr 2023 16:39:43 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-search-for-an-ioc-across-devices-in-crowdstrike-2958</link>
      <guid>https://community.ops.io/blinkops/how-to-search-for-an-ioc-across-devices-in-crowdstrike-2958</guid>
      <description>&lt;p&gt;When malware is detected on one of your organization’s devices, it will have characteristics called Indicators of Compromise (IOCs), such as certain hash values, urls, or IP addresses.&lt;/p&gt;

&lt;p&gt;You can use these IOCs to look across your organization’s devices to identify lateral movement associated with an attack.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to use CrowdStrike to detect if IOCs associated with malware are present on any other devices at your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Searching for an IOC Across CrowdStrike Hosts
&lt;/h2&gt;

&lt;p&gt;You can search for IOCs on other devices either by using the CrowdStrike Console or by using the CrowdStrike API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the CrowdStrike Console:
&lt;/h3&gt;

&lt;p&gt;1: First log in to the &lt;a href="https://falcon.crowdstrike.com/login/"&gt;CrowdStrike Falcon Console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2: Open the left-hand menu and select &lt;strong&gt;Investigate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/QkchbS8N-gvjhjCrbPQTkhKq_bJSDg91rROIkQZhqxI/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y0cHd5/MnQxNDNieXpsc2U2/aGpwLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/QkchbS8N-gvjhjCrbPQTkhKq_bJSDg91rROIkQZhqxI/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y0cHd5/MnQxNDNieXpsc2U2/aGpwLnBuZw" alt="Investigation Menu in the CrowdStrike Console" width="880" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3: Depending on your IOC type, choose the related link under the &lt;strong&gt;Search&lt;/strong&gt; section. For example, if you are looking for an IOC that is a domain, you can choose &lt;strong&gt;Bulk domains&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/GT4RlaTxL7bJx6oE4netRna-qLWdXv3x-bfpMgwQeDc/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RjYW0z/eWR2MmZzOHdieXdp/Nmo4LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/GT4RlaTxL7bJx6oE4netRna-qLWdXv3x-bfpMgwQeDc/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RjYW0z/eWR2MmZzOHdieXdp/Nmo4LnBuZw" alt="Searching IOCs by Domains in CrowdStrike" width="880" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4: Input your IOC value and specify the time range you care about. Then click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the results, you’ll see which hosts have observed the IOCs you’re investigating, along with process-level details.&lt;/p&gt;

&lt;p&gt;You may want to contain any additional hosts now associated with the IOC. We wrote a guide on containing hosts here. For your audit trail, you can export this data by hovering over either section and clicking the &lt;strong&gt;Export&lt;/strong&gt; icon.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the CrowdStrike Falcon API:
&lt;/h3&gt;

&lt;p&gt;The platform also offers an API that allows administrators to manage their sensors programmatically. You can use the endpoint that geographically aligns with your specific CrowdStrike account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;US-1 “api.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;US-2 “api.us-2.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;US-GOV-1 “api.laggar.gcw.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;EU-1 “api.eu-1.crowdstrike.com”&lt;/li&gt;
&lt;/ul&gt;
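&lt;p&gt;A small shell lookup (purely illustrative) makes the region-to-endpoint mapping above explicit:&lt;/p&gt;

```shell
# Map a Falcon cloud region to its API endpoint (values from the list above):
region="US-2"
case "$region" in
  US-1)     host="api.crowdstrike.com" ;;
  US-2)     host="api.us-2.crowdstrike.com" ;;
  US-GOV-1) host="api.laggar.gcw.crowdstrike.com" ;;
  EU-1)     host="api.eu-1.crowdstrike.com" ;;
esac
echo "$host"   # → api.us-2.crowdstrike.com
```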

&lt;p&gt;In the examples we show later, we’ll use “api.us-2.crowdstrike.com”.&lt;/p&gt;

&lt;p&gt;CrowdStrike’s API documentation is available after you &lt;a href="https://falcon.crowdstrike.com/login/?next=%2Fdocumentation%2F"&gt;log in here&lt;/a&gt;, and you’ll see information about how to &lt;a href="https://www.crowdstrike.com/blog/tech-center/get-access-falcon-apis/"&gt;use OAuth2 for authenticating&lt;/a&gt; your requests.&lt;/p&gt;

&lt;p&gt;Before you start, you need to make an access token request that includes your client ID and client secret. The response contains an access token that is valid for 30 minutes, and each subsequent API call includes that token.&lt;/p&gt;
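&lt;p&gt;As a sketch, the token request and extraction might look like the following. The curl call is shown commented out because it requires real credentials, and the response body below it is a hypothetical sample:&lt;/p&gt;

```shell
# Request a token (needs valid API credentials; shown for reference only):
# curl -s -X POST "https://api.us-2.crowdstrike.com/oauth2/token" \
#      -d "client_id=$FALCON_CLIENT_ID" -d "client_secret=$FALCON_CLIENT_SECRET"

# Hypothetical response body, used here to show extracting the token:
response='{"access_token": "eyJhbGciOi-example", "expires_in": 1799}'
token=$(printf '%s' "$response" |
  python3 -c 'import json, sys; print(json.load(sys.stdin)["access_token"])')
echo "$token"   # → eyJhbGciOi-example
```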

&lt;p&gt;Next, make a GET request to this endpoint with IOC type and value specified:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://api.us-2.crowdstrike.com/indicators/queries/devices/v1?type=sha256&amp;amp;value=XYZ
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the following IOC types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sha256&lt;/li&gt;
&lt;li&gt;md5&lt;/li&gt;
&lt;li&gt;domain&lt;/li&gt;
&lt;li&gt;ipv4&lt;/li&gt;
&lt;li&gt;ipv6&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use the parameters &lt;strong&gt;limit&lt;/strong&gt; and &lt;strong&gt;offset&lt;/strong&gt; to manage pagination of results. With the response, you’ll be able to see all the resources that have observed the specific IOC.&lt;/p&gt;
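&lt;p&gt;For example, paging through results 100 at a time means advancing &lt;strong&gt;offset&lt;/strong&gt; by the value of &lt;strong&gt;limit&lt;/strong&gt; on each request. This sketch only prints the request URLs; a real loop would stop once a response comes back empty:&lt;/p&gt;

```shell
base="https://api.us-2.crowdstrike.com/indicators/queries/devices/v1"
ioc_type="sha256"
ioc_value="XYZ"

# Print the first three page URLs (offset advances by the limit):
for offset in 0 100 200; do
  echo "${base}?type=${ioc_type}&value=${ioc_value}&limit=100&offset=${offset}"
done
```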

&lt;h3&gt;
  
  
  Automatically Searching Across Devices for IOCs with Blink:
&lt;/h3&gt;

&lt;p&gt;Checking if similar IOCs exist on other devices at your organization is one of many steps in responding to a security alert. While it isn’t difficult to do, it takes time and context-switching.&lt;/p&gt;

&lt;p&gt;With Blink, you can &lt;a href="https://library.blinkops.com/automations/search-crowdstrike-ioc-across-devices"&gt;handle this task automatically&lt;/a&gt; by just inputting an IOC type and value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/CikJKeMD0XuuBdpMEhNJOYJGRND2KHP9CEDoDsAOERE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2wxZzc0/MThub3B3cmNieDRm/c201LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/CikJKeMD0XuuBdpMEhNJOYJGRND2KHP9CEDoDsAOERE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2wxZzc0/MThub3B3cmNieDRm/c201LnBuZw" alt="Blink Automation: Search IOCs Across Devices in CrowdStrike" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these IOC inputs, this automation runs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Searches for other devices that have observed the IOC.&lt;/li&gt;
&lt;li&gt;Parses the results to format them into an easily readable report.&lt;/li&gt;
&lt;li&gt;Sends the report to the relevant SecOps team member via Slack.&lt;/li&gt;
&lt;/ol&gt;
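&lt;p&gt;As a rough standalone equivalent of steps 2 and 3, the formatting and Slack delivery could look like the sketch below. The webhook URL, the input shapes, and the function names are placeholder assumptions, not part of Blink’s automation.&lt;/p&gt;

```python
# Rough sketch of steps 2 and 3 above: format the search results into a
# readable report, then post it to Slack. Assumes device IDs from the
# IOC search and a Slack incoming-webhook URL (placeholders here).
import json
import urllib.request

def format_report(ioc_type, value, device_ids):
    """Step 2: turn raw device IDs into a readable report."""
    lines = [f"IOC {ioc_type}:{value} observed on {len(device_ids)} device(s):"]
    lines.extend(f"  - {device}" for device in sorted(device_ids))
    return "\n".join(lines)

def send_to_slack(webhook_url, text):
    """Step 3: post the report to a channel via an incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```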

&lt;p&gt;This is a simple automation, and that makes it easy to customize. For example, you can add it as a subflow in a larger incident response automation triggered by a CrowdStrike alert.&lt;/p&gt;

&lt;p&gt;You can also use any of the 5K automations in the &lt;a href="https://library.blinkops.com/automations/"&gt;Blink library&lt;/a&gt;, or build new ones from scratch to fit your unique needs. In Blink, there are hundreds of native integrations and thousands of actions available to make building easy.&lt;/p&gt;

&lt;p&gt;Start a &lt;a href="https://app.blinkops.com/signup/"&gt;free trial of Blink&lt;/a&gt; today to see how easy automation can be.&lt;/p&gt;

</description>
      <category>secops</category>
      <category>crowdstrike</category>
    </item>
    <item>
      <title>How to Get User Activity From Your Azure Logs</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Thu, 23 Mar 2023 18:14:47 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-get-user-activity-from-your-azure-logs-kfj</link>
      <guid>https://community.ops.io/blinkops/how-to-get-user-activity-from-your-azure-logs-kfj</guid>
      <description>&lt;p&gt;It’s important to be able to audit user activity in Azure, whether you are dealing with a security incident or just want to fully review the actions a user has taken.&lt;/p&gt;

&lt;p&gt;If one of your developers has their account compromised, reviewing their user activity can be a necessary task to ensure that they haven’t done anything malicious to your Azure account and resources, or exfiltrated data like access keys or secrets.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to retrieve all the activity logs for a given user in Azure to quickly assess the scope of the threat.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Get User Activity From Azure Logs
&lt;/h2&gt;

&lt;p&gt;Azure Monitor collects and organizes all log and performance data from Azure resources, and you can access the activity logs for the last 90 days through steps in the console or CLI commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the Azure Monitor Log:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Open the Azure console, and navigate to the &lt;strong&gt;Activity log&lt;/strong&gt; view. You can do this either by clicking into a specific resource or by searching for &lt;strong&gt;Azure Monitor&lt;/strong&gt; to see all activity logs across your account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/CPQ0EMedSMK5xkXb8OiVJbAlcZrgNz3v058YAiYIQE4/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL21hMmdm/YnNnMTV2ZG9hMWlj/OXByLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/CPQ0EMedSMK5xkXb8OiVJbAlcZrgNz3v058YAiYIQE4/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL21hMmdm/YnNnMTV2ZG9hMWlj/OXByLnBuZw" alt="Image description" width="880" height="438"&gt;&lt;/a&gt;Source: &lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=powershell"&gt;Azure documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Next, you can add the &lt;strong&gt;Events initiated by&lt;/strong&gt; filter to view only the activity logs related to a specific user.&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; You can also adjust the timespan filter to specify the full duration you care about. You can also select &lt;strong&gt;Edit columns&lt;/strong&gt; if there is additional information you want to view.&lt;br&gt;
&lt;strong&gt;4.&lt;/strong&gt; To export the logs, click on &lt;strong&gt;Download as CSV&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This method is relatively simple, but it does require logging in to the console and manually working through the steps. Let’s look at running this same search from the Azure CLI.&lt;/p&gt;
&lt;h3&gt;
  
  
  Using the Azure CLI:
&lt;/h3&gt;

&lt;p&gt;If you aren’t already set up with the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli"&gt;Azure CLI&lt;/a&gt;, you’ll need to install it locally. From there, you can run the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/monitor/activity-log?view=azure-cli-latest#az-monitor-activity-log-list"&gt;az monitor activity-log list&lt;/a&gt; command to list and query activity logs.&lt;/p&gt;

&lt;p&gt;Here is the format to get the activity logs for a specific user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az monitor activity-log list --caller [USER-NAME]  
--offset [TIME-RANGE]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;offset&lt;/strong&gt; parameter is formatted as XXdXXh (e.g. 14d or 2d12h) and defaults to 6h if not specified.&lt;/p&gt;

&lt;p&gt;Here’s an example command to get the activity logs for John Smith:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az monitor activity-log list --caller john.smith@blinkops.com  
--offset 14d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use these additional parameters to modify the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;--start-time&lt;/strong&gt; or &lt;strong&gt;--end-time&lt;/strong&gt;: use the format of date (yyyy-mm-dd), time (hh:mm:ss.xxxxx), and timezone (+/-hh:mm).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--resource-group&lt;/strong&gt;, &lt;strong&gt;--resource-id&lt;/strong&gt;, &lt;strong&gt;--namespace&lt;/strong&gt;: use these to narrow results to activities affecting those specific resource areas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--max-events&lt;/strong&gt;: use this to change the maximum number of results returned; the default is 50.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see all of the options in the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/monitor/activity-log?view=azure-cli-latest#az-monitor-activity-log-list"&gt;command documentation&lt;/a&gt;.&lt;/p&gt;
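&lt;p&gt;If you run this check regularly, you can wrap the command in a small script. Here is a minimal sketch that assembles the CLI invocation; the flag names match the examples above, while the helper function itself is our own:&lt;/p&gt;

```python
# Sketch: assemble an 'az monitor activity-log list' invocation so it
# can be reused in scripts. Flag names match the Azure CLI examples
# above; the helper function itself is illustrative.
import shlex

def build_activity_log_cmd(caller, offset="14d",
                           max_events=None, resource_group=None):
    """Return the az CLI command for one user's activity logs."""
    parts = ["az", "monitor", "activity-log", "list",
             "--caller", caller, "--offset", offset]
    if max_events is not None:
        parts.extend(["--max-events", str(max_events)])
    if resource_group is not None:
        parts.extend(["--resource-group", resource_group])
    return " ".join(shlex.quote(part) for part in parts)
```

&lt;p&gt;You could then pass the assembled string to a shell, a scheduled job, or a ticketing integration without retyping the flags each time.&lt;/p&gt;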

&lt;p&gt;This approach is more easily repeatable than the console steps, but it requires you to remember the commands and be familiar with the Azure CLI.&lt;/p&gt;

&lt;p&gt;With a no-code automation tool like Blink, you can run this task effortlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting User Activity from Azure Faster with Blink
&lt;/h2&gt;

&lt;p&gt;Retrieving user logs manually and adding them to a ticket could be a waste of valuable time, especially if your team is responding to a security incident.&lt;/p&gt;

&lt;p&gt;With Blink, an automation can be triggered to pull and enrich Azure activity logs and other information for a compromised user right away.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/ixr3R-wrxfWCBF9gvP2_7mNRfoghOiHfzQio9ZH2HBU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzl1dWQ4/aGdtYnVoaDl5dGN1/MjE5LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/ixr3R-wrxfWCBF9gvP2_7mNRfoghOiHfzQio9ZH2HBU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzl1dWQ4/aGdtYnVoaDl5dGN1/MjE5LnBuZw" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;br&gt;
Blink Automation: &lt;a href="https://library.blinkops.com/automations/get-user-activity-from-azure-logs"&gt;Get User Activity from Azure Logs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The automation shown above is in the Blink library and is set up as a self-service app, where a team member can specify input parameters and have all the activity logs sent to an email address.&lt;/p&gt;

&lt;p&gt;When it runs, it executes the following steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Fetches the logs from Azure Monitor for a specified email address.&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; Formats the logs for better readability.&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; Reports the results to the specified email address.&lt;/p&gt;

&lt;p&gt;If you’re handling a security incident, you can have this flow as part of an automation that is triggered by a malware or DLP alert for a given user. That way, you have all the logs and information you need to assess the risk for your organization.&lt;/p&gt;

&lt;p&gt;You can import this automation from the library into your account and customize it based on your organization’s needs. For example, you can drag-and-drop new actions into the canvas or set up conditional subflows.&lt;/p&gt;

&lt;p&gt;You can build your own automation from scratch or use one of our 5K pre-built automations today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.blinkops.com/signup"&gt;Start a free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time with us&lt;/a&gt; to see more.&lt;/p&gt;

</description>
      <category>azure</category>
    </item>
    <item>
      <title>How to Verify IOCs and Enrich Your Incident Response with VirusTotal</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Mon, 06 Mar 2023 19:03:01 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-verify-iocs-and-enrich-your-incident-response-with-virustotal-33bi</link>
      <guid>https://community.ops.io/blinkops/how-to-verify-iocs-and-enrich-your-incident-response-with-virustotal-33bi</guid>
      <description>&lt;p&gt;IOCs, or Indicators of Compromise, are pieces of information that can be used to detect malicious activity on a network or system. By using a threat intelligence tool like VirusTotal, organizations can get more information about IOCs like the type of threat detected, its origin, or even which antivirus engines are detecting it.&lt;/p&gt;

&lt;p&gt;In this guide, we'll show you how to use VirusTotal to verify an IOC and incorporate its results into an automated incident response workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying IOCs Using VirusTotal
&lt;/h2&gt;

&lt;p&gt;When endpoint detection tools like CrowdStrike or SentinelOne detect malware, they alert the organization, and the resulting incident will list IOCs. IOCs include data such as the IP address or domain name of a suspicious website, hashes associated with malware, and various other details related to an attack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using VirusTotal on a Browser:
&lt;/h3&gt;

&lt;p&gt;If you are a SOC analyst and you want to know more information about the malicious website or software, you can go to the &lt;a href="https://www.virustotal.com/gui/home/search"&gt;VirusTotal website&lt;/a&gt;. VirusTotal is a free service offered by Google which allows users to upload files, enter URLs, search IP addresses or file hashes, and scan these against the platform's antivirus databases.&lt;/p&gt;

&lt;p&gt;Here’s an example of what you would see if you submitted a file hash value that isn’t malicious:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/a-ZMkqjezvODPBIIcgCCoE1Gdef1oIQX74onlPpyDMU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VuOXA0/N2ZxdDQ0bjlxNnhr/ajI3LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/a-ZMkqjezvODPBIIcgCCoE1Gdef1oIQX74onlPpyDMU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VuOXA0/N2ZxdDQ0bjlxNnhr/ajI3LnBuZw" alt="VirusTotal example of non malicious url" width="880" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have a malicious file hash, you’ll see this information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/4gqT0GX-o5LOyaDTOVC8spqoyWGDjEdI60PwhsGQqwE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RuNGFq/YmU1a2NieTZxZjZu/Y3RqLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/4gqT0GX-o5LOyaDTOVC8spqoyWGDjEdI60PwhsGQqwE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RuNGFq/YmU1a2NieTZxZjZu/Y3RqLnBuZw" alt="VirusTotal example of malicious hash value" width="880" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll be able to view details like creation time, when it was first submitted, when it was last analyzed, how it behaves, community comments, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the VirusTotal API:
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://developers.virustotal.com/reference/overview"&gt;VirusTotal API&lt;/a&gt; allows users to integrate their own custom scripts into the platform, enabling more advanced incident response workflows. For example, a script can be used to automatically upload suspicious files for scanning against the platform's databases.&lt;/p&gt;

&lt;p&gt;To access the VirusTotal API, you’ll need to create an account and then you’ll have a free public API key you can use.&lt;/p&gt;

&lt;p&gt;You can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/file-info"&gt;get a file report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/files/{id} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/scan-url"&gt;get a URL analysis report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/urls/{id} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/ip-info"&gt;get an IP address report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/ip_addresses/{ip} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you review the &lt;a href="https://developers.virustotal.com/reference/overview"&gt;API documentation&lt;/a&gt;, you may see other popular endpoints that better map to your incident response process.&lt;/p&gt;

&lt;p&gt;In the context of responding to an alert, you could save yourself a step by setting up a script to automatically call these endpoints with each respective piece of information and enrich your alerts.&lt;/p&gt;
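&lt;p&gt;One detail worth noting for the URL endpoint: per the VirusTotal v3 documentation, the &lt;strong&gt;{id}&lt;/strong&gt; of a URL is the unpadded, URL-safe base64 encoding of the URL itself. A short sketch of how you might compute it before calling the endpoint above (the helper names are our own):&lt;/p&gt;

```python
# Sketch: compute the {id} used by the /urls endpoint above. Per the
# VirusTotal v3 docs, a URL's identifier is the unpadded URL-safe
# base64 encoding of the URL string. Helper names are illustrative.
import base64

def vt_url_id(url):
    """Return the VirusTotal v3 identifier for a URL."""
    return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

def vt_url_report_endpoint(url):
    """Full GET endpoint for that URL's analysis report."""
    return f"https://www.virustotal.com/api/v3/urls/{vt_url_id(url)}"
```

&lt;p&gt;A script enriching alerts could compute this identifier for each suspicious URL and then issue the GET request shown earlier with your API key.&lt;/p&gt;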

&lt;h2&gt;
  
  
  Taking Action Based on VirusTotal Reports with Blink
&lt;/h2&gt;

&lt;p&gt;When you enrich alerts with VirusTotal reports, you save the time otherwise spent manually reviewing each detail. Instead of stopping at enrichment, what if you could use the results to kick off conditional responses?&lt;/p&gt;

&lt;p&gt;For example, if an IOC is verified and has a high threat score, should you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Immediately contain that device using your endpoint detection tool?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search other devices across your organization to see if that IOC is present?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the IOC to your firewall and EDR blocking rules?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These might be the right steps, but if you scripted them all, you might not be able to control your automations easily at each stage.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can build event-based automations to facilitate incident response with conditional workflows and approval steps along the way. When a new alert is raised in an EDR like CrowdStrike or SentinelOne, you could immediately enrich that alert with reports from VirusTotal on each IOC.&lt;/p&gt;

&lt;p&gt;Depending on the threat level, you can speed up your time-to-action with conditional response steps to fully leverage the combined power of the security and communication tools you’re already using.&lt;/p&gt;

&lt;p&gt;You can start building a better response process with this pre-built &lt;a href="https://library.blinkops.com/automations/verify-ioc-with-virustotal"&gt;Blink automation that verifies IOCs with VirusTotal&lt;/a&gt;. We have over 5K ready-to-use automations just like it in our library. By combining and customizing them, you’ll be able to create the workflows that match each security situation.&lt;/p&gt;

&lt;p&gt;Get started with a &lt;a href="https://app.blinkops.com/signup"&gt;free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time to see a demo&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Migrate from AWS EC2 Launch Configurations to Launch Templates</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Tue, 24 Jan 2023 22:22:15 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-migrate-from-aws-ec2-launch-configurations-to-launch-templates-636</link>
      <guid>https://community.ops.io/blinkops/how-to-migrate-from-aws-ec2-launch-configurations-to-launch-templates-636</guid>
      <description>&lt;p&gt;For EC2 Auto Scaling, AWS is currently urging all customers to switch from using launch configurations to launch templates instead.&lt;/p&gt;

&lt;p&gt;As part of this push, launch configurations do not support any Amazon EC2 instance types released after December 31, 2022. In practice, AWS could release a more cost-effective instance type that better fits your use case, but you won’t be able to take advantage of it until you migrate from launch configurations to launch templates.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll explain the basics of launch configurations and launch templates, and show the steps for migrating them over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch Configurations vs. Launch Templates
&lt;/h2&gt;

&lt;p&gt;Amazon EC2 Auto Scaling groups need information to guide how to launch new EC2 instances when they scale. They rely on one source of truth: either an EC2 instance (automatically converted to a launch configuration), a specified launch configuration, or a launch template.&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Configurations
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html"&gt;Launch configurations&lt;/a&gt; are instance configuration settings utilized by an EC2 Auto Scaling group to inform the new EC2 instances it launches. You must specify standard information when setting up a launch configuration, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ID of the Amazon Machine Image (AMI)&lt;/li&gt;
&lt;li&gt;Instance Type&lt;/li&gt;
&lt;li&gt;Key Pair&lt;/li&gt;
&lt;li&gt;Security Group(s)&lt;/li&gt;
&lt;li&gt;Block Device Mapping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a starting point, you can reuse the settings of previously launched EC2 instances if they match your use case. Once you create a launch configuration, you cannot modify it; you can only create a new one and update your Auto Scaling group to use it. If you do that, existing instances will not be immediately updated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Templates
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html"&gt;Launch templates&lt;/a&gt; function similarly to launch configurations by specifying instance configuration information with similar information you would place in a launch configuration file. The major difference between launch configurations and launch templates is that you can set up multiple versions of a launch template.&lt;/p&gt;

&lt;p&gt;Versioning launch templates lets you establish a subset of the complete parameter set. You can reuse these to set up other versions of the base launch template. For example, you can create a launch template with a defined base configuration without including an AMI or user data script.&lt;/p&gt;

&lt;p&gt;After creating the launch template, you can add a new version with the AMI and user script for testing. You end up with two versions of the template. That means you can always have a base configuration for reference, then create new template versions as needed. It’s also possible to delete test template versions when they are no longer necessary.&lt;/p&gt;

&lt;p&gt;Launch templates are recommended because you can access the latest improvements. Some Amazon EC2 Auto scaling features aren’t available when you use launch configurations. Launch templates also let you use newer generation features of Amazon EC2.&lt;/p&gt;

&lt;p&gt;While parameters in launch templates are optional, if you do not specify an AMI in the template, you cannot add one when creating an Auto Scaling group. If you specify an AMI but leave out the instance type, you can add one or more instance types when setting up your Auto Scaling group.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Migrate Launch Configurations to Launch Templates
&lt;/h2&gt;

&lt;p&gt;To convert launch configurations to launch templates by copying them, you will need to use the AWS Console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrating One Launch Configuration
&lt;/h3&gt;

&lt;p&gt;Anyone currently using launch configurations can migrate them over to launch templates by copying them into the console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;EC2 console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Look for &lt;strong&gt;Auto Scaling&lt;/strong&gt; in the navigation pane, then choose &lt;strong&gt;Launch Configurations&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose the launch configuration to copy.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy to launch template&lt;/strong&gt;, &lt;strong&gt;Copy selected&lt;/strong&gt; to set up a new template with the same name and options as your configuration.&lt;/li&gt;
&lt;li&gt;Add the name of your current launch configuration file or a new name in the &lt;strong&gt;New launch template&lt;/strong&gt; name field. The name must be unique.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Migrating All Launch Configurations‍
&lt;/h3&gt;

&lt;p&gt;Follow the steps below if you wish to move all your launch configurations to launch templates in the console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;EC2 console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Look for &lt;strong&gt;Auto Scaling&lt;/strong&gt; in the navigation pane, then choose &lt;strong&gt;Launch Configurations&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose the launch configuration to copy.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy to launch template&lt;/strong&gt;, &lt;strong&gt;Copy all&lt;/strong&gt; to copy all your configurations within the current Region to a new launch template named after your configuration.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Updating Auto Scaling Groups to Use Launch Templates
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using the AWS Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS EC2 console&lt;/a&gt;, open the navigation pane, and select &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on the check box associated with the Auto Scaling group you want to update.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Details&lt;/strong&gt; tab, choose &lt;strong&gt;Launch configuration&lt;/strong&gt;, then click &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Switch to launch template&lt;/strong&gt;, then select the appropriate launch template.&lt;/li&gt;
&lt;li&gt;You’ll need to select a &lt;strong&gt;Version&lt;/strong&gt; for the launch template. You can customize whether the Auto Scaling group uses a default version when it scales out, or uses the latest version.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Update&lt;/strong&gt; to apply the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the AWS CLI:
&lt;/h3&gt;

&lt;p&gt;You can run this &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html"&gt;update-auto-scaling-group&lt;/a&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling update-auto-scaling-group
--auto-scaling-group-name &amp;lt;value&amp;gt;
--launch-template &amp;lt;value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An Auto Scaling group cannot have both a launch configuration and launch template parameter set. To update your Auto Scaling group to use a launch template instead of a launch configuration, you can include a launch template value in this command, like in this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling update-auto-scaling-group
    --auto-scaling-group-name my-asg
    --launch-template LaunchTemplateName=my-template-for-auto-scaling,Version='2'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’ve migrated your Auto Scaling groups to use launch templates instead of launch configurations, you should stop creating new launch configurations and only create launch templates and new versions.&lt;/p&gt;
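&lt;p&gt;To find which Auto Scaling groups still reference a launch configuration, you can filter the output of &lt;strong&gt;aws autoscaling describe-auto-scaling-groups&lt;/strong&gt;. Below is a minimal sketch; the field names follow the AWS API response shape, while the helper function is our own:&lt;/p&gt;

```python
# Sketch: given describe-auto-scaling-groups output already parsed from
# JSON, list the groups that still reference a launch configuration.
# Field names follow the AWS API response; the helper is illustrative.
def groups_on_launch_configurations(described):
    """Names of Auto Scaling groups still using a LaunchConfigurationName."""
    names = []
    for group in described.get("AutoScalingGroups", []):
        if "LaunchConfigurationName" in group:
            names.append(group["AutoScalingGroupName"])
    return names
```

&lt;p&gt;For example, you could pipe &lt;strong&gt;aws autoscaling describe-auto-scaling-groups --output json&lt;/strong&gt; into a script that calls this helper, then migrate each group it reports.&lt;/p&gt;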

&lt;h2&gt;
  
  
  Replacing Launch Configurations Automatically with Blink
&lt;/h2&gt;

&lt;p&gt;If you have several Auto Scaling groups, making these updates manually is time-consuming. It is also hard to ensure that new launch configurations are not created and used in your AWS EC2 Auto Scaling groups.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can use a no-code automation to quickly identify Auto Scaling groups that are using launch configurations, convert those launch configurations to launch templates, and update the groups accordingly. You can also set up checks to detect new launch configurations and notify owners that they should use launch templates instead.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and migrate fully to launch templates today.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Enable Autoscaling for a GKE Cluster</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Mon, 12 Dec 2022 22:01:40 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-enable-autoscaling-for-a-gke-cluster-5g6p</link>
      <guid>https://community.ops.io/blinkops/how-to-enable-autoscaling-for-a-gke-cluster-5g6p</guid>
      <description>&lt;p&gt;Autoscaling is an &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler"&gt;automated, node provisioning process&lt;/a&gt; that scales your GKE clusters depending on their workload needs. As a result, GKE clusters with autoscaling enabled scale up their node pool to offer more workload availability when demand is high and scale down their node pool to save on costs when demand is low.&lt;/p&gt;

&lt;p&gt;You can control your cluster’s autoscaling by specifying a minimum and maximum number of nodes. You can also choose whether to use the default, balanced autoscaling method or the optimize-utilization setting.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk you through the basics of GKE cluster autoscaling and show you how to enable it for your node pools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Your GKE Autoscaling Options
&lt;/h2&gt;

&lt;p&gt;When you enable autoscaling, you have the ability to set guardrails and preferences:&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum and Maximum Nodes
&lt;/h3&gt;

&lt;p&gt;One of the decisions you need to make is the minimum and maximum number of nodes, either per zone (minimum nodes, maximum nodes) or in total (total minimum nodes, total maximum nodes) across your node pools.&lt;/p&gt;

&lt;p&gt;For a minimum, you’ll always need at least 1 node for each zone the node pool is in. The autoscaler will never scale down to zero because at least 1 node is needed to run the system Pods.&lt;/p&gt;

&lt;p&gt;When setting a maximum, you might want to consider the implications of a dramatic scale up. For example, if you scale beyond the IP address space you have allocated, you will receive an error and no longer be able to add new nodes. Consider these types of dependencies when selecting a maximum.&lt;/p&gt;
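&lt;p&gt;As a back-of-the-envelope check on that IP-space ceiling: by default, GKE reserves a /24 of the pod range for each node. Under that assumption (your cluster may be configured with a smaller max-pods-per-node setting), the pod CIDR caps the node count regardless of the maximum you set:&lt;/p&gt;

```python
# Back-of-the-envelope check on the IP-space ceiling described above.
# Assumes GKE's default of a /24 pod range per node; clusters with a
# smaller max-pods-per-node setting reserve less and fit more nodes.
def max_nodes_for_pod_cidr(pod_cidr_prefix_len, per_node_prefix_len=24):
    """Nodes that fit when each node claims one per-node pod range."""
    if pod_cidr_prefix_len > per_node_prefix_len:
        return 0
    return 2 ** (per_node_prefix_len - pod_cidr_prefix_len)
```

&lt;p&gt;For example, a /19 pod range supports at most 32 nodes under the default, so a maximum of 100 nodes there would hit IP exhaustion long before the autoscaler’s limit.&lt;/p&gt;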

&lt;h3&gt;
  
  
  Balanced vs. Optimize-Utilization
&lt;/h3&gt;

&lt;p&gt;There are two types of autoscaling profiles. The default profile is &lt;strong&gt;balanced&lt;/strong&gt;, which means it scales up and down with a balance between availability of resources and node utilization. For example, balanced autoscaling allows for more nodes with lower utilization so that they are available if workloads increase.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;optimize-utilization&lt;/strong&gt; autoscaling profile by comparison prioritizes concentrating utilization in fewer nodes, which enables the removal of underutilized nodes. The result is faster scale downs and lower resource costs. If you choose this profile, you may experience performance delays when new workloads require new resources to be provisioned. Depending on your performance requirements, optimize-utilization may be a useful way to lower your GKE operating costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Autoscaling for an Existing Node Pool
&lt;/h2&gt;

&lt;p&gt;If you want to enable autoscaling, you can start by updating your &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler"&gt;existing node pools&lt;/a&gt; using GCP Console or the GCP CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Google Cloud console, navigate to the &lt;strong&gt;Google Kubernetes Engine&lt;/strong&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the cluster you want to update from the displayed cluster list, and go to the &lt;strong&gt;Nodes&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Node Pools&lt;/strong&gt;, select the node pool that you want to update and click &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Size&lt;/strong&gt;, check &lt;strong&gt;Enable autoscaling&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specify values for &lt;strong&gt;Minimum number of nodes&lt;/strong&gt; and &lt;strong&gt;Maximum number of nodes&lt;/strong&gt;, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the gcloud CLI:
&lt;/h3&gt;

&lt;p&gt;You can use the following command to enable autoscaling for an existing node pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters update CLUSTER_NAME \
    --enable-autoscaling \
    --autoscaling-profile=PROFILE \
    --node-pool=POOL_NAME \
    --min-nodes=MIN_NODES \
    --max-nodes=MAX_NODES \
    --region=COMPUTE_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Plug in your details for the cluster, node pool, and region. If you only have one node pool, you can use &lt;strong&gt;default-pool&lt;/strong&gt; as your value. The region value should be your &lt;a href="https://cloud.google.com/compute/docs/regions-zones#available"&gt;Compute Engine region&lt;/a&gt;, or specific zone if it’s a zonal cluster.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;--enable-autoscaling&lt;/strong&gt; flag turns autoscaling on. You can customize your configuration with &lt;strong&gt;--autoscaling-profile&lt;/strong&gt; (balanced or optimize-utilization), &lt;strong&gt;--min-nodes&lt;/strong&gt;, &lt;strong&gt;--max-nodes&lt;/strong&gt;, &lt;strong&gt;--total-min-nodes&lt;/strong&gt;, and &lt;strong&gt;--total-max-nodes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s an example of enabling autoscaling with an optimize-utilization profile for the &lt;strong&gt;pool-1&lt;/strong&gt; node pool of the &lt;strong&gt;demo-1&lt;/strong&gt; cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters update demo-1 \
    --enable-autoscaling \
    --autoscaling-profile=optimize-utilization \
    --node-pool=pool-1 \
    --min-nodes=1 \
    --max-nodes=4 \
    --region=us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enabling Autoscaling When Creating a New GKE Cluster or Node Pool
&lt;/h2&gt;

&lt;p&gt;If you are creating new clusters or node pools, you can enable GKE autoscaling by using settings in the GCP Console or flags in the gcloud CLI:&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Google Cloud console, go to the &lt;strong&gt;Google Kubernetes Engine&lt;/strong&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To set up a new cluster, click &lt;strong&gt;Create&lt;/strong&gt;. To create a new node pool, select an existing cluster and click &lt;strong&gt;Add Node Pool&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specify details for your cluster or node pool. For a new cluster, click &lt;strong&gt;default-pool&lt;/strong&gt; under &lt;strong&gt;Node Pools&lt;/strong&gt; from your navigation pane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, you need to select the &lt;strong&gt;Enable autoscaling&lt;/strong&gt; checkbox. For node pools, you’ll find it under the &lt;strong&gt;Size&lt;/strong&gt; section. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modify the values for &lt;strong&gt;Minimum number of nodes&lt;/strong&gt; and &lt;strong&gt;Maximum number of nodes&lt;/strong&gt; according to your requirements, and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the gcloud CLI:
&lt;/h3&gt;

&lt;p&gt;When you are creating new clusters or new node pools, you can ensure that they have autoscaling enabled by including the &lt;strong&gt;--enable-autoscaling&lt;/strong&gt; flag and specifying &lt;strong&gt;--min-nodes&lt;/strong&gt; and &lt;strong&gt;--max-nodes&lt;/strong&gt; values.&lt;/p&gt;

&lt;p&gt;Here’s an example of creating a new cluster with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create my-cluster --enable-autoscaling \
    --num-nodes=30 \
    --min-nodes=15 --max-nodes=50 \
    --region=us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example of creating a new node pool with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container node-pools create node-pool-1 
    --cluster=sample-cluster 
    --num-nodes=5
    --enable-autoscaling
    --min-nodes=5 --max-nodes=15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By updating your existing node pools and creating new clusters and node pools with autoscaling enabled, you can ensure your clusters automatically adapt to changing workloads.&lt;/p&gt;
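&lt;p&gt;If you want to spot-check a node pool afterward, one way is to describe it and read back its autoscaling settings. This sketch reuses the hypothetical &lt;strong&gt;pool-1&lt;/strong&gt;/&lt;strong&gt;demo-1&lt;/strong&gt; names from the earlier example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container node-pools describe pool-1 \
    --cluster=demo-1 \
    --region=us-central1 \
    --format="value(autoscaling.enabled, autoscaling.minNodeCount, autoscaling.maxNodeCount)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;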

&lt;h2&gt;
  
  
  Checking Whether Autoscaling is Enabled with Blink
&lt;/h2&gt;

&lt;p&gt;Autoscaling can make your GKE clusters more effective and efficient. If you want to make it a standard that your organization's GKE clusters have autoscaling enabled, then you can manually enable it using the steps above. Unfortunately, that requires many manual updates and doesn’t ensure that future clusters will also have autoscaling enabled.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can use no-code automations to quickly check if you have any GKE clusters without autoscaling enabled. You can then send a notification to Slack on a regular basis if there are any clusters missing this setting. With adjustments and status checks only a click away, you can respond much more quickly.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and better manage your GKE clusters today.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>gke</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>How to Manually Rotate Keys in GCP</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Thu, 01 Dec 2022 17:26:54 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-manually-rotate-keys-in-gcp-5802</link>
      <guid>https://community.ops.io/blinkops/how-to-manually-rotate-keys-in-gcp-5802</guid>
      <description>&lt;p&gt;Key rotation is a critical security practice. In GCP, you can either rotate keys by enabling automatic rotation or by rotating a key manually.&lt;/p&gt;

&lt;p&gt;Manual rotations make sense if your key is compromised or if you are modifying your application to use a different or stronger algorithm.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to manually rotate keys using the GCP Console and the gcloud CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manually Rotating Keys in GCP
&lt;/h2&gt;

&lt;p&gt;You will need to have permissions granted by the &lt;strong&gt;Cloud KMS Admin&lt;/strong&gt; role to rotate keys in GCP. If you want to also do the re-encryption step below, you’ll need permissions granted by the &lt;strong&gt;Cloud KMS CryptoKey Encrypter/Decrypter&lt;/strong&gt; role.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using the GCP Console:
&lt;/h4&gt;

&lt;p&gt;These are the &lt;a href="https://cloud.google.com/kms/docs/rotating-keys#manual"&gt;steps&lt;/a&gt; to manually rotate keys in the GCP Console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Key Management&lt;/strong&gt; page from the Google Cloud Console.&lt;/li&gt;
&lt;li&gt;Select the name of the key ring that contains the key you want to create a new version for.&lt;/li&gt;
&lt;li&gt;Select the key for which you need to create a new version.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Rotate&lt;/strong&gt; in the displayed header.&lt;/li&gt;
&lt;li&gt;Again, click &lt;strong&gt;Rotate&lt;/strong&gt; in the prompt to confirm the key rotation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you’ll see a new version of your key is created and is marked as the primary key.&lt;/p&gt;

&lt;p&gt;If you want to use a different existing key version, you can make it the primary key using these &lt;a href="https://cloud.google.com/kms/docs/rotating-keys#set_primary"&gt;steps&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose the key whose primary version you want to update.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;View More&lt;/strong&gt; in the row of your intended key.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Make primary version&lt;/strong&gt; in the menu.&lt;/li&gt;
&lt;li&gt;In the confirmation prompt, click &lt;strong&gt;Make primary&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you have encrypted anything with the prior key, you’ll need to re-encrypt it with your new key and then destroy the old version. This re-encryption step can only be done with the CLI; we’ll show it in the re-encryption section below.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using the gcloud CLI:
&lt;/h4&gt;

&lt;p&gt;To run Cloud KMS on the command line, you’ll first need to install the latest version of the &lt;a href="https://cloud.google.com/sdk/gcloud"&gt;gcloud CLI&lt;/a&gt;. Once you’ve done that, you can run &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/versions/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions create \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can input values for each of these parameters:&lt;/p&gt;

&lt;p&gt;&amp;lt;KEY_NAME&amp;gt; refers to the name of the key.&lt;br&gt;
&amp;lt;KEY_RING&amp;gt; refers to the name of the key ring that contains the key you want to rotate.&lt;br&gt;
&amp;lt;LOCATION&amp;gt; refers to the Cloud KMS location of the key ring.&lt;/p&gt;

&lt;p&gt;Here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions create
    --key=bowser
    --keyring=castle
    --location=global
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then set an existing key version as the primary version with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/update"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys update &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt; \
    --primary-version &amp;lt;KEY_VERSION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only new flag in this command is &lt;strong&gt;--primary-version&lt;/strong&gt;, which takes &amp;lt;KEY_VERSION&amp;gt;, the version number of the key version you want to make primary.&lt;/p&gt;
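&lt;p&gt;For example, to make version 2 of the hypothetical &lt;strong&gt;bowser&lt;/strong&gt; key from the earlier example the primary version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys update bowser \
    --keyring=castle \
    --location=global \
    --primary-version=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;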

&lt;h2&gt;
  
  
  Re-encrypting Data with a New Primary Key
&lt;/h2&gt;

&lt;p&gt;If you have encrypted data with the prior key, that prior key can still be used to decrypt that data. If your key is compromised, your data will be insecure unless you re-encrypt it with your new primary key.&lt;/p&gt;

&lt;p&gt;You should do this with the following gcloud &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/encrypt"&gt;CLI command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms encrypt \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;  \
    --plaintext-file &amp;lt;FILE_TO_BE_ENCRYPTED&amp;gt; \
    --ciphertext-file &amp;lt;FILE_TO_STORE_ENCRYPTED_DATA&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&amp;lt;FILE_TO_BE_ENCRYPTED&amp;gt; should be the local file path for reading the plaintext data.&lt;br&gt;
&amp;lt;FILE_TO_STORE_ENCRYPTED_DATA&amp;gt; should be the local file path for where you plan to save the encrypted output.&lt;br&gt;
If you want to verify that your encryption is now using the new primary key, you can test it by running the &lt;a href="https://cloud.google.com/kms/docs/re-encrypt-data#decrypt"&gt;decrypt command&lt;/a&gt;.&lt;/p&gt;
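&lt;p&gt;Decryption automatically detects the key version from the ciphertext, so a successful round trip confirms the re-encrypted file is readable. The verification takes roughly this shape (the output file name here is just a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms decrypt \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt; \
    --ciphertext-file &amp;lt;FILE_TO_STORE_ENCRYPTED_DATA&amp;gt; \
    --plaintext-file &amp;lt;FILE_TO_STORE_DECRYPTED_DATA&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;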
&lt;h2&gt;
  
  
  Disabling or Destroying the Prior Key Version
&lt;/h2&gt;

&lt;p&gt;Disabling or destroying a key both remove the key’s functionality. It’s important to ensure that compromised keys are disabled or destroyed.&lt;/p&gt;

&lt;p&gt;The difference between the two outcomes is that destroyed keys are removed permanently (after their scheduled destruction date). If any encrypted data still relies on a destroyed key for decryption, you lose access to that data permanently. If you are certain that you no longer need the key, destroying it is a way to clean up your key ring and prevent a compromised key from somehow being restored.&lt;/p&gt;
&lt;h4&gt;
  
  
  Using the GCP Console:
&lt;/h4&gt;

&lt;p&gt;In the GCP Console, you can disable and destroy a key by following &lt;a href="https://cloud.google.com/kms/docs/re-encrypt-data#disable-or-destroy"&gt;these steps&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the key ring view, click the key you recently rotated.&lt;/li&gt;
&lt;li&gt;Next to the version of the key you want to change, you’ll see an &lt;strong&gt;Actions&lt;/strong&gt; column with three vertical dots. Click on the dots.&lt;/li&gt;
&lt;li&gt;Depending on which action you want to take, you can either select &lt;strong&gt;Disable&lt;/strong&gt; or &lt;strong&gt;Destroy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If you choose &lt;strong&gt;Destroy&lt;/strong&gt;, you will need to type in the key name and click &lt;strong&gt;Schedule Destruction&lt;/strong&gt; to confirm the action.
Once you have done this, you will have fully rotated your keys and cleaned up the prior key version.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Using the gcloud CLI:
&lt;/h4&gt;

&lt;p&gt;You can also disable or destroy keys with the CLI.&lt;/p&gt;

&lt;p&gt;You can use &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/versions/disable"&gt;this command&lt;/a&gt; to disable a key version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions disable &amp;lt;KEY_VERSION&amp;gt; \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you can use &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/versions/destroy"&gt;this command&lt;/a&gt; to destroy a key version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions destroy &amp;lt;KEY_VERSION&amp;gt; \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the destroy command, the key version will be scheduled for destruction. You have 24 hours after that to change your mind and &lt;a href="https://cloud.google.com/kms/docs/destroy-restore#restore"&gt;restore the key&lt;/a&gt;.&lt;/p&gt;
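&lt;p&gt;Restoring a version during that window uses the same placeholder values as the destroy command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions restore &amp;lt;KEY_VERSION&amp;gt; \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;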

&lt;h2&gt;
  
  
  Using No-Code Steps in Blink to Rotate GCP Keys
&lt;/h2&gt;

&lt;p&gt;If you need to manually rotate access keys, you will need to remember each step and stop what you are working on to ensure you do it all properly. Working through these steps each time isn’t hard, but it takes time.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can easily create an automation that rotates access keys, re-encrypts files that are using the prior key version, and disables the prior key version with a simple click. If a key is compromised, you’ll be able to act quickly.&lt;/p&gt;

&lt;p&gt;Blink also allows you to schedule disabled keys for destruction after a certain period of time. This ensures that your keys are cleaned up while also giving your team time to validate that you no longer need the old versions.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and make it easy to rotate your GCP keys.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Find and Remove Unused AWS Passwords</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Tue, 22 Nov 2022 20:12:14 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-find-and-remove-unused-aws-passwords-3l06</link>
      <guid>https://community.ops.io/blinkops/how-to-find-and-remove-unused-aws-passwords-3l06</guid>
      <description>&lt;p&gt;If an AWS user has a password to login to the AWS console, but hasn’t used it in over 6 months, their login credentials might be a security liability to the rest of your account.&lt;/p&gt;

&lt;p&gt;By running a check to find unused passwords, you can delete login profiles and reduce your account’s potential attack surface.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to find and delete unused passwords to strengthen the security of your AWS account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding and Deleting Unused AWS Passwords
&lt;/h2&gt;

&lt;p&gt;You can find unused passwords with the AWS Console, the AWS CLI, or the AWS API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the AWS Console:
&lt;/h3&gt;

&lt;p&gt;Here are the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html"&gt;steps&lt;/a&gt; to find and disable unused passwords:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; To start, log in to your AWS IAM Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; In the navigation pane, select &lt;strong&gt;Credential report&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; When you click &lt;strong&gt;Download Report&lt;/strong&gt;, you’ll get a CSV file using the naming structure &lt;strong&gt;status_reports_&amp;lt;date&amp;gt;T&amp;lt;time&amp;gt;.csv&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4.&lt;/strong&gt; Filter on the fifth column, named &lt;strong&gt;password_last_used&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If they have &lt;strong&gt;N/A&lt;/strong&gt;, it means they have no password assigned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If they have &lt;strong&gt;no_information&lt;/strong&gt;, it means they haven’t used their password since IAM started tracking passwords (Oct. 20th, 2014).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If they have a date that is earlier than a threshold you set (e.g. 90 days), then you can consider their passwords unused and act on them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5.&lt;/strong&gt; Go to the navigation pane and select &lt;strong&gt;Users&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6.&lt;/strong&gt; Select the name of a user who has an unused password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7.&lt;/strong&gt; Go to the &lt;strong&gt;Security credentials&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8.&lt;/strong&gt; Under &lt;strong&gt;Sign-in credentials&lt;/strong&gt;, click &lt;strong&gt;Manage&lt;/strong&gt; next to &lt;strong&gt;Console password&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9.&lt;/strong&gt; Select &lt;strong&gt;Disable&lt;/strong&gt; for &lt;strong&gt;Console access&lt;/strong&gt;, then click &lt;strong&gt;Apply&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10.&lt;/strong&gt; Repeat steps 5-9 for all users you identified as having an unused password.&lt;/p&gt;
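&lt;p&gt;If you’d rather script the download than click through the console, the same credential report can be generated and fetched with two CLI calls (the output path here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam generate-credential-report

aws iam get-credential-report --query Content --output text | base64 --decode &amp;gt; report.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;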

&lt;h3&gt;
  
  
  Using the AWS CLI:
&lt;/h3&gt;

&lt;p&gt;If you would prefer to do this using the AWS CLI, here are the steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; You can find unused passwords by running the &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/list-users.html"&gt;following command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will output a list of all users in your AWS account. In the output, you’ll see information about all users, including a &lt;strong&gt;PasswordLastUsed&lt;/strong&gt; value. If the user has no value listed, then they either do not have a password or haven’t used their password since tracking began (Oct. 20, 2014).&lt;/p&gt;

&lt;p&gt;Alternatively, you can use the &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/get-user.html"&gt;get-user&lt;/a&gt; command if there is someone specific you suspect might have an unused password.&lt;/p&gt;

&lt;p&gt;Here is the output of running the list-users command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Users": [
    {
        "UserName": "Charlie",
        "Path": "/department_abc/group_def/",
        "CreateDate": "2017-06-19T10:01:44Z",
        "PasswordLastUsed": "2022-10-23T11:01:29Z",
        "UserId": "AID3YDW8DMLG72PEANUTS",
        "Arn": "arn:aws:iam::123456789012:user/department_abc/group_def/Charlie"
    },
    {
        "UserName": "Lucy",
        "Path": "/department_abc/group_ghi/",
        "CreateDate": "2018-03-09T13:21:33Z",
        "PasswordLastUsed": "2018-05-21T13:21:51Z",
        "UserId": "AIDIODN4U1W727PEANUTS",
        "Arn": "arn:aws:iam::123456789012:user/department_abc/group_ghi/Lucy"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now see that Lucy has not used her password to log in to AWS in multiple years.&lt;/p&gt;
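&lt;p&gt;In larger accounts, it can help to project just the relevant fields with a --query expression, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-users \
    --query 'Users[].[UserName,PasswordLastUsed]' \
    --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;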

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; Next, you can run the &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-login-profile.html"&gt;following command&lt;/a&gt; to delete the password for anyone who, like Lucy, has not used their password in a certain amount of time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam delete-login-profile 
--user-name Lucy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result of this command is that you have denied the user the ability to sign in to the AWS Console, which limits your security risk if an old password were to become compromised. They will still have access to the AWS CLI and API, so make sure to also &lt;a href="https://www.blinkops.com/blog/how-to-find-and-remove-unused-aws-access-keys"&gt;remove their access keys&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the AWS API:
&lt;/h3&gt;

&lt;p&gt;You can also use the AWS API to find and delete unused passwords.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; Use the &lt;a href="https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListUsers.html"&gt;ListUsers&lt;/a&gt; action to get a list of all users in your AWS account. You can use the &lt;strong&gt;PathPrefix&lt;/strong&gt; parameter to narrow the list of users.&lt;/p&gt;

&lt;p&gt;Here’s an example of that request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://iam.amazonaws.com/?Action=ListUsers
&amp;amp;Version=2010-05-08
&amp;amp;AUTHPARAMS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;ListUsersResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/"&amp;gt;
 &amp;lt;ListUsersResult&amp;gt;
    &amp;lt;Users&amp;gt;
       &amp;lt;member&amp;gt;
          &amp;lt;UserId&amp;gt;AID3YDW8DMLG72PEANUTS&amp;lt;/UserId&amp;gt;
          &amp;lt;Path&amp;gt;/department_abc/group_def/&amp;lt;/Path&amp;gt;
          &amp;lt;UserName&amp;gt;Charlie&amp;lt;/UserName&amp;gt;
          &amp;lt;Arn&amp;gt;arn:aws:iam::123456789012:user/department_abc/group_def/Charlie&amp;lt;/Arn&amp;gt;
          &amp;lt;CreateDate&amp;gt;2017-06-19T10:01:44Z&amp;lt;/CreateDate&amp;gt;
          &amp;lt;PasswordLastUsed&amp;gt;2022-10-23T11:01:29Z&amp;lt;/PasswordLastUsed&amp;gt;
       &amp;lt;/member&amp;gt;
       &amp;lt;member&amp;gt;
          &amp;lt;UserId&amp;gt;AIDIODN4U1W727PEANUTS&amp;lt;/UserId&amp;gt;
          &amp;lt;Path&amp;gt;/department_abc/group_ghi/&amp;lt;/Path&amp;gt;
          &amp;lt;UserName&amp;gt;Lucy&amp;lt;/UserName&amp;gt;
          &amp;lt;Arn&amp;gt;arn:aws:iam::123456789012:user/department_abc/group_ghi/Lucy&amp;lt;/Arn&amp;gt;
          &amp;lt;CreateDate&amp;gt;2018-03-09T13:21:33Z&amp;lt;/CreateDate&amp;gt;
          &amp;lt;PasswordLastUsed&amp;gt;2018-05-21T13:21:51Z&amp;lt;/PasswordLastUsed&amp;gt;
       &amp;lt;/member&amp;gt;
    &amp;lt;/Users&amp;gt;
    &amp;lt;IsTruncated&amp;gt;false&amp;lt;/IsTruncated&amp;gt;
 &amp;lt;/ListUsersResult&amp;gt;
 &amp;lt;ResponseMetadata&amp;gt;
    &amp;lt;RequestId&amp;gt;7a62c49f-347e-4fc4-9331-6e8eEXAMPLE&amp;lt;/RequestId&amp;gt;
 &amp;lt;/ResponseMetadata&amp;gt;
&amp;lt;/ListUsersResponse&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; Now that you can see which users have passwords that are no longer being used, you can delete their login credentials with the action &lt;a href="https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteLoginProfile.html"&gt;DeleteLoginProfile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since, in the example above, Lucy hasn’t used her password to log in to AWS since 2018, we can go ahead and delete her login profile.&lt;/p&gt;

&lt;p&gt;Here’s an example of that request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://iam.amazonaws.com/?Action=DeleteLoginProfile
&amp;amp;UserName=Lucy
&amp;amp;Version=2010-05-08
&amp;amp;AUTHPARAMS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;DeleteLoginProfileResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/"&amp;gt;
  &amp;lt;ResponseMetadata&amp;gt;
    &amp;lt;RequestId&amp;gt;7a62c49f-347e-4fc4-9331-6e8eEXAMPLE&amp;lt;/RequestId&amp;gt;
  &amp;lt;/ResponseMetadata&amp;gt;
&amp;lt;/DeleteLoginProfileResponse&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you have successfully prevented unused passwords from being used to log in to the AWS Console if they become compromised. &lt;/p&gt;

&lt;p&gt;As we mentioned in the other methods, you will still need to separately look to see if the user has unused access keys. You can do that with the &lt;a href="https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListAccessKeys.html"&gt;ListAccessKeys&lt;/a&gt; action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding Unused Passwords Automatically with Blink
&lt;/h2&gt;

&lt;p&gt;You can find unused passwords manually by following the steps above, but that relies on you taking the time to set reminders and manually update each user. It’s time-intensive and requires context-switching.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can easily create an automation that runs on a schedule to find passwords that have not been used in a certain number of days. You can then kick off a Slack notification that makes deleting their login profile as easy as clicking “approve”.&lt;/p&gt;

&lt;p&gt;By automating this entire workflow, you can turn a best practice into a built-in workflow.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and boost your AWS security posture today.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secops</category>
    </item>
    <item>
      <title>Reducing Your Cloud Costs: An Operational Optimization Guide</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Wed, 16 Nov 2022 16:57:36 +0000</pubDate>
      <link>https://community.ops.io/blinkops/reducing-your-cloud-costs-an-operational-optimization-guide-3h8k</link>
      <guid>https://community.ops.io/blinkops/reducing-your-cloud-costs-an-operational-optimization-guide-3h8k</guid>
      <description>&lt;p&gt;&lt;em&gt;Are your cloud costs on the rise?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In this cost optimization guide, we outline the most common operating expenses and provide you with actionable recommendations to lower your monthly cloud bill.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Cloud costs are top of mind as many business leaders and teams are focusing attention on honing their operational efficiency.&lt;/p&gt;

&lt;p&gt;In April at CIO.com’s Future of Cloud Summit, Dave McCarthy, research vice president of cloud infrastructure services at IDC, shared that cloud spending represents roughly &lt;a href="https://www.cio.com/article/403231/cios-contend-with-rising-cloud-costs.html"&gt;30% of current IT budgets&lt;/a&gt;. In the 2022 State of Cloud Report by Flexera, 750 surveyed executives shared that they estimate they are &lt;a href="https://www.forbes.com/sites/joemckendrick/2020/04/29/one-third-of-cloud-spending-wasted-but-still-accelerates/?sh=5a313399489e"&gt;wasting 30% of their cloud spend&lt;/a&gt;, while also saying that they expect costs to increase 47% over the next year. If you combine those stats, there is an efficiency opportunity roughly the size of 10% of IT budgets.&lt;/p&gt;

&lt;p&gt;Achieving those cost savings isn’t as easy as flipping a switch. There is wasted spend embedded across multiple resource types, regions, and services. By function, the main categories of cloud spending are compute time, data storage, and data transfer.&lt;/p&gt;

&lt;p&gt;In this post, we’ll outline a framework for reviewing your cloud spending today, identifying wasted resources, and reviewing your long-term infrastructure efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing Your Current Spending
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“What are we currently spending money on?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To start, you can review your current spend at the account-level with the major cloud providers. AWS, Azure, and GCP all have reporting options that enable you to view and filter your spending over a period of time.&lt;/p&gt;

&lt;p&gt;In AWS, you can create &lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html"&gt;Cost and Usage Reports&lt;/a&gt;. In GCP, you can review your &lt;a href="https://cloud.google.com/billing/docs/how-to/reports"&gt;Cloud Billing Report&lt;/a&gt; and view spend by “Project” or other filters. In the Azure portal, you can download usage and charges from the “&lt;a href="https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage"&gt;Cost Management + Billing&lt;/a&gt;” section.&lt;/p&gt;

&lt;p&gt;These views may be useful to get started and see transactional costs, such as from data transfers. In order to get more granular details on your cloud spending, you should leverage resource labels and tags to accurately categorize expenses.&lt;/p&gt;

&lt;p&gt;With labels and tags, you can associate resources with specific cost centers, projects, business units, or teams. You can then easily organize your resource data, create custom reports, and run specific queries.&lt;/p&gt;

&lt;p&gt;If you do not currently have a mechanism or standard practice around resource tags and labels, you can refer to these how-to guides for setting up mandatory tags:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/enforcing-mandatory-tags-across-aws-resources"&gt;Enforcing Mandatory Tags Across Your AWS Resources&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/enforcing-labels-and-tags-across-your-gcp-resources"&gt;Enforcing Labels and Tags Across Your GCP Resources&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/enforcing-mandatory-tags-across-azure-resources"&gt;Enforcing Mandatory Tags Across Your Azure Resources&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you use more than one cloud computing provider, you’ll need to aggregate invoices and usage reports across vendors. In this scenario, having consistent tagging methods across platforms is even more useful as it can offer a consistent way to view your resource usage and expenses.&lt;/p&gt;

&lt;p&gt;Once you have a clear sense of your current spending, you can look for opportunities to reduce your expenses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eliminating Unnecessary Resources
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“What resources are we spending money on and not using at all?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As projects are spun up and shut down, there are often resources that become unattached and left behind. While they are no longer in use, they are still costing your organization money on a recurring basis.&lt;/p&gt;

&lt;p&gt;Ideally, you have an automated way to regularly catch and delete these unattached resources. With a no-code platform like &lt;a href="https://app.blinkops.com/signup"&gt;Blink&lt;/a&gt;, teams can scale up scheduled automations to continuously detect and remove unnecessary resources.&lt;/p&gt;

&lt;p&gt;If you don’t have automations in place, you can manually review resources in the console and remove unused ones in bulk. This can be time-consuming, but it may significantly reduce your operating costs in the short term.&lt;/p&gt;

&lt;p&gt;Here are some common types of resources to review:&lt;/p&gt;

&lt;h3&gt;
  
  
  Unattached Disks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-find-and-delete-unattached-aws-resources"&gt;How to Find and Delete Unattached AWS Volumes and Gateways&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-deleting-unattached-disks-with-the-azure-cli"&gt;Finding and Deleting Unattached Disks with the Azure CLI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-find-and-delete-unattached-gcp-disks"&gt;How to Find and Delete Unattached GCP Disks&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
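
&lt;p&gt;The guides above cover each platform in detail, but the core check is the same everywhere: filter an API listing for volumes with no attachments. Here is a minimal Python sketch against a sample payload shaped like the AWS &lt;code&gt;DescribeVolumes&lt;/code&gt; response (abbreviated, not a live API call):&lt;/p&gt;

```python
# Sample payload shaped like the AWS EC2 DescribeVolumes response (abbreviated).
response = {
    "Volumes": [
        {"VolumeId": "vol-aaa", "State": "in-use"},
        {"VolumeId": "vol-bbb", "State": "available"},
        {"VolumeId": "vol-ccc", "State": "available"},
    ]
}

def unattached_volumes(resp):
    """An EBS volume whose state is 'available' is not attached to any instance."""
    return [v["VolumeId"] for v in resp["Volumes"] if v["State"] == "available"]

print(unattached_volumes(response))
# ['vol-bbb', 'vol-ccc']
```

&lt;p&gt;In a live AWS account, the equivalent CLI check is &lt;code&gt;aws ec2 describe-volumes --filters Name=status,Values=available&lt;/code&gt;.&lt;/p&gt;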

&lt;h3&gt;
  
  
  Unattached IP Addresses
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-removing-unattached-aws-elastic-ip-addresses"&gt;Finding and Removing Unattached AWS Elastic IP Addresses&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-detect-and-remove-unattached-azure-public-ip-addresses"&gt;How to Detect and Remove Unattached Azure Public IP Addresses&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-removing-unattached-gcp-external-ip-addresses"&gt;Finding and Removing Unattached GCP External IP Addresses&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Old Snapshots
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-find-and-remove-old-ebs-snapshots"&gt;How to Find and Remove Old EBS Snapshots&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-find-and-remove-old-azure-snapshots"&gt;How to Find and Remove Old Azure Snapshots&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/jobs/view/3358837548/"&gt;How to Find and Remove Old GCP Disk Snapshots&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
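
&lt;p&gt;Whichever platform you use, the underlying check is a date comparison against your retention window. A minimal Python sketch, using made-up snapshot records:&lt;/p&gt;

```python
from datetime import date, timedelta

# Made-up snapshot records; real API responses carry many more fields.
snapshots = [
    {"SnapshotId": "snap-001", "StartTime": date(2022, 1, 10)},
    {"SnapshotId": "snap-002", "StartTime": date(2023, 6, 1)},
]

def stale_snapshots(snaps, today, retention_days=90):
    """Return IDs of snapshots older than the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snaps if cutoff >= s["StartTime"]]

print(stale_snapshots(snapshots, today=date(2023, 6, 15)))
# ['snap-001']
```

&lt;p&gt;Pick a retention window that matches your recovery requirements before deleting anything; 90 days here is purely illustrative.&lt;/p&gt;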

&lt;p&gt;Finding and removing idle resources is a clear way to cut your operating costs, but it is also an important practice for maintaining a strong security posture. If you leave resources like unattached IP addresses, &lt;a href="https://www.blinkops.com/blog/how-to-find-and-delete-unattached-aws-resources"&gt;idle NAT Gateways&lt;/a&gt;, &lt;a href="https://www.blinkops.com/blog/tracking-down-amazon-load-balancers-with-no-target"&gt;load balancers with no target&lt;/a&gt;, or &lt;a href="https://www.blinkops.com/blog/getting-and-deleting-orphaned-secrets-with-kubectl"&gt;orphaned Secrets&lt;/a&gt; lying around, bad actors could find and take advantage of them. In this way, resource management is key to reducing both costs and risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing and Updating Resources
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“How can we optimize our existing resources?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that you’ve reviewed and removed unused resources, you can look at optimizing the ones you are still using.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the Right Family for the Job
&lt;/h3&gt;

&lt;p&gt;Whether you are creating new resources or evaluating existing ones, it’s important to consider which family of resources best fits your needs. If you’re using general-purpose machines, there might be another more cost-effective machine that is a better fit.&lt;/p&gt;

&lt;p&gt;Depending on your usage, you may need more capacity in some specifications than others. For example, if you’re using AWS, Compute Optimized instances in the C family (e.g. EC2 C7g instances) offer the best price performance for especially compute-intensive use cases, like batch processing workloads and scientific modeling. Other families include Memory Optimized (e.g. EC2 R6a instances) and Storage Optimized (e.g. EC2 Im4gn instances). There are many other families (e.g. IOPS-, network-, and accelerator-optimized) depending on the platform and the specification you want to optimize for.&lt;/p&gt;

&lt;p&gt;When considering your performance requirements, you might have use cases like batch jobs or other fault-tolerant workloads. &lt;a href="https://azure.microsoft.com/en-us/products/virtual-machines/spot/#overview"&gt;Azure&lt;/a&gt;, &lt;a href="https://cloud.google.com/spot-vms"&gt;GCP&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/ec2/spot/"&gt;AWS&lt;/a&gt; all offer their unused capacity as less expensive, less reliable Spot VMs. Compared to on-demand instances, they can be up to 90% less expensive to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Updating to New Machines
&lt;/h3&gt;

&lt;p&gt;Within each of these families, newer versions are released over time. They often run more efficiently or offer higher performance, so it’s a best practice to upgrade to newer versions where you can.&lt;/p&gt;

&lt;p&gt;One example of this is with EBS volumes. By switching from &lt;a href="https://www.blinkops.com/blog/switching-gp2-volumes-to-gp3-volumes-to-lower-aws-ebs-costs"&gt;EBS GP2 volumes to EBS GP3 volumes&lt;/a&gt;, you can reduce your costs by 20%. There are some small performance tradeoffs, but it’s important to keep these types of upgrade opportunities in mind.&lt;/p&gt;

&lt;p&gt;Another AWS example is switching from older machines to ones that use the new AWS Graviton2 processors. Instances running on Graviton2 processors vs. Intel processors offer up to 40% better price performance, with specific efficiencies varying by family.&lt;/p&gt;

&lt;h3&gt;
  
  
  Looking for Low CPU Usage
&lt;/h3&gt;

&lt;p&gt;One way to optimize your spending is by rightsizing resources to match the usage level that you need. For example, you may be running an instance or virtual machine with more compute capacity than you need.&lt;/p&gt;

&lt;p&gt;By reviewing your usage data, you can determine whether an instance is running at a low average CPU usage, for example 30% or less. By downsizing to a smaller instance size or type, you can reduce your spend incrementally, and the savings add up over time.&lt;/p&gt;

&lt;p&gt;Here are some how-to guides that show examples for each platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-resizing-amazon-ec2-instances-with-low-cpu-usage"&gt;Finding and Resizing Amazon EC2 Instances with Low CPU Usage&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-resizing-gcp-compute-instances-with-low-cpu-usage"&gt;Finding and Resizing GCP Compute Instances with Low CPU Usage&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/finding-and-resizing-azure-virtual-machines-with-low-cpu-usage"&gt;Finding and Resizing Azure Virtual Machines with Low CPU Usage&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
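
&lt;p&gt;The heart of each of these guides is the same calculation: average the CPU metric per instance and flag anything at or below your threshold. A minimal Python sketch, using hypothetical monitoring samples:&lt;/p&gt;

```python
# Hypothetical per-instance CPU samples (percent), e.g. pulled from a monitoring API.
cpu_samples = {
    "i-web-1": [12.0, 18.5, 9.0, 22.0],
    "i-batch-1": [78.0, 91.0, 64.0, 70.0],
}

def underutilized(samples, threshold=30.0):
    """Flag instances whose average CPU sits at or below the threshold."""
    flagged = []
    for instance_id, values in samples.items():
        average = sum(values) / len(values)
        if threshold >= average:
            flagged.append(instance_id)
    return flagged

print(underutilized(cpu_samples))
# ['i-web-1']
```

&lt;p&gt;In practice you would average over a meaningful window (say, two weeks of datapoints) before resizing, so a single quiet afternoon doesn’t trigger a downsize.&lt;/p&gt;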

&lt;h3&gt;
  
  
  Using Long-Term Resourcing for Predictable CPU Usage
&lt;/h3&gt;

&lt;p&gt;Another way to optimize your costs is by leveraging reserved instances or committed use discounts. In exchange for predictable computing expectations, the major cloud providers offer resources at a discount with a committed term, such as 1 year or 3 years.&lt;/p&gt;

&lt;p&gt;Here are some how-to guides that show examples for each platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/lowering-costs-on-long-running-aws-ec2-instances"&gt;Lowering Costs on Long Running AWS EC2 Instances&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/lowering-costs-for-long-running-gcp-instances-with-committed-use-discounts"&gt;Lower Costs for Long Running GCP Instances with Committed Use Discounts&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/optimizing-costs-for-long-running-azure-vms-with-reserved-instances"&gt;Optimizing Costs for Long Running Azure VMs with Reserved Instances&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Starting Nightly Non-Production Scale-Downs
&lt;/h3&gt;

&lt;p&gt;Are there any resources that you can shut down when they are not being used? For example, if your team works with a test environment only during certain work hours, you don’t need to run it 24 hours a day. You can scale it down at night and scale it back up the next morning.&lt;/p&gt;

&lt;p&gt;With some automation, pausing and restarting a non-production cluster can be as simple as clicking an approval button in a Slack message, reducing your daily cloud costs.&lt;/p&gt;

&lt;p&gt;Here are a couple of examples of how to pause and restart clusters nightly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-scale-down-aws-eks-clusters-nightly-to-lower-ec2-costs"&gt;How to Scale Down AWS EKS Clusters Nightly&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-pause-your-gke-cluster-nightly"&gt;How to Pause Your GKE Cluster Nightly&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/how-to-pause-your-aks-clusters-nightly"&gt;How to Pause Your AKS Cluster Nightly&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
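
&lt;p&gt;The decision logic behind a nightly scale-down can be very simple. Here is an illustrative Python sketch, assuming a weekday business-hours window (the hours and replica count are hypothetical defaults you would adjust for your team):&lt;/p&gt;

```python
def desired_replicas(hour_utc, weekday, business_replicas=3):
    """Scale a non-production cluster to zero outside weekday business hours.

    weekday follows datetime.weekday(): 0 is Monday, 6 is Sunday.
    The 08:00-20:00 UTC window and replica count are illustrative defaults.
    """
    if weekday in range(0, 5) and hour_utc in range(8, 20):
        return business_replicas
    return 0

print(desired_replicas(hour_utc=10, weekday=1))  # Tuesday mid-morning: 3
print(desired_replicas(hour_utc=23, weekday=1))  # Tuesday night: 0
```

&lt;p&gt;A scheduled job evaluating this function and applying the result to your cluster (via your provider’s API or an automation platform) is all a basic nightly scale-down needs.&lt;/p&gt;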

&lt;h2&gt;
  
  
  Storing and Moving Data Efficiently
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“Can we optimize how our data is stored and transferred?”&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Storing Only Relevant Data
&lt;/h3&gt;

&lt;p&gt;Your cloud bill is also impacted by how much data you are storing. While it’s useful to collect data to see how your services are running, that data typically becomes less useful and relevant over time. Even if you want to retain as much data as possible, you should periodically move older data to less costly, long-term storage tiers, such as &lt;a href="https://aws.amazon.com/archive/"&gt;Amazon’s S3 Glacier storage&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are some AWS how-to guides on identifying data that hasn’t changed in a while and reducing logging storage costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/detecting-aws-dynamodb-tables-with-stale-data"&gt;Detecting AWS DynamoDB Tables with Stale Data&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/lowering-aws-cloudtrail-costs-by-removing-redundant-trails"&gt;Lowering AWS CloudTrail Costs by Removing Redundant Trails&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; &lt;a href="https://www.blinkops.com/blog/ensuring-aws-cloudwatch-log-groups-have-set-retention-periods"&gt;Ensuring AWS CloudWatch Log Groups Have Set Retention Periods&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
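
&lt;p&gt;As one concrete illustration of the CloudWatch retention check, here is a minimal Python sketch that flags log groups with no retention period set, using made-up records shaped like the &lt;code&gt;describe-log-groups&lt;/code&gt; output:&lt;/p&gt;

```python
# Made-up records shaped like CloudWatch Logs describe-log-groups output; a log
# group without retentionInDays keeps data forever, quietly accruing storage cost.
log_groups = [
    {"logGroupName": "/app/prod", "retentionInDays": 30},
    {"logGroupName": "/app/debug"},  # no retention set
]

def missing_retention(groups):
    """Return names of log groups that never expire their data."""
    return [g["logGroupName"] for g in groups if "retentionInDays" not in g]

print(missing_retention(log_groups))
# ['/app/debug']
```

&lt;p&gt;Running a check like this on a schedule turns an easy-to-forget hygiene task into a standing report.&lt;/p&gt;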

&lt;h3&gt;
  
  
  Optimizing Data Transfers
&lt;/h3&gt;

&lt;p&gt;Data transfers may also account for a significant part of your cloud costs, and charges vary greatly depending on the source, destination, method of transport, and size of each transfer.&lt;/p&gt;

&lt;p&gt;You can also expect charges when transferring data across regions or across availability zones. Unless your business case requires it, avoid data transfers that cross regions or availability zones.&lt;/p&gt;

&lt;p&gt;While inbound (ingress) data transfers from the internet to your cloud provider are generally free, outbound (egress) transfers are charged, with rates varying by service. Reduce outbound data transfers from your cloud to external destinations as much as possible.&lt;/p&gt;

&lt;p&gt;If you are transferring data between AWS services, for example, you should use private endpoints. That way, when you access an S3 bucket from an EC2 instance, you can avoid data transfer charges.&lt;/p&gt;

&lt;p&gt;The same principle applies to transferring data from your cloud to on-premises locations: tools like AWS &lt;a href="https://aws.amazon.com/directconnect/"&gt;Direct Connect&lt;/a&gt;, GCP &lt;a href="https://cloud.google.com/network-connectivity/docs/direct-peering"&gt;Direct Peering&lt;/a&gt;, and Azure &lt;a href="https://azure.microsoft.com/en-us/products/expressroute/#overview"&gt;ExpressRoute&lt;/a&gt; may offer a lower cost per GB than transfers over the public internet. Actual savings depend on the amount of data you are moving; below a certain volume, they might not make sense.&lt;/p&gt;

&lt;p&gt;You can read more about the types of data transfer charges in the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/plan-for-data-transfer.html"&gt;Cost Optimization pillar&lt;/a&gt; of the AWS Well-Architected Framework, or these &lt;a href="https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/"&gt;AWS&lt;/a&gt;, &lt;a href="https://cloud.google.com/vpc/network-pricing"&gt;GCP&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/pricing/details/bandwidth/"&gt;Azure&lt;/a&gt; resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Achieving Operational Excellence with Blink Automations
&lt;/h2&gt;

&lt;p&gt;So far, we have covered several areas where you and your team can focus to optimize your costs, but achieving significant savings over time requires new processes.&lt;/p&gt;

&lt;p&gt;Beyond finding unused resources once, you need an automated process that alerts you to cost reduction opportunities and makes approving resource removal as easy as clicking a button. If you rely on scripts alone, you may accidentally take down environments or resources that should have stayed up.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can use no-code automations to achieve operational excellence. In the cost optimization context, Blink lets you create and run dozens of common resource checks and send reports to email or Slack channels with simple, actionable options.&lt;/p&gt;

&lt;p&gt;By running these Blink automations on a schedule, you’ll be able to confidently ensure that you are achieving operational excellence not just one time, but daily. You can take the same Blink automation approach for other operational excellence categories, like security operations, incident response, troubleshooting, and permissions management.&lt;/p&gt;

&lt;p&gt;Get started with a &lt;a href="https://auth.blinkops.com/authorize?redirect_uri=https%3A%2F%2Fapp.blinkops.com&amp;amp;client_id=Ny9cPXnZDJYswd8XqAArOJEFQ8Uu46Kk&amp;amp;authTenantName=blink-prod&amp;amp;scope=openid%20profile%20email%20read%3Asponsors%20create%3Asponsor%20read%3Aprograms%20delete%3Asponsor-program-enrolment&amp;amp;last_error_msg=&amp;amp;screen_hint=signup&amp;amp;email=&amp;amp;response_type=code&amp;amp;response_mode=query&amp;amp;auth0Client=eyJuYW1lIjoiQGF1dGgwL2F1dGgwLWFuZ3VsYXIiLCJ2ZXJzaW9uIjoiMS44LjIifQ%3D%3D"&gt;free Blink account&lt;/a&gt; or reach out to us directly to &lt;a href="https://www.blinkops.com/contact"&gt;hear more&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Switching from gp2 Volumes to gp3 Volumes to Lower AWS EBS Costs</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Mon, 17 Oct 2022 14:18:16 +0000</pubDate>
      <link>https://community.ops.io/blinkops/switching-from-gp2-volumes-to-gp3-volumes-to-lower-aws-ebs-costs-36cb</link>
      <guid>https://community.ops.io/blinkops/switching-from-gp2-volumes-to-gp3-volumes-to-lower-aws-ebs-costs-36cb</guid>
      <description>&lt;p&gt;In December 2020, Amazon Web Services (AWS) &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/introducing-new-amazon-ebs-general-purpose-volumes-gp3/"&gt;announced&lt;/a&gt; the availability of gp3, the latest version of general purpose SSD volumes for Amazon Elastic Block Store (EBS). &lt;/p&gt;

&lt;p&gt;Prior to this, EBS volume performance scaled only linearly with storage capacity, which led to over-provisioning and non-cost-effective pricing. Now, with gp3, if you require higher IOPS (input/output operations per second) and throughput, you don’t need to also pay for excess storage.&lt;/p&gt;

&lt;p&gt;When you switch from gp2 volumes to gp3 volumes, you &lt;a href="https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/"&gt;lower your costs by 20%&lt;/a&gt;. For example, in the us-east-1 region, gp2 volumes cost $0.10/GiB-month compared to gp3 volumes at $0.08/GiB-month.&lt;/p&gt;
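
&lt;p&gt;The arithmetic behind that figure is straightforward. Using the us-east-1 prices above and a hypothetical 500 GiB volume:&lt;/p&gt;

```python
# us-east-1 prices quoted above, in dollars per GiB-month.
GP2_PRICE = 0.10
GP3_PRICE = 0.08

size_gib = 500  # hypothetical volume size
gp2_monthly = round(size_gib * GP2_PRICE, 2)
gp3_monthly = round(size_gib * GP3_PRICE, 2)
savings = (gp2_monthly - gp3_monthly) / gp2_monthly

print(gp2_monthly, gp3_monthly, round(savings, 2))
# 50.0 40.0 0.2
```

&lt;p&gt;The storage-price savings is 20% at any volume size, since both prices are per GiB-month.&lt;/p&gt;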

&lt;p&gt;With gp3 volumes, you get a baseline performance of 3,000 IOPS and 125 MiB/s at any volume size, with the ability to scale up to 16,000 IOPS and 1,000 MiB/s for additional fees. With higher performance and lower costs, making this switch is easy to justify.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how you can switch your gp2 volumes to gp3 to lower your costs by 20%.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Switch from GP2 Volumes to GP3 Volumes
&lt;/h3&gt;

&lt;p&gt;You can make the switch from gp2 volumes to gp3 volumes using either the console or the AWS CLI, without disrupting your running EC2 instances.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using the AWS console
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the EC2 console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the gp2 volume you want to modify.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Actions&lt;/strong&gt;, then &lt;strong&gt;Modify Volume&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once your chosen volume's ID and current configuration are shown in the &lt;strong&gt;Modify Volume&lt;/strong&gt; window, set your desired configuration values as follows:&lt;/p&gt;

&lt;p&gt;a. Modify type by choosing gp3 for &lt;strong&gt;Volume Type&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;b. Modify size by entering a new value for &lt;strong&gt;Size&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;c. Modify IOPS by entering a new value for &lt;strong&gt;IOPS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;d. Modify throughput by entering a new value for &lt;strong&gt;Throughput&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;e. When you have finished changing the volume settings, choose &lt;strong&gt;Modify&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;f. Choose &lt;strong&gt;Yes&lt;/strong&gt; when prompted for confirmation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can repeat these steps for each of your gp2 volumes to scale the cost savings.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using the AWS CLI
&lt;/h4&gt;

&lt;p&gt;Use the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume.html"&gt;modify-volume&lt;/a&gt; command to migrate to gp3. Here is an example migrating an 8-GiB gp2 volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 modify-volume --volume-type gp3 -volume-id vol-11111111111111111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "VolumeModification": {
    "VolumeId": "vol-11111111111111111",
    "ModificationState": "modifying",
    "TargetSize": 8,
    "TargetIops": 3000,
    "TargetVolumeType": "gp3",
    "OriginalSize": 8,
    "OriginalIops": 100,
    "OriginalVolumeType": "gp2",
    "Progress": 0,
    "StartTime": "2021-02-03T13:38:08+00:00"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By running this command, you have reduced your monthly AWS cost for this volume. You will need to repeat the process for each gp2 volume.&lt;/p&gt;
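
&lt;p&gt;To see the scope of the work, you can at least script the enumeration. Here is an illustrative Python sketch that builds the list of &lt;code&gt;modify-volume&lt;/code&gt; commands from a sample &lt;code&gt;DescribeVolumes&lt;/code&gt;-shaped payload (not a live API call):&lt;/p&gt;

```python
# Sample DescribeVolumes-shaped payload (abbreviated, not a live API call).
volume_listing = {
    "Volumes": [
        {"VolumeId": "vol-11111111111111111", "VolumeType": "gp2"},
        {"VolumeId": "vol-22222222222222222", "VolumeType": "gp3"},
        {"VolumeId": "vol-33333333333333333", "VolumeType": "gp2"},
    ]
}

def migration_commands(listing):
    """Build one modify-volume command per remaining gp2 volume."""
    return [
        "aws ec2 modify-volume --volume-type gp3 --volume-id " + v["VolumeId"]
        for v in listing["Volumes"]
        if v["VolumeType"] == "gp2"
    ]

for command in migration_commands(volume_listing):
    print(command)
```

&lt;p&gt;You would feed this from &lt;code&gt;aws ec2 describe-volumes&lt;/code&gt; output and review the generated commands before running them.&lt;/p&gt;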

&lt;h3&gt;
  
  
  Automating Your Migration to GP3 Volumes
&lt;/h3&gt;

&lt;p&gt;Switching your gp2 volumes over to gp3 volumes is a simple way to lower your costs, but the process requires manual action on each volume. If you have dozens or hundreds of gp2 volumes, this can be highly time-consuming. And then what happens if an even more cost-effective gp4 volume is announced?&lt;/p&gt;

&lt;p&gt;For migrations like this, automation can save you significant time and make these optimizations more feasible at scale.&lt;/p&gt;

&lt;p&gt;When you create a &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt;, you can run automations that identify and alert you to any gp2 volumes associated with your EC2 instances. You can then click a button in Slack and migrate that volume to gp3 instead.&lt;/p&gt;

&lt;p&gt;Get started with &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt; and make it easy to optimize your cloud costs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorials</category>
      <category>finops</category>
    </item>
  </channel>
</rss>
