<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: Blink Ops</title>
    <description>The latest articles on The Ops Community ⚙️ by Blink Ops (@blinkops).</description>
    <link>https://community.ops.io/blinkops</link>
    <image>
      <url>https://community.ops.io/images/YXDfefkstmSolylPHt4A25SeflYf7KoTqVTQzFD6sAY/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL29y/Z2FuaXphdGlvbi9w/cm9maWxlX2ltYWdl/LzEvZTFiN2I4MDUt/Yjc0NS00NDQ5LWE3/NjItZGY2OWJkNTll/MDAxLnBuZw</url>
      <title>The Ops Community ⚙️: Blink Ops</title>
      <link>https://community.ops.io/blinkops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/blinkops"/>
    <language>en</language>
    <item>
      <title>10 Essential Security Policies to Enforce That Don’t Get Covered in SOC 2 and ISO 27001</title>
      <dc:creator>Ashlyn Eperjesi</dc:creator>
      <pubDate>Fri, 29 Sep 2023 17:47:30 +0000</pubDate>
      <link>https://community.ops.io/blinkops/10-essential-security-policies-to-enforce-that-dont-get-covered-in-soc-2-and-iso-27001-5fg8</link>
      <guid>https://community.ops.io/blinkops/10-essential-security-policies-to-enforce-that-dont-get-covered-in-soc-2-and-iso-27001-5fg8</guid>
      <description>&lt;p&gt;Today’s growing security threats underscore how critical it is for businesses to protect sensitive data and maintain robust cybersecurity practices. As &lt;a href="https://legal.thomsonreuters.com/content/dam/ewp-m/documents/legal/en/pdf/infographics/shaping-the-future.pdf"&gt;78% of organizations&lt;/a&gt; anticipate annual increases in regulatory compliance requirements, it’s no wonder there’s growing adoption of data and security standards. Two common standards that teams turn to are SOC 2 and ISO 27001 compliance.&lt;/p&gt;

&lt;h2&gt;What is SOC 2 Compliance?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://us.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report"&gt;SOC 2 compliance&lt;/a&gt; is a recognized standard that verifies the controls and security practices of organizations that handle sensitive customer data. It ensures the confidentiality, integrity, and availability of that data. A third party performs an independent audit of an organization's policies, procedures, and technical controls based on the Trust Services Criteria. &lt;/p&gt;

&lt;p&gt;Achieving SOC 2 compliance showcases a steadfast dedication to safeguarding data security and privacy. It offers customers and partners the assurance that their valuable data is fortified, particularly in industries where data protection is paramount.&lt;/p&gt;

&lt;h2&gt;What is ISO 27001?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.iso.org/standard/27001"&gt;ISO 27001&lt;/a&gt; is a well-known international standard for information security management systems (ISMS). It provides a systematic approach for organizations to establish, implement, maintain, and continually improve their information security (InfoSec) processes. ISO 27001 encompasses a comprehensive set of controls and risk management practices that span various critical areas. These include risk assessment, security policies, asset management, access control, incident response, and compliance.&lt;/p&gt;

&lt;p&gt;Obtaining ISO 27001 certification showcases an organization's dedication to safeguarding its information assets. It instills trust in customers, partners, and stakeholders, confirming the presence of robust security measures.&lt;/p&gt;

&lt;h2&gt;Additional Security Policies to Consider&lt;/h2&gt;

&lt;p&gt;While compliance controls like SOC 2 and ISO 27001 establish a solid foundation, companies should consider implementing additional policies to enhance their security posture. Here are 10 specific policies that are crucial to enforce and go beyond the scope of SOC 2 and ISO 27001 compliance controls.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Password Complexity and Rotation Policy:&lt;/strong&gt; Mandate a policy that enforces strong password complexity, including a combination of uppercase and lowercase letters, numbers, and special characters. Additionally, implement a password rotation policy that requires users to change their passwords at regular intervals to mitigate the risk of unauthorized access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Two-Factor Authentication (2FA) for All Users:&lt;/strong&gt; Require the use of two-factor authentication (2FA) by all users - employees and clients alike - to access company systems and applications. This additional layer of security significantly reduces the risk of unauthorized access, even if passwords are compromised.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Privileged Access Management:&lt;/strong&gt; Implement a policy that restricts privileged access to critical systems and data. Only authorized personnel should be granted privileged accounts. Strict monitoring and auditing of their activities should be conducted to prevent misuse and unauthorized access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Loss Prevention (DLP) Policy:&lt;/strong&gt; Ensure the protection of sensitive data by implementing data loss prevention measures. This includes monitoring and controlling data transfers, encrypting data at rest and in transit, and implementing robust data backup and recovery procedures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Acceptable Use Policy for Bring Your Own Device (BYOD):&lt;/strong&gt; Define a policy that governs the acceptable use of personal devices within the corporate network. This policy should outline security requirements such as device registration, mandatory security applications, and encryption protocols to mitigate risks associated with BYOD practices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Social Engineering Awareness and Prevention:&lt;/strong&gt; Educate employees about social engineering techniques and establish guidelines to recognize and report potential threats. Regular training sessions, phishing simulations, and awareness campaigns can help employees stay vigilant against social engineering attacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vulnerability Management Policy:&lt;/strong&gt; Develop a policy that outlines procedures to identify, assess, and remediate vulnerabilities in the organization's systems and applications. This includes conducting regular vulnerability scans, prioritizing and patching identified vulnerabilities, and implementing a process for continuous vulnerability monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Incident Escalation and Response Time Policy:&lt;/strong&gt; Establish clear escalation paths and response time expectations for security incidents. This policy should define roles and responsibilities, escalation thresholds, and predefined incident response procedures to ensure timely and effective incident management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure Disposal and Destruction of Data:&lt;/strong&gt; Implement a policy that outlines proper procedures for the secure disposal and destruction of sensitive data and storage media. This includes guidelines for data wiping, physical destruction, and proper disposal techniques to prevent unauthorized access to discarded information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Awareness Training for Third Parties:&lt;/strong&gt; Develop a policy that mandates security awareness training for third-party vendors and partners with access to the company's systems and data. This training should cover security best practices, incident reporting protocols, and contractual obligations to ensure that third parties adhere to high cybersecurity standards.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
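
&lt;p&gt;As a concrete illustration, if your users authenticate through AWS IAM, the password complexity and rotation policy above maps to a single CLI call. This is a sketch; the thresholds shown (14 characters, 90-day rotation, 24 remembered passwords) are example values, not prescriptive ones:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;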

&lt;p&gt;While SOC 2 and ISO 27001 compliance controls provide a baseline for cybersecurity, companies must go beyond these frameworks to strengthen their defenses. By enforcing these 10 specific policies, organizations can significantly enhance their cybersecurity posture and protect their valuable assets from evolving threats.&lt;/p&gt;

&lt;p&gt;These policies address critical areas such as password management, access controls, data protection, incident response, and training, helping companies establish a robust security foundation in an increasingly complex digital landscape.&lt;/p&gt;

&lt;h2&gt;Security Automation Copilot for Your Compliance Team&lt;/h2&gt;

&lt;p&gt;Implementing the right security policies is essential for any business, but with so many regulations to keep track of, it can be hard to stay on top of all your compliance demands. That's why generative AI automation tools like &lt;a href="https://www.blinkops.com/product/platform"&gt;Blink Copilot&lt;/a&gt; make it easy to automate compliance workflows.&lt;/p&gt;

&lt;p&gt;By leveraging a security automation copilot, you can ensure that no important policy gets overlooked while keeping up with ever-changing regulatory requirements.&lt;/p&gt;

</description>
      <category>secops</category>
      <category>compliance</category>
      <category>grc</category>
    </item>
    <item>
      <title>Aligning Your AWS Account with the FFIEC Cybersecurity Standards</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Wed, 05 Jul 2023 20:45:07 +0000</pubDate>
      <link>https://community.ops.io/blinkops/aligning-your-aws-account-with-the-ffiec-cybersecurity-standards-2goj</link>
      <guid>https://community.ops.io/blinkops/aligning-your-aws-account-with-the-ffiec-cybersecurity-standards-2goj</guid>
      <description>&lt;p&gt;Companies in the banking and finance industry must adhere to high security standards since they are high-value targets for bad actors. &lt;/p&gt;

&lt;p&gt;Industry-specific organizations like the Federal Financial Institutions Examination Council (FFIEC) have established guidelines to help companies ensure compliance with applicable laws and regulations.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to check if your AWS account adheres to the cybersecurity standards set forth by the FFIEC using automations in Blink.&lt;/p&gt;

&lt;h2&gt;Understanding the FFIEC Cybersecurity Standards&lt;/h2&gt;

&lt;p&gt;Established in 1979, the &lt;a href="https://www.ffiec.gov/about.htm"&gt;Federal Financial Institutions Examination Council (FFIEC)&lt;/a&gt; is a U.S. government interagency body of five organizations working together to ensure the safety and soundness of the banking system.&lt;/p&gt;

&lt;p&gt;The FFIEC coordinates common standards for banks and develops uniform guidelines and examinations for all financial institutions. It also releases tooling, like the &lt;a href="https://www.ffiec.gov/cyberassessmenttool.htm"&gt;Cybersecurity Assessment Tool (CAT)&lt;/a&gt;, to help financial institutions evaluate their cybersecurity risk and develop appropriate controls. The CAT is a document that provides a framework and guidance, but it does not interactively assess an AWS account for compliance.&lt;/p&gt;

&lt;h2&gt;FFIEC Cybersecurity Guidance for AWS&lt;/h2&gt;

&lt;p&gt;An audit of an organization's AWS environment is a critical part of FFIEC compliance requirements. AWS provides the tools and services necessary for financial institutions to adhere to FFIEC regulations, but each organization must ensure that its environment meets the specific requirements of the FFIEC.&lt;/p&gt;

&lt;p&gt;AWS provides &lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/operational-best-practices-for-ffiec.html"&gt;operational best practices&lt;/a&gt; for FFIEC compliance, including a list of control IDs, AWS configuration rules, and guidance.&lt;/p&gt;

&lt;p&gt;Here are some examples of controls that organizations using AWS must follow to meet the FFIEC guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An inventory of organizational assets is maintained.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An information security and business continuity risk management function exists within the institution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The risk assessment identifies internet-based systems and high-risk transactions that warrant additional authentication controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Information security threats are gathered and shared with applicable internal employees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit log records and other security event logs are reviewed and retained in a secure manner.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each of these controls, there are a few to &lt;a href="https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-FFIEC.yaml"&gt;several configuration rules&lt;/a&gt; in AWS that could apply to your organization, depending on the guidance.&lt;/p&gt;
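
&lt;p&gt;If you already use AWS Config, one way to apply these rules in bulk is to deploy the linked conformance pack directly. This sketch assumes you have downloaded the YAML file from the awslabs repository and that AWS Config is enabled in the target region; the pack name is arbitrary:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configservice put-conformance-pack \
    --conformance-pack-name ffiec-best-practices \
    --template-body file://Operational-Best-Practices-for-FFIEC.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;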

&lt;p&gt;Manually checking whether your EC2 volumes are all encrypted, your IP addresses are all private, or you have the right password policy in place could take days or weeks.&lt;/p&gt;
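
&lt;p&gt;To illustrate the scale of the problem, a single spot check, such as finding unencrypted EBS volumes, looks like this with the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-volumes \
    --filters Name=encrypted,Values=false \
    --query 'Volumes[].VolumeId' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That is one command for one control in one region; repeating it across dozens of controls, regions, and accounts is where the time goes.&lt;/p&gt;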

&lt;p&gt;If you want to check your AWS environment for compliance quickly, you can use automation to get a comprehensive report based on these controls.&lt;/p&gt;

&lt;h2&gt;Automating FFIEC Compliance for AWS with Blink&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;one automation&lt;/a&gt; in Blink, you could quickly scan your AWS environment to check your FFIEC compliance against the controls and generate reports with the findings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/XKv1EI010izL5Z0Xsl1PS8mdk4lfn8Dg_sBuvvkTNkU/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2RqeDJv/dDliMDZkZXN4bHNi/MGowLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/XKv1EI010izL5Z0Xsl1PS8mdk4lfn8Dg_sBuvvkTNkU/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2RqeDJv/dDliMDZkZXN4bHNi/MGowLnBuZw" alt="Blink Automation: Federal Financial Institutions Examination Council Compliance Report for AWS" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Blink Automation: &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;Federal Financial Institutions Examination Council Compliance Report for AWS&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When &lt;a href="https://library.blinkops.com/automations/federal-financial-institutions-examination-council-ffiec-compliance-report-for-aws"&gt;this automation&lt;/a&gt; runs, it executes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generates a Cyber Risk Management and Oversight Report.&lt;/li&gt;
&lt;li&gt;Generates a Threat Intelligence and Collaboration Report.&lt;/li&gt;
&lt;li&gt;Generates a Cybersecurity Controls Report.&lt;/li&gt;
&lt;li&gt;Generates an External Dependency Management Report.&lt;/li&gt;
&lt;li&gt;Generates a Cyber Incident Management and Resilience Report.&lt;/li&gt;
&lt;li&gt;Sends Report results to a specified email.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You could set this automation to run weekly, monthly, or quarterly so you can validate that you are maintaining your compliance over time.&lt;/p&gt;

&lt;p&gt;You may also have other compliance checks to run beyond this one for the Federal Financial Institutions Examination Council guidelines. What about SOC, ISO, or PCI compliance?&lt;/p&gt;

&lt;p&gt;There are over 7K pre-built automations in the &lt;a href="https://library.blinkops.com/automations"&gt;Blink Library&lt;/a&gt; that make it easy to gauge your environments against industry standards.&lt;/p&gt;

&lt;p&gt;To start streamlining your compliance and security checks today, you can get started by signing up for a &lt;a href="https://app.blinkops.com/signup"&gt;free trial&lt;/a&gt; or &lt;a href="https://www.blinkops.com/schedule-time"&gt;guided demo&lt;/a&gt; of Blink.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Introducing Blink Copilot: Generative AI for Security Workflows</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Wed, 31 May 2023 19:27:58 +0000</pubDate>
      <link>https://community.ops.io/blinkops/introducing-blink-copilot-generative-ai-for-security-workflows-5jp</link>
      <guid>https://community.ops.io/blinkops/introducing-blink-copilot-generative-ai-for-security-workflows-5jp</guid>
      <description>&lt;p&gt;Today, we’re announcing Blink Copilot, the first ever generative AI for automating security and IT operations workflows. This innovative tool makes it possible for any security operator to generate no-code workflows using a simple prompt.&lt;/p&gt;

&lt;p&gt;When &lt;a href="https://www.blinkops.com"&gt;Blink&lt;/a&gt; was founded in 2021, we knew security teams needed a better way to automate internal workflows. Low-code platforms had already transformed business operations for CRM and marketing, and it was only a matter of time before low-code solutions enabled security teams to automate their own workflows across their stacks.&lt;/p&gt;

&lt;p&gt;Recent advancements in large language models (LLM) and generative AI have supercharged our mission, finally making true no-code automation possible. That’s why today, we’re excited to announce Blink Copilot, the first ever generative AI for security workflow automation. &lt;/p&gt;

&lt;p&gt;Blink Copilot enables security teams to generate any simple or complex workflow, instantly. Generative AI makes it possible to build workflows without writing code or needing to be an expert in target applications.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/exEzdWmUpNQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;The Critical Need for Security Automation&lt;/h2&gt;

&lt;p&gt;According to the &lt;a href="https://www.isc2.org/Research/Workforce-Study"&gt;2022 (ISC)² Cybersecurity Workforce Study&lt;/a&gt;, there are over 3.4 million open security jobs and too few skilled security operators to fill them. Meanwhile, cybersecurity breaches in 2022 were &lt;a href="https://www.forbes.com/sites/forbesbusinesscouncil/2023/03/30/lessons-learned-from-the-data-breaches-of-2022/?sh=34e33e2b42e4"&gt;more costly than ever before&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;With such high stakes and a massive skills gap, security automation is necessary for organizations to defend against the massive volume of attacks enterprises face on a daily basis. &lt;/p&gt;

&lt;p&gt;Blink Copilot empowers teams of any size to build any no-code, low-code, or code security automation using generative AI. With Blink Copilot, security and IT operators of all skill levels can now leverage AI to increase productivity and deliver workflows to help protect their organizations.&lt;/p&gt;

&lt;p&gt;Blink Copilot unlocks unparalleled automation capabilities for security teams. Security automation projects that once took months are now built in seconds using Blink Copilot.&lt;/p&gt;

&lt;h2&gt;Automate Security Beyond the SOC&lt;/h2&gt;

&lt;p&gt;Blink Copilot empowers security teams to automate any operations workflow effortlessly. Security practitioners, regardless of skill set, can automate any operational task or security workflow using a simple text prompt.&lt;/p&gt;

&lt;p&gt;Blink Copilot can help security teams automate workflows for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SOC &amp;amp; Incident Response&lt;/li&gt;
&lt;li&gt;Cloud Security&lt;/li&gt;
&lt;li&gt;IT &amp;amp; SaaS Security&lt;/li&gt;
&lt;li&gt;Identity &amp;amp; Access Management&lt;/li&gt;
&lt;li&gt;Governance, Risk &amp;amp; Compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Blink Copilot, security teams can finally automate workflows beyond the SOC. Blink customers get out-of-the-box automations for popular security use cases and powerful automation capabilities backed by the robust Blink platform.&lt;/p&gt;

&lt;p&gt;Our internal team at Blink Ops has already generated over 7,000 workflow automations. Now with Blink Copilot, we’re empowering any team to build no-code, low-code, or code security workflow automations using generative AI.&lt;/p&gt;

&lt;h2&gt;Powerful No-Code Security Automation&lt;/h2&gt;

&lt;p&gt;Blink Copilot is backed by the most powerful no-code security automation platform ever built.&lt;/p&gt;

&lt;p&gt;Key features of the Blink platform include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blink Copilot&lt;/strong&gt;: The first ever generative AI for automating security and IT operations workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Code Editor&lt;/strong&gt;: Drag-and-drop user interface that makes it easy for security teams to customize workflow automations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation Library&lt;/strong&gt;: 7,000+ security workflow automations built by the Blink community&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Service Portal&lt;/strong&gt;: Shift-left service requests by making automated workflows available in a self-service portal or interactive Slack/Microsoft Teams app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Blink was designed for enterprise security teams, offering robust platform features like secure workspaces, on-prem runners, and multi-tenant support for MSPs and MSSPs. &lt;/p&gt;

&lt;p&gt;Blink is committed to upholding the highest industry security standards, including SOC 2 Type 2, GDPR, and ISO 27001 compliance. Blink can also help teams leverage no-code automation to achieve compliance across their own organizations.&lt;/p&gt;

&lt;p&gt;Learn more about the Blink platform and technical capabilities in the &lt;a href="https://www.docs.blinkops.com/docs/Documentation"&gt;Blink documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Get Started with Blink Copilot&lt;/h2&gt;

&lt;p&gt;Blink Copilot completely transforms how security teams will automate simple and complex workflows.&lt;/p&gt;

&lt;p&gt;Security teams can take advantage of Blink Copilot today to shift-left internal workflows. Along with 7,000 out-of-the-box automations for common security use cases and powerful automation capabilities backed by the robust Blink platform, enterprise security teams can finally achieve operational excellence across their entire stack.&lt;/p&gt;

&lt;p&gt;For more information on Blink Copilot, visit the &lt;a href="https://www.blinkops.com/"&gt;Blink Ops website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>secops</category>
      <category>ai</category>
      <category>nocode</category>
    </item>
    <item>
      <title>How to Find and Update Public EC2 AMIs in AWS</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Fri, 05 May 2023 19:30:54 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-find-and-update-public-ec2-amis-in-aws-mb0</link>
      <guid>https://community.ops.io/blinkops/how-to-find-and-update-public-ec2-amis-in-aws-mb0</guid>
      <description>&lt;p&gt;Resources in AWS should only be accessible to users who require them to complete tasks. If this principle of least privilege isn't followed, you increase the risk of data leaks or unauthorized access.&lt;/p&gt;

&lt;p&gt;If you have EC2 AMIs that are publicly-shared, for example, they are available to any AWS account and may contain sensitive data about your organization, such as passwords, SSH keys, and configuration details.&lt;/p&gt;

&lt;p&gt;In this guide, we will explain how to find and restrict publicly-shared AWS EC2 AMIs so you can ensure that sensitive information about your applications is not exposed.&lt;/p&gt;

&lt;h2&gt;Understanding AMIs for AWS EC2&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"&gt;Amazon Machine Image&lt;/a&gt; (AMI) is an image that enables you to easily launch &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html"&gt;Amazon Elastic Compute Cloud&lt;/a&gt; (EC2) instances with the necessary requirements preconfigured.&lt;/p&gt;

&lt;p&gt;Each AMI includes instructions for block device mapping, launch permissions, and a root device volume, either backed by the EC2 instance store or at least one Amazon Elastic Block Store (EBS) snapshot.&lt;/p&gt;

&lt;p&gt;You can use an AMI to launch single instances or multiple servers in a cluster. For example, if you are launching a web server that handles a large volume of traffic, you can use an AMI to deploy it quickly. You can also launch multiple instances in the same cluster and configure them to work together.&lt;/p&gt;

&lt;p&gt;These AMIs can be shared with specific users, giving them access to the same configurations and settings. If they are publicly-shared, AMIs can reveal sensitive information about how your instances function.&lt;/p&gt;
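
&lt;p&gt;Before scanning everything, you can inspect the launch permissions of a single AMI with the AWS CLI (the image ID below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-image-attribute \
    --image-id ami-1234abcd \
    --attribute launchPermission
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the response includes a launch permission with &lt;code&gt;"Group": "all"&lt;/code&gt;, the AMI is public.&lt;/p&gt;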

&lt;h2&gt;How To Find Any Public AMIs in AWS&lt;/h2&gt;

&lt;p&gt;A publicly shared AMI is easy to miss if you haven't taken the time to check your settings. Luckily, there are two ways to check whether your AMIs are publicly shared: you can use the Amazon EC2 Console or the AWS CLI, depending on your AWS environment.&lt;/p&gt;

&lt;h3&gt;Using the AWS Console:&lt;/h3&gt;

&lt;p&gt;This method is the most straightforward way to check if your AMIs are publicly shared. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS Management Console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;EC2&lt;/strong&gt; from the list of services.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;AMIs&lt;/strong&gt; in the left sidebar menu under the &lt;strong&gt;IMAGES&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;To see all the public AMIs, use the first filter in the AMI list and choose &lt;strong&gt;Public Images&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;To check individual AMIs, select one and scroll down to the &lt;strong&gt;Permissions&lt;/strong&gt; section and check the current permissions. If the selected AMI is publicly shared, the EC2 dashboard will display the following message, "This image is currently Public" in the &lt;strong&gt;Permissions&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Using the AWS CLI:&lt;/h3&gt;

&lt;p&gt;You can also use the AWS Command Line Interface (CLI) to check whether your AMIs are publicly shared. This process is a bit more complex, but here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html"&gt;describe-images&lt;/a&gt; command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-images --owners self --executable-users all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command will return a list of all the publicly shared AMIs in your account. If the command returns images with the &lt;strong&gt;Public&lt;/strong&gt; parameter value of "&lt;strong&gt;true&lt;/strong&gt;", then the AMI is publicly accessible.&lt;/p&gt;

&lt;p&gt;For instance, the AMI in question is publicly shared if the output is something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"Images": [{"ImageId": "ami-1234abcd", "Public": true}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you have identified public AMIs, next you can update them to make them more secure.&lt;/p&gt;

&lt;h2&gt;Changing Public AMIs to Private AMIs in AWS&lt;/h2&gt;

&lt;p&gt;You can update public AMIs to restrict access using either the AWS Console or the CLI.&lt;/p&gt;

&lt;h3&gt;Using the AWS Console:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS Management Console&lt;/a&gt; again and select EC2 from the services list.&lt;/li&gt;
&lt;li&gt;Select the AMI you want to make private from the list of AMIs in the &lt;strong&gt;IMAGES&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;Head to the Permissions tab from the bottom bar menu on your dashboard. Click on &lt;strong&gt;Edit AMI Permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Then select &lt;strong&gt;Private&lt;/strong&gt; and &lt;strong&gt;Save changes&lt;/strong&gt; to make the AMI private. The Permissions tab will now display the following message, "This AMI is not shared with any other accounts."&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Using the AWS CLI:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Run the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-image-attribute.html"&gt;modify-image-attribute&lt;/a&gt; command to make an AMI private:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 modify-image-attribute \
    --image-id ami-1234abcd \
    --launch-permission "{\"Remove\":[{\"Group\":\"all\"}]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This immediately makes the AMI private while preserving any launch permissions granted to specific accounts.&lt;/p&gt;
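
&lt;p&gt;If the earlier scan turned up several public images, the find-and-fix steps can be combined into a short shell loop. This is a sketch that assumes your credentials can modify every image it touches; test it on a single AMI first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Make every public AMI owned by this account private.
for ami in $(aws ec2 describe-images --owners self \
    --filters Name=is-public,Values=true \
    --query 'Images[].ImageId' --output text); do
  aws ec2 modify-image-attribute --image-id "$ami" \
      --launch-permission '{"Remove":[{"Group":"all"}]}'
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;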

&lt;p&gt;To double-check, you can list the attributes of the image by running the following &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html"&gt;describe-images&lt;/a&gt; command for the AMI with the id “ami-1234abcd”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-images --image-ids ami-1234abcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should now display the Public parameter as &lt;strong&gt;false&lt;/strong&gt; as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"Images": [{"ImageId": "ami-1234abcd", "Public": false}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you have successfully secured your non-compliant AMI.&lt;/p&gt;

&lt;h2&gt;Automating AWS Permission Checks for EC2 AMIs with Blink&lt;/h2&gt;

&lt;p&gt;It isn’t hard to find and secure public AMIs, but it can be time-consuming if you have many that need to be addressed or you want to check this on a regular basis.&lt;/p&gt;

&lt;p&gt;How do you ensure that you don’t have overly-permissive AMIs at any given time? If you are only doing this task with CLI commands or steps in the console, you will need to context-switch to oversee each step.&lt;/p&gt;

&lt;p&gt;With an automation platform like &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can run &lt;a href="https://library.blinkops.com/automations/ensure-ec2-amis-are-not-shared-publicly-in-aws"&gt;this automation&lt;/a&gt; to identify and notify your team of new public AMIs so you can maintain consistent compliance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/TjyAolmqa4nOH9wd37w3XQ2EpQsUzlhV7n8NA4Oq0OI/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3dnZHpz/ZDYxeXVmcnduNjk2/MmR1LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/TjyAolmqa4nOH9wd37w3XQ2EpQsUzlhV7n8NA4Oq0OI/w:800/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3dnZHpz/ZDYxeXVmcnduNjk2/MmR1LnBuZw" alt="Blink Automation: Ensure EC2 AMIs are Not Shared Publicly in AWS" width="800" height="500"&gt;&lt;/a&gt;&lt;em&gt;Blink Automation: &lt;a href="https://library.blinkops.com/automations/ensure-ec2-amis-are-not-shared-publicly-in-aws"&gt;Ensure EC2 AMIs are Not Shared Publicly in AWS&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This automation in the &lt;a href="https://library.blinkops.com/automations"&gt;Blink library&lt;/a&gt; executes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checks if there are any publicly-shared AWS EC2 AMIs.&lt;/li&gt;
&lt;li&gt;Sends a report to a specified email address.&lt;/li&gt;
&lt;/ol&gt;
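&lt;p&gt;For reference, step 1 boils down to filtering describe-images output on the Public field. Here’s a minimal Python sketch of that check (the boto3 call is commented out, and the sample image IDs are made up):&lt;/p&gt;

```python
# Sketch: find publicly shared EC2 AMIs. The filtering logic is kept
# separate so it can be tested without AWS access.
def find_public_amis(images):
    """Return the IDs of images marked Public in describe-images output."""
    return [img["ImageId"] for img in images if img.get("Public")]

# Example shape of describe-images output (hypothetical image IDs):
sample = [
    {"ImageId": "ami-1234abcd", "Public": False},
    {"ImageId": "ami-5678efgh", "Public": True},
]

if __name__ == "__main__":
    # import boto3  # uncomment to run against your own account
    # images = boto3.client("ec2").describe_images(Owners=["self"])["Images"]
    print(find_public_amis(sample))
```

&lt;p&gt;From there, step 2 is just sending the resulting list to your team.&lt;/p&gt;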

&lt;p&gt;This is a simple automation, which makes it easy to customize for your organization’s needs. For example, you can add actions to update the settings of non-compliant AMIs with just an approval via Slack.&lt;/p&gt;

&lt;p&gt;This automation and 5K more are available for you to use right away from the Blink library. Instead of manually implementing best practices with the AWS console or CLI, these ready-to-use automations enable you to set and enforce policies across your organization or teams. &lt;/p&gt;

&lt;p&gt;You can also build custom automations to match your organization’s unique use cases, whether you want to schedule a task, run steps based on an event in a certain tool, or share a self-service application for your coworkers to use.&lt;/p&gt;

&lt;p&gt;Start a &lt;a href="https://app.blinkops.com/signup"&gt;free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time with us&lt;/a&gt; to see more.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secops</category>
    </item>
    <item>
      <title>How to Search for an IOC Across Devices in CrowdStrike</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Sun, 09 Apr 2023 16:39:43 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-search-for-an-ioc-across-devices-in-crowdstrike-2958</link>
      <guid>https://community.ops.io/blinkops/how-to-search-for-an-ioc-across-devices-in-crowdstrike-2958</guid>
<description>&lt;p&gt;When malware is detected on one of your organization’s devices, it will have characteristics called Indicators of Compromise (IOCs), such as certain hash values, URLs, or IP addresses.&lt;/p&gt;

&lt;p&gt;You can use these IOCs to look across your organization’s devices to identify lateral movement associated with an attack.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to use CrowdStrike to detect if IOCs associated with malware are present on any other devices at your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Searching for an IOC Across CrowdStrike Hosts
&lt;/h2&gt;

&lt;p&gt;You can search for IOCs on other devices either by using the CrowdStrike Console or by using the CrowdStrike API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the CrowdStrike Console:
&lt;/h3&gt;

&lt;p&gt;1: First log in to the &lt;a href="https://falcon.crowdstrike.com/login/"&gt;CrowdStrike Falcon Console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2: Open the left-hand menu and select &lt;strong&gt;Investigate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/QkchbS8N-gvjhjCrbPQTkhKq_bJSDg91rROIkQZhqxI/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y0cHd5/MnQxNDNieXpsc2U2/aGpwLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/QkchbS8N-gvjhjCrbPQTkhKq_bJSDg91rROIkQZhqxI/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y0cHd5/MnQxNDNieXpsc2U2/aGpwLnBuZw" alt="Investigation Menu in the CrowdStrike Console" width="880" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3: Depending on your IOC type, choose the related link under the &lt;strong&gt;Search&lt;/strong&gt; section. For example, if you are looking for an IOC that is a domain, you can choose &lt;strong&gt;Bulk domains&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/GT4RlaTxL7bJx6oE4netRna-qLWdXv3x-bfpMgwQeDc/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RjYW0z/eWR2MmZzOHdieXdp/Nmo4LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/GT4RlaTxL7bJx6oE4netRna-qLWdXv3x-bfpMgwQeDc/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RjYW0z/eWR2MmZzOHdieXdp/Nmo4LnBuZw" alt="Searching IOCs by Domains in CrowdStrike" width="880" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4: Input your IOC value and specify the time range you care about. Then click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the results, you’ll see which hosts have observed the IOCs you’re investigating, along with process-level details.&lt;/p&gt;

&lt;p&gt;You may want to contain any additional hosts now associated with the IOC. We wrote a guide on containing hosts here. For your audit trail, you can export this data by hovering over either section and clicking the &lt;strong&gt;Export&lt;/strong&gt; icon.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the CrowdStrike Falcon API:
&lt;/h3&gt;

&lt;p&gt;The platform also offers an API that lets administrators programmatically manage their sensors. You can use the endpoint that geographically aligns with your specific CrowdStrike account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;US-1 “api.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;US-2 “api.us-2.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;US-GOV-1 “api.laggar.gcw.crowdstrike.com”&lt;/li&gt;
&lt;li&gt;EU-1 “api.eu-1.crowdstrike.com”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the examples below, we’ll use “api.us-2.crowdstrike.com”.&lt;/p&gt;

&lt;p&gt;CrowdStrike’s API documentation is available after you &lt;a href="https://falcon.crowdstrike.com/login/?next=%2Fdocumentation%2F"&gt;log in here&lt;/a&gt;, and you’ll see information about how to &lt;a href="https://www.crowdstrike.com/blog/tech-center/get-access-falcon-apis/"&gt;use OAuth2 for authenticating&lt;/a&gt; your requests.&lt;/p&gt;

&lt;p&gt;Before you start, you need to request an access token, passing your client ID and client secret. The token you receive is valid for 30 minutes, and every subsequent API call must include it.&lt;/p&gt;
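&lt;p&gt;As a rough sketch (with placeholder credentials), the token exchange looks like this in Python; /oauth2/token is CrowdStrike’s documented token endpoint:&lt;/p&gt;

```python
# Sketch: build the CrowdStrike OAuth2 token request and the auth header.
# The base URL and credentials here are placeholders for your own values.
BASE = "https://api.us-2.crowdstrike.com"

def token_request(client_id, client_secret):
    """Return (url, form_data) for the access-token call."""
    return (BASE + "/oauth2/token",
            {"client_id": client_id, "client_secret": client_secret})

def auth_header(access_token):
    """Bearer header to attach to subsequent API calls."""
    return {"Authorization": "Bearer " + access_token}

# With the requests library (not run here):
# url, data = token_request("MY_CLIENT_ID", "MY_CLIENT_SECRET")
# token = requests.post(url, data=data).json()["access_token"]
# requests.get(query_url, headers=auth_header(token))
```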

&lt;p&gt;Next, make a GET request to this endpoint with IOC type and value specified:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://api.us-2.crowdstrike.com/indicators/queries/devices/v1?type=sha256&amp;amp;value=XYZ
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the following IOC types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sha256&lt;/li&gt;
&lt;li&gt;md5&lt;/li&gt;
&lt;li&gt;domain&lt;/li&gt;
&lt;li&gt;ipv4&lt;/li&gt;
&lt;li&gt;ipv6&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use the &lt;strong&gt;limit&lt;/strong&gt; and &lt;strong&gt;offset&lt;/strong&gt; parameters to paginate the results. The response shows all the resources that have observed the specific IOC.&lt;/p&gt;
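&lt;p&gt;To avoid hand-assembling query strings, you can build the request URL programmatically. This Python sketch URL-encodes the IOC value and adds the optional &lt;strong&gt;limit&lt;/strong&gt; and &lt;strong&gt;offset&lt;/strong&gt; parameters:&lt;/p&gt;

```python
import urllib.parse

# Sketch: build the device-search URL shown above, URL-encoding the IOC
# value and adding optional limit/offset for pagination.
BASE = "https://api.us-2.crowdstrike.com"

def ioc_device_query(ioc_type, value, limit=None, offset=None):
    params = {"type": ioc_type, "value": value}
    if limit is not None:
        params["limit"] = limit
    if offset is not None:
        params["offset"] = offset
    return (BASE + "/indicators/queries/devices/v1?"
            + urllib.parse.urlencode(params))
```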

&lt;h3&gt;
  
  
  Automatically Searching Across Devices for IOCs with Blink:
&lt;/h3&gt;

&lt;p&gt;Checking if similar IOCs exist on other devices at your organization is one of many steps in responding to a security alert. While it isn’t difficult to do, it takes time and context-switching.&lt;/p&gt;

&lt;p&gt;With Blink, you can &lt;a href="https://library.blinkops.com/automations/search-crowdstrike-ioc-across-devices"&gt;handle this task automatically&lt;/a&gt; by just inputting an IOC type and value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/CikJKeMD0XuuBdpMEhNJOYJGRND2KHP9CEDoDsAOERE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2wxZzc0/MThub3B3cmNieDRm/c201LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/CikJKeMD0XuuBdpMEhNJOYJGRND2KHP9CEDoDsAOERE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2wxZzc0/MThub3B3cmNieDRm/c201LnBuZw" alt="Blink Automation: Search IOCs Across Devices in CrowdStrike" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these IOC inputs, this automation runs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Searches for other devices that have observed the IOC.&lt;/li&gt;
&lt;li&gt;Parses the results to format them into an easily readable report.&lt;/li&gt;
&lt;li&gt;Sends the report to the relevant SecOps team member via Slack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a simple automation, and that makes it easy to customize. For example, you can add it as a subflow in a larger incident response automation triggered by a CrowdStrike alert.&lt;/p&gt;

&lt;p&gt;You can also use any of the 5K automations in the &lt;a href="https://library.blinkops.com/automations/"&gt;Blink library&lt;/a&gt;, or build new ones from scratch to fit your unique needs. In Blink, there are hundreds of native integrations and thousands of actions available to make building easy.&lt;/p&gt;

&lt;p&gt;Start a &lt;a href="https://app.blinkops.com/signup/"&gt;free trial of Blink&lt;/a&gt; today to see how easy automation can be.&lt;/p&gt;

</description>
      <category>secops</category>
      <category>crowdstrike</category>
    </item>
    <item>
      <title>How to Get User Activity From Your Azure Logs</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Thu, 23 Mar 2023 18:14:47 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-get-user-activity-from-your-azure-logs-kfj</link>
      <guid>https://community.ops.io/blinkops/how-to-get-user-activity-from-your-azure-logs-kfj</guid>
      <description>&lt;p&gt;It’s important to be able to audit user activity in Azure, whether you are dealing with a security incident or just want to fully review the actions a user has taken.&lt;/p&gt;

&lt;p&gt;If one of your developers has their account compromised, reviewing their user activity can be a necessary task to ensure that they haven’t done anything malicious to your Azure account and resources, or exfiltrated data like access keys or secrets.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to retrieve all the activity logs for a given user in Azure to quickly assess the scope of the threat.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Get User Activity From Azure Logs
&lt;/h2&gt;

&lt;p&gt;Azure Monitor collects and organizes all log and performance data from Azure resources, and you can access the activity logs for the last 90 days through steps in the console or CLI commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the Azure Monitor Log:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Open the Azure console, and navigate to the &lt;strong&gt;Activity log&lt;/strong&gt; view. You can do this either by clicking into a specific resource or by searching for &lt;strong&gt;Azure Monitor&lt;/strong&gt; to see all activity logs across your account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/CPQ0EMedSMK5xkXb8OiVJbAlcZrgNz3v058YAiYIQE4/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL21hMmdm/YnNnMTV2ZG9hMWlj/OXByLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/CPQ0EMedSMK5xkXb8OiVJbAlcZrgNz3v058YAiYIQE4/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL21hMmdm/YnNnMTV2ZG9hMWlj/OXByLnBuZw" alt="Image description" width="880" height="438"&gt;&lt;/a&gt;Source: &lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=powershell"&gt;Azure documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Next, you can add the &lt;strong&gt;Events initiated by&lt;/strong&gt; filter to view only the activity logs related to a specific user.&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; You can also adjust the timespan filter to specify the full duration you care about. You can also select &lt;strong&gt;Edit columns&lt;/strong&gt; if there is additional information you want to view.&lt;br&gt;
&lt;strong&gt;4.&lt;/strong&gt; To export the logs, click on &lt;strong&gt;Download as CSV&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This method is relatively simple, but it does require logging in to the console and manually working through the steps. Let’s look at running this same search from the Azure CLI.&lt;/p&gt;
&lt;h3&gt;
  
  
  Using the Azure CLI:
&lt;/h3&gt;

&lt;p&gt;If you aren’t already set up with the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli"&gt;Azure CLI&lt;/a&gt;, you’ll need to install it locally. From there, you can run the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/monitor/activity-log?view=azure-cli-latest#az-monitor-activity-log-list"&gt;az monitor activity-log list&lt;/a&gt; command to list and query activity logs.&lt;/p&gt;

&lt;p&gt;Here is the format to get the activity logs for a specific user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az monitor activity-log list --caller [USER-NAME]  
--offset [TIME-RANGE]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The offset parameter is formatted as XXdXXh, and defaults to 6h if not specified.&lt;/p&gt;
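&lt;p&gt;If you script around this command, it can help to convert the offset into a concrete duration. This illustrative Python helper parses the XXdXXh format:&lt;/p&gt;

```python
import re
from datetime import timedelta

# Illustrative helper: turn an Azure CLI offset string like "14d" or
# "2d12h" into a timedelta, mirroring the XXdXXh format described above.
def parse_offset(offset):
    match = re.fullmatch(r"(?:(\d+)d)?(?:(\d+)h)?", offset)
    if match is None or offset == "":
        raise ValueError("expected an offset like 6h, 14d, or 2d12h")
    days, hours = (int(g) if g else 0 for g in match.groups())
    return timedelta(days=days, hours=hours)
```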

&lt;p&gt;Here’s an example command to get the activity logs for John Smith:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az monitor activity-log list --caller john.smith@blinkops.com  
--offset 14d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use these additional parameters to modify the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;--start-time&lt;/strong&gt; or &lt;strong&gt;--end-time&lt;/strong&gt;: use the format of date (yyyy-mm-dd), time (hh:mm:ss.xxxxx), and timezone (+/-hh:mm).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--resource-group&lt;/strong&gt;, &lt;strong&gt;--resource-id&lt;/strong&gt;, &lt;strong&gt;--namespace&lt;/strong&gt;: use this to narrow to only activities affecting those specific resource areas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--max-events&lt;/strong&gt;: use this to change the maximum number of results returned; the default is 50.&lt;/li&gt;
&lt;/ul&gt;
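&lt;p&gt;When wrapping this in a script, you can assemble the command and its optional flags programmatically. Here’s a small Python sketch (flag names match the command documentation):&lt;/p&gt;

```python
# Sketch: assemble the az CLI invocation from the parameters listed above,
# skipping any that are not provided.
def build_activity_log_cmd(caller, offset=None, resource_group=None,
                           max_events=None):
    cmd = ["az", "monitor", "activity-log", "list", "--caller", caller]
    if offset:
        cmd += ["--offset", offset]
    if resource_group:
        cmd += ["--resource-group", resource_group]
    if max_events:
        cmd += ["--max-events", str(max_events)]
    return cmd

# Usage (not run here): subprocess.run(build_activity_log_cmd(
#     "john.smith@blinkops.com", offset="14d"))
```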

&lt;p&gt;You can see all of the options in the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/monitor/activity-log?view=azure-cli-latest#az-monitor-activity-log-list"&gt;command documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This approach is more easily repeatable than the console steps, but it requires you to remember the commands and be familiar with the Azure CLI.&lt;/p&gt;

&lt;p&gt;With a no-code automation tool like Blink, you can run this task effortlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting User Activity from Azure Faster with Blink
&lt;/h2&gt;

&lt;p&gt;Retrieving user logs manually and adding them to a ticket could be a waste of valuable time, especially if your team is responding to a security incident.&lt;/p&gt;

&lt;p&gt;With Blink, an automation can be triggered to pull and enrich Azure activity logs and other information for a compromised user right away.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/ixr3R-wrxfWCBF9gvP2_7mNRfoghOiHfzQio9ZH2HBU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzl1dWQ4/aGdtYnVoaDl5dGN1/MjE5LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/ixr3R-wrxfWCBF9gvP2_7mNRfoghOiHfzQio9ZH2HBU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzl1dWQ4/aGdtYnVoaDl5dGN1/MjE5LnBuZw" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;br&gt;
Blink Automation: &lt;a href="https://library.blinkops.com/automations/get-user-activity-from-azure-logs"&gt;Get User Activity from Azure Logs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The automation shown above is in the Blink library and is set up as a self-service app: a team member specifies input parameters and gets all the activity logs sent to an email address.&lt;/p&gt;

&lt;p&gt;When it runs, it executes the following steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Fetches the logs from Azure Monitor for a specified email address.&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; Formats the logs for better readability.&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; Reports the results to the specified email address.&lt;/p&gt;

&lt;p&gt;If you’re handling a security incident, you can have this flow as part of an automation that is triggered by a malware or DLP alert for a given user. That way, you have all the logs and information you need to assess the risk for your organization.&lt;/p&gt;

&lt;p&gt;You can import this automation from the library into your account and customize it based on your organization’s needs. For example, you can drag-and-drop new actions into the canvas or set up conditional subflows.&lt;/p&gt;

&lt;p&gt;You can build your own automation from scratch or use one of our 5K pre-built automations today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.blinkops.com/signup"&gt;Start a free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time with us&lt;/a&gt; to see more.&lt;/p&gt;

</description>
      <category>azure</category>
    </item>
    <item>
      <title>How to Verify IOCs and Enrich Your Incident Response with VirusTotal</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Mon, 06 Mar 2023 19:03:01 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-verify-iocs-and-enrich-your-incident-response-with-virustotal-33bi</link>
      <guid>https://community.ops.io/blinkops/how-to-verify-iocs-and-enrich-your-incident-response-with-virustotal-33bi</guid>
      <description>&lt;p&gt;IOCs, or Indicators of Compromise, are pieces of information that can be used to detect malicious activity on a network or system. By using a threat intelligence tool like VirusTotal, organizations can get more information about IOCs like the type of threat detected, its origin, or even which antivirus engines are detecting it.&lt;/p&gt;

&lt;p&gt;In this guide, we'll show you how to use VirusTotal to verify an IOC and incorporate its results into an automated incident response workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying IOCs Using VirusTotal
&lt;/h2&gt;

&lt;p&gt;When endpoint detection tools like CrowdStrike or SentinelOne detect malware, they will alert organizations and the incident will list IOCs. IOCs include such data as the IP address or domain name of a suspicious website, hashes associated with malware, and various other details related to an attack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using VirusTotal on a Browser:
&lt;/h3&gt;

&lt;p&gt;If you are a SOC analyst and you want to know more information about the malicious website or software, you can go to the &lt;a href="https://www.virustotal.com/gui/home/search"&gt;VirusTotal website&lt;/a&gt;. VirusTotal is a free service offered by Google which allows users to upload files, enter URLs, search IP addresses or file hashes, and scan these against the platform's antivirus databases.&lt;/p&gt;

&lt;p&gt;Here’s an example of what you would see if you submitted a file hash value that isn’t malicious:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/a-ZMkqjezvODPBIIcgCCoE1Gdef1oIQX74onlPpyDMU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VuOXA0/N2ZxdDQ0bjlxNnhr/ajI3LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/a-ZMkqjezvODPBIIcgCCoE1Gdef1oIQX74onlPpyDMU/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VuOXA0/N2ZxdDQ0bjlxNnhr/ajI3LnBuZw" alt="VirusTotal example of non malicious url" width="880" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have a malicious file hash, you’ll see this information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/4gqT0GX-o5LOyaDTOVC8spqoyWGDjEdI60PwhsGQqwE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RuNGFq/YmU1a2NieTZxZjZu/Y3RqLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/4gqT0GX-o5LOyaDTOVC8spqoyWGDjEdI60PwhsGQqwE/w:880/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3RuNGFq/YmU1a2NieTZxZjZu/Y3RqLnBuZw" alt="VirusTotal example of malicious hash value" width="880" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll be able to view details like creation time, when it was first submitted, when it was last analyzed, how it behaves, community comments, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the VirusTotal API:
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://developers.virustotal.com/reference/overview"&gt;VirusTotal API&lt;/a&gt; allows users to integrate their own custom scripts into the platform, enabling more advanced incident response workflows. For example, a script can be used to automatically upload suspicious files for scanning against the platform's databases.&lt;/p&gt;

&lt;p&gt;To access the VirusTotal API, you’ll need to create an account and then you’ll have a free public API key you can use.&lt;/p&gt;

&lt;p&gt;You can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/file-info"&gt;get a file report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/files/{id} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/scan-url"&gt;get a URL analysis report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/urls/{id} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
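&lt;p&gt;For the URL endpoint, the {id} value can be computed as the unpadded base64 of the URL itself, per the VirusTotal documentation. A quick Python helper:&lt;/p&gt;

```python
import base64

# Per the VirusTotal v3 docs, a URL's identifier can be computed as the
# base64 representation of the URL with trailing padding removed.
def vt_url_id(url):
    return base64.urlsafe_b64encode(url.encode()).decode().strip("=")

# The result is what you substitute for {id} in
# https://www.virustotal.com/api/v3/urls/{id}
```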



&lt;p&gt;And you can use this endpoint to &lt;a href="https://developers.virustotal.com/reference/ip-info"&gt;get an IP address report&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --request GET \
--url https://www.virustotal.com/api/v3/ip_addresses/{ip} \
--header 'x-apikey: &amp;lt;your-API-key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you review the &lt;a href="https://developers.virustotal.com/reference/overview"&gt;API documentation&lt;/a&gt;, you may see other popular endpoints that better map to your incident response process.&lt;/p&gt;

&lt;p&gt;In the context of responding to an alert, you could save yourself a step by setting up a script to automatically call these endpoints with each respective piece of information and enrich your alerts.&lt;/p&gt;
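&lt;p&gt;A minimal sketch of such a script’s routing logic: map each IOC type to its report endpoint so one enrichment function covers files, URLs, and IP addresses (the type labels here are illustrative):&lt;/p&gt;

```python
# Sketch: map an IOC type to the matching VirusTotal v3 endpoint from the
# examples above, so one enrichment script can handle any IOC.
VT_BASE = "https://www.virustotal.com/api/v3"

ENDPOINTS = {
    "sha256": "/files/{}",
    "md5": "/files/{}",
    "url": "/urls/{}",       # expects the precomputed URL identifier
    "ip": "/ip_addresses/{}",
}

def vt_report_url(ioc_type, ioc_id):
    if ioc_type not in ENDPOINTS:
        raise ValueError("unsupported IOC type: " + ioc_type)
    return VT_BASE + ENDPOINTS[ioc_type].format(ioc_id)
```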

&lt;h2&gt;
  
  
  Taking Action Based on VirusTotal Reports with Blink
&lt;/h2&gt;

&lt;p&gt;When you enrich alerts with VirusTotal reports, you save time that would otherwise be spent manually reviewing each detail. Instead of stopping at enrichment, what if you could use the results to kick off conditional responses?&lt;/p&gt;

&lt;p&gt;For example, if an IOC is verified and has a high threat score, should you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Immediately contain that device using your endpoint detection tool?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search other devices across your organization to see if that IOC is present?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the IOC to your firewall and EDR blocking rules?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These might be the right steps, but if you scripted them all, you might not be able to control your automations easily at each stage.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can build event-based automations to facilitate incident response with conditional workflows and approval steps along the way. When a new alert is raised in an EDR like CrowdStrike or SentinelOne, you could immediately enrich that alert with reports from VirusTotal on each IOC.&lt;/p&gt;

&lt;p&gt;Depending on the threat level, you can speed up your time-to-action with conditional response steps to fully leverage the combined power of the security and communication tools you’re already using.&lt;/p&gt;

&lt;p&gt;You can start building a better response process with this pre-built &lt;a href="https://library.blinkops.com/automations/verify-ioc-with-virustotal"&gt;Blink automation that verifies IOCs with VirusTotal&lt;/a&gt;. We have over 5K ready-to-use automations just like it in our library. By combining and customizing them, you’ll be able to create the workflows that match each security situation.&lt;/p&gt;

&lt;p&gt;Get started with a &lt;a href="https://app.blinkops.com/signup"&gt;free trial of Blink&lt;/a&gt; today or &lt;a href="https://www.blinkops.com/schedule-time"&gt;schedule time to see a demo&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Migrate from AWS EC2 Launch Configurations to Launch Templates</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Tue, 24 Jan 2023 22:22:15 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-migrate-from-aws-ec2-launch-configurations-to-launch-templates-636</link>
      <guid>https://community.ops.io/blinkops/how-to-migrate-from-aws-ec2-launch-configurations-to-launch-templates-636</guid>
      <description>&lt;p&gt;For EC2 Auto Scaling, AWS is currently urging all customers to switch from using launch configurations to launch templates instead.&lt;/p&gt;

&lt;p&gt;As part of this push, launch configurations do not support Amazon EC2 instance types released after December 31, 2022. In practice, AWS could release a more cost-effective instance type that better fits your use case, but you won’t be able to take advantage of it until you migrate from launch configurations to launch templates.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll explain the basics of launch configurations and launch templates, and show the steps for migrating them over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch Configurations vs. Launch Templates
&lt;/h2&gt;

&lt;p&gt;Amazon EC2 Auto Scaling groups need information to guide how to launch new EC2 instances when they scale. They rely on one source of truth: either an EC2 instance (automatically converted to a launch configuration), a specified launch configuration, or a launch template.&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Configurations
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html"&gt;Launch configurations&lt;/a&gt; are instance configuration settings utilized by an EC2 Auto Scaling group to inform the new EC2 instances it launches. You must specify standard information when setting up a launch configuration, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ID of the Amazon Machine Image (AMI)&lt;/li&gt;
&lt;li&gt;Instance Type&lt;/li&gt;
&lt;li&gt;Key Pair&lt;/li&gt;
&lt;li&gt;Security Group(s)&lt;/li&gt;
&lt;li&gt;Block Device Mapping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can base a launch configuration on the settings of a previously launched EC2 instance if they match your use case. Once you create your launch configuration, you cannot make changes to it; you can only create a new one and update your Auto Scaling group to use it. If you do that, existing instances will not be immediately updated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Templates
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html"&gt;Launch templates&lt;/a&gt; function similarly to launch configurations by specifying instance configuration information with similar information you would place in a launch configuration file. The major difference between launch configurations and launch templates is that you can set up multiple versions of a launch template.&lt;/p&gt;

&lt;p&gt;Versioning launch templates lets you establish a subset of the complete parameter set. You can reuse these to set up other versions of the base launch template. For example, you can create a launch template with a defined base configuration without including an AMI or user data script.&lt;/p&gt;

&lt;p&gt;After creating the launch template, you can add a new version with the AMI and user script for testing. You end up with two versions of the template. That means you can always have a base configuration for reference, then create new template versions as needed. It’s also possible to delete test template versions when they are no longer necessary.&lt;/p&gt;

&lt;p&gt;Launch templates are recommended because they give you access to the latest improvements. Some Amazon EC2 Auto Scaling features aren’t available when you use launch configurations, and launch templates also let you use newer-generation Amazon EC2 features.&lt;/p&gt;

&lt;p&gt;While parameters in launch templates are optional, if you don’t include an AMI in the template, you can’t add one when creating an Auto Scaling group. If you specify an AMI but leave out the instance type, you can add one or more instance types when setting up your Auto Scaling group.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Migrate Launch Configurations to Launch Templates
&lt;/h2&gt;

&lt;p&gt;To copy launch configurations and convert them to launch templates, you will need to use the AWS Console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrating One Launch Configuration
&lt;/h3&gt;

&lt;p&gt;Anyone currently using launch configurations can migrate them over to launch templates by copying them into the console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;EC2 console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Look for &lt;strong&gt;Auto Scaling&lt;/strong&gt; in the navigation pane, then choose &lt;strong&gt;Launch Configurations&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose the launch configuration to copy.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy to launch template&lt;/strong&gt;, then &lt;strong&gt;Copy selected&lt;/strong&gt;, to set up a new template with the same name and options as your configuration.&lt;/li&gt;
&lt;li&gt;Enter the name of your current launch configuration or a new name in the &lt;strong&gt;New launch template&lt;/strong&gt; name field. The name must be unique.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Migrating All Launch Configurations‍
&lt;/h3&gt;

&lt;p&gt;Follow the steps below if you wish to move all your launch configurations to launch templates in the console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;EC2 console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Look for &lt;strong&gt;Auto Scaling&lt;/strong&gt; in the navigation pane, then choose &lt;strong&gt;Launch Configurations&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose the launch configuration to copy.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy to launch template&lt;/strong&gt;, then &lt;strong&gt;Copy all&lt;/strong&gt;, to copy all your configurations within the current Region to new launch templates named after each configuration.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Copy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Updating Auto Scaling Groups to Use Launch Templates
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using the AWS Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/ec2/"&gt;AWS EC2 console&lt;/a&gt;, open the navigation pane, and select &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on the check box associated with the Auto Scaling group you want to update.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Details&lt;/strong&gt; tab, choose &lt;strong&gt;Launch configuration&lt;/strong&gt;, then click &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Switch to launch template&lt;/strong&gt;, then select the appropriate launch template.&lt;/li&gt;
&lt;li&gt;You’ll need to select a &lt;strong&gt;Version&lt;/strong&gt; for the launch template. You can customize whether the Auto Scaling group uses a default version when it scales out, or uses the latest version.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Update&lt;/strong&gt; to apply the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the AWS CLI:
&lt;/h3&gt;

&lt;p&gt;You can run this &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html"&gt;update-auto-scaling-group&lt;/a&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling update-auto-scaling-group
--auto-scaling-group-name &amp;lt;value&amp;gt;
--launch-template &amp;lt;value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An Auto Scaling group cannot have both a launch configuration and a launch template set at the same time. To switch your Auto Scaling group from a launch configuration to a launch template, include a launch template value in this command, as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling update-auto-scaling-group
    --auto-scaling-group-name my-asg
    --launch-template LaunchTemplateName=my-template-for-auto-scaling,Version='2'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’ve migrated your Auto Scaling groups to use launch templates instead of launch configurations, you should stop creating new launch configurations and only create launch templates and new versions.&lt;/p&gt;
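&lt;p&gt;To confirm that nothing was missed, one approach (shown here with a JMESPath filter; adjust to your needs) is to list any Auto Scaling groups in the current Region that still reference a launch configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling describe-auto-scaling-groups \
    --query 'AutoScalingGroups[?LaunchConfigurationName!=`null`].AutoScalingGroupName'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;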

&lt;h2&gt;
  
  
  Replacing Launch Configurations Automatically with Blink
&lt;/h2&gt;

&lt;p&gt;If you have several Auto Scaling groups, making these updates by hand can be slow and tedious. Manual updates also can’t guarantee that new launch configurations won’t be created and used in your AWS EC2 Auto Scaling groups.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can use a no-code automation to quickly identify Auto Scaling groups that are using launch configurations, convert those launch configurations to launch templates, and update the groups accordingly. You can also set up checks to detect new launch configurations and notify owners that they should use launch templates instead.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and migrate fully to launch templates today.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>WEBINAR: 2023 CloudOps and Cybersecurity Predictions</title>
      <dc:creator>Brad Johnson</dc:creator>
      <pubDate>Thu, 12 Jan 2023 00:17:10 +0000</pubDate>
      <link>https://community.ops.io/blinkops/webinar-2023-cloudops-and-cybersecurity-predictions-1gl0</link>
      <guid>https://community.ops.io/blinkops/webinar-2023-cloudops-and-cybersecurity-predictions-1gl0</guid>
      <description>&lt;p&gt;Join the Blink team, cofounder and CTO (and Ops Community member!) &lt;a class="mentioned-user" href="https://community.ops.io/haviv"&gt;@haviv&lt;/a&gt;, along with special guest Sarbjeet Johal for a webinar focused on cybersecurity and cloud recommended practices.&lt;/p&gt;

&lt;p&gt;Learn bold predictions about trends in cybersecurity automation and cloud operations for 2023 from leading cloud executives. Blink’s CTO, Haviv Rosh, will be joined by cloud economist, Sarbjeet Johal, and other special guests to discuss the news and technologies that will matter most to enterprise leaders like CISOs, CTOs, and CEOs this year.&lt;/p&gt;

&lt;p&gt;Get insights from our panel of technology executives who have led technology strategy for teams at companies like ServiceNow, Oracle, and Palo Alto Networks to help you prepare for a secure and resilient New Year.&lt;/p&gt;

&lt;p&gt;Register here: &lt;a href="https://www.blinkops.com/webinar/2023-cloudops-and-cybersecurity-predictions"&gt;https://www.blinkops.com/webinar/2023-cloudops-and-cybersecurity-predictions&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudops</category>
      <category>secops</category>
      <category>webinar</category>
      <category>video</category>
    </item>
    <item>
      <title>How to Enable Autoscaling for a GKE Cluster</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Mon, 12 Dec 2022 22:01:40 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-enable-autoscaling-for-a-gke-cluster-5g6p</link>
      <guid>https://community.ops.io/blinkops/how-to-enable-autoscaling-for-a-gke-cluster-5g6p</guid>
      <description>&lt;p&gt;Autoscaling is an &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler"&gt;automated, node provisioning process&lt;/a&gt; that scales your GKE clusters depending on their workload needs. As a result, GKE clusters with autoscaling enabled scale up their node pool to offer more workload availability when demand is high and scale down their node pool to save on costs when demand is low.&lt;/p&gt;

&lt;p&gt;You can control your cluster’s autoscaling by specifying a minimum and maximum number of nodes. You can also choose whether to use the default, balanced autoscaling method or the optimize-utilization setting.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk you through the basics of GKE cluster autoscaling and show you how to enable it for your node pools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Your GKE Autoscaling Options
&lt;/h2&gt;

&lt;p&gt;When you enable autoscaling, you have the ability to set guardrails and preferences:&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum and Maximum Nodes
&lt;/h3&gt;

&lt;p&gt;One of the decisions you need to make is the minimum and maximum number of nodes, either per zone (minimum nodes, maximum nodes) or in total (total minimum nodes, total maximum nodes) across your node pools.&lt;/p&gt;

&lt;p&gt;For a minimum, you’ll always need at least 1 node for each zone the node pool is in. The autoscaler will never scale down to zero because at least 1 node is needed to run the system Pods.&lt;/p&gt;

&lt;p&gt;When setting a maximum, you might want to consider the implications of a dramatic scale up. For example, if you scale beyond the IP address space you have allocated, you will receive an error and no longer be able to add new nodes. Consider these types of dependencies when selecting a maximum.&lt;/p&gt;

&lt;h3&gt;
  
  
  Balanced vs. Optimize-Utilization
&lt;/h3&gt;

&lt;p&gt;There are two types of autoscaling profiles. The default profile is &lt;strong&gt;balanced&lt;/strong&gt;, which means it scales up and down with a balance between availability of resources and node utilization. For example, balanced autoscaling allows for more nodes with lower utilization so that they are available if workloads increase.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;optimize-utilization&lt;/strong&gt; autoscaling profile by comparison prioritizes concentrating utilization in fewer nodes, which enables the removal of underutilized nodes. The result is faster scale downs and lower resource costs. If you choose this profile, you may experience performance delays when new workloads require new resources to be provisioned. Depending on your performance requirements, optimize-utilization may be a useful way to lower your GKE operating costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Autoscaling for an Existing Node Pool
&lt;/h2&gt;

&lt;p&gt;If you want to enable autoscaling, you can start by updating your &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler"&gt;existing node pools&lt;/a&gt; using the GCP Console or the gcloud CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Google Cloud console, navigate to the &lt;strong&gt;Google Kubernetes Engine&lt;/strong&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the cluster you want to update from the displayed cluster list, and go to the &lt;strong&gt;Nodes&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Node Pools&lt;/strong&gt;, select the node pool that you want to update and click &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Size&lt;/strong&gt;, check &lt;strong&gt;Enable autoscaling&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specify values for &lt;strong&gt;Minimum number of nodes&lt;/strong&gt; and &lt;strong&gt;Maximum number of nodes&lt;/strong&gt;, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the gcloud CLI:
&lt;/h3&gt;

&lt;p&gt;You can use the following command to enable autoscaling for an existing node pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters update CLUSTER_NAME \
    --enable-autoscaling \
    --autoscaling-profile=PROFILE \
    --node-pool=POOL_NAME \
    --min-nodes=MIN_NODES \
    --max-nodes=MAX_NODES \
    --region=COMPUTE_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Plug in your details for the cluster, node pool, and region. If you only have one node pool, you can use &lt;strong&gt;default-pool&lt;/strong&gt; as your value. The region value should be your &lt;a href="https://cloud.google.com/compute/docs/regions-zones#available"&gt;Compute Engine region&lt;/a&gt;, or specific zone if it’s a zonal cluster.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;--enable-autoscaling&lt;/strong&gt; flag turns autoscaling on. You can customize your configuration with &lt;strong&gt;--autoscaling-profile&lt;/strong&gt; (balanced or optimize-utilization), &lt;strong&gt;--min-nodes&lt;/strong&gt;, &lt;strong&gt;--max-nodes&lt;/strong&gt;, &lt;strong&gt;--total-min-nodes&lt;/strong&gt;, and &lt;strong&gt;--total-max-nodes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s an example of enabling autoscaling with an optimize-utilization profile for the &lt;strong&gt;pool-1&lt;/strong&gt; node pool of the &lt;strong&gt;demo-1&lt;/strong&gt; cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters update demo-1 \
    --enable-autoscaling \
    --autoscaling-profile=optimize-utilization \
    --node-pool=pool-1 \
    --min-nodes=1 \
    --max-nodes=4 \
    --region=us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
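&lt;p&gt;To verify that a node pool picked up the new setting, one option is to describe it and read its autoscaling block (using the example cluster and node pool names from above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container node-pools describe pool-1 \
    --cluster=demo-1 \
    --region=us-central1 \
    --format="value(autoscaling.enabled)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;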



&lt;h2&gt;
  
  
  Enabling Autoscaling When Creating a New GKE Cluster or Node Pool
&lt;/h2&gt;

&lt;p&gt;If you are creating new clusters or node pools, you can enable GKE autoscaling by using settings in the GCP Console or gcloud CLI flags:&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Google Cloud console, go to the &lt;strong&gt;Google Kubernetes Engine&lt;/strong&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To set up a new cluster, click &lt;strong&gt;Create&lt;/strong&gt;. To create a new node pool, select an existing cluster and click &lt;strong&gt;Add Node Pool&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specify details for your cluster or node pool. For a new cluster, click &lt;strong&gt;default-pool&lt;/strong&gt; under &lt;strong&gt;Node Pools&lt;/strong&gt; from your navigation pane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, you need to select the &lt;strong&gt;Enable autoscaling&lt;/strong&gt; checkbox. For node pools, you’ll find it under the &lt;strong&gt;Size&lt;/strong&gt; section. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modify the values for &lt;strong&gt;Minimum number of nodes&lt;/strong&gt; and &lt;strong&gt;Maximum number of nodes&lt;/strong&gt; according to your requirements, and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using the gcloud CLI:
&lt;/h3&gt;

&lt;p&gt;When you are creating new clusters or new node pools, you can ensure that they have autoscaling enabled by including the &lt;strong&gt;--enable-autoscaling&lt;/strong&gt; flag and specifying &lt;strong&gt;--min-nodes&lt;/strong&gt; and &lt;strong&gt;--max-nodes&lt;/strong&gt; values.&lt;/p&gt;

&lt;p&gt;Here’s an example of creating a new cluster with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create my-cluster --enable-autoscaling \
    --num-nodes=30 \
    --min-nodes=15 --max-nodes=50 \
    --region=us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example of creating a new node pool with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container node-pools create node-pool-1 
    --cluster=sample-cluster 
    --num-nodes=5
    --enable-autoscaling
    --min-nodes=5 --max-nodes=15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By updating your existing node pools, and creating new clusters and node pools with autoscaling enabled, you can ensure your clusters automatically adapt to changing workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Checking Whether Autoscaling is Enabled with Blink
&lt;/h2&gt;

&lt;p&gt;Autoscaling can make your GKE clusters more effective and efficient. If you want to make it a standard that your organization's GKE clusters have autoscaling enabled, then you can manually enable it using the steps above. Unfortunately, that requires many manual updates and doesn’t ensure that future clusters will also have autoscaling enabled.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can use no-code automations to quickly check if you have any GKE clusters without autoscaling enabled. You can then send a notification to Slack on a regular basis if any clusters are missing this setting. With adjustments and information only a click away, you can respond far more quickly.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and better manage your GKE clusters today.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>gke</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>Operational Excellence in a Cloud-Native World: No-Code Automation and DevOps</title>
      <dc:creator>Haviv Rosh</dc:creator>
      <pubDate>Fri, 02 Dec 2022 17:35:53 +0000</pubDate>
      <link>https://community.ops.io/blinkops/operational-excellence-in-a-cloud-native-world-no-code-automation-and-devops-19k2</link>
      <guid>https://community.ops.io/blinkops/operational-excellence-in-a-cloud-native-world-no-code-automation-and-devops-19k2</guid>
      <description>&lt;p&gt;In “&lt;a href="https://community.ops.io/blinkops/operational-excellence-in-a-cloud-native-world-what-is-operational-excellence-4oo4"&gt;What is Operational Excellence?,&lt;/a&gt;” the first post in this series, we defined what operational excellence means in today’s modern, cloud-native world. Consulting two popular frameworks, &lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc"&gt;AWS Well-Architected&lt;/a&gt; and &lt;a href="https://www.devops-research.com/research.html"&gt;DORA’s five metrics&lt;/a&gt;, we determined how to appropriately measure DevOps efficiency and effectiveness. After learning from both of these frameworks, we compiled a list of &lt;strong&gt;speed (performance)&lt;/strong&gt;, &lt;strong&gt;scalability&lt;/strong&gt;, and &lt;strong&gt;reliability&lt;/strong&gt; as key indicators of operational excellence.&lt;/p&gt;

&lt;p&gt;Today, cloud engineering teams are responsible for managing hundreds of cloud tools and services across different environments. Getting all these cloud services to work together requires major configuration and maintenance effort. For most teams, that means manual integration projects, many dependencies, and writing glue code.&lt;/p&gt;

&lt;p&gt;But, it doesn’t have to stay this way. &lt;strong&gt;No-code automation&lt;/strong&gt; is changing how operations teams build their cloud workflows. Platforms like Blink now come with purpose-built automations for different cloud tools and services, reducing the effort required to build new workflows. In the &lt;a href="http://library.blinkops.com/"&gt;Blink Automation Library,&lt;/a&gt; there are more than 5,000 cloud automations available for teams to deploy today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does DevOps Automation Mean Today?
&lt;/h2&gt;

&lt;p&gt;The way cloud engineering teams think about “DevOps automation” has shifted over the last few years. Today, DevOps automation means more than just setting up CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;Widespread adoption of CI/CD tools has led to a misguided belief that DevOps teams are primarily responsible for integration and delivery workflows. But these activities all occur before code is deployed into production. DevOps teams are also responsible for the reliable operation and maintenance of in-production cloud applications, involving tasks that have their own complex workflows. Many of these workflows involve manual processes that are not easily or adequately solved by CI/CD tools.&lt;/p&gt;

&lt;p&gt;For example, how are you supposed to use Jenkins or any other CI/CD platforms to solve these kinds of problems?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-deleting-unused-aws-iam-roles"&gt;Finding and Deleting Unused AWS IAM Roles&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-find-and-remove-old-ebs-snapshots"&gt;Finding and Removing Old EBS Snapshots&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-resizing-amazon-ec2-instances-with-low-cpu-usage"&gt;Finding and Resizing Amazon EC2 Instances with Low CPU Usage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-find-ec2-instances-scheduled-to-retire-soon"&gt;Finding EC2 Instances Scheduled To Retire Soon&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/enforcing-mandatory-tags-across-aws-resources"&gt;Enforcing Mandatory Tags Across Your AWS Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-scale-down-aws-eks-clusters-nightly-to-lower-ec2-costs"&gt;Scaling Down AWS EKS Clusters at Night&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Azure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-deleting-unattached-disks-with-the-azure-cli"&gt;Finding and Deleting Unattached Disks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-resizing-azure-virtual-machines-with-low-cpu-usage"&gt;Finding and Resizing Azure Virtual Machines with Low CPU Usage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-disabling-non-active-users-in-azure"&gt;Finding and Disabling Non-Active Users in Azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-removing-unused-azure-virtual-network-gateways"&gt;Finding and Removing Unused Azure Virtual Network Gateways&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-pause-your-aks-clusters-nightly"&gt;Pausing Your AKS Clusters Nightly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/enforcing-mandatory-tags-across-azure-resources"&gt;Enforcing Mandatory Tags Across Your Azure Resources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GCP:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-pause-your-gke-cluster-nightly"&gt;Pausing Your GKE Cluster at Night&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-resizing-gcp-compute-instances-with-low-cpu-usage"&gt;Finding and Resizing GCP Compute Instances with Low CPU Usage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/lowering-costs-for-long-running-gcp-instances-with-committed-use-discounts"&gt;Identifying Long Running GCP Instances and Applying Committed Use Discounts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/finding-and-removing-unattached-gcp-external-ip-addresses"&gt;Finding and Removing Unattached GCP External IP Addresses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/how-to-find-and-delete-unattached-gcp-disks"&gt;Finding and Deleting Unattached GCP Disks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.blinkops.com/blog/enforcing-labels-and-tags-across-your-gcp-resources"&gt;Enforcing Labels and Tags Across Your GCP Resources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s just considering operational tasks related to the major cloud providers. Don’t forget about identity management, security, observability, incident response, communication, and other third-party tools necessary for running business applications. The unfortunate reality is that CI/CD and IaC tools cannot run operational or business workflows because they are unable to react to events that happen in the cloud (such as new resources being created, new vulnerabilities or incidents, etc.). &lt;/p&gt;

&lt;p&gt;Without an automation platform dedicated to managing operational workflows and business processes, DevOps engineers are left to navigate serverless/microservices architectures themselves. When it comes to building global operational workflows, manual scripts and CI/CD hacks won’t cut it for achieving reliability objectives or meeting customer SLAs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking the Cycle of DevOps Burnout Culture
&lt;/h2&gt;

&lt;p&gt;Lack of a dedicated platform for maintaining cloud-native workflows transfers operational burden to DevOps engineers who must stitch solutions together manually. Workflows are slow to build and brittle to run. Updates take significant development time and effort.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Wasn’t DevOps about improving engineering culture and efficiency?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/devops/what-is-devops/"&gt;AWS states&lt;/a&gt; that “DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.” &lt;/p&gt;

&lt;p&gt;By now, many organizations have adopted the cultural philosophies and practices of DevOps. Agile planning and tools are common, along with agile-based development. But manually written scripts are still scattered in Git repositories, APIs still get manually glued together, and plugin updates are fraught processes that risk costly downtime.&lt;/p&gt;

&lt;p&gt;Your level of commitment to DevOps philosophies isn’t enough when your tools and practices don’t support your workflows. &lt;/p&gt;

&lt;p&gt;DevOps culture is about more than just making a commitment to DevOps methodology. It is the real experience of being a DevOps contributor on a software development team. For most teams, that means lots of stress, too much work, and mountains of distracting service requests from developers and business teams.&lt;/p&gt;

&lt;p&gt;The daily experience for DevOps engineers is filled with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context-switching:&lt;/strong&gt; Every day, DevOps engineers get notifications from monitoring tools, incident management platforms, project management tools, and communications platforms like Slack. They continuously receive urgent inbound service requests, get assigned on-call duty, and are still accountable for finishing their scheduled work. All the while, DevOps engineers must log in and out of different cloud tools, context-switching between different tasks and platforms costing significant time and cognitive overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stress and burnout:&lt;/strong&gt; DevOps engineers experience an overall lack of control about what they’re working on day-to-day. With many demands on their time and too few skilled engineers to get everything done, DevOps practitioners are especially prone to burnout and churn. According to the &lt;a href="https://www.devops-research.com/research.html"&gt;DORA research team,&lt;/a&gt; having good team communication is a major factor for DevOps success. They found that “stable teams where information flows freely have lower levels of burnout.” Meanwhile, those affected by poor organizational communication are often the most vulnerable, as “employees from underrepresented groups reported higher levels of burnout.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Poor knowledge transfer:&lt;/strong&gt; Many operational processes and workflows lack proper documentation. When automations exist, they’re often only usable by the DevOps engineer who built them. Sometimes, DevOps engineers are unaware a relevant workflow already exists elsewhere in their organization and end up duplicating effort. This problem is exacerbated when employees leave the organization, taking valuable institutional knowledge with them. Meanwhile, skilled DevOps engineers are more difficult than ever to hire and retain.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Breaking the cycle of DevOps burnout culture requires being realistic about the demands being placed on DevOps teams and contributors. It’s critical that leaders establish clear expectations with teams and individual contributors up front, and continuously check in with direct reports to ensure they remain aligned on the correct objectives. Including DevOps stakeholders in decision making processes early and often ensures that hands-on operational wisdom is being considered during planning processes. Lastly, it’s important to prioritize producing complete and comprehensive documentation in order to aid knowledge transfer and reduce burnout.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform Engineering, Internal Developer Portals (IDPs), and Self-Service Automation
&lt;/h2&gt;

&lt;p&gt;This past October, &lt;a href="https://www.gartner.com/en/articles/what-is-platform-engineering"&gt;Gartner&lt;/a&gt; published an article on &lt;strong&gt;platform engineering,&lt;/strong&gt; which is an emerging trend within digital transformation efforts that “improves developer experience and productivity by providing self-service capabilities with automated infrastructure operations.” This effort is deeply rooted in business objectives. As Gartner puts it, “the goal is a frictionless, self-service developer experience that offers the right capabilities to enable developers and others to produce valuable software with as little overhead as possible.”&lt;/p&gt;

&lt;p&gt;Recently, we’ve seen growing popularity, both commercially and in the open source world, of what’s been termed internal developer portals (IDPs). These are user interfaces that allow developers to request services on-demand. For example, anytime a developer needs a new development environment, they can request one on-demand. IDPs improve the internal developer experience for an organization, but they are limited in the types of workflows you can create. &lt;/p&gt;

&lt;p&gt;Gartner found that “initial platform-building efforts often begin with internal developer portals, as these are most mature. IDPs provide a curated set of tools, capabilities and processes. They are selected by subject matter experts and packaged for easy consumption by development teams. The platform team, in close consultation with the developers they support, must determine which approach is best for their unique circumstances.” However, the limitations of IDPs mean only very specific, developer-focused cloud workflows are solved.&lt;/p&gt;

&lt;p&gt;With Blink, we decided to extend the utility of an IDP to all of an organization’s operational workflows. Using the Blink Self-Service Portal, you share automations that empower users to request permissions, provision cloud environments, onboard or offboard team members, initiate password resets, automate software installations, and many more workflows common to cloud-native teams. Blink provides a single system-of-action for DevOps engineers to build all the workflows that enable business teams and their whole organization, in addition to developers.&lt;/p&gt;

&lt;p&gt;Blink delivers a more collaborative operational model in which relevant automations are always available to internal teams on-demand. The Blink Self-Service Portal makes it easy to proactively support your coworkers, speed up business processes, and free yourself to focus on other projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Operational Excellence in DevOps?
&lt;/h2&gt;

&lt;p&gt;At the end of the day, the best software engineering teams build better, faster, more reliable applications. Their internal operations workflows are a competitive advantage that helps them remain agile while scaling their business reliably. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;So what do speed, scalability, and reliability truly mean from a DevOps perspective?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Speed and DevOps
&lt;/h3&gt;

&lt;p&gt;Looking at outcomes, speed means being agile in response to changing customer expectations and new technologies. Businesses want to be fast to adopt and integrate with new technologies, and speed here is a competitive advantage. Integrating new cloud tools or services is costly in terms of the time and effort required. Integrations are typically tedious, manual processes and require onboarding time to learn the vocabulary and nuances of a different tool or platform. Businesses that adopt new cloud technologies more rapidly will deliver new features and capabilities faster, gain better insights, and outperform competitors.&lt;/p&gt;

&lt;p&gt;Speed also means operational efficiency. Three of the five DORA metrics are directly related to speed: deployment frequency, lead time for changes, and time to restore service. Most cloud-native teams have adopted CI/CD and IaC tools in order to solve inefficiencies in these areas.&lt;/p&gt;

&lt;p&gt;Speed also takes the form of improved SLAs for customers and mean-time-to-response (MTTR) when troubleshooting performance or security issues. Offering faster, more reliable services is a competitive advantage. Responding to incidents faster prevents outages and costly downtime. According to the &lt;a href="https://www.devops-research.com/research.html"&gt;DORA research team,&lt;/a&gt; 28% of respondents take 1-7 days to restore service when experiencing stability issues. An additional 21% of the lowest performers take between 1-6 months to resolve an issue!&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability and DevOps
&lt;/h3&gt;

&lt;p&gt;Scalability affects multiple objectives in a DevOps context. It’s helpful to evaluate your ability to scale across three different axes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability of processes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Do your existing processes scale to accommodate increased demand or team growth?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;When failures occur, is documentation readily available and actionable to resolve issues?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Is there an established process for new integrations or creating new workflows?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scalability of infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Do you have established workflows for scaling infrastructure up or out?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Does your infrastructure accommodate rapid fluctuations in demand?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How do you manage cloud costs at scale?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Do you have processes in place to prevent unnecessary cloud spend?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;What workflows are in place to ensure outages are avoided or quickly resolved?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scalability of communications&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;How many communication channels does your organization use?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How difficult is it to coordinate across teams or channels?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How difficult is it to create actionable alerts for relevant stakeholders?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Do DevOps engineers know where to find relevant information?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No-code automation makes it easier to scale your cloud infrastructure while staying agile in the face of the operational challenges of managing distributed cloud applications at scale. In a world of microservices and countless cloud tools, it’s more important than ever for DevOps engineers to leverage automation to abstract away ever-increasing complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliability and DevOps
&lt;/h3&gt;

&lt;p&gt;Reliability is both an outcome and a predictor of organizational excellence. The &lt;a href="https://www.devops-research.com/research.html"&gt;DORA research team&lt;/a&gt; found that “both the practices we associate with reliability engineering and the extent to which people report meeting their reliability expectations are powerful predictors of high levels of organizational performance.” The authors recommend setting clear reliability goals and making sure those goals tie back to concrete, measurable reliability metrics.&lt;/p&gt;

&lt;p&gt;Clear reliability goals help businesses create defensive value by delivering dependable services over time and establishing trust with customers. Reliable operations workflows also create offensive value by enabling businesses to achieve new, better, faster outcomes. Internally, clear reliability goals improve team communication and reduce context-switching, leading to less burnout, happier teams, and lower DevOps churn.&lt;/p&gt;

&lt;p&gt;Additionally, by creating the processes and systems necessary to ensure reliable operations of your platform, you give your DevOps engineers and SREs peace of mind that they are supporting a healthy system. While outages are bound to happen, having clear expectations and processes for your DevOps team ensures greater reliability for your platform and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Blink today
&lt;/h2&gt;

&lt;p&gt;Blink enables DevOps, SecOps, and FinOps to achieve operational excellence by making it easy to create automated workflows across the cloud platforms and services they use every day. The impact of adopting a no-code automation platform like Blink is happier, more productive development teams and more reliable, resilient cloud operations.&lt;/p&gt;

&lt;p&gt;The best part? The no-code future for cloud operations is available today. &lt;a href="https://app.blinkops.com/signup"&gt;Sign up to create a Blink account.&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Manually Rotate Keys in GCP</title>
      <dc:creator>Patrick Londa</dc:creator>
      <pubDate>Thu, 01 Dec 2022 17:26:54 +0000</pubDate>
      <link>https://community.ops.io/blinkops/how-to-manually-rotate-keys-in-gcp-5802</link>
      <guid>https://community.ops.io/blinkops/how-to-manually-rotate-keys-in-gcp-5802</guid>
      <description>&lt;p&gt;Key rotation is a critical security practice. In GCP, you can either rotate keys by enabling automatic rotation or by rotating a key manually.&lt;/p&gt;

&lt;p&gt;Manual rotations make sense if your key is compromised or if you are modifying your application to use a different or stronger algorithm.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll show you how to manually rotate keys using the GCP console and the gcloud CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manually Rotating Keys in GCP
&lt;/h2&gt;

&lt;p&gt;You will need to have permissions granted by the &lt;strong&gt;Cloud KMS Admin&lt;/strong&gt; role to rotate keys in GCP. If you want to also do the re-encryption step below, you’ll need permissions granted by the &lt;strong&gt;Cloud KMS CryptoKey Encrypter/Decrypter&lt;/strong&gt; role.&lt;/p&gt;
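&lt;p&gt;If you don’t have these roles yet, someone with permission to manage IAM policy can grant them from the CLI. As a sketch with placeholder key, key ring, and member values:&lt;/p&gt;

```shell
# Grant the Cloud KMS Admin role on a specific key.
# "my-key", "my-keyring", and the member email are placeholders.
gcloud kms keys add-iam-policy-binding my-key \
    --keyring my-keyring \
    --location global \
    --member "user:alice@example.com" \
    --role "roles/cloudkms.admin"

# Grant encrypt/decrypt permissions for the re-encryption step.
gcloud kms keys add-iam-policy-binding my-key \
    --keyring my-keyring \
    --location global \
    --member "user:alice@example.com" \
    --role "roles/cloudkms.cryptoKeyEncrypterDecrypter"
```

&lt;p&gt;You can also grant these roles at the key ring or project level if that fits your access model better.&lt;/p&gt;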

&lt;h4&gt;
  
  
  Using the GCP Console:
&lt;/h4&gt;

&lt;p&gt;These are the &lt;a href="https://cloud.google.com/kms/docs/rotating-keys#manual"&gt;steps&lt;/a&gt; to manually rotate keys in the GCP Console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Key Management&lt;/strong&gt; page from the Google Cloud Console.&lt;/li&gt;
&lt;li&gt;Select the name of the key ring that contains the key you want to create a new version for.&lt;/li&gt;
&lt;li&gt;Select the key for which you need to create a new version.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Rotate&lt;/strong&gt; in the displayed header.&lt;/li&gt;
&lt;li&gt;Again, click &lt;strong&gt;Rotate&lt;/strong&gt; in the prompt to confirm the key rotation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you’ll see that a new version of your key has been created and marked as the primary version.&lt;/p&gt;

&lt;p&gt;If you want to use a different existing key version instead, you can make it the primary version using these &lt;a href="https://cloud.google.com/kms/docs/rotating-keys#set_primary"&gt;steps&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose the key whose primary version you want to update.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;View More&lt;/strong&gt; in the row of your intended key.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Make primary version&lt;/strong&gt; in the menu.&lt;/li&gt;
&lt;li&gt;In the confirmation prompt, click &lt;strong&gt;Make primary&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you have encrypted anything with the prior key version, you’ll need to re-encrypt it with your new key and then destroy the old version. This re-encryption step can only be done with the CLI, and we’ll show it in the re-encryption section below.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using the gcloud CLI:
&lt;/h4&gt;

&lt;p&gt;To run Cloud KMS commands on the command line, you’ll first need to install the latest version of the &lt;a href="https://cloud.google.com/sdk/gcloud"&gt;gcloud CLI&lt;/a&gt;. Once you’ve done that, you can run &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/create"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions create \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can input values for each of these parameters:&lt;/p&gt;

&lt;p&gt;&amp;lt;KEY_NAME&amp;gt; refers to the name of the key you want to rotate.&lt;br&gt;
&amp;lt;KEY_RING&amp;gt; refers to the name of the key ring that contains the key.&lt;br&gt;
&amp;lt;LOCATION&amp;gt; refers to the Cloud KMS location of the key ring.&lt;/p&gt;

&lt;p&gt;Here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions create \
    --key=bowser \
    --keyring=castle \
    --location=global
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then set an existing key version as the primary version with &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/update"&gt;this command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys update &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt; \
    --primary-version &amp;lt;KEY_VERSION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only new flag in this command is &amp;lt;KEY_VERSION&amp;gt;, which refers to the version number of the key version you want to make primary.&lt;/p&gt;
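&lt;p&gt;If you’re not sure which version number to pass, you can list a key’s versions first. Using the example names from above:&lt;/p&gt;

```shell
# List all versions of the key "bowser", including their state
# (enabled, disabled, or scheduled for destruction).
gcloud kms keys versions list \
    --key bowser \
    --keyring castle \
    --location global
```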

&lt;h2&gt;
  
  
  Re-encrypting Data with a New Primary Key
&lt;/h2&gt;

&lt;p&gt;If you have encrypted data with the prior key, that prior key can still be used to decrypt that data. If your key is compromised, your data will be insecure unless you re-encrypt it with your new primary key.&lt;/p&gt;

&lt;p&gt;You should do this with the following gcloud &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/encrypt"&gt;CLI command&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms encrypt \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;  \
    --plaintext-file &amp;lt;FILE_TO_BE_ENCRYPTED&amp;gt; \
    --ciphertext-file &amp;lt;FILE_TO_STORE_ENCRYPTED_DATA&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&amp;lt;FILE_TO_BE_ENCRYPTED&amp;gt; should be the local file path for reading the plaintext data.&lt;br&gt;
&amp;lt;FILE_TO_STORE_ENCRYPTED_DATA&amp;gt; should be the local file path where you plan to save the encrypted output.&lt;/p&gt;

&lt;p&gt;If you want to verify that your encryption is now using the new primary key, you can test it by running the &lt;a href="https://cloud.google.com/kms/docs/re-encrypt-data#decrypt"&gt;decrypt command&lt;/a&gt;.&lt;/p&gt;
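&lt;p&gt;As a sketch of that verification, with placeholder key and file names, decrypting the new ciphertext should reproduce your original plaintext:&lt;/p&gt;

```shell
# Decrypt automatically uses the key version that produced the
# ciphertext, so no version flag is needed.
gcloud kms decrypt \
    --key my-key \
    --keyring my-keyring \
    --location global \
    --ciphertext-file encrypted-data.enc \
    --plaintext-file decrypted-check.txt

# Compare the round-tripped file with the original plaintext.
diff plaintext.txt decrypted-check.txt
```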
&lt;h2&gt;
  
  
  Disabling or Destroying the Prior Key Version
&lt;/h2&gt;

&lt;p&gt;Disabling and destroying a key version both remove the key’s functionality. It’s important to ensure that compromised keys are disabled or destroyed.&lt;/p&gt;

&lt;p&gt;The difference between the two is that destroyed keys are removed permanently (after their scheduled destruction date). If any data still relies on a destroyed key version to be decrypted, you lose access to that data permanently. If you are certain that you no longer need the key, destroying it is a way to clean up your key ring and prevent a compromised key from somehow being restored.&lt;/p&gt;
&lt;h4&gt;
  
  
  Using the GCP Console:
&lt;/h4&gt;

&lt;p&gt;In the GCP Console, you can disable and destroy a key by following &lt;a href="https://cloud.google.com/kms/docs/re-encrypt-data#disable-or-destroy"&gt;these steps&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the key ring view, click the key you recently rotated.&lt;/li&gt;
&lt;li&gt;Next to the version of the key you want to change, you’ll see an &lt;strong&gt;Actions&lt;/strong&gt; column with three vertical dots. Click on the dots.&lt;/li&gt;
&lt;li&gt;Depending on which action you want to take, you can either select &lt;strong&gt;Disable&lt;/strong&gt; or &lt;strong&gt;Destroy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If you choose &lt;strong&gt;Destroy&lt;/strong&gt;, you will need to type in the key name and click &lt;strong&gt;Schedule Destruction&lt;/strong&gt; to confirm the action.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have done this, you will have fully rotated your keys and cleaned up the prior key version.&lt;/p&gt;
&lt;h4&gt;
  
  
  Using the gcloud CLI:
&lt;/h4&gt;

&lt;p&gt;You can also disable or destroy keys with the CLI.&lt;/p&gt;

&lt;p&gt;You can use &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/versions/disable"&gt;this command&lt;/a&gt; to disable a key version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions disable &amp;lt;KEY_VERSION&amp;gt; \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you can use &lt;a href="https://cloud.google.com/sdk/gcloud/reference/kms/keys/versions/destroy"&gt;this command&lt;/a&gt; to destroy a key version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud kms keys versions destroy &amp;lt;KEY_VERSION&amp;gt; \
    --key &amp;lt;KEY_NAME&amp;gt; \
    --keyring &amp;lt;KEY_RING&amp;gt; \
    --location &amp;lt;LOCATION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the command to destroy a key version, that version is scheduled for destruction. You have 24 hours after that to change your mind and &lt;a href="https://cloud.google.com/kms/docs/destroy-restore#restore"&gt;restore the key&lt;/a&gt;.&lt;/p&gt;
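&lt;p&gt;Restoring a version that is scheduled for destruction can also be done from the CLI. As a sketch, assuming version 2 of a placeholder key:&lt;/p&gt;

```shell
# Restore version 2 of the key; the restored version returns to
# the "disabled" state rather than becoming usable immediately.
gcloud kms keys versions restore 2 \
    --key my-key \
    --keyring my-keyring \
    --location global

# Re-enable the restored version if you want to use it again.
gcloud kms keys versions enable 2 \
    --key my-key \
    --keyring my-keyring \
    --location global
```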

&lt;h2&gt;
  
  
  Using No-Code Steps in Blink to Rotate GCP Keys
&lt;/h2&gt;

&lt;p&gt;If you need to manually rotate access keys, you will need to remember each step and stop what you are working on to ensure you do it all properly. Working through these steps each time isn’t hard, but it takes time.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.blinkops.com/"&gt;Blink&lt;/a&gt;, you can easily create an automation that rotates access keys, re-encrypts files that are using the prior key version, and disables the prior key version with a simple click. If a key is compromised, you’ll be able to act quickly.&lt;/p&gt;

&lt;p&gt;Blink also allows you to schedule disabled keys for destruction after a certain period of time. This ensures your keys are cleaned up while giving your team time to validate that the old versions are no longer needed.&lt;/p&gt;

&lt;p&gt;Create your &lt;a href="https://app.blinkops.com/signup"&gt;free Blink account&lt;/a&gt; and make it easy to rotate your GCP keys.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
