<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: ujjavala</title>
    <description>The latest articles on The Ops Community ⚙️ by ujjavala (@ujjavala).</description>
    <link>https://community.ops.io/ujjavala</link>
    <image>
      <url>https://community.ops.io/images/X2G_PRO7ce7AK8B6mvrtz1nJQeCTn4lT9j-b4KArUiQ/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS80MjIvMTNh/OTQxYTEtODk4ZC00/ZDAwLWI3YmUtODZh/ZTZhY2NlNzY0Lmpw/ZWc</url>
      <title>The Ops Community ⚙️: ujjavala</title>
      <link>https://community.ops.io/ujjavala</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/ujjavala"/>
    <language>en</language>
    <item>
      <title>The Case for Code Simplicity in the Age of AI</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Thu, 15 May 2025 04:31:40 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/the-case-for-code-simplicity-in-the-age-of-ai-4lp9</link>
      <guid>https://community.ops.io/ujjavala/the-case-for-code-simplicity-in-the-age-of-ai-4lp9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ This blog is opinionated. Please read with an open mind. You don’t have to agree, but if it gets you to pause and think, it has done its job.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We buy jewellery to show off. It makes sense because jewellery is valuable. It shines, is admired, and can even increase in value over time.&lt;/p&gt;

&lt;p&gt;But code?&lt;/p&gt;

&lt;p&gt;Code is not an asset. It’s a liability. It gets outdated, collects bugs, and needs constant maintenance. But we still write it like it's something to be displayed — fancy, complicated, and a &lt;em&gt;true&lt;/em&gt; legacy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complexity is Expensive
&lt;/h2&gt;

&lt;p&gt;I’ve seen code that could have been written in just a few lines but is instead spread across hundreds of lines and many packages. We create interfaces for no reason, factories for objects that are used only once, and layers of abstraction that make the code hard to follow.&lt;/p&gt;

&lt;p&gt;It often starts with simple ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What if we need to use different data sources in the future?&lt;/li&gt;
&lt;li&gt;Let's put this in an abstraction just in case we need to change it later.&lt;/li&gt;
&lt;li&gt;This could be useful as a general-purpose class later.&lt;/li&gt;
&lt;li&gt;Let’s make it reusable across projects with its own settings.&lt;/li&gt;
&lt;li&gt;Maybe we’ll need a base class in the future, even though we only have one subclass now.&lt;/li&gt;
&lt;li&gt;This logic feels like it could go in a helper… or maybe in a service, or even as an interface for a domain service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A few weeks later, what started as a simple function now requires navigating through nine files, three interfaces, and helpers that just call other helpers.&lt;/p&gt;

&lt;p&gt;What could have been a simple task is now a tangled web of services, strategies, wrappers, decorators, and configurations. All this extra complexity is added not because it’s needed, but because we think it’s the &lt;em&gt;right&lt;/em&gt; way to do things.&lt;/p&gt;
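
&lt;p&gt;To make the contrast concrete, here is a small, invented sketch (the names and the discount rule are made up purely for illustration): the same behaviour written directly, and then buried behind a single-use strategy and factory.&lt;/p&gt;

```go
package main

import "fmt"

// The simple version: one function that states the rule directly.
func discountedPrice(price float64, loyal bool) float64 {
	if loyal {
		return price * 0.9 // loyal customers get 10% off
	}
	return price
}

// The "enterprise" version of the same rule: an interface, a strategy
// per branch, and a factory, all serving exactly one call site.
type DiscountStrategy interface {
	Apply(price float64) float64
}

type loyalDiscount struct{}

func (loyalDiscount) Apply(p float64) float64 { return p * 0.9 }

type noDiscount struct{}

func (noDiscount) Apply(p float64) float64 { return p }

func NewDiscountStrategy(loyal bool) DiscountStrategy {
	if loyal {
		return loyalDiscount{}
	}
	return noDiscount{}
}

func main() {
	fmt.Println(discountedPrice(100, true))           // prints 90
	fmt.Println(NewDiscountStrategy(true).Apply(100)) // prints 90, after five extra declarations
}
```

&lt;p&gt;Both calls return the same number; only one of them is easy to read during a production incident.&lt;/p&gt;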

&lt;h2&gt;
  
  
  When Things Go Wrong
&lt;/h2&gt;

&lt;p&gt;Everything is fine until something breaks in production.&lt;/p&gt;

&lt;p&gt;Suddenly, your phone is buzzing. Slack is full of angry messages. Stakeholders want answers fast. You need to fix the issue, but where do you even start?&lt;/p&gt;

&lt;p&gt;You open the handler, which calls a mapper, which talks to a strategy, which calls a service, which delegates to a factory, and only then do you get the data.&lt;/p&gt;

&lt;p&gt;The person who wrote this has probably moved to another team, another country, or even another job. Documentation is missing or outdated. Comments are useless. The tests pass, but they don’t cover what’s actually broken.&lt;/p&gt;

&lt;p&gt;Every time you open a new file, you’re losing more time.&lt;/p&gt;

&lt;p&gt;Instead of solving the issue quickly, you’re searching through a confusing system built for flexibility that now feels like a burden.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Cost of Complexity
&lt;/h2&gt;

&lt;p&gt;Overcomplicating code doesn’t just slow you down a little. It costs time and money.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It makes it harder for new developers to get up to speed.&lt;/li&gt;
&lt;li&gt;It takes longer to fix bugs.&lt;/li&gt;
&lt;li&gt;It makes it scary to make changes to the code.&lt;/li&gt;
&lt;li&gt;Simple feature requests now take forever because you have to dig through so many layers of code.&lt;/li&gt;
&lt;li&gt;You end up asking, “What does this class do, and why does it exist?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you add more layers, your codebase becomes harder to work with. It starts to look like a puzzle no one wants to solve. It feels more like a fragile house of cards than a solid foundation.&lt;/p&gt;

&lt;p&gt;The problem is that in trying to prepare for every possible future need, we forget to just solve the problem at hand. The cost of maintaining overengineered code grows quickly. As team members leave, the knowledge of how the code works disappears. Fixing bugs or adding features becomes harder and slower, and production issues take days to fix instead of hours.&lt;/p&gt;

&lt;p&gt;Complex code takes longer to fix, not because the problem itself is complex, but because the code is hard to understand and work with.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Human Side of Simplicity
&lt;/h2&gt;

&lt;p&gt;I’ve often faced criticism for choosing simple solutions, writing five lines of code where others might introduce factories or interfaces.&lt;/p&gt;

&lt;p&gt;But this preference for simplicity does not come from a lack of understanding of complexity. On the contrary, it comes from understanding just how important clarity and maintainability are. Simplicity is not a lack of depth; it is the result of deliberate thinking. It is about solving problems in a way that others and your future self can easily grasp and build on.&lt;/p&gt;

&lt;p&gt;Through experience, I have learned the true value of principles like &lt;a href="https://martinfowler.com/bliki/Yagni.html" rel="noopener noreferrer"&gt;YAGNI&lt;/a&gt;, not just as a catchy acronym, but as a practical guide in fast-moving and constantly changing environments. That mindset taught me to solve problems in a way that avoids overengineering, focusing on clarity and durability instead. It is not about showing off. It is about building something that works well and lasts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enterprise Doesn’t Always Mean Complexity
&lt;/h2&gt;

&lt;p&gt;I understand what it’s like to work in large companies, where the goal is to build systems that scale, are secure, and are reusable. But here’s the truth: most enterprise systems don’t need complex solutions. They’re just handling data — fetching it, transforming it, and sending it somewhere else.&lt;/p&gt;

&lt;p&gt;Yet there’s a sad irony in how the enterprise world often rewards complexity. A pull request with intricate abstractions and layered patterns gets applause. Meanwhile, a simple, straightforward solution can be perceived as incompetence and is met with condescension. When we celebrate complexity and question simplicity, we not only discourage maintainable code, but we also make it harder for tools like Copilot to support us. The cost of this culture isn’t just technical debt. It’s wasted time, missed opportunities, and avoidable pain.&lt;/p&gt;

&lt;p&gt;We tend to overcomplicate systems, treating basic tasks like engineering a rocket. In reality, it’s more like plumbing. Important, yes, but not rocket science.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Struggles with Overengineering
&lt;/h2&gt;

&lt;p&gt;AI tools such as Copilot and ChatGPT can help us write code faster, suggest improvements, and even debug problems. But the power of AI only comes when the code is clear and well-structured. If the code is an overengineered mess, even the most advanced AI tools struggle to help.&lt;/p&gt;

&lt;p&gt;While Copilot can generate code quickly, it still relies on patterns, simplicity, and clarity to be truly effective. When your code is hard to follow, AI can’t work its magic. It can’t refactor your spaghetti code, nor can it suggest improvements for systems that are poorly designed. When code lacks clarity, even AI becomes confused.&lt;/p&gt;

&lt;p&gt;But here's the twist. AI can generate code, but it can't &lt;em&gt;understand&lt;/em&gt; it. It doesn’t have the context you do. It can’t debug issues with the same depth of understanding, nor can it fix architectural problems. So, the faster we generate complexity, the more we’re pushing that burden onto ourselves and our teams. &lt;/p&gt;

&lt;p&gt;Moreover, while AI can help with repetitive tasks or boilerplate code, it doesn’t help you when things go wrong. In production, when the system breaks, you need to trace back to the root cause. That’s a task that requires human understanding, and if the code is too complex, even that will take much longer. AI can speed up development, but it doesn’t clean up after you.&lt;/p&gt;

&lt;p&gt;This is where simplicity becomes a key asset. By writing clear, concise, and well-structured code, you not only improve your workflow but also ensure that when AI steps in, it actually helps. Copilot and ChatGPT are tools to aid the process, not fix problems caused by poor code design. They cannot rescue you from complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Code Is Not Basic, It Is Brave
&lt;/h2&gt;

&lt;p&gt;It’s easy to make code complicated. Just keep adding layers until your IDE can’t handle it.&lt;/p&gt;

&lt;p&gt;But it takes real discipline to keep things simple: to resist the urge to overbuild for the future, and to say, "Let’s solve the problem at hand first."&lt;/p&gt;

&lt;p&gt;Simplicity is not about being basic. It’s about being intentional. It’s about being the kind of developer who thinks not just about architecture, but about maintainability over time.&lt;/p&gt;

&lt;p&gt;So, the next time you want to add another layer or abstraction, ask yourself one question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Am I doing this for clarity, or just to look smart?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’re not writing a novel. You’re not creating an art project. You’re writing instructions for a machine and for humans who will need to read and understand them.&lt;/p&gt;

&lt;p&gt;Write for them. Write for the person who will be on call when something breaks.&lt;/p&gt;

&lt;p&gt;Write for your future self, who will have to revisit this code on a stressful Friday afternoon.&lt;/p&gt;

&lt;p&gt;Write less. Think more. And remember.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The best code is the one that doesn’t need to be explained.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>nocode</category>
      <category>random</category>
    </item>
    <item>
      <title>Event-driven design with Log Events and RabbitMQ in Golang</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Tue, 23 Jul 2024 01:47:19 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/event-driven-design-with-log-events-and-rabbitmq-in-golang-3oio</link>
      <guid>https://community.ops.io/ujjavala/event-driven-design-with-log-events-and-rabbitmq-in-golang-3oio</guid>
      <description>&lt;p&gt;The adoption of event-driven architecture is on the rise as teams pursue more adaptable, scalable, and agile solutions to meet the requirements of contemporary applications. Event-driven architectures support real-time updates and streamline integration across different systems by enabling communication through standardized and structured events.&lt;/p&gt;

&lt;p&gt;In a &lt;a href="https://dev.to/ujjavala/notifications-using-auth0-events-and-webhooks-2869" rel="noopener noreferrer"&gt;previous blog post&lt;/a&gt;, I discussed how webhooks in Auth0 can transmit events, thereby leveraging these events to initiate logic execution. In this article, I will delve into the technical aspects of this architecture and demonstrate how Go (Golang) can be utilized to construct such a system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main components:
&lt;/h2&gt;

&lt;p&gt;Let's first have a look at the main components that drive this system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Log Events:
&lt;/h3&gt;

&lt;p&gt;Auth0 emits log events for every activity at the tenant level. These events can be used for monitoring or audit purposes. The type codes for each event are listed &lt;a href="https://auth0.com/docs/deploy-monitor/logs/log-event-type-codes" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Webhooks:
&lt;/h3&gt;

&lt;p&gt;We use Auth0 &lt;a href="https://auth0.com/docs/customize/log-streams/custom-log-streams" rel="noopener noreferrer"&gt;webhooks&lt;/a&gt; to deliver filtered events to our producer. We filter the events since we are interested in only a handful of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  RabbitMQ
&lt;/h3&gt;

&lt;p&gt;RabbitMQ supports multiple messaging protocols; the one we use to route messages is the Advanced Message Queuing Protocol (AMQP). AMQP has three main entities – queues, exchanges, and bindings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Behind the scenes
&lt;/h2&gt;

&lt;p&gt;When an event is triggered in Auth0, it is immediately sent via webhook to our publisher, which publishes it based on the event type. Once published, the event goes to an exchange, which directs the message to the queues bound to it, where consumers receive it. To enable this process, we establish a channel, which allows us to publish messages to the exchange and declare queues for subscription.&lt;/p&gt;

&lt;p&gt;To create a new queue, we call the channel’s QueueDeclare function with our desired queue properties. With the queue created, we can use the channel’s Publish function to send a message.&lt;/p&gt;

&lt;p&gt;Next, we create a consumer that connects to our RabbitMQ and establishes a channel for communication. Using this channel, we can consume messages using the Consume method defined for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Groundwork
&lt;/h2&gt;

&lt;p&gt;We use the &lt;a href="//github.com/auth0/go-auth0/management"&gt;go-auth0 management package&lt;/a&gt; to work with the log events, and for the queue actions we use &lt;a href="//github.com/rabbitmq/amqp091-go"&gt;github.com/rabbitmq/amqp091-go&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Given below are the snippets:&lt;/p&gt;

&lt;h3&gt;
  
  
  Publishing:
&lt;/h3&gt;

&lt;p&gt;The detailed structure of the log can be found &lt;a href="https://github.com/auth0/go-auth0/blob/main/management/log.go" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for _, auth0log := range payload.Logs {
    switch auth0log.Data.Type {
    case "slo":
        _, err = c.Publish(ctx, PublishRequest{
            // your logic
        })

    case "ss":
        _, err = c.Publish(ctx, PublishRequest{
            // your logic
        })
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exchange:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    if consumeOptions.BindingExchange != "" {
        for _, routingKey := range routingKeys {
            err = consumer.chManager.channel.QueueBind(
                queue,
                routingKey,
                consumeOptions.BindingExchange,
                consumeOptions.BindingNoWait,
                tableToAMQPTable(consumeOptions.BindingArgs),
            )
            if err != nil {
                return err
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Consuming:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (c *Client) Consume() {
    err := c.consumer.StartConsuming(
        func(ctx context.Context, d queue.Delivery) bool {
            err := c.processMessages(ctx, d.Body, d.Exchange)
            if err != nil {
                c.log.Error().Ctx(ctx).Err(err).Str("exchange", d.Exchange).Msg("failed publishing")
                return nack // send to dlx
            }
            return ack // message acknowledged
        },
        c.queueName,
        []string{c.queueRoutingKey},
        func(opts *queue.ConsumeOptions) {
            opts.BindingExchange = c.queueBidingExchange
            opts.QueueDurable = true
            opts.QueueArgs = map[string]interface{}{
                "x-dead-letter-exchange": c.queueBidingExchangeDlx,
            }
        },
    )
    if err != nil {
        c.log.Fatal().Err(err).Msg("consumer: Failed to StartConsuming")
    }

    // block main thread so consumers run forever
    forever := make(chan struct{})
    &amp;lt;-forever
}   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thus, by leveraging webhooks in Auth0 to trigger events and employing RabbitMQ for reliable message queuing and delivery, we can build scalable and responsive applications. This approach not only enhances flexibility but also supports seamless event processing, enabling efficient handling of asynchronous operations. &lt;/p&gt;

&lt;p&gt;I hope this article was helpful and can prove to be beneficial in your event-driven journey.&lt;/p&gt;

&lt;p&gt;Happy coding :)&lt;/p&gt;

</description>
      <category>cloudops</category>
      <category>random</category>
    </item>
    <item>
      <title>5 Auth0 Gotchas to Consider</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Sat, 15 Jun 2024 15:47:48 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/5-auth0-gotchas-to-consider-4pjm</link>
      <guid>https://community.ops.io/ujjavala/5-auth0-gotchas-to-consider-4pjm</guid>
      <description>&lt;p&gt;Using an identity provider (IDP) for user management has become the norm these days. And while it can get daunting to choose the best idp from the &lt;em&gt;ala carte&lt;/em&gt;, it could help identify some of the shortcomings beforehand.&lt;/p&gt;

&lt;p&gt;Given below are a few of Auth0's gotchas that I have come across and sincerely wish I had known earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configurability Compromises:
&lt;/h2&gt;

&lt;p&gt;Well, this could be a tough one for many, as the one thing that developers value the most is the power of configurability. Auth0 does offer decent configurations; however, if one really wants to tweak something, the API sometimes falls short.&lt;/p&gt;

&lt;p&gt;Some of the limited or non-configurable features that I have faced are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Customisations offered around their universal login&lt;/li&gt;
&lt;li&gt;Changes around the rate limits and throttle&lt;/li&gt;
&lt;li&gt;Email templates for sending out notifications (I have a whole other blog right &lt;a href="https://dev.to/ujjavala/notifications-using-auth0-events-and-webhooks-2869"&gt;here&lt;/a&gt; where I have a workaround for this.)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Forgotten Flows:
&lt;/h2&gt;

&lt;p&gt;Auth0 has this concept of actions, which are basically functions that execute at a certain point. A flow would have a set of actions that would get triggered as a result of an event. Auth0 has &lt;a href="https://auth0.com/docs/customize/actions/flows-and-triggers"&gt;flows&lt;/a&gt; for login and a couple of other events, but surprisingly, it doesn't have one for logout. Now this is a pain point because there could be actions that you would want to trigger once a user logs out. Sure, you could depend on the SLO event, but it would make more sense if they had a separate flow for logout.&lt;/p&gt;

&lt;h2&gt;
  
  
  Awkward Apis
&lt;/h2&gt;

&lt;p&gt;I have seen many, but the ones that definitely need a rewrite are their &lt;a href="https://auth0.com/docs/authenticate/passwordless/implement-login/embedded-login/relevant-api-endpoints"&gt;passwordless APIs&lt;/a&gt;. These endpoints accept raw emails or phone numbers, so those fields can be harvested or brute-forced before a user has even logged in. Auth0 could easily solve this by replacing these fields with user_ids, just as in their other APIs.&lt;/p&gt;

&lt;p&gt;This is what one of the passwordless APIs looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST https://{yourDomain}/passwordless/start
Content-Type: application/json
{
  "client_id": "{yourClientID}",
  "client_secret": "{yourClientSecret}", // For Regular Web Applications
  "connection": "email|sms",
  "email": "{email}", //set for connection=email
  "phone_number": "{phoneNumber}", //set for connection=sms
  "send": "link|code", //if left null defaults to link
  "authParams": { // any authentication parameters that you would like to add
    "scope": "openid",     // used when asking for a magic link
    "state": "{yourState}"  // used when asking for a magic link, or from the custom login page
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Insufficient Information
&lt;/h2&gt;

&lt;p&gt;Teams that use Auth0 heavily rely on their logs and &lt;a href="https://auth0.com/docs/deploy-monitor/logs/log-event-type-codes"&gt;log event type codes&lt;/a&gt; when it comes to debugging and monitoring. And yet, Auth0 doesn’t send the user_id when there is a password leak (pwd_leak) or when an IP address is blocked for reaching the maximum number of failed login attempts on a single account (limit_wc). This is concerning because tying the event back to a user_id can get really cumbersome, as we then need to rely on other attributes. It could easily be solved by including the user_id in these events as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation Daze
&lt;/h2&gt;

&lt;p&gt;I have found that Auth0 documentation can sometimes be really confusing, and it can take hours to find the relevant information. More often than not, I have found leads in their community support channel (which is very active and helpful) rather than in the documentation. This also proves that many have been, or are, in the same boat as me, so hopefully this blog helps at least some of them.&lt;/p&gt;

&lt;p&gt;These gotchas are honestly heads-ups that should be considered while working with Auth0. No software is perfect. Having said that, knowing the imperfections in advance surely ensures quicker workarounds, less bewilderment, and, most importantly, a good night's sleep.&lt;/p&gt;

&lt;p&gt;P.C. &lt;a href="https://unsplash.com/photos/yellow-and-black-no-smoking-sign-pzJgANvTaa8"&gt;unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>auth</category>
      <category>devops</category>
      <category>secops</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>Notifications using Auth0 events and webhooks</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Sat, 28 Oct 2023 01:13:20 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/notifications-using-auth0-events-and-webhooks-4a5n</link>
      <guid>https://community.ops.io/ujjavala/notifications-using-auth0-events-and-webhooks-4a5n</guid>
      <description>&lt;h2&gt;
  
  
  Understanding the context
&lt;/h2&gt;

&lt;p&gt;When it comes to customer identity and access management (CIAM), notifying customers of account-specific events is paramount. These events could range from creating an account or adding MFA to deleting or deactivating an account. Being transparent and presenting this data upfront instills trust in customers and also helps identify security compromises, if any. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://auth0.com/"&gt;Auth0&lt;/a&gt; is once such identity management platform that provides support for emails through their &lt;a href="https://auth0.com/docs/customize/email"&gt;customisable email templates&lt;/a&gt;. Currently, they have support for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verification emails (using link or code)&lt;/li&gt;
&lt;li&gt;Welcome emails&lt;/li&gt;
&lt;li&gt;Enroll in MFA emails&lt;/li&gt;
&lt;li&gt;Change password emails&lt;/li&gt;
&lt;li&gt;Blocked account emails&lt;/li&gt;
&lt;li&gt;Password breach alert emails&lt;/li&gt;
&lt;li&gt;Verification code for email MFA&lt;/li&gt;
&lt;li&gt;User invitation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above templates can be easily customised using their &lt;a href="https://auth0.com/docs/customize/email/email-templates/use-liquid-syntax-in-email-templates"&gt;Liquid&lt;/a&gt; syntax, and additionally, there are &lt;a href="https://registry.terraform.io/providers/auth0/auth0/latest/docs/resources/email_template"&gt;terraform resources&lt;/a&gt; available that automate many of the steps for us. &lt;/p&gt;

&lt;h2&gt;
  
  
  Acing the requirement
&lt;/h2&gt;

&lt;p&gt;Our requirement was fairly simple. We had to send out emails to our customers whenever a successful password change was detected in their account. And since we were using Auth0, we were pretty laid back, thinking it would be straightforward to achieve. Brimming with confidence, we looked into it, only to find that though support is available for the major events, there are still many events for which email templates are not yet available, and one such event is the &lt;em&gt;Success Change Password&lt;/em&gt; (scp) event. Auth0 does have a template for the &lt;em&gt;Success Change Password Request&lt;/em&gt; (scpr) event, which is sent along with a password-reset link. However, the template we were interested in was the one triggered after a password is successfully changed, and the one available wasn’t of much use to us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webhooks to the rescue
&lt;/h2&gt;

&lt;p&gt;We found that we could rely on the relevant events from the Auth0 logs to trigger the notifications. Since &lt;a href="https://auth0.com/docs/customize/log-streams/custom-log-streams"&gt;webhooks&lt;/a&gt; allow events to be delivered to an external web server, and Auth0 offers several integrations that can automatically push events to third-party services, we captured the &lt;strong&gt;scp&lt;/strong&gt; event from the Auth0 logs and configured a custom webhook to push it to an external queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/V_ePN45_eDwxfTdi8ZGRKJfR5gu-fCmb2Tmn1Mw6EFo/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvd2w0/ZGYxenlwcm5nNm93/ZWc2N3kucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/V_ePN45_eDwxfTdi8ZGRKJfR5gu-fCmb2Tmn1Mw6EFo/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvd2w0/ZGYxenlwcm5nNm93/ZWc2N3kucG5n" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We implemented a producer-consumer model: Auth0 publishes events to the queue, and a consumer listens to this queue and sends out the notifications. We had an additional consumer just for the logs, so that we could consume the relevant events. This approach worked out for us because it scales easily to other events: we can simply add more consumers and configure the webhook to listen for more event types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wishful thinking
&lt;/h2&gt;

&lt;p&gt;The above solution works and is decently extensible; however, the ideal solution would be for Auth0 to have customisable email templates that can support any event. For now, the event codes that trigger emails are a subset of all the events mentioned &lt;a href="https://auth0.com/docs/deploy-monitor/logs/log-event-type-codes"&gt;here&lt;/a&gt;. It would be really helpful to have a template that can support any event and thereby minimise the need to write additional code. Hope to see this feature soon :).&lt;/p&gt;

</description>
      <category>auth</category>
      <category>terraform</category>
      <category>productivity</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>Curious case of kafka streams and hexagonal architecture</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Tue, 08 Aug 2023 02:41:19 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/curious-case-of-kafka-streams-and-hexagonal-architecture-52oj</link>
      <guid>https://community.ops.io/ujjavala/curious-case-of-kafka-streams-and-hexagonal-architecture-52oj</guid>
      <description>&lt;h3&gt;
  
  
  The architecture conundrum
&lt;/h3&gt;

&lt;p&gt;As software engineers, most of us have gone through the phase of deciding on an architectural pattern for a codebase. The intention is not only to avoid the obvious mistakes – like ending up with &lt;a href="https://en.wikipedia.org/wiki/Anti-pattern#:~:text=A%20Big%20Ball%20of%20Mud,%2C%20and%20repeated%2C%20expedient%20repair."&gt;a big ball of mud&lt;/a&gt; – but also to sleep better at night, knowing we will still remember what our code does a month later. Additionally, an architectural pattern provides guidelines by implicitly dictating what goes where, making knowledge transitions easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pick a winner
&lt;/h3&gt;

&lt;p&gt;We were in a similar situation when we had to choose a pattern for our application. Deciding on it was especially challenging for us as our architecture did not follow the norm of being REST driven. With REST, the craft of pick-and-choose becomes a tad easier, as most systems unintentionally fall into the layered-architecture bucket. &lt;/p&gt;

&lt;p&gt;In our case, we have an event-driven system where we use GraphQL for data interaction and Kafka to move, transform, and enrich data across our microservices. The tech stack consists of Avro schema files, Spring Kafka, and Kotlin, with AWS as our cloud provider. &lt;/p&gt;

&lt;p&gt;Initially, we considered &lt;a href="https://dev.to/barrymcauley/onion-architecture-3fgl"&gt;onion architecture&lt;/a&gt;, however the layers didn't seem to work for us. A couple of spikes later, we had a winner as we found that in terms of testability and isolation, hexagonal &lt;a href="https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture/"&gt;ticked all the checkboxes&lt;/a&gt; that we needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Give it a whirl
&lt;/h3&gt;

&lt;p&gt;Now, after deciding that we did want to go with hexagonal, the task at hand was to transform our &lt;em&gt;small ball of mud&lt;/em&gt; into the hexagon shown below. We had to figure out what conforms to a port or an adapter, and what would reside in the hexagon, in the Kafka world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/lB4un71UEueWtwG7GMaPkP_ilrqUAv26lX4JyoBd5-g/w:800/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMzZ0/MnBnaGJnbTNyb2V3/dWR4M2gucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/lB4un71UEueWtwG7GMaPkP_ilrqUAv26lX4JyoBd5-g/w:800/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvMzZ0/MnBnaGJnbTNyb2V3/dWR4M2gucG5n" alt="Image description" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on &lt;a href="https://www.youtube.com/watch?v=bKxkIjfTAnQ&amp;amp;list=PL1msPBH9ZGkhpANkreFA_teOnloVdLuCx&amp;amp;ab_channel=ValentinaCupa%C4%87%28%D0%92%D0%B0%D0%BB%D0%B5%D0%BD%D1%82%D0%B8%D0%BD%D0%B0%D0%A6%D1%83%D0%BF%D0%B0%D1%9B%29"&gt;these&lt;/a&gt; videos and a couple of other helpful articles, we segregated our code into the following folder structure: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ports&lt;/strong&gt; &lt;br&gt;
These represented the blueprint of the domain features and consisted of all the interfaces needed to interact with the core application. &lt;/p&gt;

&lt;p&gt;We had a driver and a driven folder in here: driver held the interfaces of our services, and driven held the interfaces of our repositories. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adapters&lt;/strong&gt; &lt;br&gt;
These were used to connect external components through the ports. Again, we had a driver and a driven folder in here. &lt;/p&gt;

&lt;p&gt;Driver adapters initiated interactions with the hexagon through the service ports. These could be controllers, application events or similar triggers.&lt;/p&gt;

&lt;p&gt;Driven adapters responded to those interactions by carrying out the logic needed for them, and consisted of our repository implementations. In our case, reads from the state store, inserts into Kafka topics, and the creation of streams and KTables all contributed to such implementations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hexagon&lt;/strong&gt; &lt;br&gt;
This formed the heart of our architecture, as all the domain logic went in here. We moved all our service implementations into the hexagon, together with any other domain-specific logic. That meant all the data transformation logic needed for Kafka went in here too. &lt;/p&gt;
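&lt;p&gt;To make the split concrete, here is a minimal, framework-free sketch in Java. The names (&lt;code&gt;OrderService&lt;/code&gt;, &lt;code&gt;OrderRepository&lt;/code&gt; and friends) are illustrative, not from our actual codebase; in our system the driven adapter was backed by Kafka state stores and topics rather than an in-memory set.&lt;/p&gt;

```java
import java.util.HashSet;
import java.util.Set;

// ports/driver: the blueprint of a domain feature (hypothetical example)
interface OrderService {
    boolean register(String orderId);
}

// ports/driven: the blueprint of a repository
interface OrderRepository {
    boolean exists(String orderId);
    void save(String orderId);
}

// hexagon: domain logic, depending only on the ports above
class OrderServiceImpl implements OrderService {
    private final OrderRepository repository;

    OrderServiceImpl(OrderRepository repository) {
        this.repository = repository;
    }

    @Override
    public boolean register(String orderId) {
        if (repository.exists(orderId)) {
            return false; // domain rule: reject duplicates
        }
        repository.save(orderId);
        return true;
    }
}

// adapters/driven: one concrete repository implementation
class InMemoryOrderRepository implements OrderRepository {
    private final Set<String> store = new HashSet<>();

    @Override
    public boolean exists(String orderId) {
        return store.contains(orderId);
    }

    @Override
    public void save(String orderId) {
        store.add(orderId);
    }
}
```

&lt;p&gt;A driver adapter, such as a controller or a Kafka listener, would then call &lt;code&gt;OrderService&lt;/code&gt; and never &lt;code&gt;OrderServiceImpl&lt;/code&gt; directly.&lt;/p&gt;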

&lt;h3&gt;
  
  
  Put to the test
&lt;/h3&gt;

&lt;p&gt;Once we were happy with the redesign, it was time to test our new architecture. For this we used &lt;a href="https://www.archunit.org/userguide/html/000_Index.html"&gt;ArchUnit&lt;/a&gt;, as we wanted to test our packages, interface interactions and unintended dependencies. A few of these test cases were: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that repository implementations live in adapters&lt;/li&gt;
&lt;li&gt;Check that no controllers in adapters access domain classes&lt;/li&gt;
&lt;li&gt;Check that domain classes in the hexagon do not access adapters and ports directly&lt;/li&gt;
&lt;li&gt;Check that ports contain only interfaces and no implementations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And many more.&lt;/p&gt;
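&lt;p&gt;As a rough illustration, the first and last of those checks can be expressed with ArchUnit's fluent API like this (a sketch only; &lt;code&gt;com.example.app&lt;/code&gt; and the package names are placeholders for your own package tree, and ArchUnit must be on the test classpath):&lt;/p&gt;

```java
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

class HexagonalArchitectureTest {

    // Import the production classes once; the package is a placeholder.
    private final JavaClasses appClasses =
            new ClassFileImporter().importPackages("com.example.app");

    void repositoryImplementationsLiveInAdapters() {
        ArchRule rule = classes()
                .that().haveSimpleNameEndingWith("RepositoryImpl")
                .should().resideInAPackage("..adapters..");
        rule.check(appClasses);
    }

    void portsContainOnlyInterfaces() {
        ArchRule rule = classes()
                .that().resideInAPackage("..ports..")
                .should().beInterfaces();
        rule.check(appClasses);
    }

    void hexagonDoesNotTouchAdaptersDirectly() {
        ArchRule rule = noClasses()
                .that().resideInAPackage("..hexagon..")
                .should().dependOnClassesThat().resideInAPackage("..adapters..");
        rule.check(appClasses);
    }
}
```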

&lt;h3&gt;
  
  
  Make peace
&lt;/h3&gt;

&lt;p&gt;I felt that it can sometimes be difficult to modularise and segregate all of the streaming logic (into driver and driven, ports and adapters), as a lot of processing can happen in one single function. So it all comes down to compromises and conscious decisions about the risks of weakening the standard architectural boundaries. &lt;/p&gt;

&lt;p&gt;Something that cannot be stressed enough: always test the architecture, as it can surface anti-patterns such as direct interaction between an adapter and the hexagon, or implementations sneaking into ports. &lt;/p&gt;

&lt;p&gt;To conclude, adopting hexagonal architecture requires extra love from developers, as it is not a traditional approach, and there will be instances where you will need to weigh tradeoffs. So, adopt it only when your system is complex enough and there is bandwidth available for the learning curve. &lt;/p&gt;

</description>
      <category>cloudops</category>
      <category>kafka</category>
    </item>
    <item>
      <title>Taking the redesign plunge</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Thu, 28 Jul 2022 04:16:00 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/taking-the-redesign-plunge-51ah</link>
      <guid>https://community.ops.io/ujjavala/taking-the-redesign-plunge-51ah</guid>
      <description>&lt;p&gt;Cloud providers are expensive and spending on infrastructure can be painful. But...You know what's more painful? Spending on unused infrastructure. Infrastructure that you don't even remember choosing in the first place.&lt;/p&gt;

&lt;p&gt;In the current era of microservices, with almost everything on the cloud, it is extremely important to spend on infrastructure wisely. During the initial phase of designing and architecting, many of us tend to get carried away by scalability and availability. We go overboard and design a system that is over-engineered and a bit too ready for traffic it won't even meet in the next five to ten years. I'm not saying we shouldn't be future-ready; we definitely need to design systems that are robust and scalable enough to withstand harsh conditions. What we definitely need to avoid is building bulky systems that come with never-ending maintenance costs. It's like a broadband plan: when you know your needs would be fulfilled by the basic plan, why pay for the premium plan you won't even use (especially when you know you can switch to premium at any moment)?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figuring out the boo-boo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the last project I worked on, we faced a similar situation. The infrastructure at hand was over-engineered, resulting in numerous repositories, huge maintenance costs and unhappy developers. Carrying out daily tasks was getting difficult as there was a lot to deal with. Our major pain points were:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Managing the cloud instances&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We use AWS as our cloud provider, and we had overused their compute services. We had bulky EC2 instances and a fairly complex auto-scaling and monitoring strategy. We later realised that we didn't actually need the EC2s at all and that they could easily be replaced by ECS and Docker containers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Getting lost in the world of AWS while debugging issues&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Whenever there was a new bug in production, it made our hearts skip a beat (not in a good way). We literally used to get lost in the world of AWS and sometimes even forget why we landed on the page in the first place.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The repository maze&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We dreaded looking at our repositories to fix a bug, mainly because there were too many to start with, and we ended up going through all of them for every single bug. No form of graph traversal could save us.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Varnish aches&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We used Varnish for stitching our pages and, fun fact, there isn't much documentation on troubleshooting Varnish. Even the beloved Stack Overflow seemed powerless sometimes.&lt;/p&gt;

&lt;p&gt;So here we had an infrastructure which was extremely difficult to use and maintain. Within a couple of months we figured out that this wouldn't fly. We took action immediately and ran a lot of spikes. We came up with an alternative solution that was far more maintainable and scalable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimal impact on the current services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We came up with an alternative solution of using ECS instead of EC2, mainly because our needs were met by ECS and we didn't want to pay for a server we wouldn't need. We replaced Varnish with Kong. Kong, which is a fancy Nginx (in my opinion), had a slight learning curve, decent documentation and good plugin support.&lt;/p&gt;

&lt;p&gt;The only thing left was implementing and going live. Being a platform team, we help other teams go live by staying alive. It is supremely important that we don't go down, and we knew the transition needed to be done carefully. We had thorough rounds of testing before going live, and only when we were finally confident in the results did we go live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Projecting the gains&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Personally, I feel this was the most difficult task. You have redesigned your system and have taken a couple of months to do so. You say the current solution is better: it has made maintenance easier and helps fix bugs quickly. But if you have spent far more than what you have gained, or would be gaining in the coming years, was it even worth it? How do you justify your decision? And more importantly, what are the metrics to measure the success of your decision?&lt;/p&gt;

&lt;p&gt;We answered these concerns by tracking the frequency of bugs we got before versus after. Besides that, we made use of AWS Cost Explorer. We found that within six months we had recouped the development cost we spent on the redesign. Also note that we have since rolled out three new services and, on the whole, we are still saving costs, i.e. our clients got three new services without paying a buck extra; instead, they are gaining every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The silver lining&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After going live, we got zero bugs. Just kidding. We did get bugs, but this time we had more confidence and fewer repositories to traverse. Our faith in Stack Overflow was restored. We could find our way within the AWS console too. We could finally sleep with fewer nightmares.&lt;/p&gt;

&lt;p&gt;If you are pondering whether you need to redesign your system or not, you have already begun your journey. The key here is to actually think, reason and finally accept. If, after careful reflection, you still believe your system is decent, congrats. But if you feel that you have made a boo-boo, like we did, congrats again: you have identified the shortcoming and are about to unravel a whole new adventure of redesigning your architecture.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Adapting to infra heavy teams</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Wed, 29 Jun 2022 08:00:35 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/adapting-to-infra-heavy-teams-15i9</link>
      <guid>https://community.ops.io/ujjavala/adapting-to-infra-heavy-teams-15i9</guid>
      <description>&lt;p&gt;It's been almost 3 years since I have been exposed to infra-heavy teams and I wanted to share my learning and experiences around this.To give some context, I have been working as a regular developer from past 8 years in an agile environment. The maximum devops that I came across in my earlier teams was triggering/re-triggering builds and maybe write a few scripts here and there. Coming to land of devops, it is a whole a different ball game. Many teams are dependent on you and you need to make sure everything is up and looks pretty.And if something goes wrong, well you would still be fine, I think.&lt;/p&gt;

&lt;p&gt;Given below are a few gotchas/heads-ups most developers will come across while getting their hands dirty with DevOps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Mapping business value and devops
&lt;/h3&gt;

&lt;p&gt;Being mostly feature/functionality focused in my previous teams, it took me a while to understand the business value infrastructure teams bring to the table. What I learnt in this time is that infrastructure teams are responsible for the developer experience, and also for app performance, which links to user experience. Whether it takes 300 ms or 30 seconds to load your site ties directly back to the infrastructure in place. Observed more acutely, these finer details are why many end users switch apps and opt for a newer, more performant one. So, to sum it up, infrastructure teams have a direct impact on the customer base.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mastering at least one editor
&lt;/h3&gt;

&lt;p&gt;If you are interested in infrastructure, be prepared to run scripts. Your life will be much easier if you master at least one editor, and there are two reasons for this. First, commands don't change, or at least the underlying commands don't. And let's admit it: instead of logging into the console and searching for the exact page where you would finally execute your task, it is far more efficient to execute the same thing via the command line. Second, you have more control over the arguments and options. I am currently trying to learn Doom Emacs (fingers crossed), but there are many other editors worth trying out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting firm grip on the tech stack
&lt;/h3&gt;

&lt;p&gt;Remember the advice that it's important to focus on concepts rather than tools/IDEs? Well, in my experience, for DevOps you need both. It helps to a great extent to be well versed with the tech stack. For example, if AWS is being used, it is beneficial to know the various AWS services and which one to use when. It also helps to know at least one of the AWS-supported languages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dashboards are your savior
&lt;/h3&gt;

&lt;p&gt;Visualizations of the infrastructure are extremely important, since they show your entire system at a glance. Making them as intuitive as possible and highlighting only relevant information is the key here. Dashboards are usually the first visual indication that something is not right, and it is really annoying to dig through a cumbersome report instead of looking directly at the high-stakes stats. So being as concise as possible is something to keep in mind while building them.&lt;/p&gt;

&lt;p&gt;Well, that is pretty much all I had to share. If you are an aspiring DevOps engineer, I hope the above nuances help make your journey an effortless one. I have been hearing this a lot: DevOps is not just a role, it's a mindset. And what I have learnt is that in order to adopt this mindset, it is crucial to be open to it and to come out of your comfort zone.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>GOing down the rabbit hole</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Wed, 29 Jun 2022 07:58:20 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/going-down-the-rabbit-hole-1j47</link>
      <guid>https://community.ops.io/ujjavala/going-down-the-rabbit-hole-1j47</guid>
      <description>&lt;p&gt;I recently upgraded a service written in golang which was deployed using Google's AppEngine and I have just one word for the experience. It was &lt;em&gt;unpleasant&lt;/em&gt;. Just to be clear, I really am all in for golang and was also impressed with how easy it is to deploy any service using gcloud. Unfortunately the ride from go1.1 to go.1.2+ was more of a roller coaster for me. Let’s take a glimpse of this three-course meal together so that we can be well prepared for the next upgrade.&lt;/p&gt;

&lt;h3&gt;
  
  
  For appetizers
&lt;/h3&gt;

&lt;p&gt;I had already run my code locally using &lt;code&gt;dev_appserver.py --enable_host_checking=no --support_datastore_emulator=yes app.yaml&lt;/code&gt;, verified the entries in the datastore, and was fairly satisfied with the code that I had written.&lt;/p&gt;

&lt;p&gt;I was all set to deploy my service on go111 when I spotted an error on my console about a private repository reference. In order to resolve this issue, I leveraged &lt;code&gt;go mod vendor&lt;/code&gt;, which copies all third-party dependencies to a vendor folder in your project root. I quickly updated my app.yaml file and specified the version there as go115. Fortunately, the reference error was resolved and I could deploy the service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here comes the main course
&lt;/h3&gt;

&lt;p&gt;Deployment with go115 succeeded, and the health endpoint worked too. I was all happy and started celebrating by updating the README.md file with emojis and refactoring the code here and there. But my happiness was short-lived when I found out that the other endpoints didn’t work.&lt;/p&gt;

&lt;p&gt;While accessing the other endpoints I got a &lt;code&gt;metadata fetch failed: metadata server returned HTTP 404&lt;/code&gt; error. I googled for a while but didn’t find the exact cause. I tried to fix the issue with a few of these &lt;a href="https://stackoverflow.com/questions/53331591/get-compute-metadata-from-app-engine-got-404-page-not-found-error"&gt;stackoverflow&lt;/a&gt; suggestions along with a few others, but didn't have any luck there. It took me almost 2 days to figure out that I had to upgrade the AppEngine version. I bumped up the version and tada... I could access the other endpoints too. &lt;/p&gt;

&lt;h3&gt;
  
  
  Finally, the dessert
&lt;/h3&gt;

&lt;p&gt;I could see the CSS and labels of the page loading, but I couldn’t see any data there. Data, without which the page was just a skeleton, without any essence. &lt;/p&gt;

&lt;p&gt;I went back to the logs and found yet another error popping up: &lt;code&gt;internal.flushLog: Flush RPC: Call error 7: App Engine APIs are not enabled, please add app_engine_apis: true to your app.yaml&lt;/code&gt;. The error made complete sense to me and I did just what it suggested. I added the flag in the app.yaml file and quickly redeployed the app.&lt;/p&gt;
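&lt;p&gt;For reference, after both fixes the relevant part of the app.yaml would look roughly like this (a sketch; service name, handlers and environment variables omitted):&lt;/p&gt;

```yaml
# Runtime bumped from go111 to go115, with the legacy
# App Engine bundled APIs explicitly enabled
runtime: go115
app_engine_apis: true
```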

&lt;p&gt;And tada... the endpoint does &lt;em&gt;not&lt;/em&gt; have any data. &lt;/p&gt;

&lt;p&gt;I was again lost amidst suggestions and comments, and after navigating through all the pages in Google (10 to be precise), I found nothing. &lt;/p&gt;

&lt;p&gt;I got a hunch that maybe it was again related to some other upgrade, and since it had something to do with data, I upgraded the datastore client. I imported the datastore from &lt;code&gt;cloud.google.com/go/datastore&lt;/code&gt; instead of &lt;code&gt;google.golang.org/appengine&lt;/code&gt; and made the code compatible, since the APIs were a bit different (I found &lt;a href="https://xebia.com/blog/migrating-app-engine-to-go-1-11-the-price-of-vendor-lock-in/"&gt;this&lt;/a&gt; and &lt;a href="https://cloud.google.com/datastore/docs/reference/libraries#client-libraries-install-go"&gt;this&lt;/a&gt; really helpful). I deployed the code to the dev environment and finally… the fix worked beautifully and I could see the data there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mint anyone?
&lt;/h3&gt;

&lt;p&gt;A few findings off the top of my head:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Though the code was deployed successfully, I could not test the appengine-datastore setup on my local machine. For the standard environment, there is no documentation available for a local setup for go1.12+ versions. Before the upgrade, I had referred to &lt;a href="https://cloud.google.com/appengine/docs/standard/go111/tools/local-devserver-command"&gt;gcloud’s official document&lt;/a&gt;, but this didn’t work for go1.12+ versions.&lt;/li&gt;
&lt;li&gt;Testing just the AppEngine app locally was difficult for go1.12+ versions. I observed that even the &lt;a href="https://cloud.google.com/appengine/docs/standard/go/testing-and-deploying-your-app"&gt;document&lt;/a&gt; recommends doing just a &lt;code&gt;go run&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Running just the datastore emulator is possible using &lt;code&gt;gcloud beta emulators datastore start&lt;/code&gt;. But again, this is not very helpful if you need AppEngine to run too.&lt;/li&gt;
&lt;li&gt;There were many incompatibility issues between AppEngine and datastore, even if you haven’t upgraded yet. To test appengine, the recommended solution is to use &lt;code&gt;GOOGLE_CLOUD_PROJECT=&amp;lt;projectId&amp;gt; &amp;lt;project_folder_path&amp;gt;&lt;/code&gt;. But this is incompatible if you are using datastore; for datastore, you would need the deprecated &lt;code&gt;dev_appserver.py&lt;/code&gt; way to test things out.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I feel it would have been a lot smoother if the steps for local development were better articulated and the error messages in gcloud were intuitive (they were really misleading). A few things worked for me and a few didn’t. But that was just my experience, which is very subjective by the way; not everyone will be running into these issues on a daily basis.&lt;/p&gt;

&lt;p&gt;What we can hope for is that, if any one of us does stumble upon these issues, we know what our modus operandi is going to be and exactly how we are going to get our peace of mind back.&lt;/p&gt;

&lt;p&gt;Keep calm and happy coding!&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>random</category>
      <category>github</category>
    </item>
    <item>
      <title>Integrating Hashicorp vault with AWS and Keycloak</title>
      <dc:creator>ujjavala</dc:creator>
      <pubDate>Wed, 29 Jun 2022 07:56:32 +0000</pubDate>
      <link>https://community.ops.io/ujjavala/integrating-hashicorp-vault-with-aws-and-keycloak-4pfm</link>
      <guid>https://community.ops.io/ujjavala/integrating-hashicorp-vault-with-aws-and-keycloak-4pfm</guid>
      <description>&lt;p&gt;I built a Java-based identity service recently, where I had created a customised vault provider using Keycloak’s vault SPI and although Keycloak does offer support for a few vaults, the need to have a customised vault emerged from the requirement of using Hashicorp Vault within the company. &lt;/p&gt;

&lt;p&gt;The vault provider was responsible for storing Keycloak secrets like realm ids, LDAP credentials, external tokens, etc., and since our infrastructure was set up in AWS, we had to follow extra authentication steps to get the system working. &lt;/p&gt;

&lt;p&gt;Let's go through various events needed for this synergy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating Hashicorp Vault with Keycloak
&lt;/h3&gt;

&lt;p&gt;In order to have a custom provider, you need to extend one of the SPIs in Keycloak. I used the Vault SPI for the provider, as shown in the snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class HashicorpVaultProvider implements VaultProvider {

    @Override
    public VaultRawSecret obtainSecret(String secretName) {
        try {
            logger.info("setting up vault service");
            vaultService.setVaultConfig();

            logger.info(String.format("obtaining secret:%s", secretName));
            return DefaultVaultRawSecret.forBuffer(Optional.of(ByteBuffer.wrap(readSecretFromVault(secretName, "path").getBytes())));
        } catch (VaultException | JsonProcessingException e) {
            logger.info(String.format("caught vault exception while obtaining secret:%s", secretName));
            e.printStackTrace();
        }
        return DefaultVaultRawSecret.forBuffer(Optional.empty());
    }


    @Override
    public void close() {
        // Auto-generated method stub
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every provider has a factory associated with it, which you need to extend and override to make it your own 😉. Given below is the code of the vault factory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class HashicorpVaultProviderFactory implements 

    public HashicorpVaultProviderFactory() {
        // Keycloak expects noargs constructor
    }

    @Override
    public VaultProvider create(KeycloakSession session) {
        VaultService service = new VaultService(vaultUrl);
        return new HashicorpVaultProvider(session.getContext().getRealm().getName(), service);
    }

    @Override
    public void init(Scope config) {
        vaultUrl = Constants.VAULT_URL;
        logger.info("Init Hashicorp: " + vaultUrl);
    }

    @Override
    public void postInit(KeycloakSessionFactory factory) {
        // Auto-generated method stub

    }

    @Override
    public void close() {
        // Auto-generated method stub

    }

    @Override
    public String getId() {
        return VAULT_PROVIDER_ID;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
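&lt;p&gt;One detail worth noting: Keycloak discovers provider factories through Java’s standard ServiceLoader mechanism, so the provider jar also needs a registration file on its classpath naming the factory class (the package &lt;code&gt;com.example.vault&lt;/code&gt; below is hypothetical):&lt;/p&gt;

```
# src/main/resources/META-INF/services/org.keycloak.vault.VaultProviderFactory
com.example.vault.HashicorpVaultProviderFactory
```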



&lt;h3&gt;
  
  
  Authentication for the Vault with AWS
&lt;/h3&gt;

&lt;p&gt;The next step is to enable authentication for your vault. The AWS auth method has two authentication types: IAM and EC2. You can use either of them; I used IAM for my use case. For more information, do skim &lt;a href="https://www.vaultproject.io/docs/auth/aws"&gt;this&lt;/a&gt; page. &lt;/p&gt;

&lt;p&gt;In the case of IAM auth, you leverage the AWS Signature Version 4 signing process, and you need an additional &lt;code&gt;X-Vault-AWS-IAM-Server-ID&lt;/code&gt; header to avoid certain types of replay attacks.&lt;/p&gt;

&lt;p&gt;This is what the sample snippets look like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up vault config
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public void setVaultConfig() throws VaultException, JsonProcessingException {
        final VaultConfig vaultConfig = new VaultConfig().address(vaultUrl).token(obtainToken())
                .openTimeout(5).readTimeout(30)
                .sslConfig(new SslConfig().build())
                .engineVersion(1).build();
        logger.info("updated vault config");
        vault = new Vault(vaultConfig);
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;For obtaining the token:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    public String obtainToken() throws VaultException, JsonProcessingException {
        final VaultConfig vaultConfig = new VaultConfig().address(vaultUrl).build();
        vault = new Vault(vaultConfig);
        logger.info("creating default vault config");

        String iamRequestUrl = Base64.getEncoder().encodeToString(IAM_REQUEST_URL.getBytes());
        String iamRequestBody = Base64.getEncoder().encodeToString(IAM_REQUEST_BODY.getBytes());
        String iamRequestHeaders = Base64.getEncoder().encodeToString(obtainIamRequestHeaders().getBytes());
        logger.info("getting response from auth");
        AuthResponse response = vault.auth().loginByAwsIam("readonly-secrets",
                iamRequestUrl,
                iamRequestBody,
                iamRequestHeaders,
                null);
        logger.info("successfully authenticated");
        return response.getAuthClientToken();
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Getting IAM headers
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private String obtainIamRequestHeaders() throws JsonProcessingException {
        DefaultRequest&amp;lt;?&amp;gt; request = getSignableRequest();
        InstanceProfileCredentialsProvider credentialsProvider = new InstanceProfileCredentialsProvider(false);
        AWSCredentials awsCredentials = credentialsProvider.getCredentials();
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName(DEFAULT_SERVICE_NAME);
        signer.setRegionName(DEFAULT_REGION);
        signer.sign(request, awsCredentials);
        try {
            credentialsProvider.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

        return new ObjectMapper().writeValueAsString(request.getHeaders());
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Getting a signable request
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private DefaultRequest&amp;lt;?&amp;gt; getSignableRequest() {
        DefaultRequest&amp;lt;?&amp;gt; request = new DefaultRequest&amp;lt;&amp;gt;(DEFAULT_SERVICE_NAME);
        Map&amp;lt;String, String&amp;gt; headers = new HashMap&amp;lt;&amp;gt;();
        headers.put("User-Agent", "identity-service");
        headers.put("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
        headers.put("X-Vault-AWS-IAM-Server-ID", VAULT_FQDN);
        try {
            request.setEndpoint(new URI(IAM_REQUEST_URL));
        } catch (URISyntaxException e) {
            e.printStackTrace();
        }
        request.setHttpMethod(HttpMethodName.POST);
        request.setHeaders(headers);
        return request;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once your vault config is set, you will be ready to read and write vault values through the custom Keycloak provider created earlier 🥂&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>random</category>
    </item>
  </channel>
</rss>
