The Ops Community ⚙️

Erik Lundevall Zara

Posted on • Edited on • Originally published at cloudgnosis.org

How to Go with Pulumi YAML

This text is a continuation of sorts of the article “A tale of two tools - Pulumi and AWS CDK”, although it will mainly focus on Pulumi, specifically its YAML and Go support.

Pulumi YAML support recently became generally available, which was announced at Pulumi Cloud Engineering Days 2022. I will use this support to rewrite one solution from “A tale of two tools - Pulumi and AWS CDK” in YAML.

I will then use this YAML to generate a Go version of the code automatically and deploy that version as well.

AWS CDK has a somewhat similar feature to Pulumi YAML, but it does not have the code conversion capability of Pulumi (YAML to Go, in this case). That will be a topic for a separate post. In this one, we will focus on Pulumi.

Introduction to Pulumi YAML

Earlier this year at the PulumiUP virtual conference, Pulumi announced support for YAML as an additional language. This may have come as a surprise to some people, given that Pulumi has strongly advocated using regular programming languages to define infrastructure.

This has not really changed. I believe, though, that Pulumi recognized that not every person who needs to work with infrastructure as software will be a skilled developer, and they do not need to be one either.

The programming language support allows for building good abstractions and interfaces that others can consume efficiently. The consumption does not need to happen in a programming language, though. This is where Pulumi YAML comes in.

In Pulumi YAML, you can refer to the same resources as you would in a regular programming language, both low-level components provided by the cloud provider and higher-level components. These can be official or 3rd party components, or components that you or your organisation have developed.

Pulumi YAML lives in the Pulumi.yaml project file, together with the project configuration. The built-in support covers just a single YAML file, which reflects the priorities and intentions behind the YAML support - a non-programming-language interface to suitable abstractions.

I think this is a good constraint at this point. Keep it simple, and only if there is an actual need after customer feedback, then look at making it more complex or capable.
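As a taste of the format, a minimal Pulumi.yaml program needs only a few top-level sections: project metadata, optional configuration, resources, and outputs. The bucket resource below is just an illustrative example, not part of the solution in this article:

```yaml
name: minimal-example
runtime: yaml
description: A minimal Pulumi YAML program (illustrative sketch)
configuration:
    greeting:
        type: String
        default: hello
resources:
    # Any registered resource type can be referenced here
    bucket:
        type: aws:s3:Bucket
outputs:
    # ${...} interpolation refers to resources and configuration
    bucketName: ${bucket.id}
```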

Define app solution in YAML

I used the Typescript-based solution with Pulumi Crosswalk for AWS from “A tale of two tools - Pulumi and AWS CDK” as the starting point for my Pulumi YAML solution.

This was a straightforward process. The syntax is similar, so it was largely a matter of copy and paste, with some tweaks.

The main differences were in defining reusable constants and in using some pre-defined constant references. For the reusable constants, I used the project configuration feature and defined three typed configuration settings with default values.

For the pre-defined constants, I simply had to check what value was mapped there. I had installed the Pulumi YAML extension for Visual Studio Code, which helped by providing type checking, some auto-completion, and inline documentation. The extension also checks for invalid references, which was quite helpful.

Thanks to the similarities between the Typescript code and the YAML, and the support of the Pulumi YAML extension in the editor, it did not take much time to write the YAML version (and it was even faster the second time - more about that later...).

The resulting Pulumi.yaml looks like this:

name: ias-pulumi-yaml
description: A test solution with Pulumi YAML
runtime: yaml
configuration:
    port:
        type: Number
        default: 80
    cpu:
        type: Number
        default: 512
    memory:
        type: Number
        default: 1024
resources:
    vpc:
        type: awsx:ec2:Vpc
        properties:
            numberOfAvailabilityZones: 2
            natGateways:
                strategy: Single

    # An ECS cluster to deploy into
    cluster:
        type: aws:ecs:Cluster

    # An ECR repository for the app image
    repo:
        type: awsx:ecr:Repository
    # Build and publish the image to ECR
    image:
        type: awsx:ecr:Image
        properties:
            repositoryUrl: ${repo.url}
            path: ./my-image

    lbsg:
        type: aws:ec2:SecurityGroup
        properties:
            vpcId: ${vpc.vpcId}
            ingress:
                - fromPort: ${port}
                  toPort: ${port}
                  protocol: tcp
                  cidrBlocks:
                    - "0.0.0.0/0"
            egress:
                - fromPort: 0
                  toPort: 0
                  protocol: "-1"
                  cidrBlocks:
                    - "0.0.0.0/0"
    # An ALB to serve the container endpoint to the internet
    loadbalancer:
        type: awsx:lb:ApplicationLoadBalancer
        properties:
            subnetIds: ${vpc.publicSubnetIds}
            securityGroups:
                - ${lbsg.id}

    containersg:
        type: aws:ec2:SecurityGroup
        properties:
            vpcId: ${vpc.vpcId}
            ingress:
                - fromPort: ${port}
                  toPort: ${port}
                  protocol: tcp
                  securityGroups:
                    - ${lbsg.id}
            egress:
                - fromPort: 0
                  toPort: 0
                  protocol: "-1"
                  cidrBlocks:
                    - "0.0.0.0/0"

    # Deploy an ECS Service on Fargate to host the application container
    service:
        type: awsx:ecs:FargateService
        properties:
            cluster: ${cluster.arn}
            taskDefinitionArgs:
                container:
                    image: ${image.imageUri}
                    cpu: ${cpu}
                    memory: ${memory}
                    essential: true
                    portMappings:
                        - containerPort: ${port}
                          targetGroup: ${loadbalancer.defaultTargetGroup}
            networkConfiguration:
                subnets: ${vpc.privateSubnetIds}
                securityGroups:
                    - ${containersg.id}
            deploymentCircuitBreaker:
                enable: true
                rollback: true
outputs:
    # The URL at which the container's HTTP endpoint will be available
    url: http://${loadbalancer.loadBalancer.dnsName}


The setup deployed right away. That probably would not have been the case without the Pulumi YAML extension, so I am happy I had it installed.

I think the result is quite readable and, since it can use higher-level components, much shorter than the corresponding CloudFormation would be (about 400 lines, if you are wondering).

The next task was to create a Go version of the solution.

Converting YAML to Go

The Pulumi CLI has a convert command, which does just that - it converts a YAML-based solution to any of the other supported languages. By default, it uses the Pulumi.yaml file in the current directory.

Now, in the back of my head, I was thinking: will it destroy my YAML config, since a Go version of Pulumi.yaml would not contain any YAML definitions? Should I back up my YAML, or commit the latest changes to Git?

Despite these concerns, I YOLOed (You Only Live Once) and ran


pulumi convert --language go --generate-only


I noticed that the Pulumi.yaml in my editor suddenly looked much emptier... Yes, it had overwritten my YAML solution, and I had not backed up the data.

Luckily, it was a fairly small solution, so it did not take long to rewrite the YAML version. Maybe I will learn my lesson for next time. Or maybe Pulumi will add a safety guard prompt.

The resulting Go code looks like this:

package main

import (
    "fmt"

    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ecs"
    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/lb"
    "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ec2"
    "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecr"
    "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecs"
    "github.com/pulumi/pulumi-awsx/sdk/go/awsx/lb"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        cfg := config.New(ctx, "")
        port := float64(80)
        if param := cfg.GetFloat64("port"); param != 0 {
            port = param
        }
        cpu := float64(512)
        if param := cfg.GetFloat64("cpu"); param != 0 {
            cpu = param
        }
        memory := float64(1024)
        if param := cfg.GetFloat64("memory"); param != 0 {
            memory = param
        }
        vpc, err := ec2.NewVpc(ctx, "vpc", &ec2.VpcArgs{
            NumberOfAvailabilityZones: 2,
            NatGateways: &ec2.NatGatewayConfigurationArgs{
                Strategy: ec2.NatGatewayStrategySingle,
            },
        })
        if err != nil {
            return err
        }
        cluster, err := ecs.NewCluster(ctx, "cluster", nil)
        if err != nil {
            return err
        }
        repo, err := ecr.NewRepository(ctx, "repo", nil)
        if err != nil {
            return err
        }
        image, err := ecr.NewImage(ctx, "image", &ecr.ImageArgs{
            RepositoryUrl: repo.Url,
            Path:          pulumi.String("./my-image"),
        })
        if err != nil {
            return err
        }
        lbsg, err := ec2.NewSecurityGroup(ctx, "lbsg", &ec2.SecurityGroupArgs{
            VpcId: vpc.VpcId,
            Ingress: ec2.SecurityGroupIngressArray{
                &ec2.SecurityGroupIngressArgs{
                    FromPort: pulumi.Float64(port),
                    ToPort:   pulumi.Float64(port),
                    Protocol: pulumi.String("tcp"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
            Egress: ec2.SecurityGroupEgressArray{
                &ec2.SecurityGroupEgressArgs{
                    FromPort: pulumi.Int(0),
                    ToPort:   pulumi.Int(0),
                    Protocol: pulumi.String("-1"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
        })
        if err != nil {
            return err
        }
        loadbalancer, err := lb.NewApplicationLoadBalancer(ctx, "loadbalancer", &lb.ApplicationLoadBalancerArgs{
            SubnetIds: vpc.PublicSubnetIds,
            SecurityGroups: pulumi.StringArray{
                lbsg.ID(),
            },
        })
        if err != nil {
            return err
        }
        containersg, err := ec2.NewSecurityGroup(ctx, "containersg", &ec2.SecurityGroupArgs{
            VpcId: vpc.VpcId,
            Ingress: ec2.SecurityGroupIngressArray{
                &ec2.SecurityGroupIngressArgs{
                    FromPort: pulumi.Float64(port),
                    ToPort:   pulumi.Float64(port),
                    Protocol: pulumi.String("tcp"),
                    SecurityGroups: pulumi.StringArray{
                        lbsg.ID(),
                    },
                },
            },
            Egress: ec2.SecurityGroupEgressArray{
                &ec2.SecurityGroupEgressArgs{
                    FromPort: pulumi.Int(0),
                    ToPort:   pulumi.Int(0),
                    Protocol: pulumi.String("-1"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
        })
        if err != nil {
            return err
        }
        _, err = ecs.NewFargateService(ctx, "service", &ecs.FargateServiceArgs{
            Cluster: cluster.Arn,
            TaskDefinitionArgs: &ecs.FargateServiceTaskDefinitionArgs{
                Container: &ecs.TaskDefinitionContainerDefinitionArgs{
                    Image:     image.ImageUri,
                    Cpu:       pulumi.Float64(cpu),
                    Memory:    pulumi.Float64(memory),
                    Essential: pulumi.Bool(true),
                    PortMappings: []ecs.TaskDefinitionPortMappingArgs{
                        &ecs.TaskDefinitionPortMappingArgs{
                            ContainerPort: pulumi.Float64(port),
                            TargetGroup:   loadbalancer.DefaultTargetGroup,
                        },
                    },
                },
            },
            NetworkConfiguration: &ecs.ServiceNetworkConfigurationArgs{
                Subnets: vpc.PrivateSubnetIds,
                SecurityGroups: pulumi.StringArray{
                    containersg.ID(),
                },
            },
            DeploymentCircuitBreaker: &ecs.ServiceDeploymentCircuitBreakerArgs{
                Enable:   pulumi.Bool(true),
                Rollback: pulumi.Bool(true),
            },
        })
        if err != nil {
            return err
        }
        ctx.Export("url", loadbalancer.LoadBalancer.ApplyT(func(loadBalancer *lb.LoadBalancer) (string, error) {
            return fmt.Sprintf("http://%v", loadBalancer.DnsName), nil
        }).(pulumi.StringOutput))
        return nil
    })
}


This looks nice! It did not include my comments from the YAML definition, though, which would have been nice. Unfortunately, the code does not quite compile with the Pulumi CLI version I used (3.46.0), and I had to make some tweaks:

  • Use the newest version of the AWSX SDK, which was not included in the generated code.

  • Change a literal value 2 to pulumi.IntRef(2)

  • The generated export of the load balancer URL did not provide the expected result

  • Change references to pulumi.Float64() to pulumi.Int()

The latter is because the type information for YAML configuration only supports Number, not Integer - likely because Pulumi YAML currently supports only the data types that YAML itself supports.
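As an aside, the lookup pattern the converter emits - treat a zero value from the config as "not set" and fall back to the default - can be sketched in plain Go, without the Pulumi SDK (the helper name here is mine, not the SDK's):

```go
package main

import "fmt"

// getIntOr mimics the generated pattern: cfg.GetInt returns the zero
// value when the key is unset, so zero triggers the default instead.
// Note the trade-off: with this pattern, an explicitly configured 0
// cannot be distinguished from "not set".
func getIntOr(got, def int) int {
	if got != 0 {
		return got
	}
	return def
}

func main() {
	port := getIntOr(0, 80)   // unset -> falls back to default
	cpu := getIntOr(256, 512) // explicitly set -> kept
	fmt.Println(port, cpu)
}
```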

Not a perfect conversion, but it goes a long way - much simpler than writing the code from scratch, and the tweaks were straightforward.
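The pulumi.IntRef tweak reflects a general Go pattern: optional SDK fields are pointer-typed so that nil can mean "use the provider default", and since Go cannot take the address of a literal like 2, the SDK ships small helper functions. A standalone sketch, with stand-in types of my own rather than the real SDK structs:

```go
package main

import "fmt"

// intRef mirrors what pulumi.IntRef does: copy the value into a local
// variable and return its address, since &2 is not legal Go.
func intRef(v int) *int {
	return &v
}

// vpcArgs stands in for an SDK args struct with an optional field;
// a nil pointer means "not set, use the provider default".
type vpcArgs struct {
	NumberOfAvailabilityZones *int
}

func main() {
	args := vpcArgs{NumberOfAvailabilityZones: intRef(2)}
	fmt.Println(*args.NumberOfAvailabilityZones)
}
```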

The tweaked version, which is almost the same, looks like this:

package main

import (
    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
    "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ecs"
    ec2x "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ec2"
    ecrx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecr"
    ecsx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecs"
    lbx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/lb"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        cfg := config.New(ctx, "")
        port := 80
        if param := cfg.GetInt("port"); param != 0 {
            port = param
        }
        cpu := 512
        if param := cfg.GetInt("cpu"); param != 0 {
            cpu = param
        }
        memory := 1024
        if param := cfg.GetInt("memory"); param != 0 {
            memory = param
        }

        vpc, err := ec2x.NewVpc(ctx, "vpc", &ec2x.VpcArgs{
            NumberOfAvailabilityZones: pulumi.IntRef(2),
            NatGateways: &ec2x.NatGatewayConfigurationArgs{
                Strategy: ec2x.NatGatewayStrategySingle,
            },
        })
        if err != nil {
            return err
        }
        cluster, err := ecs.NewCluster(ctx, "cluster", nil)
        if err != nil {
            return err
        }
        repo, err := ecrx.NewRepository(ctx, "repo", nil)
        if err != nil {
            return err
        }
        image, err := ecrx.NewImage(ctx, "image", &ecrx.ImageArgs{
            RepositoryUrl: repo.Url,
            Path:          pulumi.String("./my-image"),
        })
        if err != nil {
            return err
        }
        lbsg, err := ec2.NewSecurityGroup(ctx, "lbsg", &ec2.SecurityGroupArgs{
            VpcId: vpc.VpcId,
            Ingress: ec2.SecurityGroupIngressArray{
                &ec2.SecurityGroupIngressArgs{
                    FromPort: pulumi.Int(port),
                    ToPort:   pulumi.Int(port),
                    Protocol: pulumi.String("tcp"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
            Egress: ec2.SecurityGroupEgressArray{
                &ec2.SecurityGroupEgressArgs{
                    FromPort: pulumi.Int(0),
                    ToPort:   pulumi.Int(0),
                    Protocol: pulumi.String("-1"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
        })
        if err != nil {
            return err
        }
        loadbalancer, err := lbx.NewApplicationLoadBalancer(ctx, "loadbalancer", &lbx.ApplicationLoadBalancerArgs{
            SubnetIds: vpc.PublicSubnetIds,
            SecurityGroups: pulumi.StringArray{
                lbsg.ID(),
            },
        })
        if err != nil {
            return err
        }
        containersg, err := ec2.NewSecurityGroup(ctx, "containersg", &ec2.SecurityGroupArgs{
            VpcId: vpc.VpcId,
            Ingress: ec2.SecurityGroupIngressArray{
                &ec2.SecurityGroupIngressArgs{
                    FromPort: pulumi.Int(port),
                    ToPort:   pulumi.Int(port),
                    Protocol: pulumi.String("tcp"),
                    SecurityGroups: pulumi.StringArray{
                        lbsg.ID(),
                    },
                },
            },
            Egress: ec2.SecurityGroupEgressArray{
                &ec2.SecurityGroupEgressArgs{
                    FromPort: pulumi.Int(0),
                    ToPort:   pulumi.Int(0),
                    Protocol: pulumi.String("-1"),
                    CidrBlocks: pulumi.StringArray{
                        pulumi.String("0.0.0.0/0"),
                    },
                },
            },
        })
        if err != nil {
            return err
        }
        _, err = ecsx.NewFargateService(ctx, "service", &ecsx.FargateServiceArgs{
            Cluster: cluster.Arn,
            TaskDefinitionArgs: &ecsx.FargateServiceTaskDefinitionArgs{
                Container: &ecsx.TaskDefinitionContainerDefinitionArgs{
                    Image:     image.ImageUri,
                    Cpu:       pulumi.Int(cpu),
                    Memory:    pulumi.Int(memory),
                    Essential: pulumi.Bool(true),
                    PortMappings: ecsx.TaskDefinitionPortMappingArray{
                        &ecsx.TaskDefinitionPortMappingArgs{
                            ContainerPort: pulumi.Int(port),
                            TargetGroup:   loadbalancer.DefaultTargetGroup,
                        },
                    },
                },
            },
            NetworkConfiguration: &ecs.ServiceNetworkConfigurationArgs{
                Subnets: vpc.PrivateSubnetIds,
                SecurityGroups: pulumi.StringArray{
                    containersg.ID(),
                },
            },
            DeploymentCircuitBreaker: &ecs.ServiceDeploymentCircuitBreakerArgs{
                Enable:   pulumi.Bool(true),
                Rollback: pulumi.Bool(true),
            },
        })
        if err != nil {
            return err
        }
        ctx.Export("url", pulumi.Sprintf("http://%s", loadbalancer.LoadBalancer.DnsName()))
        return nil
    })
}


This version deployed properly. I also tried converting from YAML to Typescript and to Python; neither worked right away either, and some tweaks were needed.

Final notes

I really like both the YAML language option and the pulumi convert command. I think the YAML support is a nice approach if you want to start out with a simpler interface. In combination with the convert command, it can also be a stepping stone towards using programming languages with Pulumi.

The conversion was not perfect out of the box, but it goes a long way. Pulumi YAML only recently became generally available, and Crosswalk for AWS, which I use here, has not yet reached its multi-language 1.0 release. So some glitches are not surprising.

Have you tried these yourself, and what is your experience with these tools?
