<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: Ivan</title>
    <description>The latest articles on The Ops Community ⚙️ by Ivan (@ivancernja).</description>
    <link>https://community.ops.io/ivancernja</link>
    <image>
      <url>https://community.ops.io/images/lWtIcYYo6VDPIUDyHmUfg_hlfuvxqi82thCEGnZZqKY/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS8xMzgwLzg1/ZTM3NDljLTI0YWUt/NGY0Yy04NDYzLWZk/MTAwYmVjYzY2Mi5q/cGVn</url>
      <title>The Ops Community ⚙️: Ivan</title>
      <link>https://community.ops.io/ivancernja</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/ivancernja"/>
    <language>en</language>
    <item>
      <title>It's time to rethink how we use virtualization in backends</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Thu, 27 Oct 2022 20:14:32 +0000</pubDate>
      <link>https://community.ops.io/ivancernja/its-time-to-rethink-how-we-use-virtualization-in-backends-g34</link>
      <guid>https://community.ops.io/ivancernja/its-time-to-rethink-how-we-use-virtualization-in-backends-g34</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TLDR&lt;/strong&gt;&lt;br&gt;
Virtual machines and containers have improved backends in a lot of ways, but over time they have also created a lot of problems. We believe it's time to rethink how we use virtualization for backend development.&lt;/p&gt;

&lt;p&gt;We're building a backend framework that shifts the scope of virtualization from processes down to service components.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In web applications nowadays, you can place any component somewhere on a broad spectrum from client-side to server-side.&lt;/p&gt;

&lt;p&gt;On the client-side, there's everything that runs on people's devices, most likely a browser or an app. On the server-side, there's everything that runs in the cloud. That includes databases, authentication management, batch jobs, event handling, etc.&lt;/p&gt;

&lt;p&gt;Each web framework fits squarely somewhere on that spectrum. React, the most popular front-end web framework out there, is wholly client-side. Express, one of the most popular backend web frameworks, is wholly server-side.&lt;/p&gt;

&lt;p&gt;The client side has historically been dominated by JavaScript frameworks. This is not surprising, since every client ships with a powerful JavaScript engine, and that is the most natural way to make a web page interactive.&lt;/p&gt;

&lt;p&gt;On the server-side, things are more fragmented. This is also not surprising: backend services are just plain native processes that use their environment's network stack to respond to requests. And there is a world of different ways to write and run these: literally the history of computing.&lt;/p&gt;

&lt;p&gt;As in many other scenarios in software engineering and computer science, this huge free space of options is also the cause of a lot of problems. To understand why, we need to talk about containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers are a solution and a problem
&lt;/h2&gt;

&lt;p&gt;On its way to settling into its current standards, the cloud - epitomized by AWS - has evolved massively over the past decade. My co-founder has written a &lt;a href="https://www.shuttle.rs/blog/2022/05/09/ifc"&gt;post on this&lt;/a&gt; previously.&lt;/p&gt;

&lt;p&gt;Today we, as software engineers, deal with it as it is: the result of incremental changes on top of a status quo. And it is not ideal.&lt;/p&gt;

&lt;p&gt;What starts life as physical machines in a data center gets split up into tens, sometimes hundreds, of virtual machines in the AWS console. But VMs are heavy, slow to start, and difficult to run side by side in large numbers without wasting resources like RAM and storage.&lt;/p&gt;

&lt;p&gt;Then containers came along. Building on the Linux kernel's namespacing features, they made images smaller and runtimes more efficient than VMs. The genius of the approach is to move the virtualization layer from the hardware - where the kernel itself runs virtualized - to the software - where only processes run "virtualized". With containers, virtualized processes run natively in the host kernel, like any other, except that their I/O is carefully kept segregated from the rest of the host system. Any bit of compiled code that is executable on the host can run in a container. And you can run processes in a container without a separate boot sequence or a full-fledged virtualized operating system, with its own heavy machinery like a scheduler and dedicated virtualized hardware.&lt;/p&gt;

&lt;p&gt;Containers are actually much older than many people realise, going as far back as 2008 with LXC in Linux’s case (and even further in FreeBSD’s case, with jails). Their popularity, however, really took off with the arrival of Docker. Docker's execution was so good that it took over software engineering practices for the following decade, and it is still the gold standard today in terms of usage.&lt;/p&gt;

&lt;p&gt;Of course, companies were quick to build products on top of containers, essentially passing the benefits of containers through to their paying customers. Heroku is one of the most notable examples. And while containers delivered most of us, directly or indirectly, from having to deal with VMs as a unit of deployment, they certainly have their issues. The biggest one is their size.&lt;/p&gt;

&lt;p&gt;VMs have to run an entire operating system, containers don't. So they're quite a lot smaller. But container images still have to contain enough userspace to make the things you want to run actually runnable. For the way most people use them in deployments of web apps, this is generally still quite a lot!&lt;/p&gt;

&lt;p&gt;The heavier your containers are, the more difficult everything else becomes. They take longer to build, they need more resources to run, they are more expensive to store, etc.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://shuttle.rs/"&gt;shuttle&lt;/a&gt; we're convinced that a lot of the pains experienced by software engineers in the post-Docker world can be traced back to that very simple statement: containers are often too heavy for the job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing containers
&lt;/h2&gt;

&lt;p&gt;You're probably thinking: it's nice and optimistic to say containers are too heavy, but what do you replace them with?&lt;/p&gt;

&lt;p&gt;Well, first, you avoid making the same mistake Docker made: if you make the scope of virtualization too broad, you end up with the same result as containers. The root cause of containers' heavy weight is that they were built for too many use cases. They layer virtualization on top of &lt;em&gt;all&lt;/em&gt; the I/O of a native Linux process: their use case is just about anything that runs.&lt;/p&gt;

&lt;p&gt;We’re concerned with the backend services most people write. These are HTTP request/response handlers, with or without state. And for that specific use case, most projects just end up worse off handing over backend services as container images to their deployment platform of choice.&lt;/p&gt;

&lt;p&gt;So we need to restrict the scope of virtualization to something more specific to web app backends. This is a trade-off, of course, like most things in software engineering. By restricting the scope of a tool, you lose the ability to do certain things. But like most of these trade-offs, you are usually better served by erring on the side of simplicity unless you have specific needs that require extra complexity. In other words: use heavy machinery when you actually have a need for it, not before.&lt;/p&gt;

&lt;p&gt;Where does that leave us then? We need a new take on virtualization. One that has, perhaps, simplified I/Os and is engineered for backend services. Thankfully, we don't have to invent most of that wheel: let's talk about WASI.&lt;/p&gt;

&lt;h2&gt;
  
  
  WASM and WASI
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If WASM+WASI existed in 2008, we wouldn't have needed to created Docker. That's how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let's hope WASI is up to the task! &lt;a href="https://t.co/wnXQg4kwa4"&gt;https://t.co/wnXQg4kwa4&lt;/a&gt;&lt;/p&gt;— Solomon Hykes (@solomonstre) &lt;a href="https://twitter.com/solomonstre/status/1111004913222324225?ref_src=twsrc%5Etfw"&gt;March 27, 2019&lt;/a&gt;
&lt;/blockquote&gt; 

&lt;p&gt;&lt;a href="http://webassembly.org/"&gt;WebAssembly&lt;/a&gt; (abbreviated WASM) is an instruction set for extremely lightweight virtual machines. Its most common use is to speed up client-side interactivity. This is made possible as popular browsers have rolled out WASM runtimes a few years back.&lt;/p&gt;

&lt;p&gt;WASM is made for fast sandboxing. However, without any extension, it is unable to perform even simple I/O operations like reading data from a file descriptor. This is not a big deal if WASM is used &lt;em&gt;in the browser&lt;/em&gt; - we definitely don't want browsers freely handing file system access to web apps. But it is a serious limitation if WASM is to be used server-side - how are you going to serve endpoints without I/O?&lt;/p&gt;

&lt;p&gt;Therefore, the introduction of WASM was followed, a short while later, by WASI - the &lt;a href="https://wasi.dev"&gt;WebAssembly System Interface&lt;/a&gt;. WASI is a standard API to give WASM code the ability to do system-level I/O. This allows WASM code running in a WASI-compliant runtime to do a lot of what a native process can do through syscalls.&lt;/p&gt;

&lt;p&gt;The really powerful thing about WASM is that it is a very common compilation target. Major languages (and commonly associated frameworks) now support building WASM as a target, just the same way you build for amd64 or arm. And a lot of standard libraries have added support for WASI-based I/Os.&lt;/p&gt;
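&lt;p&gt;As a rough illustration - a hedged, std-only sketch, not shuttle code - the following Rust program compiles unchanged for a native target or, assuming the target is installed (&lt;code&gt;rustup target add wasm32-wasi&lt;/code&gt;), for WASI via &lt;code&gt;cargo build --target wasm32-wasi&lt;/code&gt;:&lt;/p&gt;

```rust
use std::fs;

// The same std calls compile natively (where they become syscalls) and to
// wasm32-wasi (where they become WASI imports such as `fd_write`).
fn round_trip(msg: &str) -> String {
    let path = std::env::temp_dir().join("wasi-demo.txt");
    fs::write(&path, msg).expect("write failed");
    fs::read_to_string(&path).expect("read failed")
}

fn main() {
    let echoed = round_trip("hello from a portable binary");
    println!("round-tripped {} bytes", echoed.len());
}
```

&lt;p&gt;The resulting module can then be executed by any WASI-compliant runtime (wasmtime, for example), subject to the directory permissions that runtime grants.&lt;/p&gt;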

&lt;p&gt;This is what Docker's founder had to say about WASI, back in 2019. And we agree with them. At the end of the day containers are, really, just I/O-level virtualization. Now, a few years after its initial introduction, WASM runtimes have stabilised their support of WASI. This creates a prime environment to engineer, on top of WASI, a solution to containers' biggest drawbacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Changing virtualization for backends
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/qYvkDlIfKVVhZSmCci7KsaSWtgVb-9W92tvuB14zXro/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLWhlbGxvLnBu/Zw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/qYvkDlIfKVVhZSmCci7KsaSWtgVb-9W92tvuB14zXro/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLWhlbGxvLnBu/Zw" alt="Changing virtualization for backends" width="880" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we launched &lt;a href="https://shuttle.rs/"&gt;shuttle&lt;/a&gt; for its early alpha, back in March 2022, our purpose was to address the issues people face when building and deploying web app backends. So we created an open-source infrastructure-from-code platform with which you don’t need to write Containerfiles and orchestrate images, starting with support for Rust.&lt;/p&gt;

&lt;p&gt;Since then, more than 1,200 people have starred the &lt;a href="https://github.com/shuttle-hq/shuttle"&gt;shuttle repo&lt;/a&gt; and hundreds have joined our Discord community. We've seen more than 2,000 deployments from hundreds of users - and received a ton of feedback along the way.&lt;/p&gt;

&lt;p&gt;What we quickly realized is that while we simplified the process of getting started implementing your own backend and setting up its infrastructure, we completely failed to solve two core problems: long build and deploy times.&lt;/p&gt;

&lt;p&gt;Rust has notoriously long build times (this probably has to do with static linking and heavy reliance on compile-time code generation). And while it supports incremental compilation out of the box, in a containerized environment, missing the cache for an image layer means having to rebuild from scratch.&lt;/p&gt;

&lt;p&gt;We've found that no matter how much we tweaked our internal caching, users too often had to wait too long for their projects to build and deploy - minutes for the simplest projects, and closer to half an hour for complex ones. The reason was simple: our implementation of shuttle was built on top of containers. And no matter how much we tried to distance our users from containers, their limitations always surfaced back.&lt;/p&gt;

&lt;p&gt;It was time for a complete rethink, so we took a radical view: start from the services people are actually writing, distill what they need done quickly and easily, and make it our mission to optimize the hell out of the entire stack. We thought that if we executed that idea right, it would let us trim the dependency tree of the services our users deploy and slim down the runtime that every service ships with.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What we quickly realized is that while we trimmed down the process of getting started implementing your own backend and setting up its infrastructure, we completely failed to solve two core problems: long build and deploy times.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After all, a major culprit of these long build and deploy times in the real world is the large number of heavy dependencies of even simple projects. There's not much you can do about this: most services have a pretty big runtime that includes heavy machinery like an asynchronous executor (e.g. &lt;a href="https://tokio.rs"&gt;tokio&lt;/a&gt;), a web server (e.g. &lt;a href="https://github.com/hyperium/hyper"&gt;hyper&lt;/a&gt;), database drivers (e.g. &lt;a href="https://github.com/launchbadge/sqlx"&gt;sqlx&lt;/a&gt;) and more. On every deploy you need to re-build them and hope the artifact caches are hit so you get an incremental build. And it's not just building either: test runs are impacted too, because the closure of code your tests exercise is very large indeed - it follows that of your dependencies.&lt;/p&gt;

&lt;p&gt;These costs materialize everywhere. Just take this hello world snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;axum&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;routing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;get_hello&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;'static&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"You're slow, Heroku!"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;#[tokio::main]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;var&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"PORT"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_hello&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="nf"&gt;.into_make_service&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="nn"&gt;hyper&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"127.0.0.1:{port}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="nf"&gt;.serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;router&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;.await&lt;/span&gt;
        &lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and deploy it to Heroku:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/H0v44lH_E-1Of-OMq1vwTVDsJpeCFrp6-cbFcesWnfs/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9h/eHVtLWhlcm9rdS5n/aWY" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/H0v44lH_E-1Of-OMq1vwTVDsJpeCFrp6-cbFcesWnfs/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9h/eHVtLWhlcm9rdS5n/aWY" alt="Deployment demo" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To address this, we wanted to &lt;strong&gt;move all these heavy dependencies into a common runtime shared across services&lt;/strong&gt;. So tokio, hyper, sqlx and co. (in the case of Rust) now all belong to a long-lived containerized process running persistently in the cloud, while all your service logic, database and endpoint code builds into lightweight WASM modules that are dynamically loaded in place by this global persistent process. That way, "building" means compiling a very lightweight codebase with a small dependency footprint. And "deploying" means calling upon the control plane of that long-lived process to replace service components without rolling out new images, containers or VMs.&lt;/p&gt;
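&lt;p&gt;To make the shape of that architecture concrete, here is a deliberately toy, std-only Rust sketch - plain function pointers stand in for the dynamically loaded WASM modules, and every name in it (&lt;code&gt;Runtime&lt;/code&gt;, &lt;code&gt;deploy&lt;/code&gt;) is illustrative rather than shuttle's actual API:&lt;/p&gt;

```rust
use std::collections::HashMap;

// Stand-in for a request handler that would, in the real system, live inside
// a dynamically loaded WASM module.
type Handler = fn(&str) -> String;

/// A long-lived "runtime" process owning the heavy machinery; service
/// components are swapped in and out without restarting it.
struct Runtime {
    routes: HashMap<String, Handler>,
}

impl Runtime {
    fn new() -> Self {
        Runtime { routes: HashMap::new() }
    }

    /// "Deploying" replaces a component in place - no new image, container or VM.
    fn deploy(&mut self, route: &str, handler: Handler) {
        self.routes.insert(route.to_string(), handler);
    }

    fn handle(&self, route: &str, body: &str) -> Option<String> {
        self.routes.get(route).map(|h| h(body))
    }
}

fn main() {
    let mut rt = Runtime::new();
    rt.deploy("/", |_req| "v1".to_string());
    assert_eq!(rt.handle("/", "").as_deref(), Some("v1"));

    // Redeploy just the component; the host process keeps running.
    rt.deploy("/", |_req| "v2".to_string());
    assert_eq!(rt.handle("/", "").as_deref(), Some("v2"));
    println!("hot-swapped handler without restarting the runtime");
}
```

&lt;p&gt;The point is the shape: the long-lived host and its heavy dependencies never restart; a deploy only swaps a small component in place.&lt;/p&gt;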

&lt;p&gt;This leaves us with a trimmed down user-facing API that still uses familiar objects like &lt;code&gt;PgClient&lt;/code&gt;s and axum-style routes with guards:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/weLS2fJrqKo_aOgdE9rrNP0DEvrHz04k1fXkf2CYWu8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLWFwaS5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/weLS2fJrqKo_aOgdE9rrNP0DEvrHz04k1fXkf2CYWu8/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLWFwaS5wbmc" alt="Beta API" width="880" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Except that now the virtualization platform in which your services are run is responsible for instantiating these objects and calling these functions.&lt;/p&gt;

&lt;p&gt;With this approach, the unit of virtualization that you end up deploying on a daily basis is much smaller than traditional VMs and containers. In a way, this makes the virtualization layer better adapted to the specific needs of backend services running in the cloud. It's an optimized I/O surface between the backend service components that change a lot (e.g. endpoint implementations) and the surrounding long-lived runtimes that don't (e.g. tokio/hyper/sqlx).&lt;/p&gt;

&lt;p&gt;This results in "images" that are effectively up to &lt;strong&gt;100x smaller&lt;/strong&gt; because of the switch from container images to WASM binaries. And super fast to deploy too, from tens of minutes sometimes to &lt;strong&gt;less than a second&lt;/strong&gt; all the time. All because when things are &lt;em&gt;really&lt;/em&gt; incremental, you don't have to build and test a large codebase with its large userspace dependencies on every push. You just need to build and test the code you're writing and the changes you've made.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/f2-u2R3k2bYPN_KcfgTr4RCxtrGuXiiWOJzwFzzo3TA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLW5leHQtZGVw/bG95LWRlbW8uZ2lm" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/f2-u2R3k2bYPN_KcfgTr4RCxtrGuXiiWOJzwFzzo3TA/w:880/mb:500000/ar:1/aHR0cHM6Ly93d3cu/c2h1dHRsZS5ycy9p/bWFnZXMvYmxvZy9i/ZXRhLW5leHQtZGVw/bG95LWRlbW8uZ2lm" alt="Deploy your app in less than a second" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our vision for this new way of doing backend development is shuttle-next: a next-generation backend framework with the fastest build, test and deployment times ever.&lt;/p&gt;

&lt;p&gt;We believe that scoping virtualization down to the level of service components will eventually become the norm for backend development. In the same way we all think it's often not best to set up and start a VM only to run a single process, we will eventually all think it's misguided to build and start a container only to run a single service.&lt;/p&gt;

&lt;p&gt;We are launching shuttle-next as part of our closed beta for shuttle later this month, with the public release coming soon after. If you’re keen to give it a try early, &lt;strong&gt;&lt;a href="https://shuttle.rs/beta"&gt;sign up for the beta!&lt;/a&gt;&lt;/strong&gt; We'd love to know what you think!&lt;/p&gt;

&lt;p&gt;In the meantime, check out &lt;a href="https://github.com/shuttle-hq/shuttle"&gt;shuttle's GitHub repo&lt;/a&gt; and &lt;a href="https://twitter.com/shuttle_dev"&gt;Twitter&lt;/a&gt; for updates. If you’d like to support us, please star the repo and/or join the &lt;a href="https://discord.gg/shuttle"&gt;shuttle Discord community&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>Infrastructure from code - not 'as', but 'from'</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Thu, 29 Sep 2022 07:58:56 +0000</pubDate>
      <link>https://community.ops.io/ivancernja/infrastructure-from-code-not-as-but-from-1nbc</link>
      <guid>https://community.ops.io/ivancernja/infrastructure-from-code-not-as-but-from-1nbc</guid>
      <description>&lt;p&gt;In the early days of Facebook (back when it was still called &lt;strong&gt;thefacebook.com&lt;/strong&gt;), Mark Zuckerberg hosted it on Harvard’s university servers. Back then companies used to buy or rent physical servers to run their software on. The advent of the cloud in the mid-2000s changed the game. The elasticity that this enabled has in big part enabled the rapid progress that we’ve all enjoyed since then. What we demand from software has increased tremendously, and correspondingly its architecture has become much more elaborate. The power of flexibility came at a price though - the complexity of wiring code with infrastructure. That price is even higher today.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Container Hero
&lt;/h3&gt;

&lt;p&gt;Heroku became part of the cloud-native lore as the first incredibly successful attempt at tackling this complexity. They led the first crusade to rid software developers of the infrastructure complexity dragon, and people loved it. Heroku pioneered the wildly popular container-based approach to deployment that abstracted away the burden of managing virtual machines. By being opinionated in its use of containers, Heroku was able to appeal to a broad set of customers looking to quickly build apps. But containers are mutually isolated processes, wired together by third-party configuration that does not belong in the application’s code base. This design choice costs you elasticity and granular control of your system, and leads to a conservative approach to infrastructure: constantly over-provisioning - and hence overpaying - to account for potential future load.&lt;/p&gt;

&lt;p&gt;Furthermore, infrastructure is still treated separately from code - the two worlds live separately and don’t really know much about each other. There is much less wiring to do than with AWS for example, but what is left to do - and there’s a lot of it - you still have to do yourself. Heroku trades off AWS’s elasticity for ready-made building block components that are statically wired up together through a combination of CLI commands and dashboard operations. Of course, Heroku is limited by its founding principle: static containers as building blocks of applications. With Heroku, it is true you do not have to think about infrastructure - but only in the beginning. Once your application scales, your bills stack up and you’re left without a choice: go back to AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Serverless Conundrum
&lt;/h3&gt;

&lt;p&gt;We need to talk about serverless. Serverless (think AWS Lambda) is a cloud computing execution model where machine allocation happens on demand and the user is largely abstracted away from the underlying servers. With it came a familiar promise: developers not needing to think about infrastructure at all. Despite its somewhat counterintuitive name (because, of course, there are always servers running somewhere), serverless sounds like a great ideal to strive towards. The reasoning is simple: developers want to spend as much time as possible delivering business value by writing code, while companies would like to avoid spending fortunes on DevOps. This seems to be the holy grail, but there’s a catch. You might ask: “if serverless is so great, why have we all not switched yet?”&lt;/p&gt;

&lt;p&gt;Well, serverless forces you to write application business logic as functions, rather than in the more traditional idiom of stateful processes. To reap the benefits of serverless, you have to build your application as a multitude of stateless request or event handlers, often requiring a bottom-up redesign of your system. For some use cases the serverless paradigm works, but in many others breaking things into discrete, decoupled functions is not optimal, or even feasible. The next question is: can we have our cake and eat it too? Can we keep the paradigm of stateful processes and still abstract away the underlying infrastructure and orchestration?&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure from Code
&lt;/h3&gt;

&lt;p&gt;At shuttle we want to empower engineers by creating the best possible developer experience.&lt;/p&gt;

&lt;p&gt;We've already developed an annotation-based system that lets Rust apps be deployed with a one-liner, with dependencies like databases provisioned in real time through static analysis.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[shuttle_service::main]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nd"&gt;#[shared::Postgres]&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PgPool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// automatic db provisioning + hands you back an authenticated connection pool&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;shuttle_service&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ShuttleRocket&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// application code&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Building on the phenomenal engineering done before us, we see a better future. One where developers don’t need to do any “wiring” whatsoever when it comes to code and infrastructure.&lt;/p&gt;

&lt;p&gt;In this future, infrastructure can be defined directly from code. Not in the “Infrastructure as Code” kind of way though, but in the way that the code that developers write implicitly defines infrastructure. What your code actually needs in terms of infrastructure should be inferred as you build your application, instead of you having to think upfront about what infrastructure piece is needed and how to wire it up.&lt;/p&gt;
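&lt;p&gt;A minimal sketch of that inference idea, in std-only Rust - every name here (&lt;code&gt;Resource&lt;/code&gt;, &lt;code&gt;FakePg&lt;/code&gt;, &lt;code&gt;launch&lt;/code&gt;) is a hypothetical stand-in, not shuttle's real API. The platform looks at what the service asks for in its signature and provisions it before handing over control:&lt;/p&gt;

```rust
// A resource is provisioned because the code asks for it, not because a
// separate config file declares it.
trait Resource: Sized {
    fn provision() -> Self;
}

// Toy database handle; a real provisioner would create the database here.
struct FakePg {
    conn_string: String,
}

impl Resource for FakePg {
    fn provision() -> Self {
        FakePg { conn_string: "postgres://localhost/app".to_string() }
    }
}

/// The "platform" inspects what the service needs (the type parameter) and
/// provisions it before invoking user code.
fn launch<R: Resource, F: Fn(R) -> String>(service: F) -> String {
    service(R::provision())
}

fn main() {
    // Requesting a `FakePg` argument is the only "infrastructure declaration".
    let status = launch(|db: FakePg| format!("connected via {}", db.conn_string));
    println!("{status}");
}
```

&lt;p&gt;The signature itself is the declaration of need - which is the essence of inferring infrastructure from code rather than describing it as code.&lt;/p&gt;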

&lt;p&gt;This setup should also break the boundaries that keep containers isolated from each other (and thus make it difficult to orchestrate them), without necessarily getting rid of the paradigm of containers. It should not force you into any specific way of writing applications, but just be an extension of your workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Having your cake and eating it too
&lt;/h3&gt;

&lt;p&gt;Looking back at Heroku’s success, it becomes apparent that focusing on one language - Ruby, which was becoming quite popular at the time - was a remarkable strategy. It enabled their team to focus acutely and produce an unparalleled experience for their users.&lt;/p&gt;

&lt;p&gt;At shuttle we are convinced Rust is the best language to start this journey with. It’s been &lt;a href="https://www.cantorsparadise.com/the-most-loved-programming-language-in-the-world-5220475fcc22"&gt;the most loved&lt;/a&gt; language among developers for many years in a row (as well as one of the fastest-growing). If you want to create the best developer experience, it makes sense to start with the most loved language. Rust also ships with an unusually powerful set of tools for static analysis and code generation - exactly what is required to create the best developer experience for Infrastructure &lt;del&gt;as&lt;/del&gt; from Code.&lt;/p&gt;

&lt;p&gt;By removing the burden of DevOps from developers - many of whom find it daunting and stressful - we not only stand to make development more enjoyable and efficient, but also enable far more people to write and ship applications.&lt;/p&gt;

&lt;p&gt;From inception, all of us have shared an affection for open-source software, and not only from a philosophical standpoint: we have seen in practice that the best way to build software is together with the end users. It all goes back to the idea of creating the best developer experience - so for us, this is a no-brainer.&lt;/p&gt;

&lt;p&gt;Our community is just as important to us as our vision, so if any of this resonates with you - &lt;a href="https://discord.gg/shuttle"&gt;join us on discord&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, if you’re curious to learn more about how we are building this - &lt;a href="https://github.com/shuttle-hq/shuttle"&gt;check out our GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discussion time!
&lt;/h3&gt;

&lt;p&gt;With all of the above in mind, what do you think about Infrastructure from Code? We'd love to hear your opinions and questions!&lt;/p&gt;

</description>
      <category>rust</category>
      <category>devops</category>
      <category>serverless</category>
      <category>cloudops</category>
    </item>
  </channel>
</rss>
