{"id":25771,"date":"2026-04-22T23:16:24","date_gmt":"2026-04-22T14:16:24","guid":{"rendered":"https:\/\/aireviewirush.com\/?p=25771"},"modified":"2026-04-22T23:16:24","modified_gmt":"2026-04-22T14:16:24","slug":"the-invisible-engineering-behind-lambdas-community","status":"publish","type":"post","link":"https:\/\/aireviewirush.com\/?p=25771","title":{"rendered":"The invisible engineering behind Lambda\u2019s community"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div id=\"\">\n<p><figure><img decoding=\"async\" src=\"\/images\/xkcd-2259.png\" alt=\"XKCD 2259\" loading=\"lazy\"\/><figcaption>Supply: https:\/\/xkcd.com\/2259\/<\/figcaption><\/figure>\n<\/p>\n<p><em>A particular due to the engineers who shared their story with me and have helped carry this weblog put up to life: <a href=\"http:\/\/www.linkedin.com\/in\/ravi-nagayach\" target=\"_blank\" rel=\"noopener\">Ravi Nagayach<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/prashant-kumar-singh-903616a4\/\" target=\"_blank\" rel=\"noopener\">Prashant Singh<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/guptakshitij8\/\" target=\"_blank\" rel=\"noopener\">Kshitij Gupta<\/a>, and all the Lambda networking staff. These are of us doing the invisible engineering that retains AWS operating.<\/em><\/p>\n<hr\/>\n<p>Most infrastructure enhancements at AWS occur invisibly. Engineering groups spend years incrementally rebuilding programs that tens of millions of shoppers depend upon, whereas these programs proceed operating at full scale with out disruption. Marc Olson described this as changing a propeller plane to a jet whereas it\u2019s in flight. One mistake and the airplane goes down. However get it proper\u2026 and nobody notices.<\/p>\n<p>That is the work that can by no means make headlines or get a weblog put up (no less than not when issues go as deliberate). Work like optimizing iptables guidelines, working round kernel lock competition, or rewriting packet headers. The place success is silent. The reward is figuring out what you\u2019ve labored on is best right now than it was per week in the past, and that the subsequent staff gained\u2019t run into the identical constraints you simply eliminated.<\/p>\n<p>I\u2019ve been eager about this lots these days. There are massive launches like S3 Information, which remedy very seen buyer issues, after which there may be the work that\u2019s simply as spectacular that occurs quietly, over lengthy intervals of time, and simply out of sight of our prospects. Immediately, I need to share a Lambda story with you that\u2019s spanned the higher a part of a decade, and that\u2019s made issues we thought unattainable, akin to operating latency delicate workloads in a serverless operate, effectively, doable. 
<h2 id="what-is-a-network-topology">What is a network topology?</h2>
<a href=\"#what-is-a-network-topology\"\/><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Earlier than we get into the weeds, it helps to know what a community topology is, as a result of it\u2019s the inspiration for all the pieces that follows on this weblog. A community topology is the association of units, connections, and guidelines that decide how knowledge strikes between factors in a system. Consider it because the plumbing. It defines which paths exist, how visitors will get routed, how isolation is enforced between tenants, and what occurs when a packet must journey from level A to level B. In a cloud atmosphere, this plumbing is software-defined\u2014constructed from digital units, tunnels, routing guidelines, and packet filters moderately than bodily cables and switches.<\/p>\n<p>Once you\u2019re operating a single utility on a single machine, the topology is trivial. However while you\u2019re operating tens of millions of light-weight digital machines on shared {hardware}, every needing its personal remoted community path, its personal safety boundaries, and the power to hook up with a buyer\u2019s personal community, the topology turns into some of the consequential design selections that you just make. Each machine you add, each rule you create, each tunnel you identify has a price in latency, CPU, and reminiscence. And people prices multiply with density. Get the topology proper and builders simply see quick, dependable connectivity.<\/p>\n<p>For Lambda, that is the place our story begins. With a community topology that served non-VPC capabilities effectively, however one which imposed an actual value on capabilities connecting to a buyer\u2019s VPC.<\/p>\n<h2 id=\"the-vpc-cold-start-problem\"><span class=\"ez-toc-section\" id=\"The_VPC_chilly_begin_downside\"><\/span>The VPC chilly begin downside <a href=\"#the-vpc-cold-start-problem\"\/><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>A Lambda chilly begin occurs when Lambda has to create a brand new micro VM to deal with an invoke, as a result of there is no such thing as a heat execution atmosphere already obtainable to tackle the work. Creating the execution atmosphere consists of allocating the micro VM, downloading the client\u2019s code, beginning the language runtime, and operating the client\u2019s initialization code, all earlier than the invoke payload ever reaches a buyer\u2019s handler. A VPC chilly begin is all of that plus the extra community setup required for the microVM to achieve assets inside a buyer\u2019s personal community. This overhead is why VPC chilly begins have traditionally been slower than non-VPC chilly begins.<\/p>\n<p>When Lambda migrated to Firecracker microVMs in 2019, chilly begin overhead dropped from over ten seconds to underneath a second. All year long, the staff continued to chip away on the remaining latency with focused fixes, nonetheless, establishing the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Generic_Network_Virtualization_Encapsulation\" target=\"_blank\" rel=\"noopener\">Generic Community Virtualization Encapsulation (Geneve) tunnel<\/a> that routes a Lamba operate\u2019s visitors to the proper buyer VPC, together with DHCP, was nonetheless consuming 300 milliseconds. For some workloads, that was a manageable tradeoff, however for builders designing responsive functions, it was an actual barrier. 
<p>The team had been tracking cold start metrics across both VPC and non-VPC configurations, and at higher microVM densities they saw tail latencies climbing from hundreds of milliseconds to seconds. The root cause wasn't obvious, so they instrumented the entire path and ran a series of experiments, varying concurrency, density, and the mix of create and destroy operations. What they found was that the dominant contributor was tunnel creation itself. Every packet traveling through a Geneve tunnel carries a Virtual Network Identifier (VNI), and that VNI has to be set when the tunnel is created. In Lambda's case, the VNI wasn't available until function initialization, and Linux offered no way to update it after the tunnel was created.</p>

<p>Writing a custom kernel driver was on the table, but maintaining Lambda-specific kernel patches indefinitely wasn't a trade-off the team was willing to make. The real choice was between the <a href="https://en.wikipedia.org/wiki/Data_Plane_Development_Kit">Data Plane Development Kit (DPDK)</a> and <a href="https://en.wikipedia.org/wiki/Berkeley_Packet_Filter">extended Berkeley Packet Filter (eBPF)</a>. eBPF was the less traveled path, but projects such as Cilium were proving its utility at scale. The team would be among the first in Lambda to use it in production, and there were real questions about whether it would hold up at scale and pass the security reviews that came with it. But it offered lower overhead than DPDK, and more importantly, it put the team in control of their own infrastructure. So they built a proof of concept.</p>

<p>Tunnels were created with dummy VNIs during pooling. When a function initialized and the real VNI became available, an eBPF program mapped the dummy VNI to the real VNI, rewriting the Geneve header on egress and reversing it on ingress. Geneve tunnel latency dropped from 150 milliseconds to 200 microseconds. Expensive tunnel creation moved off the hot path entirely.</p>
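<p>To make the idea concrete, here is a minimal sketch of what a VNI-rewriting tc/eBPF program can look like. This is not Lambda's implementation: the map name, the offsets, and the assumption of an outer IPv4 header with no options are all illustrative, and a production program would validate the packet more carefully and fix up the outer UDP checksum if one is in use.</p>

<pre><code>
// Sketch: rewrite the Geneve VNI on egress using a dummy-to-real mapping.
// Offsets assume outer Ethernet + IPv4 (no options) + UDP; purely illustrative.
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define GENEVE_PORT   6081
#define OUTER_UDP_OFF (14 + 20)          /* Ethernet + IPv4 header */
#define GENEVE_OFF    (OUTER_UDP_OFF + 8)
#define VNI_OFF       (GENEVE_OFF + 4)   /* VNI is bytes 4-6 of the Geneve header */

/* dummy VNI (assigned at pool time) -> real VNI (known at function init) */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __u32);
    __type(value, __u32);
} vni_remap SEC(".maps");

SEC("tc")
int geneve_vni_rewrite(struct __sk_buff *skb)
{
    __u16 dport;
    if (bpf_skb_load_bytes(skb, OUTER_UDP_OFF + 2, &dport, sizeof(dport)) < 0)
        return TC_ACT_OK;
    if (bpf_ntohs(dport) != GENEVE_PORT)
        return TC_ACT_OK;                      /* not Geneve traffic */

    __u8 vni[3];
    if (bpf_skb_load_bytes(skb, VNI_OFF, vni, sizeof(vni)) < 0)
        return TC_ACT_OK;
    __u32 dummy = ((__u32)vni[0] << 16) | ((__u32)vni[1] << 8) | vni[2];

    __u32 *real = bpf_map_lookup_elem(&vni_remap, &dummy);
    if (!real)
        return TC_ACT_OK;                      /* no mapping yet, pass through */

    __u8 out[3] = { (__u8)(*real >> 16), (__u8)(*real >> 8), (__u8)*real };
    bpf_skb_store_bytes(skb, VNI_OFF, out, sizeof(out), 0);
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
</code></pre>

<p>The ingress direction would do the reverse lookup before the packet reaches the guest, which is how a tunnel created ahead of time with a placeholder identity can be pointed at the right customer VPC the moment the real VNI is known.</p>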
<p>With this solution, the team had also removed a fundamental blocker for packing more microVMs onto each worker, and reduced a source of CPU heat during bursts of cold starts, which improved the platform's ability to absorb traffic spikes and handle events like availability zone evacuations.</p>

<figure><img src="/images/lambda-topo-latency-graph.png" alt="Lambda latency dropped from 150ms to 200μs"/><figcaption>Drop in latency spikes from 150ms to 200μs</figcaption></figure>

<p>With Geneve tunnel latency down from 150 milliseconds to 200 microseconds, the platform overhead for VPC cold starts was no longer the bottleneck. DHCP remained open, and still does; it is a multi-phase effort the team is currently working through. But the headroom that this work created was significant, and it would become the foundation for SnapStart.</p>

<h2 id="reimagining-our-network-topology-out-of-necessity">Reimagining our network topology (out of necessity)</h2>

<p><a href="https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html">Lambda SnapStart</a> brought a new set of challenges for our engineers. Instead of initializing each function from scratch, one at a time, SnapStart takes a snapshot of an already initialized execution environment and clones it to serve multiple concurrent invocations. Because the initialization work happens once at snapshot time and not on every invocation, cold start times dropped dramatically, particularly for Java workloads where initialization overhead had always been highest. The team had a new obstacle to solve: each clone needed its own isolated network namespace with separate tap, bridge, veth, and tunnel devices, ready before the VM started. The original design created these on demand, but SnapStart needed them pre-created and ready to attach.</p>

<p>Each host had capacity for up to 2,500 microVMs. When SnapStart launched, both topologies ran on the same hosts, with the 2,500 slots split between them: 200 allocated to the new snapshot topology and 2,300 to on-demand workloads. The 200 cap was a deliberate trade-off. These networks required twice as many Linux network devices per VM, and the cost to create and destroy them grew with density. With every new device there was a penalty. Full fleet adoption wasn't expected immediately (they figured they had a year of runway), so they made the choice to launch with a lower cap and come back to the scaling problem later.</p>

<p>Shipping with a split topology and a cap of 200 was the right call for launch, but Lambda was moving toward snapshot-based VMs for all workloads, and two topologies running side by side indefinitely was a tax they were unwilling to pay. The team needed to converge them and scale from 200 to 2,500 snapshot networks per host.</p>

<h2 id="one-bottleneck-at-a-time">One bottleneck at a time</h2>

<p>When the team started scale testing the snapshot topology, the first issue they ran into was network creation itself. Creating Linux network devices (tap, veth, namespaces) got slower as density increased, and running destroys alongside creates made everything stall.</p>

<p>Every time a new device was created, Linux had to traverse its existing device lists, so the cost of creating the N+1th network grew with N. At their target density of 4,000 networks (up from 2,500 across both topologies), and with Lambda's constant VM turnover, the overhead never stopped accumulating. The best solution, it turned out, was to stop creating networks on demand altogether. Instead of paying that cost during function invocation, the team moved all of it to worker initialization, pre-creating all 4,000 networks before the worker ever started serving requests. On the surface, spending three minutes creating networks before a worker can do anything useful sounds shaky, but Lambda workers cycle infrequently compared to microVMs, which changes the math entirely. As Ravi put it, "absorbing the cost once at boot rather than paying it continuously during operation" was the right call, and the CPU drain during function execution disappeared. Colm MacCárthaigh calls this <a href="https://aws.amazon.com/builders-library/reliability-and-constant-work">constant work</a>: systems that do the same amount of work regardless of load, like a coffee urn that keeps hundreds of cups warm whether three people show up or 300. The worker always pays the same boot cost. It was one layer, but there were more.</p>
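<p>Before moving on to the next layer, here is the constant-work pattern in miniature: a toy sketch, not Lambda's code, in which every slot is built once at boot and the request path only hands out and returns prepared slots. The pool size and the slot contents are placeholders; in Lambda's case each slot is a fully constructed network namespace with its devices and rules already in place.</p>

<pre><code>
// Sketch of the constant-work idea: pay the full setup cost once at boot,
// then acquire/release from the pool on the invoke path. Illustrative only.
#include <stdio.h>
#include <stdbool.h>

#define POOL_SIZE 4000

struct net_slot {
    int  id;
    bool in_use;
    /* handles to the namespace, tap/veth devices, etc. would live here */
};

static struct net_slot pool[POOL_SIZE];

/* Done once at worker boot: every slot is fully constructed up front. */
static void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].id = i;
        pool[i].in_use = false;
        /* expensive namespace/device creation would happen here, once */
    }
}

/* On the invoke path: no device creation, just hand out a prepared slot. */
static struct net_slot *pool_acquire(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = true;
            return &pool[i];
        }
    }
    return NULL;  /* pool exhausted */
}

static void pool_release(struct net_slot *s)
{
    s->in_use = false;  /* scrubbed and returned, never torn down */
}

int main(void)
{
    pool_init();                        /* the boot-time cost, paid once */
    struct net_slot *s = pool_acquire();
    printf("serving invoke on slot %d\n", s ? s->id : -1);
    if (s)
        pool_release(s);
    return 0;
}
</code></pre>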
<p>The NAT implementation was another source of pain. The original system used iptables for stateful Network Address Translation. Packets underwent double NAT, once in the VM's network namespace and again at the eth0 interface. At high densities, with thousands of VMs processing traffic concurrently, the kernel had to maintain and query connection-tracking tables for every packet. The contention added significant latency. The team replaced stateful NAT with stateless packet mangling using eBPF, rewriting headers based on predetermined mappings instead of tracking connection state. NAT setup latency dropped by 100x.</p>
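<p>The shape of that change is easier to see in code. The sketch below shows the general idea of stateless NAT in tc/eBPF: look up a predetermined address mapping and rewrite the header in place, fixing the checksum as you go, with no conntrack entry ever created. The map name and layout are assumptions for illustration, the ethertype/IPv4 parsing a real program needs is elided, and the L4 checksum adjustment is left as a comment.</p>

<pre><code>
// Sketch: stateless source NAT with tc/eBPF, driven by a static mapping
// instead of connection tracking. Illustrative only.
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define IP_SRC_OFF  (ETH_HLEN + offsetof(struct iphdr, saddr))
#define IP_CSUM_OFF (ETH_HLEN + offsetof(struct iphdr, check))

/* VM-internal source address -> externally visible address for that slot */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __be32);
    __type(value, __be32);
} snat_map SEC(".maps");

SEC("tc")
int stateless_snat(struct __sk_buff *skb)
{
    __be32 old_src;
    if (bpf_skb_load_bytes(skb, IP_SRC_OFF, &old_src, sizeof(old_src)) < 0)
        return TC_ACT_OK;

    __be32 *new_src = bpf_map_lookup_elem(&snat_map, &old_src);
    if (!new_src)
        return TC_ACT_OK;               /* no mapping for this address */

    /* Fix the IPv4 header checksum, then rewrite the address in place.
     * A complete version would also adjust the TCP/UDP checksum with
     * bpf_l4_csum_replace(), and a mirror program would reverse the
     * mapping on the return path. */
    bpf_l3_csum_replace(skb, IP_CSUM_OFF, old_src, *new_src, sizeof(*new_src));
    bpf_skb_store_bytes(skb, IP_SRC_OFF, new_src, sizeof(*new_src), 0);
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
</code></pre>

<p>Because the mapping is fixed per slot, there is nothing to age out and nothing to look up in a table shared by thousands of VMs, which is where the latency win comes from.</p>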
<p>And then there were the iptables rules, which do a lot of heavy lifting, from routing to NAT to filtering, but at their core are a set of rules the kernel evaluates in sequence for every packet, deciding what's allowed and where it goes. The configuration had grown to over 125,000 rules in the root network namespace. This wasn't accumulated cruft or a discipline issue; it was a density problem. Each VM slot required roughly 30 rules organized across chains and jumps for management and data traffic. Multiply that by 4,000 slots, add the fixed rules that applied globally, and you get a sense of how the configuration grew to over 125,000 rules. Each network slot required its own chains, and every packet had to traverse the rules in sequence. A packet for slot 0 was processed quickly. A packet for slot 4,000 walked through thousands of extra rules, adding up to a millisecond of connection setup latency from rule traversal alone. The team moved the 30 slot-specific rules into each individual network namespace, reducing the root namespace from 125,000+ rules to just 144 static, slot-agnostic rules. The performance skew between slots disappeared.</p>

<figure><img src="/images/lambda-topo-iprules.png" alt="Graph of iptables rules reduction"/><figcaption>What it looks like to go from 125,000+ iptables rules to 144 static, slot-agnostic rules</figcaption></figure>

<p>Network pooling eliminated the CPU drain. Stateless NAT removed the conntrack bottleneck. Simplifying iptables fixed the performance skew. Still, network creation was slower than it needed to be.</p>

<p>The culprit was the <a href="https://netdevconf.info/2.2/papers/westphal-rtnlmutex-talk.pdf">Routing Netlink (RTNL) lock</a>, Linux's way of guaranteeing that only one thing can modify the network configuration at a time. It's a necessary guardrail, but at scale it's a bottleneck. When the team tried to create thousands of network devices and namespaces in parallel during worker boot, operations queued up behind the lock. What should have taken seconds stretched to minutes. It's a bit like when a car breaks down on a bridge in Amsterdam (a city that's not designed for cars). First the car behind it gets stuck, then the car behind that one, then a tram, and on and on until the entire city is gridlocked. That's why I ride my bike.</p>

<p>For Lambda, the fix was to rethink the order of operations: pool network namespaces first, create veth pairs inside the namespace before moving them to root, and batch the eBPF program attachments for all veth devices in a single operation instead of one at a time. The queuing disappeared.</p>
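<p>One piece of that reordering translates naturally into code: load the eBPF object a single time, then attach the already-loaded program to every veth device, rather than repeating the expensive work per device. The sketch below uses libbpf's tc attachment API to illustrate the idea; the object file name, program name, and interface list are hypothetical, error handling is trimmed, and it is not meant to represent Lambda's actual tooling or its exact batching mechanism.</p>

<pre><code>
// Sketch: open and load one BPF object, then attach the same program to
// many veth devices. Hypothetical names; illustrative only.
#include <stdio.h>
#include <net/if.h>
#include <bpf/libbpf.h>

int attach_to_all(const char **ifnames, int n)
{
    struct bpf_object *obj = bpf_object__open_file("geneve_rewrite.bpf.o", NULL);
    if (!obj || bpf_object__load(obj))
        return -1;

    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "geneve_vni_rewrite");
    if (!prog)
        return -1;
    int prog_fd = bpf_program__fd(prog);

    for (int i = 0; i < n; i++) {
        int ifindex = if_nametoindex(ifnames[i]);
        if (!ifindex)
            continue;

        /* One clsact hook and one filter per device, reusing the program
         * that was loaded once above instead of reloading it every time. */
        LIBBPF_OPTS(bpf_tc_hook, hook, .ifindex = ifindex,
                    .attach_point = BPF_TC_EGRESS);
        LIBBPF_OPTS(bpf_tc_opts, opts, .prog_fd = prog_fd);

        bpf_tc_hook_create(&hook);      /* ok if the qdisc already exists */
        if (bpf_tc_attach(&hook, &opts))
            fprintf(stderr, "attach failed on %s\n", ifnames[i]);
    }
    return 0;
}
</code></pre>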
<h2 id="invisible-engineering">Invisible engineering</h2>

<p>Lambda now runs a single, unified network topology supporting both traditional and snapshot-based workloads. This is what years of invisible engineering look like in practice.</p>

<p><img src="/images/lambda-topo-network-topology-diagram.png" alt="Lambda's network topology"/></p>

<p>The team scaled from 200 to 4,000 snapshot networks per worker, a 20x increase in capacity, with benchmarks showing potential for even more. All 4,000 networks are created in three minutes during worker initialization, with no background CPU drain during invokes. The iptables simplification eliminated the performance variation between network slots; every packet now traverses the same 144 rules regardless of slot assignment. And the combined optimizations reduced CPU utilization by 1%. At Lambda's scale, every percent translates into significant infrastructure savings.</p>

<p>When the team building Aurora DSQL needed scalable Firecracker-based networking with the right security and performance characteristics, they reached out to Lambda's networking team. Rather than have them rebuild everything from scratch, the team encapsulated the entire networking stack into a service that DSQL could install and run on their own workers. The service handles device management, firewall rules, NAT translation, and the security hygiene required to safely reuse a network after release. DSQL requests a network when it needs one for a VM and releases it when done. Lambda owns the service and vends new versions, and every optimization they make flows to DSQL automatically. It saved the DSQL team months of engineering effort and gave them Lambda-grade networking density from day one.</p>

<h2 id="this-is-the-job">This is the job</h2>

<p>Most of what we build at AWS, nobody will ever see. A customer deploys a Lambda function that connects to their VPC and it starts in milliseconds. They don't think about the Geneve tunnels underneath, or the iptables rules, or the kernel mutex that had to be worked around to make that possible. They shouldn't have to.</p>

<p>This particular effort took the better part of a decade, and it didn't come with a product launch or a press release. The team converged two network topologies into one, eliminated bottlenecks at every layer of the stack, and scaled capacity by 20x. When they were done, Lambda functions started faster and ran more efficiently. And most customers never noticed the change. But the demand for faster cold starts hasn't slowed down. If anything, it has accelerated as new workloads push Lambda in directions we couldn't have anticipated five years ago.</p>

<p>The engineers who did this work knew that going in. Optimizing iptables rules and working around kernel lock contention doesn't make headlines. But there is a professional pride that comes from doing the "thing" properly even when nobody's watching. Pride in the unseen systems that stay up through the night. In clean deployments. In rollbacks that go unnoticed. In the research. In <a href="https://lpc.events/event/18/contributions/1959/">listening to the community and working collaboratively</a> on changes. In knowing the system is better today than it was yesterday, and that the next team who works on it won't hit the constraints you just removed.</p>

<p>This is what defines the best builders and the best teams. They do the work not because someone is going to write about it, but because it's the right thing to do. Aristotle called this "Arete", the relentless and lifelong pursuit of excellence. And when I look at what these networking engineers have delivered, quietly and incrementally, I see that commitment everywhere.</p>

<p>Now, go build!</p>