{"id":20218,"date":"2026-01-07T07:16:25","date_gmt":"2026-01-06T22:16:25","guid":{"rendered":"https:\/\/aireviewirush.com\/?p=20218"},"modified":"2026-01-07T07:16:26","modified_gmt":"2026-01-06T22:16:26","slug":"constructing-neocloud-ai-information-facilities-with-cisco-8000-and-sonic-the-place-disaggregation-meets-determinism","status":"publish","type":"post","link":"https:\/\/aireviewirush.com\/?p=20218","title":{"rendered":"Building Neocloud AI Data Centers with Cisco 8000 and SONiC: Where Disaggregation Meets Determinism"},"content":{"rendered":"<div>\n<p>A new paradigm is reshaping cloud infrastructure: neoclouds. These AI-first, next-generation cloud providers are building GPU-dense platforms designed for the unrelenting scale and performance demands of modern machine learning. Unlike traditional cloud providers retrofitting existing infrastructure, neoclouds are purpose-building AI-native fabrics from the ground up\u2014where every GPU cycle counts and every packet matters.<\/p>\n<p>In these AI-native environments, the network is no longer a passive conduit. It\u2019s the synchronizing force that keeps colossal clusters of GPUs running at full throttle, every second of the day.
Achieving this requires more than just bandwidth: it demands deterministic, lossless operation, deep observability, and the agility to evolve as AI workloads and architectures shift.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_Neocloud_blueprint_Open_scalable_and_AI-optimized_with_Cisco_8000\"><\/span>The Neocloud blueprint: Open, scalable, and AI-optimized with Cisco 8000<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>This is where the <strong>Cisco 8000 Series with SONiC<\/strong> steps in\u2014not as a traditional switch, but as the intelligent backbone for neoclouds. Designed for a disaggregated, open networking approach, the Cisco 8000 Series with SONiC directly addresses the unique needs of AI-native clouds in four fundamental ways:<\/p>\n<p><strong>1. Operational agility through disaggregation<\/strong><\/p>\n<p>The Cisco 8000 Series offers a flexible, open platform ideal for neoclouds seeking rapid innovation. With fully supported, Cisco-validated SONiC and key AI features, the platform enables a truly disaggregated stack.
This allows for independent hardware and software updates, easy integration of open-source capabilities, and advanced AI observability and traffic engineering. For backend buildouts, the Cisco 8122-64EH-O (64x800G QDD) and 8122-64EHF-O (64x800G OSFP) platforms\u2014both powered by the Cisco Silicon One G200 ASIC\u2014deliver high-performance 800G throughput to meet the needs of demanding AI and data center workloads. These platforms combine reliable, purpose-built hardware with agile, cloud-native software, ensuring a scalable foundation for evolving infrastructure needs.<\/p>\n<p><strong>2. Deterministic, lossless fabric for distributed training<\/strong><\/p>\n<p>AI clusters depend on synchronized, high-bandwidth, lossless networks to keep thousands of GPUs fully utilized. The Cisco 8122 platforms, built with G200 ASICs, deliver a large, fully shared, on-die packet buffer, ultra-low jitter, and adaptive congestion management\u2014all essential for RDMA-based workloads and collective operations. With support for 800G today and 1.6T speeds tomorrow, the fabric can scale as fast as AI ambition grows.<\/p>\n<p><strong>3. Intelligence built in: Advanced AI networking features<\/strong><\/p>\n<p>Cisco\u2019s offering is anchored by its advanced AI networking features\u2014a rich set of tools designed to provide real-time network insights, workload-aware scheduling, and dynamic congestion isolation. These features enable the fabric to implement predictive traffic steering, aligning network behavior with AI workload patterns to maximize cluster efficiency and throughput.<\/p>\n<p><strong>4. Open, programmable, and future-proof<\/strong><\/p>\n<p>With an open NOS like SONiC, the network becomes as programmable as the AI workloads it supports. Operators can rapidly deploy new features, integrate with GPU schedulers, and extend the telemetry pipeline to match evolving needs.
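<\/p>
<p>As a concrete illustration of the kind of telemetry-pipeline extension described above, the sketch below turns two snapshots of a per-port octet counter (such as those a SONiC poller might export) into link utilization. This is a minimal, hypothetical sketch: the port name, interval, and counter values are invented, and it is not a SONiC API.<\/p>

```python
# Hypothetical sketch: derive link utilization from two samples of a
# per-port octet counter, e.g. as polled periodically from a switch.
# Port names, speeds, and counter values are made up for illustration.

def utilization(prev_octets: int, curr_octets: int,
                interval_s: float, speed_gbps: float) -> float:
    """Fraction of link capacity used between two counter samples."""
    bits_sent = (curr_octets - prev_octets) * 8
    capacity_bits = speed_gbps * 1e9 * interval_s
    return bits_sent / capacity_bits

# Two synthetic samples for an 800G port, taken 10 seconds apart.
prev = {"Ethernet0": 0}
curr = {"Ethernet0": 100_000_000_000}  # 100 GB transferred in the interval

util = utilization(prev["Ethernet0"], curr["Ethernet0"], 10, 800)
print(f"Ethernet0 utilization: {util:.1%}")  # 100 GB in 10 s on 800G -> 10.0%
```

<p>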
Moreover, the Cisco 8122 platforms are UEC-ready, aligning with the emerging Ultra Ethernet Consortium 1.0 standards to ensure your network is prepared for future AI demands.<\/p>\n<p style=\"text-align: center;\"><strong>Scaling the AI supercloud: Out and across<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-483544\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Sonic-on-800-In-blog-image-1-1024x558.png\" alt=\"\" width=\"1024\" height=\"558\"><\/p>\n<p style=\"text-align: center;\">Figure 1: Scale out and scale across<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Scale_out_Creating_multi-tier_backend_AI_materials_with_clever_cloth_capabilities\"><\/span>Scale out: Creating multi-tier backend AI fabrics with intelligent fabric capabilities<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>As AI workloads scale, it&#8217;s crucial for the underlying network to advance in both bandwidth and intelligence. Cisco multistage Clos topologies, built with Cisco 8122 platforms, deliver truly non-blocking fabrics optimized for large-scale GPU clusters.
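<\/p>
<p>To make the scale concrete, the back-of-the-envelope sketch below sizes a two-tier non-blocking leaf\/spine Clos built from fixed 64-port 800G switches. This is illustrative Clos arithmetic only, under the stated assumptions, not a vendor sizing tool.<\/p>

```python
# Illustrative sizing of a two-tier non-blocking leaf/spine Clos fabric
# built from fixed-radix switches (e.g. 64x800G). Just the standard
# Clos arithmetic; not a vendor tool.

def two_tier_clos(radix: int, port_gbps: int) -> dict:
    downlinks = radix // 2        # half the leaf ports face hosts (non-blocking)
    uplinks = radix - downlinks   # the other half face the spine layer
    spines = uplinks              # each leaf has one uplink to every spine
    leaves = radix                # each spine port attaches one leaf
    host_ports = leaves * downlinks
    return {
        "spines": spines,
        "leaves": leaves,
        "host_ports": host_ports,
        "host_capacity_tbps": host_ports * port_gbps / 1000,
    }

fabric = two_tier_clos(radix=64, port_gbps=800)
print(fabric)  # 32 spines, 64 leaves, 2048 host-facing 800G ports
```

<p>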
At the heart of this solution is the comprehensive, AI-native networking feature set designed to maximize performance and efficiency for AI clusters.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Key_capabilities_embody\"><\/span>Key capabilities include:<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li><strong>Advanced congestion management:<\/strong><br \/>Priority Flow Control (PFC) and Explicit Congestion Notification (ECN) work in tandem to ensure the highest throughput and minimal latency during congestion, keeping clusters synchronized and running smoothly.<\/li>\n<li><strong>Adaptive routing and switching (ARS):<\/strong><br \/>Dynamically steers traffic according to real-time congestion and flow patterns, maximizing efficiency across the entire network fabric. ARS offers two sub-modes:\n<ul>\n<li><strong>Flowlet load balancing<\/strong>: Splits traffic into micro-bursts (flowlets) and routes each along the optimal path, improving utilization while preserving packet order\u2014essential for RDMA-based GPU workloads.<\/li>\n<li><strong>Packet spraying<\/strong>: Distributes packets across all available paths for maximum throughput, ideal for AI collective operations that tolerate packet reordering.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Weighted ECMP:<\/strong><br \/>Traffic is distributed unevenly over multiple equal-cost paths according to predefined weights.
This ensures higher-capacity or less-congested links carry more traffic, improving overall utilization and performance in large-scale deployments.<\/li>\n<li><strong>QPID hashing:<\/strong><br \/>Employs advanced hashing techniques to evenly spread traffic, minimizing flow collisions and preventing single-path oversubscription.<\/li>\n<li><strong>Packet trimming:<\/strong><br \/>During extreme congestion, non-essential packet payloads are removed to relieve hotspots, while critical header information is retained for continued routing without dropping entire packets.<\/li>\n<li><strong>Flexible topology support:<\/strong><br \/>Compatible with a variety of network architectures\u2014including rail-only, rail-optimized, and traditional leaf\/spine topologies. The system supports both IPv4 and IPv6 underlays and integrates with IP\/BGP and EVPN-based fabrics, allowing operators to tailor networks to specific AI cluster needs.<\/li>\n<li><strong>Multivendor SmartNIC interoperability:<\/strong><br \/>Designed for seamless integration with a diverse ecosystem of SmartNICs from multiple vendors, ensuring flexibility, investment protection, and future-proof infrastructure.<\/li>\n<li><strong>AI-driven observability with PIE port:<\/strong><br \/>Provides deep, real-time visibility at both per-port and per-flow levels\u2014including GPU-to-GPU traffic and congestion hotspots\u2014using ASIC-level telemetry, in-band INT packet tracing, and SONiC integration. This enables operators to proactively monitor, tune, and troubleshoot networks to optimize AI training outcomes.<\/li>\n<\/ul>\n<p>Together, these features create a fabric that&#8217;s not only highly scalable but also truly AI-optimized.
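<\/p>
<p>To illustrate the load-balancing ideas above, here is a conceptual, software-only sketch of weighted ECMP next-hop selection: a flow key is hashed deterministically, and the hash picks a next hop in proportion to configured weights. The hop names and weights are invented for illustration; real platforms implement this in the forwarding ASIC.<\/p>

```python
# Conceptual sketch of weighted ECMP: hash a flow key, then pick a next
# hop in proportion to its configured weight. Names/weights are invented.
import zlib

WEIGHTS = {"spine-1": 3, "spine-2": 1}  # spine-1 should carry ~3x the flows

def pick_next_hop(src: str, dst: str, sport: int, dport: int) -> str:
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    slot = zlib.crc32(key) % sum(WEIGHTS.values())  # deterministic flow hash
    for hop, weight in sorted(WEIGHTS.items()):
        if slot < weight:
            return hop
        slot -= weight
    raise AssertionError("unreachable: slot always falls within total weight")

# The same flow always maps to the same hop, preserving per-flow order.
flow = ("10.0.0.1", "10.0.1.1", 40000, 4791)
assert pick_next_hop(*flow) == pick_next_hop(*flow)

# Across many flows, traffic splits roughly 3:1 between the two spines.
counts = {"spine-1": 0, "spine-2": 0}
for sport in range(40000, 44000):
    counts[pick_next_hop("10.0.0.1", "10.0.1.1", sport, 4791)] += 1
print(counts)
```

<p>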
The Cisco 8122 platforms\u2019 intelligent networking capabilities enable the network to absorb synchronized traffic bursts, prevent congestion collapse, and keep every GPU working at peak efficiency\u2014empowering next-generation AI workloads with unmatched performance and reliability.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Scale_throughout_Federating_AI_pods_globally\"><\/span>Scale across: Federating AI pods globally<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>As AI infrastructure expands beyond single data centers to span regions and continents, scale-across networking becomes critical. Neoclouds need to federate distributed GPU clusters while maintaining the low-latency, high-bandwidth performance that AI workloads demand.<\/p>\n<p>The Cisco 8223, powered by Silicon One P200\u2014the industry\u2019s first 51.2T deep-buffer router\u2014addresses this challenge head-on. With built-in MACsec security, 800GE interfaces supporting both OSFP and QSFP-DD optics, and coherent optics capability, the 8223 delivers the flexibility and efficiency that next-generation distributed AI workloads require.<\/p>\n<p>Native SONiC support enables seamless integration between AI backends and WAN connectivity, allowing operators to build open, programmable networks that scale globally without sacrificing the performance characteristics of local clusters.<\/p>\n<p style=\"text-align: center;\"><strong>Accelerating neocloud AI networks with the Cisco 8000 Series<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-483543\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Sonic-on-800-In-blog-image-2-1024x547.png\" alt=\"\" width=\"1024\" height=\"547\"><noscript><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-483543\" src=\"https:\/\/blogs.cisco.com\/gcs\/ciscoblogs\/1\/2025\/12\/Sonic-on-800-In-blog-image-2-1024x547.png\" alt=\"\"
width=\"1024\" height=\"547\"><\/noscript><\/p>\n<p style=\"text-align: center;\">Figure 2: Cisco 8000 Series for scale out and scale across<\/p>\n<p>In the AI era, networks have evolved from infrastructure cost centers to competitive differentiators. For neoclouds, networking performance directly impacts GPU utilization, training efficiency, and ultimately, customer success.<\/p>\n<p>By combining the Cisco 8000 Series platforms, advanced AI networking features, and the openness of SONiC, neoclouds can build infrastructure that scales seamlessly, operates efficiently, and adapts as AI workloads evolve. It\u2019s not just about keeping pace with AI innovation\u2014it\u2019s about enabling it.<\/p>\n<\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>A new paradigm is reshaping cloud infrastructure: neoclouds. These AI-first, next-generation cloud providers are building GPU-dense platforms designed for the unrelenting scale and performance demands of modern machine learning.
Unlike traditional cloud providers retrofitting existing infrastructure, neoclouds are purpose-building AI-native fabrics from the ground up\u2014where every GPU cycle counts and every [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":20220,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":{"0":"post-20218","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-cloud-computing"},"_links":{"self":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/20218","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=20218"}],"version-history":[{"count":1,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/20218\/revisions"}],"predecessor-version":[{"id":20219,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/20218\/revisions\/20219"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/media\/20220"}],"wp:attachment":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=20218"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=20218"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=20218"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}