<h1>Converged north-south networks: the essential path for AI success</h1>

<p>When we talk about building AI data centers, east-west GPU fabrics usually steal the spotlight. But there&#8217;s another traffic path that&#8217;s just as critical: north-south connectivity. In today&#8217;s AI environments, how your data center ingests data and delivers results at scale can make or break your AI strategy.</p>

<h2 id="Why_north-south_site_visitors_now_issues_most_for_AI_at_scale">Why north-south traffic now matters most for AI at scale</h2>

<p>AI is no longer a siloed project tucked away in an isolated cluster. Enterprises are rapidly evolving to deliver AI as a shared service, pulling in massive volumes of data from external sources and serving results to users, applications, and downstream systems.</p>
<p>This AI-driven traffic generates the bursty, high-bandwidth north-south flows that characterize modern AI environments:</p>
<ul>
<li>Ingesting and preprocessing huge datasets from object stores, data lakes, or streaming platforms</li>
<li>Loading and checkpointing large models from high-performance storage</li>
<li>Querying vector databases and feature stores to provide context for retrieval-augmented generation (RAG) and agentic workflows</li>
<li>Serving real-time inference to thousands of concurrent users or microservices</li>
</ul>
<p>AI workloads amplify traditional north-south challenges: they often arrive in unpredictable bursts, can move terabytes in minutes, and are highly sensitive to latency and jitter. Any stall leaves expensive GPUs idle, elongates job completion times, drives up costs, and diminishes returns on AI investments.</p>
<h2 id="Understanding_the_AI_cluster_a_multi-network_structure">Understanding the AI cluster: a multi-network architecture</h2>
<p>It&#8217;s easy to think of an AI cluster as a single, monolithic network. In reality, it&#8217;s a composition of multiple interconnected networks that must work together predictably:</p>
<ul>
<li>Front-end network connects users, applications, and services to the AI cluster.</li>
<li>Storage network provides high-throughput storage access.</li>
<li>Back-end compute network carries GPU-to-GPU traffic for computation.</li>
<li>Out-of-band management network for baseboard management controller (BMC), host management, and control-plane access.</li>
<li>Data center fabric, including border/edge, ties the cluster into the rest of the environment and the internet.</li>
</ul>
<figure id="attachment_486421" aria-describedby="caption-attachment-486421" style="width: 768px" class="wp-caption aligncenter"><img src="https://blogs.cisco.com/gcs/ciscoblogs/1/2026/02/north-south-traffic-blog-figure-1-768x541.jpg" alt="" width="768" height="541"><figcaption id="caption-attachment-486421" class="wp-caption-text">Figure 1. AI cluster data center fabric illustrates the interconnection between front-end, storage, back-end compute, and out-of-band management networks.</figcaption></figure>
<p>Peak performance isn&#8217;t just about bandwidth; it&#8217;s about how well your fabric handles congestion, failures, and operational complexity across all of these planes as AI demand grows.</p>
<h2 id="How_north-south_connectivity_impacts_GPU_effectivity">How north-south connectivity impacts GPU efficiency</h2>
<p>Modern AI relies on continuous, real-time interactions between GPU clusters and the outside world. For example:</p>
<ul>
<li>Fetching live data from external application programming interfaces (APIs) or enterprise sources and partner systems</li>
<li>High-speed loading of training sets and model checkpoints from converged storage fabrics</li>
<li>Performing dynamic contextual lookups from vector databases and search indices for RAG and agent-based workflows</li>
<li>Serving high-QPS inference for user-facing applications and internal services</li>
</ul>
<p>These patterns generate:</p>
<ul>
<li><strong>Bursty, unpredictable loads:</strong> Batch/distributed inference jobs can suddenly consume significant bandwidth, stressing uplinks and core links.</li>
<li><strong>Tight latency and jitter budgets:</strong> Even short-lived congestion or microbursts can cause head-of-line blocking and slow down GPU pipelines.</li>
<li><strong>Risk of static hot spots:</strong> Traditional static equal-cost multi-path (ECMP) hashing can&#8217;t adapt to changing link utilization, leading to congested paths and underutilized capacity elsewhere.</li>
</ul>
<p>To keep your GPUs fully utilized, your north-south network must be congestion-aware, resilient, and easy to operate at scale.</p>
<h2 id="Simplifying_AI_infrastructure_with_converged_front-end_and_storage_networks">Simplifying AI infrastructure with converged front-end and storage networks</h2>
<p>Many leading AI deployments are converging front-end and storage traffic onto a unified, high-performance Ethernet fabric distinct from the east-west compute network. This architectural approach is driven by both performance requirements and operational efficiency, allowing customers to reuse optics and cabling while leveraging existing Clos fabric investments, significantly reducing cost and cabling complexity.</p>
<p>This converged north-south fabric:</p>
<ul>
<li>Delivers high-performance storage access over 400G/800G leaf-spine architectures</li>
<li>Carries host management and control-plane traffic from management nodes to compute and storage nodes</li>
<li>Connects to border leaf or core switches for external connectivity and tenant ingress/egress</li>
</ul>
<figure id="attachment_486422" aria-describedby="caption-attachment-486422" style="width: 768px" class="wp-caption aligncenter"><img src="https://blogs.cisco.com/gcs/ciscoblogs/1/2026/02/north-south-traffic-blog-figure-2-768x516.jpg" alt="" width="768" height="516"><figcaption id="caption-attachment-486422" class="wp-caption-text">Figure 2. Data center fabric AI cluster: converged front-end and storage network with spine, leaf, and GPU nodes.</figcaption></figure>
<p>Cisco N9000 switches running Cisco NX-OS are purpose-built for these unified fabrics, delivering both the scale and throughput required by modern AI front-end and storage networks. By combining predictable, heavy storage traffic with lighter, latency-sensitive front-end application flows, you can maximize your fabric&#8217;s efficiency when it&#8217;s properly engineered.</p>
<h2 id="Optimizing_AI_site_visitors_with_Cisco_Silicon_One_and_Cisco_NX-OS">Optimizing AI traffic with Cisco Silicon One and Cisco NX-OS</h2>
<p>Managing north-south AI traffic isn&#8217;t just about merging inference, storage, and training workloads on one network; it is also about addressing the challenges of converging storage networks connected to different endpoints. It&#8217;s about optimizing for each traffic type to minimize latency and avoid performance dips during congestion.</p>
<p>In modern AI infrastructure, different workloads demand different treatment:</p>
<ul>
<li>Inference traffic requires low, predictable latency.</li>
<li>Training traffic needs maximum throughput.</li>
<li>Storage traffic can have different patterns between high-performance storage, standard storage, and shared storage.</li>
</ul>
<p>While the back-end fabric primarily handles lossless remote direct memory access (RDMA) traffic, the converged front-end and storage fabric carries a mix of traffic types.</p>
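Why does that mix of traffic types matter? A hedged toy sketch in plain Python can make it concrete (this models no real switch or NX-OS feature; the buffer size and packet counts are invented): when lossless storage traffic shares a single tail-drop queue with bursty management or user traffic, the storage class absorbs the drops, whereas class-aware buffering keeps each class inside its own reservation.

```python
from collections import deque

BUFFER_SLOTS = 8  # toy shared egress buffer (invented size)

def run(queueing: str) -> dict:
    """Return drops per traffic class for one contended interval.

    'fifo'     -> both classes share a single tail-drop queue
    'priority' -> each class gets reserved slots, a stand-in for the
                  QoS-style protection lossless storage traffic needs
    """
    # A management burst lands just ahead of a storage burst.
    arrivals = ["mgmt"] * 6 + ["storage"] * 4
    drops = {"mgmt": 0, "storage": 0}
    if queueing == "fifo":
        q = deque()
        for pkt in arrivals:
            if len(q) < BUFFER_SLOTS:
                q.append(pkt)    # enqueue while space remains
            else:
                drops[pkt] += 1  # tail drop: storage pays for the mgmt burst
    else:
        quota = {"mgmt": BUFFER_SLOTS // 2, "storage": BUFFER_SLOTS // 2}
        held = {"mgmt": 0, "storage": 0}
        for pkt in arrivals:
            if held[pkt] < quota[pkt]:
                held[pkt] += 1   # class stays inside its reservation
            else:
                drops[pkt] += 1
    return drops

print(run("fifo"))      # {'mgmt': 0, 'storage': 2} -> lossless class dropped
print(run("priority"))  # {'mgmt': 2, 'storage': 0} -> storage protected
```

The toy deliberately ignores pause frames, ECN marking, and scheduling; it only illustrates why drops concentrate on whichever class arrives during someone else&#8217;s burst when classes are not separated.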
<p>In the absence of quality of service (QoS) and effective load-balancing mechanisms, sudden bursts of management or user data can lead to packet loss, which is catastrophic for strict lossless RoCEv2 requirements. That&#8217;s why Cisco Silicon One and Cisco NX-OS work in tandem, delivering dynamic load balancing (DLB) that operates in both flowlet and per-packet modes, all orchestrated through sophisticated policy control.</p>
<p>Our approach uses Cisco Silicon One application-specific integrated circuits (ASICs) paired with Cisco NX-OS intelligence to provide policy-driven, traffic-aware load balancing that adapts in real time. This includes the following:</p>
<ul>
<li><strong>Per-packet DLB:</strong> When endpoints (such as SuperNICs) can handle out-of-order delivery, per-packet mode distributes individual packets across all available links in a DLB ECMP group. This maximizes link utilization and instantly relieves congestion hot spots, which is critical for bursty AI workloads.</li>
<li><strong>Flowlet-based DLB:</strong> For traffic requiring in-order delivery, flowlet-based DLB splits traffic at natural burst boundaries. Using real-time congestion and delay metrics measured by Cisco Silicon One, the system intelligently steers each burst to the least-utilized ECMP path, maintaining flow integrity while optimizing network resources.</li>
<li><strong>Policy-driven preferential treatment:</strong> Quality of service (QoS) policies override default behavior using match criteria such as differentiated services code point (DSCP) markings or access control lists (ACLs). This enables selective per-packet load balancing for specific high-priority or congestion-sensitive flows, ensuring each traffic type receives optimal handling.</li>
<li><strong>Coexistence with traditional ECMP:</strong> DLB traffic leverages dynamic, telemetry-driven path selection while non-DLB flows continue using traditional ECMP. This allows incremental adoption and targeted optimization without requiring a forklift upgrade of your entire infrastructure.</li>
</ul>
<p>This simultaneous mixed-mode approach is particularly helpful for north-south flows such as storage, checkpointing, and database access, where congestion awareness and even utilization directly translate into better GPU efficiency.</p>
<h2 id="Scaling_AI_operations_utilizing_Cisco_Nexus_One_with_Nexus_Dashboard">Scaling AI operations using Cisco Nexus One with Nexus Dashboard</h2>
<p>Cisco Nexus One is a <a href="https://blogs.cisco.com/datacenter/networking-for-the-agentic-era-cisco-unveils-new-innovations-in-scale-and-simplicity" target="_blank" rel="noopener">unified solution</a> that delivers network intelligence from silicon to software, operationalized through Cisco Nexus Dashboard on-premises and cloud-managed Cisco Hyperfabric. It provides the intelligence required to operate trusted, future-ready fabrics at scale with assured performance.</p>
<p>As AI clusters and network fabrics grow, operational simplicity becomes mission critical. With Cisco Nexus Dashboard, you gain a unified operational layer for seamless provisioning, monitoring, and troubleshooting across your entire multi-fabric environment.</p>
<p>In an AI data center, this enables a unified experience, simplified automation, and AI job observability.</p>
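The contrast between static ECMP hashing and flowlet steering described earlier can be sketched in a few lines. This is a hedged toy model, not the Silicon One algorithm: the path count, burst sizes, and colliding flow IDs are invented, and real DLB steers on hardware congestion telemetry rather than a simple least-loaded counter.

```python
PATHS = 4

# Four elephant flows whose hashes happen to collide: each flow is a
# list of bursts (flowlets) separated by idle gaps on the wire.
flows = [(fid, [10, 10, 10]) for fid in (0, 4, 8, 12)]  # fid % 4 == 0 for all

def static_ecmp(flows):
    """Classic ECMP: the hash pins every burst of a flow to one path
    for the flow's entire lifetime, however loaded that path gets."""
    load = [0] * PATHS
    for fid, bursts in flows:
        load[fid % PATHS] += sum(bursts)
    return load

def flowlet_dlb(flows):
    """Flowlet sketch: each burst is steered to the least-loaded path
    at the moment it starts, so packets inside a burst stay in order
    while load evens out across paths between bursts."""
    load = [0] * PATHS
    # Interleave bursts in arrival order across the flows.
    events = [b for round_ in zip(*(bursts for _, bursts in flows)) for b in round_]
    for burst in events:
        best = min(range(PATHS), key=lambda p: load[p])
        load[best] += burst
    return load

print(static_ecmp(flows))  # [120, 0, 0, 0] -> all traffic piles onto path 0
print(flowlet_dlb(flows))  # [30, 30, 30, 30] -> load spreads evenly
```

Per-packet mode would go further still, spraying every packet independently across the group, which is why it is reserved for endpoints that tolerate out-of-order delivery.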
<p>Using Cisco Nexus Dashboard, operators can manage configurations and policies for AI clusters and other fabrics from a single control point, significantly reducing deployment and change-management overhead.</p>
<figure id="attachment_486423" aria-describedby="caption-attachment-486423" style="width: 768px" class="wp-caption aligncenter"><img src="https://blogs.cisco.com/gcs/ciscoblogs/1/2026/02/north-south-traffic-blog-figure-3-768x423.jpg" alt="" width="768" height="423"><figcaption id="caption-attachment-486423" class="wp-caption-text">Figure 3. Unified experience: example system dashboard view in Cisco Nexus Dashboard showing critical anomaly level, advisory level, network infrastructure, AI resources, and fabric map.</figcaption></figure>
<p>Nexus Dashboard simplifies automation by providing templates and policy-driven workflows to roll out best-practice explicit congestion notification (ECN), priority flow control (PFC), and load-balancing configurations across fabrics, significantly reducing manual effort.</p>
<figure id="attachment_486424" aria-describedby="caption-attachment-486424" style="width: 768px" class="wp-caption aligncenter"><img src="https://blogs.cisco.com/gcs/ciscoblogs/1/2026/02/north-south-traffic-blog-figure-4-768x329.jpg" alt="" width="768" height="329"><figcaption id="caption-attachment-486424" class="wp-caption-text">Figure 4. Simplified automation: example settings edit screen for &#8220;Enable Dynamic Load Balancing,&#8221; &#8220;DLB Mode,&#8221; and other options.</figcaption></figure>
<p>Using Cisco Nexus Dashboard, you gain end-to-end visibility into AI workloads across the full stack, enabling real-time monitoring of networks, NICs, GPUs, and distributed compute nodes.</p>
<figure id="attachment_486425" aria-describedby="caption-attachment-486425" style="width: 768px" class="wp-caption aligncenter"><img src="https://blogs.cisco.com/gcs/ciscoblogs/1/2026/02/north-south-traffic-blog-figure-5-768x425.jpg" alt="" width="768" height="425"><figcaption id="caption-attachment-486425" class="wp-caption-text">Figure 5. AI job observability: network topology dashboard showing critical anomalies on leaf1 and GPU 3 for a running job.</figcaption></figure>
<h2 id="Accelerating_AI_deployment_with_Cisco_Validated_Designs">Accelerating AI deployment with Cisco Validated Designs</h2>
<p>Cisco Validated Designs (CVDs) and Cisco reference architectures provide prescriptive, proven blueprints for building converged north-south fabrics that are AI-ready, removing guesswork and speeding deployment.</p>
<p><strong>North-south connectivity in enterprise AI: key takeaways</strong></p>
<ul>
<li>North-south performance is now on the critical path for enterprise AI; ignoring it can negate investments in high-end GPUs.</li>
<li>Converged front-end and storage fabrics built on high-density 400G/800G-capable Cisco N9000 switches provide scalable, efficient access to data and services.</li>
<li>Cisco NX-OS policy-based mixed-mode load balancing is a powerful capability for handling unpredictable traffic in an AI cluster while preserving performance.</li>
<li>Cisco Nexus Dashboard centralizes operations, visibility, and diagnostics across fabrics, which is essential when many AI workloads share the same infrastructure.</li>
<li><a href="https://www.cisco.com/c/en/us/products/collateral/networking/ios-nx-os-software/nx-os/fabric-experience-so.html" target="_blank" rel="noopener">Cisco Nexus One</a> simplifies AI network operations from silicon to operating model; enables scalable data center fabrics; and delivers job-aware, network-to-GPU visibility for seamless telemetry correlation across networks.</li>
<li>Cisco Validated architectures and reference designs offer proven patterns for secure, automated, and high-throughput north-south connectivity tailored to AI clusters.</li>
</ul>
<h2 id="Future-proofing_your_AI_technique_with_a_resilient_community_basis">Future-proofing your AI strategy with a resilient network foundation</h2>
<p>In this new paradigm, north-south networks are making a comeback, emerging as the decisive factor in your AI journey. Winning with AI isn&#8217;t just about deploying the fastest GPUs; it&#8217;s about building a north-south network that can keep pace with modern enterprise demands. With Cisco Silicon One, NX-OS, and Nexus Dashboard, you gain a resilient, intelligent, and high-throughput foundation that connects your data to users and applications at the speed your organization requires, unlocking the full power of your AI investments.</p>
Why north-south site visitors now [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":22730,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":{"0":"post-22728","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-cloud-computing"},"_links":{"self":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/22728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=22728"}],"version-history":[{"count":1,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/22728\/revisions"}],"predecessor-version":[{"id":22729,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/posts\/22728\/revisions\/22729"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=\/wp\/v2\/media\/22730"}],"wp:attachment":[{"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=22728"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=22728"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aireviewirush.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=22728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}