{"id":107228,"date":"2025-10-15T11:06:57","date_gmt":"2025-10-15T18:06:57","guid":{"rendered":"https:\/\/developer.nvidia.com\/blog\/?p=107228"},"modified":"2025-12-10T12:22:15","modified_gmt":"2025-12-10T20:22:15","slug":"accelerated-and-distributed-upf-for-the-era-of-agentic-ai-and-6g","status":"publish","type":"post","link":"https:\/\/developer.nvidia.com\/blog\/accelerated-and-distributed-upf-for-the-era-of-agentic-ai-and-6g\/","title":{"rendered":"Accelerated and Distributed UPF for the Era of Agentic AI and 6G"},"content":{"rendered":"\n<p>The telecommunications industry is innovating rapidly toward 6G for both AI-native Radio Access Networks (AI-RAN) and AI-Core. The distributed User Plane Function (dUPF) brings compute closer to the network edge through decentralized packet processing and routing, enabling ultra-low latency, high throughput, and the seamless integration of distributed AI workloads. dUPF is becoming a crucial component in the evolution of mobile networks to be part of the foundational AI infrastructure.<\/p>\n\n\n<div class='stb-container stb-style-info stb-no-caption'><div class='stb-caption'><div class='stb-logo'><img class='stb-logo__image' 
src='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAYAAAAeP4ixAAAACXBIWXMAAAsTAAALEwEAmpwYAAAKT2lDQ1BQaG90b3Nob3AgSUNDIHByb2ZpbGUAAHjanVNnVFPpFj333vRCS4iAlEtvUhUIIFJCi4AUkSYqIQkQSoghodkVUcERRUUEG8igiAOOjoCMFVEsDIoK2AfkIaKOg6OIisr74Xuja9a89+bN\/rXXPues852zzwfACAyWSDNRNYAMqUIeEeCDx8TG4eQuQIEKJHAAEAizZCFz\/SMBAPh+PDwrIsAHvgABeNMLCADATZvAMByH\/w\/qQplcAYCEAcB0kThLCIAUAEB6jkKmAEBGAYCdmCZTAKAEAGDLY2LjAFAtAGAnf+bTAICd+Jl7AQBblCEVAaCRACATZYhEAGg7AKzPVopFAFgwABRmS8Q5ANgtADBJV2ZIALC3AMDOEAuyAAgMADBRiIUpAAR7AGDIIyN4AISZABRG8lc88SuuEOcqAAB4mbI8uSQ5RYFbCC1xB1dXLh4ozkkXKxQ2YQJhmkAuwnmZGTKBNA\/g88wAAKCRFRHgg\/P9eM4Ors7ONo62Dl8t6r8G\/yJiYuP+5c+rcEAAAOF0ftH+LC+zGoA7BoBt\/qIl7gRoXgugdfeLZrIPQLUAoOnaV\/Nw+H48PEWhkLnZ2eXk5NhKxEJbYcpXff5nwl\/AV\/1s+X48\/Pf14L7iJIEyXYFHBPjgwsz0TKUcz5IJhGLc5o9H\/LcL\/\/wd0yLESWK5WCoU41EScY5EmozzMqUiiUKSKcUl0v9k4t8s+wM+3zUAsGo+AXuRLahdYwP2SycQWHTA4vcAAPK7b8HUKAgDgGiD4c93\/+8\/\/UegJQCAZkmScQAAXkQkLlTKsz\/HCAAARKCBKrBBG\/TBGCzABhzBBdzBC\/xgNoRCJMTCQhBCCmSAHHJgKayCQiiGzbAdKmAv1EAdNMBRaIaTcA4uwlW4Dj1wD\/phCJ7BKLyBCQRByAgTYSHaiAFiilgjjggXmYX4IcFIBBKLJCDJiBRRIkuRNUgxUopUIFVIHfI9cgI5h1xGupE7yAAygvyGvEcxlIGyUT3UDLVDuag3GoRGogvQZHQxmo8WoJvQcrQaPYw2oefQq2gP2o8+Q8cwwOgYBzPEbDAuxsNCsTgsCZNjy7EirAyrxhqwVqwDu4n1Y8+xdwQSgUXACTYEd0IgYR5BSFhMWE7YSKggHCQ0EdoJNwkDhFHCJyKTqEu0JroR+cQYYjIxh1hILCPWEo8TLxB7iEPENyQSiUMyJ7mQAkmxpFTSEtJG0m5SI+ksqZs0SBojk8naZGuyBzmULCAryIXkneTD5DPkG+Qh8lsKnWJAcaT4U+IoUspqShnlEOU05QZlmDJBVaOaUt2ooVQRNY9aQq2htlKvUYeoEzR1mjnNgxZJS6WtopXTGmgXaPdpr+h0uhHdlR5Ol9BX0svpR+iX6AP0dwwNhhWDx4hnKBmbGAcYZxl3GK+YTKYZ04sZx1QwNzHrmOeZD5lvVVgqtip8FZHKCpVKlSaVGyovVKmqpqreqgtV81XLVI+pXlN9rkZVM1PjqQnUlqtVqp1Q61MbU2epO6iHqmeob1Q\/pH5Z\/YkGWcNMw09DpFGgsV\/jvMYgC2MZs3gsIWsNq4Z1gTXEJrHN2Xx2KruY\/R27iz2qqaE5QzNKM1ezUvOUZj8H45hx+Jx0TgnnKKeX836K3hTvKeIpG6Y0TLkxZVxrqpaXllirSKtRq0frvTau7aedpr1Fu1n7gQ5Bx0onXCdHZ4\/OBZ3nU9lT3acKpxZNPTr1ri6qa6UbobtEd79up+6Ynr5egJ5Mb6feeb3n+hx9L\/1U\/W36p\/VHDFgGswwkBtsMzhg8xTVxbzwdL8fb8VFDXcNAQ6VhlWGX4YSRudE8o9VGjUYPjGnGXOMk423GbcajJgYmIS
ZLTepN7ppSTbmmKaY7TDtMx83MzaLN1pk1mz0x1zLnm+eb15vft2BaeFostqi2uGVJsuRaplnutrxuhVo5WaVYVVpds0atna0l1rutu6cRp7lOk06rntZnw7Dxtsm2qbcZsOXYBtuutm22fWFnYhdnt8Wuw+6TvZN9un2N\/T0HDYfZDqsdWh1+c7RyFDpWOt6azpzuP33F9JbpL2dYzxDP2DPjthPLKcRpnVOb00dnF2e5c4PziIuJS4LLLpc+Lpsbxt3IveRKdPVxXeF60vWdm7Obwu2o26\/uNu5p7ofcn8w0nymeWTNz0MPIQ+BR5dE\/C5+VMGvfrH5PQ0+BZ7XnIy9jL5FXrdewt6V3qvdh7xc+9j5yn+M+4zw33jLeWV\/MN8C3yLfLT8Nvnl+F30N\/I\/9k\/3r\/0QCngCUBZwOJgUGBWwL7+Hp8Ib+OPzrbZfay2e1BjKC5QRVBj4KtguXBrSFoyOyQrSH355jOkc5pDoVQfujW0Adh5mGLw34MJ4WHhVeGP45wiFga0TGXNXfR3ENz30T6RJZE3ptnMU85ry1KNSo+qi5qPNo3ujS6P8YuZlnM1VidWElsSxw5LiquNm5svt\/87fOH4p3iC+N7F5gvyF1weaHOwvSFpxapLhIsOpZATIhOOJTwQRAqqBaMJfITdyWOCnnCHcJnIi\/RNtGI2ENcKh5O8kgqTXqS7JG8NXkkxTOlLOW5hCepkLxMDUzdmzqeFpp2IG0yPTq9MYOSkZBxQqohTZO2Z+pn5mZ2y6xlhbL+xW6Lty8elQfJa7OQrAVZLQq2QqboVFoo1yoHsmdlV2a\/zYnKOZarnivN7cyzytuQN5zvn\/\/tEsIS4ZK2pYZLVy0dWOa9rGo5sjxxedsK4xUFK4ZWBqw8uIq2Km3VT6vtV5eufr0mek1rgV7ByoLBtQFr6wtVCuWFfevc1+1dT1gvWd+1YfqGnRs+FYmKrhTbF5cVf9go3HjlG4dvyr+Z3JS0qavEuWTPZtJm6ebeLZ5bDpaql+aXDm4N2dq0Dd9WtO319kXbL5fNKNu7g7ZDuaO\/PLi8ZafJzs07P1SkVPRU+lQ27tLdtWHX+G7R7ht7vPY07NXbW7z3\/T7JvttVAVVN1WbVZftJ+7P3P66Jqun4lvttXa1ObXHtxwPSA\/0HIw6217nU1R3SPVRSj9Yr60cOxx++\/p3vdy0NNg1VjZzG4iNwRHnk6fcJ3\/ceDTradox7rOEH0x92HWcdL2pCmvKaRptTmvtbYlu6T8w+0dbq3nr8R9sfD5w0PFl5SvNUyWna6YLTk2fyz4ydlZ19fi753GDborZ752PO32oPb++6EHTh0kX\/i+c7vDvOXPK4dPKy2+UTV7hXmq86X23qdOo8\/pPTT8e7nLuarrlca7nuer21e2b36RueN87d9L158Rb\/1tWeOT3dvfN6b\/fF9\/XfFt1+cif9zsu72Xcn7q28T7xf9EDtQdlD3YfVP1v+3Njv3H9qwHeg89HcR\/cGhYPP\/pH1jw9DBY+Zj8uGDYbrnjg+OTniP3L96fynQ89kzyaeF\/6i\/suuFxYvfvjV69fO0ZjRoZfyl5O\/bXyl\/erA6xmv28bCxh6+yXgzMV70VvvtwXfcdx3vo98PT+R8IH8o\/2j5sfVT0Kf7kxmTk\/8EA5jz\/GMzLdsAAAAgY0hSTQAAeiUAAICDAAD5\/wAAgOkAAHUwAADqYAAAOpgAABdvkl\/FRgAACLRJREFUeNrsmmuIXGcZgJ\/3+845c9udZLNp7umF2osUS9NqL5S2VsE\/BX8IoRZBWtAi\/vRSEMG\/Bi0UBf+0ItQ\/tRcQQRBBK5hWrJq2aatNm0uTbHaTbPYyM7tzOee7vP6Yk1uzKWTrbqTkO7zMcOYczjzfe39nRFX5JCzDJ2RdAbkCskIrueQ7FveWbwSNjvbMXvLBHGCJUYkaRVV3ALeosjnG2FDV6RD1qKq+psq
0qiIy3MckyXBucMFjbrzrhysMcpGlaNMaeSRL7OPWmNsAE1WJQfEx4n3E+9DyIf5R4UngX5dXI8g5r4ICIjxYqyS\/qmT2WmtMeV6JJYDzEWcCxsha48PDzseHQ4hPi\/AdoHuZQPRcLSAU31jTXPN0VqkLGkASkLS8wJH4LtblGGMRcsCiCqo8rqp3q8aHgGOrDtKa\/scZHGvY2ahlz6T1q1E\/DyZBkjGIrrxCsaaByBxJ82bMwjHiwmE0GhRLiHJrCPnvgC8CrVWNWkXepsjb+Lx9Q8UOnkmbt6IaEKkijTugfiuYKtgGmBrYUUy6lqS2jerYDhITMVawVsiqa7BJ43bQH696+K03tlBrbKZRrz5Zad60BrMG0QJG7oDK1aARzAhIbQhiqmCb0N+HFIepjF6PNYIQMEaojW7B2Oq3QO8Tzh4rrxHXJvj2nVmWfpnazeBnId0E2ZYyBmRg6qXVpiAVkCqoR9xRstoGkrSOEUVDTpI1qTQ2IMh3xRhOy8onxFA0LcVXbGUDmFGIA8g2lc4dgVACCYgBLJgEpIqYGjo4iBEQMaAFIkK1sRkx6ReySmNzpTpKpTq68iBiuM1a+YJkm0A9mBTs2vLTODxHLIOblGJBEsRUEc0RHMYYVD2qnqy6DpuOjAZf3DuMaLoKIMSrjZEtJOuG2rCNYchFhxrReG6EPptzRACDHd2B2Po51wdMUietjOL94GpXdHFFb+XDb4xxPdgRSMsQO\/yCaBialQaQYaJAz3FaVbB1Qu8AGnvnJVZjUoytEWNYs9z+6JJBVClQAujQB8JiubslxHlmdW4SjRD7qF9AYyyVJojYob8Mi6\/AMiLWskwrRCZ8CNPExWFojX2IXdAcYlH6iJ4DoGd8R4ca5YwfiGBsZWiwfsDHyfDLCL9x7yD3\/4z5iSGIBiiOQ1iA2AN1QzM6AxGGmV5zlAohCjEqGiMiKSZpEGOBy9sR5LVVA\/E+HB3k8bm8NzncZdOAYhLcDIQ2aB9wpRSgA9A+6tt418X5ghAiIUZM0sCmDdxgDl90\/i4i\/17Vxsr5+IfuwuwbcTCBZBvR0AE\/DX6u1Ex3qJ3T4mdw+Tx5\/xTBR0LUoe9nY4hJ6XcmiLH4xXL9Y1kgUSEidPt+V29+L2qboAH1bTS0IXRKkC6EDupOUgzmKJzHFT18qQ2kQlodx+cd+t3JPSC\/Wd1WVxVRxXte6LYm\/+L7x4l2PbE4hbo51M2jbhZ1pwj5cYpBm6IIOKd4r4QQiTFi0zUYW6fbOUQoFneJmLi6IGWyFgO9PPygO7c3km3FByX4BYJv410LV3RwzuF8xId4RhNRFVVLUl2Hdx36nYmXQV66rMOHqLzW7Uw9HdwCkm0mhkAISowQIsMvX2ogln4RY0SSGjap0+tMhOAHTwxrMvmQrCKIiJAXcVe\/\/f6CZJvQMpMrwzxRdoJEBdV45pxNm3jXI+9NPyvCnrOh+lxZ8Vrr\/APMkd7C1AsxBiRtAlruqZz\/GDGoRhCLsTXy7omeatwlJkNMeoGseIlijFyAVjj\/c9ebeqxaGzeaLyAiiAiqw+Rn01FIMlwxgZg6MRQg9rmRsRv38z+aPSfLMacLA5l9K++f2l1r3PSAtRWQDGMcQkGIILaCSTbQ6xxBbAXve9RGtz9bqW9ANVweEGuXAhG86z+v6h+QZAzFIkSsdEhjoIgOjQ6wCBYRe2Bs02f\/JqfLf872YjatDrvKlfYRI3KBWCPEGF6JIQfTQEyC2AYmHSdNKxAWcd0jiMlQIMlG\/xxiCM51ca6Hcz1iHGBTy6uv\/JUnvv+9VXD20v4\/LCDvxTCYwGblbKsCZgSTjGJtgsYCEYuqUqlvfNOabNiHmIRKpUGSNnj8m9\/m\/s8\/xE+ffGrlTStNzUUSvhYoLUi3IxGVDEER2yCtKtY71M0DkFXXnUirY2fu7fZ6PProY7z04gur5yPOxYuBoKHXRTxIhpy2c21gE8UkD
pEWgkdNtugipAZOnjzJzp072b179+o6+49+9s7S5X2Ar331wUMP3j5\/t6muK2cOCSQVrOlSyZTceibmUn6\/9\/W2Td9l8thRnnnqJ0wdO7r6UStrbFj6PHBo\/qrWnuePcuctluu2WQ5+8AF50adwntlWzuSJNgcmpjh25OVBa\/o47779Bv1+\/\/KE37f3vrl0CxwC22+6pfaZHfcxv9Dm0J559u3vMD27iIkDEnHMzszQas0xefhgemDfOwTvL9\/PCp+6ZsvSPhIj69evr7QXFtg6PsZ1122lPlLn9bf2056ZpNfpEENBo9Fgfm4mKSHsh0b8yyu0lgMyumHbRUEK73tjWUJzdIRaNWN83Rhrx8bBdzGxT6\/XIy9yXJ43gGZpkXr+qIUcGFwq0CWD+G7rIr28Z9BdzCqVKovdHpHAfKtNa36OXneRGBVjbVmD6UZg03A4zOk5qyshwqpoZObUqaV7k+D5YN\/bL2679savbxtvkqQwPraWkeZafL9F3p2n3+szc\/LEoXZrbgHYAiyUsgj0gNOTC11xkPcOHLrYBJLDU9Ovrtt6w7vXb\/vSp8ebTSZOzDBSh5YWxKiIEaZPTL2vMebABDBbApQD44\/RG13qiLJWrV58eOcDWb1+zV333Pvrz919z\/2zrQVm5+fI+33mZk51D+77z59OnZz6JaqvAvMfPRrQlQVZqoxfYt227qpNj2zcuv2OLEuzQb8\/eXj\/vt\/mg\/5bwNFSC1xWkP\/XdeUvHFdAroB89PrvAIkUyrgAK0PWAAAAAElFTkSuQmCC' alt='img'\/><\/div><div class='stb-caption-content'><\/div><div class='stb-tool'><\/div><\/div><div class='stb-content'>NVIDIA AI Aerial software is now available as open source.<a href=\"https:\/\/github.com\/NVIDIA\/aerial-cuda-accelerated-ran\"> Access Now on GitHub ><\/a><\/div><\/div>\n\n\n\n<p>This post explores the architectural advantages of dUPF at the telecom edge to enable <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/ai-agents\/\">agentic AI<\/a> applications. It features a reference implementation of a dUPF user plane application built with <a href=\"https:\/\/docs.nvidia.com\/doca\/archive\/doca-v1.5.2\/flow-programming-guide\/index.html\">NVIDIA DOCA Flow<\/a> to leverage hardware-accelerated packet steering and processing. 
The demonstration highlights how the NVIDIA accelerated compute platform enables energy-efficient, low-latency user plane operations, reinforcing the essential role of dUPF in the 6G <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-and-telecom-industry-leaders-to-develop-ai-native-wireless-networks-for-6g\">AI-Native Wireless Networks Initiative (AI-WIN)<\/a> full-stack architecture.<\/p>\n\n\n\n<h2 id=\"what_is_dupf\"  class=\"wp-block-heading\"><strong>What is dUPF?<\/strong><a href=\"#what_is_dupf\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>dUPF is a 3GPP 5G core network function that handles user plane packet processing at distributed locations, as defined in section 6.2.5 of <a href=\"https:\/\/www.3gpp.org\/ftp\/Specs\/archive\/23_series\/23.501\/23501-j50.zip\">3GPP 5G core architecture<\/a> and in section 4.2 of <a href=\"https:\/\/portal.3gpp.org\/desktopmodules\/Specifications\/SpecificationDetails.aspx?specificationId=3856\">3GPP 5G Mobile Edge Computing (MEC) architecture<\/a>. dUPF moves user data processing closer to users and radio nodes. 
Unlike traditional centralized UPFs, which incur latency over long backhaul routes, dUPF handles traffic at the network edge, enabling real-time applications and local breakout for AI traffic through AI-specific local data networks (AI-DN), as shown in Figure 1.<\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd9022e99e&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"900\" height=\"473\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1.png\" alt=\"Diagram showing that dUPF in the 3GPP multiple PDU sessions MEC connectivity model anchors AI-DN at distributed sites, connecting UE data traffic to AI-DN: UE &gt; AI-RAN &gt; dUPF &gt; AI-DN.\n\" class=\"wp-image-107281\" srcset=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1.png 900w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-300x158.png 300w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-625x328.png 625w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-179x94.png 179w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-768x404.png 768w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-645x339.png 645w, 
https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-500x263.png 500w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-160x84.png 160w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-362x190.png 362w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-3gpp-multiple-pdu-sessions-ai-dn-traffic-1-209x110.png 209w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em><em>Figure 1. 
dUPF in the 3GPP multiple PDU sessions MEC connectivity model anchors AI-DN traffic at the distributed sites<\/em><\/em><\/figcaption><\/figure>\n\n\n\n<h2 id=\"how_does_dupf_work_in_the_6g_ai-centric_network\"  class=\"wp-block-heading\"><strong>How does dUPF work in the 6G AI-centric network?<\/strong><a href=\"#how_does_dupf_work_in_the_6g_ai-centric_network\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>6G aims to transform telecom operators into critical AI infrastructure, hosting <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/ai-factory\/\">AI factories<\/a> and distributing AI inference as an AI grid. dUPF is a crucial aspect of this, enabling 6G distributed edge agentic AI and local breakout (LBO).<\/p>\n\n\n\n<p>Next-generation applications like video search and summarization (VSS), XR, gaming, and industrial automation demand real-time, autonomous intelligence at the network edge, which traditional centralized wireless core architectures cannot provide.<\/p>\n\n\n\n<p>dUPF's proximity to users and radio nodes offers several benefits:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ultra-low latency:<\/strong> Enables immediate responsiveness for mission-critical 6G use cases.<\/li>\n\n\n\n<li><strong>Efficient data handling:<\/strong> Processes local data at the source, reducing latency and optimizing network resources.<\/li>\n\n\n\n<li><strong>Enhanced data privacy and security:<\/strong> Localized processing minimizes sensitive data exposure, fostering trust.<\/li>\n\n\n\n<li><strong>Decentralized compute for resilient AI:<\/strong> Distributes AI workloads, creating a robust, resilient infrastructure and eliminating single points of failure.<\/li>\n<\/ul>\n\n\n\n<h2 id=\"what_are_the_benefits_of_dupf_on_nvidia_accelerated_edge_infrastructure\"  class=\"wp-block-heading\">What are the benefits of dUPF on NVIDIA accelerated edge infrastructure?<a href=\"#what_are_the_benefits_of_dupf_on_nvidia_accelerated_edge_infrastructure\" 
class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>The <a href=\"https:\/\/developer.nvidia.com\/aerial\">NVIDIA AI Aerial<\/a> platform is a suite of accelerated computing platforms, software, and services for designing, simulating, and operating wireless networks. The benefits of dUPF on AI Aerial edge infrastructure include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ultra-low latency:<\/strong> Latency is as low as 25 microseconds with zero packet loss, improving user experience for edge AI inferencing.<\/li>\n\n\n\n<li><strong>Cost reduction:<\/strong> Distributed processing and optimized resource utilization lower backhaul transport costs and OPEX.<\/li>\n\n\n\n<li><strong>Energy efficiency:<\/strong> <a href=\"https:\/\/docs.nvidia.com\/doca\/archive\/doca-v1.5.2\/flow-programming-guide\/index.html\">NVIDIA DOCA Flow<\/a>-enabled HW acceleration reduces CPU usage, freeing cores for AI applications on shared hardware and lowering power consumption.<\/li>\n\n\n\n<li><strong>New revenue models:<\/strong> Enables AI-native services and applications requiring real-time edge data processing.<\/li>\n\n\n\n<li><strong>Enhanced network performance:<\/strong> Improved scalability, jitter minimization, and deterministic behavior for AI and RAN traffic.<\/li>\n<\/ul>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd9022fd9b&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"620\" height=\"395\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" 
src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform.jpg\" alt=\"Diagram showing that dUPF is a component of the application layer of the NVIDIA AI Aerial platform alongside RAN virtual Distributed Unit (vDU) and virtual RAN Centralized Unit (vCU).\n\" class=\"wp-image-107235\" srcset=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform.jpg 620w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-300x191.jpg 300w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-179x115.jpg 179w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-471x300.jpg 471w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-141x90.jpg 141w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-362x231.jpg 362w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-application-layer-nvidia-ai-aerial-platform-173x110.jpg 173w\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 
0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em><em>Figure 2. dUPF is a component of the NVIDIA AI Aerial platform<\/em> application layer<\/em><\/figcaption><\/figure>\n\n\n\n<p>The key value propositions of dUPF are fully aligned with the 6G AI-WIN initiative, making dUPF an integral part of the AI-WIN full stack. This initiative brings together T-Mobile, MITRE, Cisco, ODC, and Booz Allen Hamilton to develop an AI-native network stack for 6G, built on NVIDIA AI Aerial.&nbsp;<\/p>\n\n\n\n<h2 id=\"dupf_use_cases\"  class=\"wp-block-heading\"><strong>dUPF use cases<\/strong><a href=\"#dupf_use_cases\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>Key use cases for dUPF include:<\/p>\n\n\n\n<p><strong>Ultra-low-latency applications<\/strong>: By hosting dUPF functions at the edge, data can be processed and routed locally, eliminating backhaul delays. This is critical for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AR\/VR and real-time conversations with an AI agent<\/li>\n\n\n\n<li>VSS<\/li>\n\n\n\n<li>Autonomous vehicle and robot communications (V2X)<\/li>\n\n\n\n<li>Remote surgery and real-time industrial automation&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong>AI and data-intensive workloads at the edge<\/strong>: Integration of dUPF with AI-native platforms (such as NVIDIA Grace Hopper) enables real-time edge inferencing for applications like distributed AI RAN, agentic AI, and localized autonomous control.&nbsp;<\/p>\n\n\n\n<p>Figure 3 illustrates a VSS data processing ingestion pipeline, where camera streams are handled at the edge alongside the deployed dUPF for local breakout. 
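<\/p>

<p>To make the offload concrete, the following is a back-of-envelope sketch. The camera count, per-stream bitrate, and breakout fraction are illustrative assumptions, not measured values:<\/p>

```python
# Back-of-envelope estimate of backhaul traffic avoided by dUPF local breakout.
# All inputs are illustrative assumptions, not measurements.

def backhaul_offload_gbps(num_cameras, mbps_per_stream, breakout_fraction):
    '''Aggregate camera traffic (Gbps) kept at the edge by local breakout.'''
    total_gbps = num_cameras * mbps_per_stream / 1000.0
    return total_gbps * breakout_fraction

# Hypothetical site: 200 cameras at 8 Mbps each, 95% broken out to the edge AI-DN.
offloaded = backhaul_offload_gbps(200, 8, 0.95)
print(f'{offloaded:.2f} Gbps stays at the edge instead of crossing the backhaul')
```

<p>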
By shifting inference tasks to the edge server, operators deliver low-latency services while significantly reducing the data load on their backbone networks.<\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd9023131c&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1999\" height=\"550\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1.png\" alt=\"Flow chart showing the data processing block of ingestions pipeline for VSS. \n\" class=\"wp-image-107328\" srcset=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1.png 1999w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-300x83.png 300w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-625x172.png 625w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-179x49.png 179w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-768x211.png 768w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-1536x423.png 1536w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-645x177.png 645w, 
https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-500x138.png 500w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-160x44.png 160w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-362x100.png 362w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-400x110.png 400w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-1024x282.png 1024w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/data-processing-block-ingestions-pipeline-vss-1-960x264.png 960w\" sizes=\"auto, (max-width: 1999px) 100vw, 1999px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em>&nbsp;<em>Figure 3. 
Camera and video streams can be offloaded to dUPF deployed at the edge for the VSS data processing block<\/em><\/em><\/figcaption><\/figure>\n\n\n\n<h2 id=\"dupf_user_plane_reference_implementation\"  class=\"wp-block-heading\"><strong>dUPF user plane reference implementation<\/strong><a href=\"#dupf_user_plane_reference_implementation\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>The dUPF user plane reference implementation is based on a decomposed architecture, as illustrated in Figure 4, which comprises two key components, dUPF-UP and dUPF-CP:<\/p>\n\n\n\n<p><strong>dUPF-UP:<\/strong> This component is responsible for user plane packet processing accelerated using DOCA Flow APIs, handling the essential UPF user plane rules:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Packet Detection Rule (PDR)<\/li>\n\n\n\n<li>QoS Enforcement Rule (QER)<\/li>\n\n\n\n<li>Usage Report Rule (URR)<\/li>\n\n\n\n<li>Forwarding Action Rule (FAR)<\/li>\n<\/ul>\n\n\n\n<p><strong>dUPF-CP:<\/strong> This component communicates with the SMF over a 3GPP N4 interface and with dUPF-UP through an internal messaging interface (gRPC) over CNI to facilitate user plane packet processing.<\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd90232279&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"624\" height=\"467\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-reference-architecture.gif\" alt=\"dUPF reference architecture with dUPF-CP and dUPF-UP, connected through internal 
gRPC interface, supporting 3GPP standard interfaces (N3, N6, and N4).\n\" class=\"wp-image-107236\"\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em><em>Figure 4. dUPF reference architecture with dUPF-CP and dUPF-UP supporting 3GPP standard interfaces (N3, N6, and N4)<\/em><\/em><\/figcaption><\/figure>\n\n\n\n<p>The dUPF-UP is deployed on the NVIDIA accelerated <a href=\"https:\/\/www.supermicro.com\/en\/accelerators\/nvidia\/mgx?utm_source=mgx&amp;utm_medium=301\">Supermicro 1U Grace Hopper MGX System<\/a> server platform with an NVIDIA Grace CPU and an NVIDIA BlueField-3 (BF3) DPU. 
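<\/p>

<p>Of the rule types handled by dUPF-UP, QER enforcement is essentially rate limiting against an MBR. A minimal token-bucket sketch of that idea follows; it is a pure-Python illustration only, not the BF3 hardware metering that the reference implementation drives through DOCA Flow:<\/p>

```python
class MbrPolicer:
    '''Toy token-bucket policer approximating QER MBR enforcement.'''

    def __init__(self, mbr_bps, burst_bytes):
        self.rate = mbr_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth (max burst credit)
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, pkt_bytes, now):
        '''Refill tokens for elapsed time; pass the packet if tokens suffice.'''
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

p = MbrPolicer(mbr_bps=8_000, burst_bytes=1_500)   # 8 kbit/s MBR, one MTU of burst
print(p.allow(1_500, now=0.0))   # True: burst credit covers the first packet
print(p.allow(1_500, now=0.1))   # False: only ~100 bytes refilled in 100 ms
```

<p>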
AI-DN traffic is handled by dUPF-UP at the edge, and other user traffic (such as Internet traffic) is delivered to centralized UPF through the transport network.<\/p>\n\n\n\n<h3 id=\"dupf-up_acceleration_architecture_and_data_flows\"  class=\"wp-block-heading\">dUPF-UP acceleration architecture and data flows<a href=\"#dupf-up_acceleration_architecture_and_data_flows\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h3>\n\n\n\n<p>The <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/grace-cpu-superchip\/\">NVIDIA Grace CPU Superchip<\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/networking\/products\/data-processing-unit\/\">NVIDIA BlueField-3 (BF3) SuperNIC<\/a> are key hardware for co-hosted RAN and dUPF-UP. Figure 5 illustrates dUPF-UP packet processing.<\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd90232ff7&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"556\" height=\"364\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-application-nvidia-grace-cpu.gif\" alt=\"dUPF-UP application on an NVIDIA Grace CPU host with packet processing accelerated by BF3 HW pipelines via DOCA Flow SDK. 
SR-IOV enables efficient packet handling between host Grace CPU and BF3 NIC.\n\" class=\"wp-image-107237\"\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em><em>Figure 5. dUPF-UP application on an NVIDIA Grace CPU host with packet processing accelerated by BF3 HW pipelines<\/em><\/em><\/figcaption><\/figure>\n\n\n\n<p>The Grace CPU Superchip, with 72 Arm Neoverse V2 cores, uses the NVIDIA Scalable Coherency Fabric (SCF) to achieve 3.2 TB\/s of bandwidth. This boosts dUPF user plane packet processing performance and energy efficiency. 
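<\/p>

<p>Much of this user plane hot path is simple per-packet header work. For orientation, a minimal pure-Python model of standard GTP-U v1 encapsulation and decapsulation follows; it illustrates only the 8-byte mandatory header format, not DOCA Flow code:<\/p>

```python
import struct

GTPU_FLAGS = 0x30   # version 1, protocol type GTP, no optional fields
GTPU_G_PDU = 0xFF   # message type for a user data packet (G-PDU)

def gtp_encap(teid, inner_packet):
    '''Prepend the 8-byte mandatory GTP-U header to an inner IP packet.'''
    header = struct.pack('!BBHI', GTPU_FLAGS, GTPU_G_PDU, len(inner_packet), teid)
    return header + inner_packet

def gtp_decap(frame):
    '''Strip the GTP-U header; return (teid, inner_packet).'''
    flags, msg_type, length, teid = struct.unpack('!BBHI', frame[:8])
    assert flags == GTPU_FLAGS and msg_type == GTPU_G_PDU
    return teid, frame[8:8 + length]

inner = b'example-ip-packet'
teid, out = gtp_decap(gtp_encap(0x1234, inner))
print(hex(teid), out == inner)   # 0x1234 True
```

<p>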
The BF3 SuperNIC accelerates dUPF data plane functions through DOCA Flow pipelines, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Packet classification (5-tuple, DSCP\/VLAN, GTP TEID\/QFI)<\/li>\n\n\n\n<li>GTP encapsulation\/decapsulation<\/li>\n\n\n\n<li>Metering (AMBR\/MBR)<\/li>\n\n\n\n<li>Counting (URR usage\/quotas)<\/li>\n\n\n\n<li>Forwarding (fast path for direct forwarding, slow path for exception packets)<\/li>\n\n\n\n<li>Mirroring for host CPU processing (Lawful Intercept, for example)<\/li>\n<\/ul>\n\n\n\n<h3 id=\"dupf-up_reference_implementation_with_doca_flow\"  class=\"wp-block-heading\">dUPF-UP reference implementation with DOCA Flow<a href=\"#dupf-up_reference_implementation_with_doca_flow\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h3>\n\n\n\n<p>The dUPF-UP reference implementation accelerates LBO of AI traffic through DOCA Flow, leveraging IP subnet-based Service Data Flow (SDF) classification and simplifying AI-DN deployment. 
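<\/p>

<p>The IP subnet-based SDF classification just mentioned can be pictured as a small routing decision per destination address. The following sketch uses Python's ipaddress module; the subnet values are hypothetical:<\/p>

```python
import ipaddress

# Hypothetical AI-DN subnets served by the edge site; everything else backhauls.
AI_DN_SUBNETS = [ipaddress.ip_network('10.20.0.0/16'),
                 ipaddress.ip_network('10.21.8.0/24')]

def classify_sdf(dst_ip):
    '''Map a destination IP to an SDF: local AI-DN breakout or central UPF.'''
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in AI_DN_SUBNETS):
        return 'local-breakout'   # steer to the edge AI data network
    return 'central-upf'          # default path over the transport network

print(classify_sdf('10.21.8.42'))   # local-breakout
print(classify_sdf('8.8.8.8'))      # central-upf
```

<p>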
Key simplifications include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Differentiating edge AI applications using IP subnet SDF<\/li>\n\n\n\n<li>Avoiding IP segmentation\/reassembly by aligning MTUs<\/li>\n\n\n\n<li>Simplifying QoS and charging with Packet Detection Rule (PDR)-based assurance<\/li>\n<\/ul>\n\n\n\n<p>dUPF-UP DOCA Flow pipelines are designed for the N3 and N6 interfaces.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">N3 interface DOCA Flow pipeline design<\/h4>\n\n\n\n<p>The N3 interface uplink pipeline contains the pipes shown in Figure 6:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GTP decap:<\/strong> Performs GTP header decapsulation<\/li>\n\n\n\n<li><strong>Counter:<\/strong> Counts received packets for URR reporting<\/li>\n\n\n\n<li><strong>Policer QoS flow MBR:<\/strong> QER enforcement for QoS flow-level MBR<\/li>\n\n\n\n<li><strong>Policer session MBR:<\/strong> QER enforcement for session-level MBR<\/li>\n\n\n\n<li><strong>Counter:<\/strong> Counts packets post QER metering for URR reporting<\/li>\n\n\n\n<li><strong>FAR (DSCP marking):<\/strong> Performs DSCP marking and other FAR handling<\/li>\n\n\n\n<li><strong>Forward:<\/strong> Forwards the packet to the N6 interface<\/li>\n<\/ul>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd90234619&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"975\" height=\"215\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines.png\" alt=\"Flow diagram showing dUPF-UP N3 interface uplink DOCA Flow pipelines: GTP 
Decap-&gt;Counter-&gt;Policer (QoS Flow MBR)-&gt;Policer (Session MBR)-&gt;Counter-&gt;FAR-&gt;Forward.\n\" class=\"wp-image-107239\" srcset=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines.png 975w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-300x66.png 300w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-625x138.png 625w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-179x39.png 179w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-768x169.png 768w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-645x142.png 645w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-500x110.png 500w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-160x35.png 160w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-362x80.png 362w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-499x110.png 499w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-n3-doca-flow-pipelines-960x212.png 960w\" sizes=\"auto, (max-width: 975px) 100vw, 975px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 
2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em>Figure 6. dUPF-UP N3 uplink DOCA Flow pipelines<\/em><\/figcaption><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\">N6 interface DOCA Flow pipeline design<\/h4>\n\n\n\n<p>The N6 interface downlink pipeline contains the following pipes, as shown in Figure 7:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Counter:<\/strong> Counts received packets for URR reporting<\/li>\n\n\n\n<li><strong>Policer QoS Flow MBR:<\/strong> Performs QER enforcement for QoS flow-level MBR<\/li>\n\n\n\n<li><strong>Policer QoS Session MBR:<\/strong> Performs QER enforcement for session-level MBR<\/li>\n\n\n\n<li><strong>Counter:<\/strong> Counts packets post QER metering for URR reporting<\/li>\n\n\n\n<li><strong>GTP Encap:<\/strong> Performs GTP header encapsulation<\/li>\n\n\n\n<li><strong>FAR (DSCP Marking):<\/strong> Performs DSCP marking and other FAR handling<\/li>\n\n\n\n<li><strong>Forward:<\/strong> Forwards the packet to the N3 interface<\/li>\n<\/ul>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69efd902353fe&quot;}\" data-wp-interactive=\"core\/image\" class=\"wp-block-image aligncenter size-full wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"975\" height=\"215\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines.png\" alt=\"Flow diagram 
for dUPF-UP N6 interface downlink DOCA Flow pipelines: Counter-&gt;Policer (QoS Flow MBR)-&gt;Policer (Session MBR)-&gt;Counter-&gt;GTP Encap-&gt;FAR-&gt;Forward.\n\" class=\"wp-image-107242\" srcset=\"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines.png 975w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-300x66.png 300w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-625x138.png 625w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-179x39.png 179w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-768x169.png 768w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-645x142.png 645w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-500x110.png 500w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-160x35.png 160w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-362x80.png 362w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-499x110.png 499w, https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/dupf-up-downlink-doca-flow-pipelines-960x212.png 960w\" sizes=\"auto, (max-width: 975px) 100vw, 975px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on-async--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg 
xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\"><em>Figure 7. dUPF-UP N6 downlink DOCA Flow pipelines<\/em><\/figcaption><\/figure>\n\n\n\n<p>To learn more about how to program the Counter, Policer, GTP Encap, GTP Decap, FAR, and Forward pipes, see the <a href=\"https:\/\/confluence.nvidia.com\/pages\/viewpage.action?spaceKey=docadev&amp;title=.DOCA+Flow+v3.2.0-OCT_GA\">DOCA Flow Program Guide<\/a> and the <a href=\"https:\/\/confluence.nvidia.com\/display\/docadev\/.DOCA+Accelerated+UPF+Reference+Application+Guide+v3.2.0-OCT_GA\">DOCA Accelerated UPF Reference Application Guide<\/a>.<\/p>\n\n\n\n<h3 id=\"dupf-up_example_implementation_lab_validation\"  class=\"wp-block-heading\">dUPF-UP example implementation lab validation<a href=\"#dupf-up_example_implementation_lab_validation\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h3>\n\n\n\n<p>dUPF-UP was tested on a Supermicro 1U Grace Hopper MGX System server, using two dedicated CPU cores (Core-0 and Core-1). Core-0 managed control procedures for AI-DN session setup, while Core-1 handled slow-path exception packets in Poll Mode Driver (PMD) mode. The dUPF-CP simulator initiated 60,000 UE sessions at 1,000 sessions\/second. 
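<\/p>

<p>To make the pipe semantics above concrete, the GTP Decap and Policer stages can be sketched in plain Python (illustrative only, not the DOCA Flow implementation; the sketch assumes the common G-PDU fast path with only the mandatory 8-byte GTPv1-U header, and uses a token bucket as one common way to realize MBR enforcement):<\/p>

```python
import struct
import time

GTPU_HDR_LEN = 8  # mandatory GTPv1-U header: flags, message type, length, TEID

def gtp_decap(frame: bytes):
    """Strip the mandatory GTPv1-U header and return (teid, inner_packet).
    Assumes version 1, message type 0xFF (G-PDU), and no optional
    E/S/PN extension fields."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:GTPU_HDR_LEN])
    if (flags >> 5) != 1 or msg_type != 0xFF:
        raise ValueError("not a GTPv1-U G-PDU")
    return teid, frame[GTPU_HDR_LEN:GTPU_HDR_LEN + length]

class TokenBucketPolicer:
    """Token-bucket sketch of MBR enforcement in the spirit of the Policer
    pipes: packets beyond the configured rate are dropped."""

    def __init__(self, rate_bps, burst_bytes, clock=time.monotonic):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self, pkt_len):
        now = self.clock()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True   # conforming: forward
        return False      # exceeds MBR: drop
```

<p>Chaining one policer per QoS flow and one per session mirrors the two-stage MBR enforcement in the pipelines above; in dUPF-UP both stages run as hardware meters on the BF3.<\/p>

<p>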
After setup, user plane packets were sent over dual 100G links from a TRex traffic generator.<\/p>\n\n\n\n<p>Observations include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Core-0 averaged under 7% CPU usage for control procedures<\/li>\n\n\n\n<li>Core-1 showed 100% CPU usage due to PMD polling, but received no exception packets, as all user plane packets were handled by the BF3<\/li>\n\n\n\n<li>BF3 NIC hardware accelerated all user plane packets, achieving 100 Gbps throughput with zero packet loss<\/li>\n<\/ul>\n\n\n\n<h3 id=\"lab_performance_testing_summary\"  class=\"wp-block-heading\">Lab performance testing summary<a href=\"#lab_performance_testing_summary\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h3>\n\n\n\n<p>In performance lab testing, the dUPF-UP example implementation on Grace plus BF3 achieved 100 Gbps throughput (the line rate of the test setup's 100G links) with zero packet loss. This demonstrates full hardware acceleration of user plane packet processing for AI traffic using an IP subnet SDF-based pipeline design, accomplished with only two Grace CPU cores. The functionality and performance achieved in lab testing validated the value propositions of dUPF-UP on the AI Aerial platform.<\/p>\n\n\n\n<h2 id=\"dupf_ecosystem_adoption\"  class=\"wp-block-heading\"><strong>dUPF ecosystem adoption<\/strong><a href=\"#dupf_ecosystem_adoption\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>Cisco embraces the dUPF architecture, accelerated by the NVIDIA AI Aerial platform and the NVIDIA DOCA framework, as a cornerstone for 6G AI-centric networks. 
When combined with the AI-ready data center architecture, this enables telecom operators to deploy high-performance, energy-efficient dUPF with infused security and closely integrated AI inference at the network edge, opening the door to applications such as video search and summarization (VSS), agentic AI, XR, and ultra-responsive AI-driven services.<\/p>\n\n\n\n<p>\u201cSoftware-defined DPU and GPU-accelerated edge infrastructure enable efficient deployment of Wireless RAN, Core, and AI applications, delivering superior user experiences and new monetization opportunities for service providers,\u201d said Darin Kaufman, Head of Product, Cisco Mobility. \u201cTogether, Cisco and NVIDIA are building intelligent, secure, and energy-efficient edge networks that power the next generation of wireless connectivity.\u201d<\/p>\n\n\n\n<h2 id=\"get_started_building_and_deploying_ai-native_networks\"  class=\"wp-block-heading\"><strong>Get started building and deploying AI-native networks<\/strong><a href=\"#get_started_building_and_deploying_ai-native_networks\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n\n\n\n<p>dUPF is a critical component of the 6G AI-centric network. By strategically deploying high-performance, ultra-low-latency, and energy-efficient dUPF accelerated on the <a href=\"https:\/\/developer.nvidia.com\/aerial\">NVIDIA AI Aerial<\/a> platform with integrated AI inference at the network edge, operators can enable a new era of services. 
This dramatically lowers operational expenditures and ensures that the network infrastructure is agile and scalable enough to handle the immense demands of future AI-centric applications within a 6G network.<\/p>\n\n\n\n<p>To get started, contact <a href=\"mailto:telco@nvidia.com\">telco@nvidia.com<\/a> to learn more about DOCA Flow hardware acceleration and the benefits of dUPF deployment on <a href=\"https:\/\/developer.nvidia.com\/aerial\">AI Aerial<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The telecommunications industry is innovating rapidly toward 6G for both AI-native Radio Access Networks (AI-RAN) and AI-Core. The distributed User Plane Function (dUPF) brings compute closer to the network edge through decentralized packet processing and routing, enabling ultra-low latency, high throughput, and the seamless integration of distributed AI workloads. dUPF is becoming a crucial component &hellip; <a href=\"https:\/\/developer.nvidia.com\/blog\/accelerated-and-distributed-upf-for-the-era-of-agentic-ai-and-6g\/\">Continued<\/a><\/p>\n","protected":false},"author":3000,"featured_media":107232,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"publish_to_discourse":"","publish_post_category":"318","wpdc_auto_publish_overridden":"1","wpdc_topic_tags":"","wpdc_pin_topic":"","wpdc_pin_until":"","discourse_post_id":"1696211","discourse_permalink":"https:\/\/forums.developer.nvidia.com\/t\/accelerated-and-distributed-upf-for-the-era-of-agentic-ai-and-6g\/347794","wpdc_publishing_response":"success","wpdc_publishing_error":"","nv_subtitle":"","ai_post_summary":"<ul><li>The distributed User Plane Function (dUPF) is a crucial component in the evolution of mobile networks, enabling ultra-low latency, high throughput, and the integration of distributed AI workloads at the network edge.<\/li><li>dUPF handles user plane packet processing at distributed locations, reducing latency and 
optimizing network resources, and is a key aspect of the 6G AI-Native Wireless Networks Initiative (AI-WIN) full-stack architecture on NVIDIA accelerated edge infrastructure.<\/li><li>When implemented on the NVIDIA AI Aerial platform, dUPF achieves ultra-low latency of as low as 25 microseconds with zero packet loss, cost reduction, energy efficiency, and enhanced network performance, making it suitable for applications like AR\/VR, video search and summarization, and autonomous vehicle communications.<\/li><\/ul>","footnotes":"","_links_to":"","_links_to_target":""},"categories":[2758,1205],"tags":[817,3965,4602,1461,453],"coauthors":[4821,3221,3222,3238],"class_list":["post-107228","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-edge-computing","category-networking-communications","tag-5g","tag-ai-agent","tag-ai-factory","tag-doca","tag-featured","tagify_workload-generative-ai","tagify_workload-edge-computing","tagify_workload-networking-communications","tagify_workload-cybersecurity"],"acf":{"post_industry":["Telecommunications"],"post_products":["Aerial","BlueField DPU","DOCA","Grace CPU"],"post_learning_levels":["Beginner Technical"],"post_content_types":["Deep dive"],"post_collections":""},"jetpack_featured_media_url":"https:\/\/developer-blogs.nvidia.com\/wp-content\/uploads\/2025\/10\/telecom-icon-graphic.png","primary_category":{"category":"Networking \/ Communications","link":"https:\/\/developer.nvidia.com\/blog\/category\/networking-communications\/","id":1205,"data_source":""},"nv_translations":[{"language":"zh_CN","title":"\u9762\u5411\u4ee3\u7406\u5f0f AI \u548c 6G \u65f6\u4ee3\u7684\u52a0\u901f\u548c\u5206\u5e03\u5f0f 
UPF","post_id":15523}],"jetpack_shortlink":"https:\/\/wp.me\/pcCQAL-rTu","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/107228","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/users\/3000"}],"replies":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/comments?post=107228"}],"version-history":[{"count":22,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/107228\/revisions"}],"predecessor-version":[{"id":110351,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/107228\/revisions\/110351"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/media\/107232"}],"wp:attachment":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/media?parent=107228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/categories?post=107228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/tags?post=107228"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/coauthors?post=107228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}