NVIDIA AI Unveils ProRL Agent: A Decoupled Rollout-as-a-Service Infrastructure for Reinforcement Learning of Multi-Turn LLM Agents at Scale

March 28, 2026
in AI & Technology
Reading Time: 7 mins read

NVIDIA researchers introduced ProRL AGENT, a scalable infrastructure designed for reinforcement learning (RL) training of multi-turn LLM agents. By adopting a ‘Rollout-as-a-Service’ philosophy, the system decouples agentic rollout orchestration from the training loop. This architectural shift addresses the inherent resource conflicts between I/O-intensive environment interactions and GPU-intensive policy updates that currently bottleneck agent development.

The Core Problem: Tight Coupling

Multi-turn agent tasks involve interacting with external environments, such as code repositories or operating systems, via iterative tool use. Many existing frameworks—including SkyRL, VeRL-Tool, Agent Lightning, rLLM, and GEM—embed rollout control directly within the training process.

This tight coupling leads to two primary limitations:

  • Conflicting System Requirements: Rollouts are I/O-bound, requiring sandbox creation, long-lived tool sessions, and asynchronous coordination. Training is GPU-intensive, centered on forward/backward passes and gradient synchronization. Running both in one process causes interference and reduces hardware efficiency.
  • Maintenance Barriers: Embedding rollout logic in the trainer makes it difficult to migrate to different training backends or support new runtime environments without re-implementing the execution pipeline.
https://arxiv.org/pdf/2603.18815

System Design: Rollout-as-a-Service

ProRL AGENT operates as a standalone HTTP service that manages the full rollout lifecycle. The RL trainer interacts with the server solely through an API, remaining agnostic to the underlying rollout infrastructure.
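The trainer-side contract can be illustrated with a tiny in-process mock. This is a sketch only: the endpoint path, payload fields, and response shape below are hypothetical, not from the paper; the point is that the trainer only ever sees an HTTP API and a finished trajectory.

```python
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RolloutHandler(BaseHTTPRequestHandler):
    """Mock rollout service: a real server would enqueue the job through
    its INIT/RUN/EVAL pipeline; here we echo a finished trajectory."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        resp = json.dumps({"task_id": body["task_id"], "reward": 1.0,
                           "token_ids": [1, 2, 3]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RolloutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The trainer stays agnostic to sandboxes and tools: it posts a task id
# and gets back a scored trajectory over plain HTTP.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/rollout",
    data=json.dumps({"task_id": "t1"}).encode(),
    headers={"Content-Type": "application/json"})
traj = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
```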

Three-Stage Asynchronous Pipeline

To maximize throughput, the server orchestrates rollouts through an asynchronous three-stage ‘assembly line’:

  1. INIT: Initialization workers spin up sandbox containers and configure tools.
  2. RUN: Rollout workers drive the multi-turn agent loop and collect trajectories.
  3. EVAL: Evaluation workers score results against ground truth to produce reward signals.

By assigning each stage to an independent worker pool, ProRL AGENT allows phases to overlap across different jobs, preventing slow evaluations (such as full test suite executions) from stalling the rollout process.
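The overlap between stages can be sketched with independent worker pools joined by queues, as below. The worker counts and the stage bodies are placeholders; only the structure (separate pools per stage, jobs flowing INIT → RUN → EVAL concurrently) reflects the design described above.

```python
import asyncio

async def stage(inq, outq, work):
    """Generic worker: pull a job, process it, pass it downstream."""
    while True:
        job = await inq.get()
        await outq.put(await work(job))
        inq.task_done()

async def main(jobs):
    q_init, q_run, q_eval, done = (asyncio.Queue() for _ in range(4))
    # Placeholder stage bodies; real ones spin up sandboxes, drive the
    # agent loop, and score against ground truth.
    async def init(j):  await asyncio.sleep(0.01); return j
    async def run(j):   await asyncio.sleep(0.01); return j
    async def eval_(j): await asyncio.sleep(0.01); return (j, 1.0)

    workers  = [asyncio.create_task(stage(q_init, q_run, init))  for _ in range(2)]
    workers += [asyncio.create_task(stage(q_run, q_eval, run))   for _ in range(4)]
    workers += [asyncio.create_task(stage(q_eval, done, eval_))  for _ in range(2)]

    for j in jobs:
        q_init.put_nowait(j)
    results = [await done.get() for _ in jobs]  # stages overlap across jobs
    for w in workers:
        w.cancel()
    return results

results = asyncio.run(main(range(6)))
```

Because each pool drains its own queue, a slow EVAL job only occupies an evaluation worker; INIT and RUN workers keep feeding new jobs through.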


HPC-Compatible Sandboxing and Optimized Tools

ProRL AGENT utilizes Singularity for its sandbox infrastructure. Unlike Docker-based platforms, Singularity allows rootless execution, which is required for deployment on shared HPC clusters managed by Slurm.

The system includes several optimizations to reduce tool execution latency, which often dominates total rollout time:

  • Efficient Bash: Replaces tmux-based terminal multiplexing with a ptyprocess-based direct pseudo-terminal, reducing shell command latency from 0.78s to 0.42s.
  • Direct IPython API: Connects to persistent kernels via an in-process API instead of network gateways, removing networking overhead.
  • Unix Domain Sockets (UDS): Replaces TCP loopback for communication between the agent and the execution server inside the container to shave off additional latency.
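The pseudo-terminal approach can be sketched with the standard library alone. This is not the paper's implementation (which uses ptyprocess): it is a minimal illustration of keeping one long-lived shell behind a direct pty, writing commands to it and reading output back, instead of round-tripping through a tmux session.

```python
import os
import pty
import select

class PtyShell:
    """Persistent shell behind a direct pseudo-terminal (illustrative)."""
    def __init__(self):
        self.pid, self.fd = pty.fork()
        if self.pid == 0:          # child: become the shell
            os.execvp("sh", ["sh"])
            os._exit(1)

    def run(self, cmd, timeout=5.0):
        # Sentinel written as __DO''NE__ so the echoed command line
        # cannot itself match the literal marker we wait for.
        os.write(self.fd, (cmd + "; echo __DO''NE__\n").encode())
        out = b""
        while b"__DONE__" not in out:
            ready, _, _ = select.select([self.fd], [], [], timeout)
            if not ready:
                break
            out += os.read(self.fd, 4096)
        return out.decode(errors="replace")

shell = PtyShell()
out = shell.run("expr 2 + 3")
```

Each call reuses the same shell process and working directory, which is what makes long multi-turn tool sessions cheap.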

Advanced Features for Scalable RL

The infrastructure introduces mechanisms to improve training stability and hardware utilization:

Load Balancing and Prefix Cache Reuse

The server manages a pool of LLM inference backends (e.g., vLLM) using a min-heap keyed by assignment counts. When a task is assigned, all subsequent calls within that task are routed to the same backend. This strategy maximizes prefix cache reuse, reducing inference time across multiple agent turns.
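The routing policy above can be sketched in a few lines: a min-heap keyed by assignment counts picks the least-loaded backend for a new task, and a sticky map pins every later call from that task to the same backend so its prefix cache stays warm. Class and backend names here are illustrative.

```python
import heapq
import itertools

class BackendPool:
    """Least-loaded assignment for new tasks, sticky routing thereafter."""
    def __init__(self, backends):
        self._tie = itertools.count()  # tie-breaker so the heap never compares backends
        self.heap = [(0, next(self._tie), b) for b in backends]
        heapq.heapify(self.heap)
        self.assignments = {}          # task_id -> backend

    def route(self, task_id):
        if task_id in self.assignments:
            # Same backend for every turn of this task: maximizes prefix cache reuse.
            return self.assignments[task_id]
        count, _, backend = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, next(self._tie), backend))
        self.assignments[task_id] = backend
        return backend

pool = BackendPool(["vllm-0", "vllm-1"])
a = pool.route("task-A")  # least-loaded backend
b = pool.route("task-B")  # the other backend
```

A production version would also decrement counts when tasks finish; the sketch keeps only the assignment path.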

Token-in/Token-out Communication

To eliminate re-tokenization drift—where the token sequence generated during rollout differs from what is used during training—ProRL AGENT uses token IDs as the canonical representation throughout the entire process. Log-probabilities and IDs are propagated unchanged from the inference backend to the trainer.
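Why drift happens at all is easy to show with a toy vocabulary: a greedy tokenizer that re-tokenizes the detokenized text need not reproduce the IDs the model actually emitted. The tokenizer below is a deliberately tiny stand-in, not any real model's tokenizer.

```python
# Toy vocabulary where "ab" is both a merged token and two single tokens.
vocab = {"a": 0, "b": 1, "ab": 2}
inv = {v: k for k, v in vocab.items()}

def detok(ids):
    return "".join(inv[i] for i in ids)

def tok(text):
    """Greedy longest-match, the behavior many BPE tokenizers approximate."""
    ids, i = [], 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                ids.append(vocab[piece])
                i += len(piece)
                break
    return ids

rollout_ids = [0, 1]          # model emitted "a" then "b" as separate tokens
retok_ids = tok(detok(rollout_ids))  # trainer would see the merged token instead
```

Passing `rollout_ids` (and their log-probabilities) straight through, as ProRL AGENT does, sidesteps the mismatch entirely.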

Optimized DAPO Implementation

The system supports Dynamic Sampling Policy Optimization (DAPO), which filters out ‘non-informative’ prompts that yield uniform rewards. ProRL AGENT uses an asynchronous replenishment mechanism to maintain maximum throughput, terminating redundant active jobs early once the target number of informative prompts is reached.
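The filtering criterion is simple to state in code: a prompt whose sampled rollouts all receive the same reward contributes zero advantage, so it is dropped and a replacement prompt is requested. The sketch below shows only the filter; the asynchronous replenishment and early termination live in the server.

```python
def informative(rewards):
    """DAPO-style filter: keep a prompt group only if its rollouts
    received non-uniform rewards (otherwise the advantage is zero)."""
    return len(set(rewards)) > 1

# Rewards from, say, 4 rollouts per prompt (illustrative values).
groups = {
    "prompt-1": [1, 1, 1, 1],  # all solved: uninformative
    "prompt-2": [0, 1, 0, 1],  # mixed: informative
    "prompt-3": [0, 0, 0, 0],  # all failed: uninformative
}
kept = [p for p, r in groups.items() if informative(r)]
```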

Experimental Results on SWE-Bench Verified

The system was validated using Qwen3 models across multiple scales. ProRL AGENT consistently improved performance compared to reproduced baselines.

Model      | Reproduced Baseline (%) | ProRL Agent (RL) (%)
Qwen3-4B   | 14.8                    | 21.2
Qwen3-8B   | 9.6                     | 18.0
Qwen3-14B  | 15.4                    | 23.6

Note: The reported prior result for SkyRL-Agent-14B-v0 was 21.6.

In addition to software engineering, the system demonstrated generality in STEM, Math, and Code domains, showing steady reward growth during RL training. Scalability tests confirmed that rollout throughput increases near-linearly as compute nodes are added.

Key Takeaways

  • Architectural Decoupling: ProRL Agent treats the full agentic rollout lifecycle—including environment initialization, tool execution, and reward scoring—as an independent HTTP service, separating I/O-intensive tasks from GPU-intensive policy training.
  • Significant Performance Gains: This infrastructure enabled the Qwen3-8B model to nearly double its performance on the SWE-Bench Verified benchmark (from 9.6% to 18.0%), while the Qwen3-14B model improved from 15.4% to 23.6%.
  • System Latency Reductions: Targeted optimizations, such as replacing tmux with ptyprocess for shell execution, reduced action latency from 0.78s to 0.42s, contributing to near-linear throughput scaling across compute nodes.
  • Elimination of Tokenization Drift: The framework utilizes a token-in/token-out communication pipeline, ensuring that the exact token IDs generated during rollout are passed to the trainer without the risk of lossy re-tokenization.
  • HPC-Native Deployment: By using Singularity instead of Docker, ProRL Agent supports rootless execution and native Slurm integration, allowing large-scale agent training on shared high-performance computing clusters.

Check out the Paper and Repo.

The post NVIDIA AI Unveils ProRL Agent: A Decoupled Rollout-as-a-Service Infrastructure for Reinforcement Learning of Multi-Turn LLM Agents at Scale appeared first on MarkTechPost.
