TradePoint.io
Researchers baked 3x inference speedups directly into LLM weights — without speculative decoding

February 23, 2026
in AI & Technology
Reading Time: 11 mins read

As agentic AI workflows multiply the cost and latency of long reasoning chains, a team from the University of Maryland, Lawrence Livermore National Labs, Columbia University and TogetherAI has found a way to bake 3x throughput gains directly into a model’s weights.

Unlike speculative decoding, which requires a separate drafting model, this approach requires no additional infrastructure — just a single special token added to the model’s existing architecture.


The limits of next-token prediction

Next-token prediction — generating text one token per forward pass — creates a throughput ceiling that becomes painfully expensive when models need to produce thousands of tokens. This bottleneck is especially problematic in reasoning models, which frequently generate thousands of “chain of thought” tokens before producing the final response, leading to a slow and expensive user experience.

Multi-token prediction (MTP) offers an alternative training paradigm that allows a language model to produce multiple tokens simultaneously in a single forward pass.  For example, the model can be trained to predict a block of tokens all at once instead of just the immediate next token.

John Kirchenbauer, a doctoral candidate in computer science at the University of Maryland and co-author of the paper, told VentureBeat that as we move toward agentic workflows, the focus is shifting from overall throughput to single-user speed. “Today, with ultra-long thinking traces being the norm and agentic outer loops multiplying out those costs even further, latency is becoming as equally important a dimension of overall serving efficiency as gross tokens per second per hardware unit (tps/GPU),” Kirchenbauer said. He said that while standard batched next-token prediction is already optimal for overall throughput, the new approach “strive[s] to saturate the GPU with just a single user’s query to decrease latency for that single user.”

Other methods exist, but they come with drawbacks. “It’s worth noting that speculative decoding, and diffusion LLMs as an efficiency focused alternative to next token prediction (NTP), are both latency focused acceleration techniques,” Kirchenbauer said. But speculative decoding requires deploying and managing an auxiliary “drafting” model, which spends more absolute compute to draft and verify. MTP, on the other hand, “leverages a similar sort of tradeoff, it’s just simpler to serve and scientifically interesting in its own right.”

Current MTP paradigms have limitations, however. The standard objective for training a language model for MTP compares its predictions against ground-truth text from a dataset. The pitfall is that this training teaches the model to predict the probability of each token at its position independently, rather than modeling the joint relationship among the tokens in a sequence.

If a model tries to predict multiple tokens at once using this standard method, two major problems occur. The first is grammatical mismatch. For example, if a model predicts two words following the prefix “The zookeeper fed the,” it might sample independently and produce a mismatched phrase like “panda meat” or “lion bamboo” instead of “panda bamboo” and “lion meat.”
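The grammatical-mismatch failure mode can be illustrated with a toy simulation (a hypothetical sketch, not the paper's code): if each position is sampled independently from its marginal distribution, the crossed pairs "panda meat" and "lion bamboo" come up about half the time.

```python
import random

random.seed(0)

# Toy marginal distributions for the two positions after
# "The zookeeper fed the ...". Jointly, "panda" should pair with
# "bamboo" and "lion" with "meat", but independent per-position
# sampling ignores that coupling and freely crosses them.
pos1 = {"panda": 0.5, "lion": 0.5}
pos2 = {"bamboo": 0.5, "meat": 0.5}

def sample(dist):
    """Draw one token from a {token: probability} distribution."""
    r, acc = random.random(), 0.0
    for tok, p in dist.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

draws = [(sample(pos1), sample(pos2)) for _ in range(1000)]
mismatched = sum(1 for a, b in draws
                 if (a, b) in {("panda", "meat"), ("lion", "bamboo")})
print(f"mismatched pairs: {mismatched / len(draws):.0%}")  # typically near 50%
```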

The second issue is degenerate repetition. Because typical text is unpredictable, a model trying to predict a token 100 positions into the future against a standard dataset will just predict “the,” since it is the most common word in English. This results in the model outputting nonsense like “…the the the…” for far-future positions.

Multi-token prediction via self-distillation

To solve the issues of generating multiple tokens, the researchers propose a novel training technique that uses a student-teacher scheme. A student model, which is the model learning to predict multiple tokens, generates a deterministic multi-token block. A teacher model, acting as a strong standard next-token prediction language model, evaluates that block. The teacher acts as a critic, calculating how likely and coherent the student’s proposed sequence is. If the student proposes a mismatched phrase like “lion bamboo,” the teacher assigns it a high loss, teaching the student to avoid that construction.
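As a rough sketch of that scoring step (all names and numbers here are illustrative assumptions, not the paper's implementation), the teacher's autoregressive log-probabilities over the student's proposed block yield a loss that is low for coherent blocks and high for mismatched ones:

```python
# teacher_logprob is a stand-in for a real NTP model's per-token
# conditional log-probability; values are made up for illustration.
def teacher_logprob(context, token):
    scores = {("fed the", "panda"): -0.7,
              ("fed the panda", "bamboo"): -0.5,
              ("fed the panda", "meat"): -6.0}
    return scores.get((context, token), -8.0)

def block_loss(prefix, block):
    """Negative teacher log-likelihood of the student's proposed block."""
    loss, ctx = 0.0, prefix
    for tok in block:
        loss -= teacher_logprob(ctx, tok)
        ctx = f"{ctx} {tok}"
    return loss

good = block_loss("fed the", ["panda", "bamboo"])
bad = block_loss("fed the", ["panda", "meat"])
print(good, bad)  # the coherent block receives the lower loss
```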


The paradigm is inspired by on-policy reinforcement learning because the student model is not simply memorizing static text. It generates a full rollout (a sequence of actions, in RL parlance) in parallel in a single forward pass and receives a reward based on how good the teacher judges it to be. Unlike static supervised methods, where training pairs are fixed in advance, the feedback here is dynamic, generated from the student’s own outputs in real time. The strong teacher also verifies the coherence of the tokens, which prevents the student model from learning degenerate outputs like repeated words.

For developers, the beauty of this approach lies in its simplicity. “There are truly no modifications to the architecture except for the addition of a special token,” Kirchenbauer said. By co-opting an unused slot in a model’s existing embedding matrix to act as a mask token, the technique converts sequential operations into parallel ones. “Any standard next token prediction language model can be adapted in this way… the internal implementation — MoE, windowed attention, SSM layers, etc. — are left untouched and present no barrier to adaptation.”
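A minimal sketch of the mechanism (the toy model, the mask ID, and the function names are illustrative assumptions, not the released code): append k copies of a repurposed mask-token ID to the input, run one forward pass, and read k proposed future tokens off the final positions.

```python
MASK_ID = 7  # hypothetical unused vocabulary slot repurposed as the mask token

class ToyLM:
    """Stand-in for a causal LM: position i deterministically
    'predicts' token (i + 1) mod vocab, for demonstration only."""
    vocab = 8

    def __call__(self, ids):
        logits = []
        for i, _ in enumerate(ids):
            row = [0.0] * self.vocab
            row[(i + 1) % self.vocab] = 1.0
            logits.append(row)
        return logits  # shape: [len(ids), vocab]

def propose_block(model, input_ids, k):
    """One forward pass -> k proposed future tokens (greedy argmax)."""
    padded = input_ids + [MASK_ID] * k
    logits = model(padded)
    # the last k positions now carry the multi-token predictions
    return [max(range(len(row)), key=row.__getitem__)
            for row in logits[-k:]]

block = propose_block(ToyLM(), [0, 1, 2], k=3)
print(block)  # [4, 5, 6]
```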

For engineering teams, this means the adaptation can be applied to models already in production without rebuilding pipelines.

ConfAdapt


Generating multiple tokens at the same time can still hurt the accuracy of the response at inference time. To maximize generation speed without sacrificing the quality of the output, the authors introduce an adaptive decoding strategy called ConfAdapt.

ConfAdapt evaluates a confidence threshold, such as 90%, at each step. The model generates a block of tokens, but it only keeps the tokens that meet or exceed this high-confidence threshold. When the upcoming text is highly predictable or structural, the model’s confidence is very high. It will accept and output a large chunk of tokens all at once, saving significant computational time on easy tokens. It then focuses its costly single-token passes on harder tokens that require more computational effort.
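In pseudocode terms, the acceptance rule described above might look like the following sketch (the threshold and example values are illustrative; the actual decoding logic is in the authors' forthcoming code):

```python
def accept_prefix(tokens, confidences, threshold=0.9):
    """Keep the longest prefix of the proposed block whose per-token
    confidence stays at or above the threshold; fall back to one
    token so decoding always advances."""
    kept = []
    for tok, conf in zip(tokens, confidences):
        if conf < threshold:
            break
        kept.append(tok)
    return kept or tokens[:1]

block = ["the", "capital", "of", "France", "is", "Paris"]
confs = [0.99, 0.97, 0.98, 0.95, 0.93, 0.60]
print(accept_prefix(block, confs))  # first five tokens accepted in one step
```

On highly predictable spans every token clears the threshold and the whole block is emitted at once; on uncertain spans the rule degrades gracefully toward ordinary one-token-per-pass decoding.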

Putting multi-token prediction to the test

To see how the training paradigm performed in practice, the researchers applied their method to popular open-weight instruction-tuned models. They tested the strong general-purpose model Llama-3.1-8B-Magpie and the smaller, efficient Qwen3-4B-Instruct-2507, which is often chosen for cost-sensitive enterprise deployments. Both models were tuned on MetaMathQA, a dataset of synthetic grade school math problems that rely heavily on reasoning traces.


Example of multi-token blocks generated with ConfAdapt (source: arXiv)

The experiments revealed a clear sweet spot between speed and accuracy. Using the ConfAdapt strategy, the Llama-3.1-8B model achieved a 3x speedup with less than a 3% drop in accuracy on math benchmarks. The Qwen3-4B model achieved the same 3x speedup with a slightly higher 7% drop in accuracy. More aggressive settings could hit 5x speedups, though they came with steeper accuracy penalties.

How this translates to real-world tasks depends on predictability. “As the ConfAdapt approach naturally tailors the acceleration to the inherent entropy in the domain, when the model ‘knows’ exactly what comes next it can emit it in a single pass,” Kirchenbauer noted, leading to massive acceleration on predictable tasks, while using more steps for uncertain outputs.

The speedups also transferred across domains that were not included in the multi-token prediction training phase. This included tasks within the same domain as the training data, like math and reasoning, as well as open-ended tasks such as creative writing and summarization.


The sweet spot of MTP with ConfAdapt is around 3x acceleration (source: arXiv)

Despite this transfer learning, enterprises deploying these models for specialized tasks shouldn’t rely on it entirely. “Our recommendation would be to tune/adapt the model for MTP using samples from the special industrial domain,” Kirchenbauer said. “The best performance is likely achieved if the MTP adaptation is performed using prompts from the deployment domain.”

Serving compatibility and the road ahead

The research team released their trained models on Hugging Face and will soon release the code for their MTP framework. Infrastructure teams integrating these models into vLLM or SGLang will need to account for changes in how batching and KV caching are handled — but that’s a one-time engineering investment, not an ongoing burden. Kirchenbauer sees “no clear barriers to integration” and confirmed the team is “working with some systems experts to identify the shortest path to integration.”

Kirchenbauer’s advice for teams wanting to test the released models: start with toy prompts like counting or repeating a phrase to see ConfAdapt’s gains in action, then adapt the model using samples from your specific deployment domain for best results. “Overall we do expect that a production-ready implementation of our approach could simplify the lifecycle of building and deploying low-latency agentic models,” Kirchenbauer concluded. “While existing acceleration techniques for NTP models focus almost solely on inference harnesses and logic, our approach just bakes some of the complexity into the model itself making it largely complementary to existing work.”
