Google AI Releases Multi-Token Prediction (MTP) Drafters for Gemma 4: Delivering Up to 3x Faster Inference Without Quality Loss

May 6, 2026
in AI & Technology

Large language models keep getting more capable, but their inference speed remains a major obstacle for anyone deploying them in production. Google has launched Multi-Token Prediction (MTP) drafters for the Gemma 4 model family, a specialized speculative decoding architecture that delivers up to 3x faster inference without sacrificing output quality or reasoning accuracy. The release comes just weeks after Gemma 4 surpassed 60 million downloads and directly targets one of the most persistent pain points in deploying large language models: the memory-bandwidth bottleneck that slows token generation regardless of hardware capability.

https://blog.google/innovation-and-ai/technology/developers-tools/multi-token-prediction-gemma-4/?linkId=61725841

Why Is LLM Inference Slow?

Today’s large language models operate autoregressively. They produce exactly one token at a time, sequentially. Every single token generation requires loading billions of model parameters from VRAM (video RAM) into compute units. This process is described as memory-bandwidth bound. The bottleneck is not the raw computing power of the GPU or processor, but the speed at which data can be transferred from memory to the compute units.
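To see why bandwidth, not compute, sets the ceiling, a quick back-of-the-envelope calculation helps. The numbers below (a 31B-parameter model in bf16 on a GPU with roughly 2 TB/s of memory bandwidth) are illustrative assumptions, not official Gemma 4 figures:

```python
# Back-of-the-envelope: in autoregressive decoding, every new token requires
# streaming all model weights from VRAM, so throughput is capped by memory
# bandwidth rather than FLOPs. Numbers here are illustrative assumptions.

def max_tokens_per_sec(n_params, bytes_per_param, bandwidth_gb_s):
    """Upper bound on batch-size-1 decode speed: one full weight read
    per generated token."""
    model_bytes = n_params * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A hypothetical 31B-parameter model in bf16 (2 bytes/param) on a GPU
# with ~2 TB/s of memory bandwidth:
rate = max_tokens_per_sec(31e9, 2, 2000)
print(f"{rate:.0f} tokens/s upper bound")  # ~32 tokens/s, regardless of FLOPs
```

No amount of extra compute raises this ceiling; only moving less data per token does, which is exactly the lever speculative decoding pulls.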


The consequence is a significant latency bottleneck: compute sits underutilized while the system is busy moving data around. What makes this especially inefficient is that the model applies the same amount of computation to a trivially predictable token, such as "words" after "Actions speak louder than…", as it does to a token requiring complex logical inference. Standard autoregressive decoding has no mechanism to exploit how easy or hard the next token is to predict.

What is Speculative Decoding?

Speculative decoding is the foundational technique that Gemma 4’s MTP drafters are built on. The technique decouples token generation from verification by pairing two models: a lightweight drafter and a heavy target model.

Here’s how the pipeline works in practice. The small, fast drafter model proposes several future tokens in rapid succession — a “draft” sequence — in less time than the large target model (e.g., Gemma 4 31B) takes to process even a single token. The target model then verifies all of these suggested tokens in parallel in a single forward pass. If the target model agrees with the draft, it accepts the entire sequence — and even generates one additional token of its own in the process. This means an application can output the full drafted sequence plus one extra token in roughly the same wall-clock time it would normally take to generate just one token.

Since the primary Gemma 4 model retains the final verification step, the output is identical to what the target model would have produced on its own, token-by-token. There is no quality tradeoff — it is a lossless speedup.

MTP: What’s New in the Gemma 4 Drafter Architecture

Google has introduced several architectural enhancements that make the Gemma 4 MTP drafters particularly efficient. The draft models seamlessly utilize the target model’s activations and share its KV cache (key-value cache). The KV cache is a standard optimization in transformer inference that stores intermediate attention computations so they don’t need to be recalculated on every step. By sharing this cache, the drafter avoids wasting time recomputing context that the larger target model has already processed.
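The KV cache itself is easy to illustrate. The minimal single-head attention sketch below shows the optimization the article describes (store past keys and values once, compute only the new token's K/V each step); it is a generic illustration, not Gemma 4's actual implementation:

```python
import numpy as np

# Minimal single-head attention decode step with a KV cache: keys/values
# for past tokens are stored once and reused, so each new token only
# computes and appends its own K and V instead of recomputing the prefix.

class KVCache:
    def __init__(self, d):
        self.keys = np.empty((0, d))
        self.values = np.empty((0, d))

    def append(self, k, v):
        self.keys = np.vstack([self.keys, k[None, :]])
        self.values = np.vstack([self.values, v[None, :]])

def attend_step(q, k, v, cache):
    """One decode step: cache the new token's K/V, then attend over the
    whole cached prefix rather than recomputing it from scratch."""
    cache.append(k, v)
    scores = cache.keys @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ cache.values
```

A drafter that shares this cache with the target model, as the article describes, inherits the target's already-computed context for free instead of maintaining a second copy.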

Additionally, for the E2B and E4B edge models (the smallest Gemma 4 variants, designed to run on mobile and edge devices), Google implemented an efficient clustering technique in the embedder layer. This addresses a bottleneck that is especially prominent on edge hardware: the final logit calculation, which maps internal model representations to vocabulary probabilities. The clustering approach accelerates this step, improving end-to-end generation speed on hardware-constrained devices.

For hardware-specific performance, the Gemma 4 26B mixture-of-experts (MoE) model presents unique routing challenges on Apple Silicon at a batch size of 1. However, increasing the batch size to between 4 and 8 unlocks up to a ~2.2x speedup locally. Similar batch-size-dependent gains are observed on NVIDIA A100 hardware.

Key Takeaways

  • Google has released Multi-Token Prediction (MTP) drafters for the Gemma 4 model family, delivering up to 3x faster inference speeds without any degradation in output quality or reasoning accuracy.
  • MTP drafters use a speculative decoding architecture that pairs a lightweight drafter model with a heavy target model — the drafter proposes several tokens at once, and the target model verifies them all in a single forward pass, breaking the one-token-at-a-time bottleneck.
  • The draft models share the target model’s KV cache and activations, and for E2B and E4B edge models, an efficient clustering technique in the embedder addresses the final logit calculation bottleneck — enabling faster generation even on memory-constrained devices.
  • MTP drafters are available now under the Apache 2.0 license, with model weights on Hugging Face and Kaggle.

Check out the Model Weights and Technical details.


The post Google AI Releases Multi-Token Prediction (MTP) Drafters for Gemma 4: Delivering Up to 3x Faster Inference Without Quality Loss appeared first on MarkTechPost.

