NVIDIA AI Brings Nemotron-3-Nano-30B to NVFP4 with Quantization Aware Distillation (QAD) for Efficient Reasoning Inference

February 2, 2026
in AI & Technology
Reading Time: 7 mins read

NVIDIA has released Nemotron-Nano-3-30B-A3B-NVFP4, a production checkpoint that runs a 30B parameter reasoning model in 4 bit NVFP4 format while keeping accuracy close to its BF16 baseline. The model combines a hybrid Mamba2 Transformer Mixture of Experts architecture with a Quantization Aware Distillation (QAD) recipe designed specifically for NVFP4 deployment. Overall, it is an ultra-efficient NVFP4 precision version of Nemotron-3-Nano that delivers up to 4x higher throughput on Blackwell B200.

https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4

What is Nemotron-Nano-3-30B-A3B-NVFP4?

Nemotron-Nano-3-30B-A3B-NVFP4 is a quantized version of Nemotron-3-Nano-30B-A3B-BF16, trained from scratch by the NVIDIA team as a unified reasoning and chat model. It is built as a hybrid Mamba2 Transformer MoE network:

  • 30B parameters in total
  • 52 layers in depth
  • 23 Mamba2 layers and 23 MoE layers
  • 6 grouped query attention layers with 2 groups
  • Each MoE layer has 128 routed experts and 1 shared expert
  • 6 experts are active per token, which gives about 3.5B active parameters per token
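
As a rough illustration of where the roughly 3.5B active parameters per token come from, the sketch below applies the 6-of-128 routing (plus the shared expert) to the MoE portion of the 30B total. The split between dense and expert parameters is an assumption made purely for the arithmetic; only the totals and routing counts come from the model card above.

```python
# Back-of-envelope sketch of active parameters per token in the MoE.
# ASSUMPTION: the dense (attention, Mamba2, embedding) share of the 30B total
# is taken as ~2B purely for illustration; the article does not state it.
total_params = 30e9
dense_params = 2e9                       # assumed, always active
moe_params   = total_params - dense_params
routed_experts, shared_experts = 128, 1
active_routed = 6

# Per token, the shared expert plus 6 of the 128 routed experts fire.
active_fraction = (active_routed + shared_experts) / (routed_experts + shared_experts)
active_params   = dense_params + moe_params * active_fraction
print(f"approx. active parameters per token: {active_params / 1e9:.1f}B")  # ~3.5B
```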

The model is pre-trained on 25T tokens using a Warmup Stable Decay learning rate schedule with a batch size of 3072, a peak learning rate of 1e-3 and a minimum learning rate of 1e-5.
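
A Warmup Stable Decay schedule holds the learning rate at its peak for most of training and only decays at the end. The sketch below uses the peak (1e-3) and minimum (1e-5) rates quoted above; the warmup length, decay fraction and linear decay shape are assumptions for illustration, since the article does not specify them.

```python
# Minimal sketch of a Warmup Stable Decay (WSD) learning-rate schedule.
# Peak and minimum learning rates come from the article; phase boundaries
# and the linear decay shape are ASSUMED for illustration.
def wsd_lr(step, total_steps, warmup_steps=2000, decay_frac=0.2,
           peak_lr=1e-3, min_lr=1e-5):
    decay_start = int(total_steps * (1 - decay_frac))
    if step < warmup_steps:                      # linear warmup to the peak
        return peak_lr * step / warmup_steps
    if step < decay_start:                       # long stable plateau at the peak
        return peak_lr
    # final phase: decay from the peak down to the minimum learning rate
    progress = (step - decay_start) / max(1, total_steps - decay_start)
    return peak_lr + (min_lr - peak_lr) * progress

assert wsd_lr(50_000, 100_000) == 1e-3           # mid-training sits on the plateau
```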

Post training follows a 3 stage pipeline:

  1. Supervised fine tuning on synthetic and curated data for code, math, science, tool calling, instruction following and structured outputs.
  2. Reinforcement learning with synchronous GRPO across multi step tool use, multi turn chat and structured environments, and RLHF with a generative reward model.
  3. Post training quantization to NVFP4 with FP8 KV cache and a selective high precision layout, followed by QAD.

The NVFP4 checkpoint keeps the attention layers and the Mamba layers that feed into them in BF16, quantizes the remaining layers to NVFP4 and uses FP8 for the KV cache.
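
A minimal sketch of how such a selective precision layout could be expressed, assuming a hypothetical per-layer config; this is illustrative only and not NVIDIA's actual export or inference code.

```python
# Illustrative sketch of the selective precision layout described above:
# attention layers and the Mamba2 layers that feed into them stay in BF16,
# everything else goes to NVFP4, and the KV cache is stored in FP8.
# Layer names and metadata here are hypothetical.
example_layers = {
    "layers.21.mamba2":    {"feeds_attention": True},   # feeds an attention layer -> BF16
    "layers.22.attention": {"is_attention": True},      # attention -> BF16
    "layers.23.moe":       {},                          # MoE -> NVFP4
    "layers.24.mamba2":    {"feeds_attention": False},  # ordinary Mamba2 -> NVFP4
}

def choose_format(meta):
    if meta.get("is_attention") or meta.get("feeds_attention"):
        return "bf16"        # kept in high precision for stability
    return "nvfp4"           # quantized to 4 bit

quant_config = {
    "kv_cache_dtype": "fp8",
    "layer_formats": {name: choose_format(meta) for name, meta in example_layers.items()},
}
print(quant_config["layer_formats"])
```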

The NVFP4 format and why it matters

NVFP4 is a 4 bit floating point format designed for both training and inference on recent NVIDIA GPUs. The main properties of NVFP4:

  • Compared with FP8, NVFP4 delivers 2 to 3 times higher arithmetic throughput.
  • It reduces memory usage by about 1.8 times for weights and activations.
  • It extends MXFP4 by reducing the block size from 32 to 16 and introducing two level scaling.

The two level scaling uses E4M3-FP8 scales per block and an FP32 scale per tensor. The smaller block size allows the quantizer to adapt to local statistics, and the dual scaling increases dynamic range while keeping quantization error low.
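
To make the two level scaling concrete, here is a small NumPy sketch of an NVFP4-style quantize-dequantize: blocks of 16 elements, one scale per block (which NVFP4 would store in E4M3) and one FP32 scale per tensor, with each element snapped to the E2M1 (FP4) grid. It is purely illustrative: the block scales are not actually rounded to E4M3 here, and this is not NVIDIA's kernel.

```python
import numpy as np

# NVFP4-style two level scaling sketch. Assumes the tensor size is a
# multiple of the block size.
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # FP4 magnitudes
FP4_MAX, E4M3_MAX = 6.0, 448.0

def nvfp4_quant_dequant(x: np.ndarray, block: int = 16) -> np.ndarray:
    xb = x.reshape(-1, block).astype(np.float32)
    amax_block = np.abs(xb).max(axis=1, keepdims=True)
    # Level 1: per-tensor FP32 scale, chosen so block scales fit the E4M3 range.
    tensor_scale = max(float(amax_block.max()) / (FP4_MAX * E4M3_MAX), 1e-12)
    # Level 2: per-block scale mapping each block onto the FP4 grid.
    block_scale = np.maximum(amax_block / (FP4_MAX * tensor_scale), 1e-12)
    scaled = xb / (block_scale * tensor_scale)
    # Snap each element to the nearest representable E2M1 magnitude.
    idx = np.abs(np.abs(scaled)[..., None] - E2M1_GRID).argmin(axis=-1)
    q = np.sign(scaled) * E2M1_GRID[idx]
    return (q * block_scale * tensor_scale).reshape(x.shape)

x = np.random.randn(4, 16).astype(np.float32)
print("mean abs quantization error:", np.abs(nvfp4_quant_dequant(x) - x).mean())
```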

For very large LLMs, simple post training quantization (PTQ) to NVFP4 already gives decent accuracy across benchmarks. For smaller models, especially those that go through heavy post training pipelines, the research team notes that PTQ causes non-negligible accuracy drops, which motivates a training-based recovery method.

From QAT to QAD

Standard Quantization Aware Training (QAT) inserts pseudo quantization into the forward pass and reuses the original task loss, such as next token cross entropy (a minimal sketch of this follows the list below). This works well for convolutional networks, but the research team lists 2 main issues for modern LLMs:

  • Complex multi stage post training pipelines with SFT, RL and model merging are hard to reproduce.
  • Original training data for open models is often unavailable in public form.
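
For reference, the pseudo quantization that QAT inserts into the forward pass is typically implemented with a straight-through estimator, roughly as sketched below; the simple symmetric 4-bit rounding here is a stand-in for illustration and is not the NVFP4 codec.

```python
import torch

# Fake quantization with a straight-through estimator: the forward pass sees
# quantized weights, while gradients flow through as if rounding were identity.
def fake_quant(w: torch.Tensor, levels: int = 16) -> torch.Tensor:
    scale = w.abs().max() / (levels // 2 - 1) + 1e-12
    w_q = torch.round(w / scale).clamp(-(levels // 2), levels // 2 - 1) * scale
    # Forward uses w_q, backward sees the identity w.
    return w + (w_q - w).detach()
```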

Quantization Aware Distillation (QAD) changes the objective instead of the full pipeline. A frozen BF16 model acts as the teacher and the NVFP4 model is the student. Training minimizes the KL divergence between their output token distributions, not the original supervised or RL objective.
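
A minimal sketch of one QAD training step under these assumptions: the student's forward pass already applies fake quantization (for example the fake_quant sketch above), and the loss is a KL divergence from the frozen teacher's token distribution. The KL direction and temperature here are illustrative choices, not confirmed details from the report, and the model, batch and optimizer objects are placeholders.

```python
import torch
import torch.nn.functional as F

# One QAD step: frozen BF16 teacher, fake-quantized student, KL-only loss.
# `teacher`, `student`, `batch` and `optimizer` are placeholders.
def qad_step(teacher, student, batch, optimizer, temperature=1.0):
    with torch.no_grad():
        teacher_logits = teacher(batch["input_ids"]).logits
    student_logits = student(batch["input_ids"]).logits

    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over the batch; no labels or reward
    # models are needed, only input text to query both models.
    loss = F.kl_div(s, t, reduction="batchmean") * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```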

The research team highlights 3 properties of QAD:

  1. It aligns the quantized model with the high precision teacher more accurately than QAT.
  2. It stays stable even when the teacher has already gone through several stages, such as supervised fine tuning, reinforcement learning and model merging, because QAD only tries to match the final teacher behavior.
  3. It works with partial, synthetic or filtered data, because it only needs input text to query the teacher and student, not the original labels or reward models.

Benchmarks on Nemotron-3-Nano-30B

Nemotron-3-Nano-30B-A3B is one of the RL-heavy models in the QAD research. The table in the linked report shows accuracy on AA-LCR, AIME25, GPQA-D, LiveCodeBench-v5 and SciCode for the BF16 baseline and the NVFP4-PTQ, NVFP4-QAT and NVFP4-QAD variants.

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf

Key Takeaways

  • Nemotron-3-Nano-30B-A3B-NVFP4 is a 30B parameter hybrid Mamba2 Transformer MoE model that runs in 4 bit NVFP4 with FP8 KV cache and a small set of BF16 layers preserved for stability, while keeping about 3.5B active parameters per token and supporting context windows up to 1M tokens.
  • NVFP4 is a 4 bit floating point format with block size 16 and two level scaling, using E4M3-FP8 per-block scales and an FP32 per-tensor scale, which gives about 2 to 3 times higher arithmetic throughput and about 1.8 times lower memory cost than FP8 for weights and activations.
  • Quantization Aware Distillation (QAD) replaces the original task loss with KL divergence to a frozen BF16 teacher, so the NVFP4 student directly matches the teacher’s output distribution without replaying the full SFT, RL and model merge pipeline or needing the original reward models.
  • Using the new Quantization Aware Distillation method, the NVFP4 version recovers up to 99.4% of the BF16 model's accuracy.
  • On AA-LCR, AIME25, GPQA-D, LiveCodeBench and SciCode, NVFP4-PTQ shows noticeable accuracy loss and NVFP4-QAT degrades further, while NVFP4-QAD recovers performance to near BF16 levels, reducing the gap to only a few points across these reasoning and coding benchmarks.

Check out the Paper and Model Weights.

The post NVIDIA AI Brings Nemotron-3-Nano-30B to NVFP4 with Quantization Aware Distillation (QAD) for Efficient Reasoning Inference appeared first on MarkTechPost.

