
Microsoft Releases Phi-4-Reasoning-Vision-15B: A Compact Multimodal Model for Math, Science, and GUI Understanding

March 6, 2026
in AI & Technology
Reading Time: 8 mins read

Microsoft has released Phi-4-reasoning-vision-15B, a 15 billion parameter open-weight multimodal reasoning model designed for image and text tasks that require both perception and selective reasoning. It is a compact model built to balance reasoning quality, compute efficiency, and training-data requirements, with particular strength in scientific and mathematical reasoning and understanding user interfaces.

https://arxiv.org/pdf/2603.03975

What the model is built on

Phi-4-reasoning-vision-15B combines the Phi-4-Reasoning language backbone with the SigLIP-2 vision encoder using a mid-fusion architecture. In this setup, the vision encoder first converts images into visual tokens, then those tokens are projected into the language model embedding space and processed by the pretrained language model. This design acts as a practical trade-off: it preserves strong cross-modal reasoning while keeping training and inference costs manageable compared with heavier early-fusion designs.
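The mid-fusion flow described above can be sketched in a few lines. This is a toy illustration only: the dimensions are made up, and a random matrix stands in for the real SigLIP-2 encoder and the learned vision-to-language projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration only (not the model's real sizes).
VISION_DIM = 64    # width of the vision encoder's output tokens in this sketch
LM_DIM = 128       # width of the language model's embedding space in this sketch

def encode_image(num_patches: int) -> np.ndarray:
    """Stand-in for the SigLIP-2 encoder: an image becomes a sequence of visual tokens."""
    return rng.normal(size=(num_patches, VISION_DIM))

# Mid-fusion connector: a learned projection from vision space into LM embedding space.
W_proj = rng.normal(size=(VISION_DIM, LM_DIM))

def fuse(visual_tokens: np.ndarray, text_embeddings: np.ndarray) -> np.ndarray:
    projected = visual_tokens @ W_proj                               # map visual tokens into LM space
    return np.concatenate([projected, text_embeddings], axis=0)     # one joint token sequence

visual = encode_image(num_patches=16)
text = rng.normal(size=(8, LM_DIM))   # embedded text prompt
sequence = fuse(visual, text)
print(sequence.shape)  # (24, 128): 16 projected visual tokens followed by 8 text tokens
```

The point of the design is visible even in the sketch: only the small projection is vision-specific, and the pretrained language model then processes the joint sequence unchanged, which is why mid-fusion is cheaper than early-fusion designs that retrain deeper cross-modal layers.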


Why Microsoft took the smaller-model route

Many recent vision-language models have grown in parameter count and token usage, which raises both latency and deployment cost. Phi-4-reasoning-vision-15B was built as a smaller alternative that still handles common multimodal workloads without relying on extremely large training datasets or excessive inference-time token generation. The model was trained on 200 billion multimodal tokens, building on Phi-4-Reasoning, which was trained on 16 billion tokens, and ultimately on the Phi-4 base model, which was trained on 400 billion unique tokens. Microsoft contrasts that with the more than 1 trillion tokens used to train several recent multimodal models such as Qwen 2.5 VL, Qwen 3 VL, Kimi-VL, and Gemma 3.


High-resolution perception was a core design choice

The Microsoft team explains in its technical report one of the more useful lessons of the work: multimodal reasoning often fails because perception fails first. Models can miss the answer not because they lack reasoning ability, but because they fail to extract the relevant visual details from dense images such as screenshots, documents, or interfaces with small interactive elements.

Phi-4-reasoning-vision-15B uses a dynamic resolution vision encoder with up to 3,600 visual tokens, which is intended to support high-resolution understanding for tasks such as GUI grounding and fine-grained document analysis. The Microsoft team states that high-resolution, dynamic-resolution encoders yield consistent improvements, and explicitly notes that accurate perception is a prerequisite for high-quality reasoning.
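A dynamic-resolution encoder with a token cap behaves roughly like the sketch below: the image is tiled into patches, and if the patch grid would exceed the budget, the image is downscaled until it fits. The 14-pixel patch size is a hypothetical value for illustration; only the 3,600-token cap comes from the article.

```python
import math

MAX_VISUAL_TOKENS = 3_600  # cap reported for Phi-4-reasoning-vision-15B
PATCH = 14                 # hypothetical patch size in pixels (illustrative assumption)

def visual_token_count(width: int, height: int, patch: int = PATCH,
                       cap: int = MAX_VISUAL_TOKENS) -> tuple[int, float]:
    """Return (token_count, downscale_factor) for an image under a token cap."""
    tokens = math.ceil(width / patch) * math.ceil(height / patch)
    if tokens <= cap:
        return tokens, 1.0          # image fits at native resolution
    # Downscale uniformly so the patch grid fits within the token budget.
    scale = math.sqrt(cap / tokens)
    w, h = int(width * scale), int(height * scale)
    return math.ceil(w / patch) * math.ceil(h / patch), scale

print(visual_token_count(448, 448))    # small image: kept at full resolution
print(visual_token_count(1920, 1080))  # full-HD screenshot: downscaled to fit the cap
```

The practical consequence is the one the report emphasizes: a generous cap like 3,600 tokens lets a full-HD screenshot keep enough resolution that small GUI elements and fine document text survive encoding.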

Mixed reasoning instead of forcing reasoning everywhere

A second important design decision is the model’s mixed reasoning and non-reasoning training strategy. Rather than forcing chain-of-thought-style reasoning for all tasks, the Microsoft team trained the model to switch between two modes. Reasoning samples include <think>...</think> traces, while non-reasoning samples begin with <nothink> and are used for perception-focused tasks such as captioning, grounding, OCR, and simple VQA. The reasoning data makes up about 20% of the overall training mixture.

The goal of this hybrid setup is to let the model respond directly on tasks where longer reasoning adds latency without improving accuracy, while still invoking structured reasoning on tasks such as math and science. The Microsoft team also notes an important limitation: the boundary between these modes is learned implicitly, so switching is not always optimal. Users can override the default behavior through explicit prompting with <think> or <nothink> tokens.

Where the model is stronger

The Microsoft team highlights two main application areas. The first is scientific and mathematical reasoning over visual inputs, including handwritten equations, diagrams, charts, tables, and quantitative documents. The second is computer-use agent tasks, where the model interprets screen content, localizes GUI elements, and supports interaction with desktop, web, or mobile interfaces.


Benchmark results

The Microsoft team reports the following benchmark scores for Phi-4-reasoning-vision-15B:

  • AI2D (test): 84.8
  • ChartQA (test): 83.3
  • MathVerse (mini): 44.9
  • MathVision (mini): 36.2
  • MathVista (mini): 75.2
  • MMMU (val): 54.3
  • MMStar: 64.5
  • OCRBench: 76.0
  • ScreenSpot-v2: 88.2

The technical report also notes that these results were generated using Eureka ML Insights and VLMEvalKit with fixed evaluation settings, and that the team presents them as comparison results rather than leaderboard claims.

Key Takeaways

  • Phi-4-reasoning-vision-15B is a 15B open-weight multimodal model built by combining Phi-4-Reasoning with the SigLIP-2 vision encoder in a mid-fusion architecture.
  • The Microsoft team designed the model for compact multimodal reasoning, with a focus on math, science, document understanding, and GUI grounding, rather than scaling to a much larger parameter count.
  • High-resolution visual perception is a core part of the system, with support for dynamic resolution encoding and up to 3,600 visual tokens, which helps on dense screenshots, documents, and interface-heavy tasks.
  • The model uses mixed reasoning and non-reasoning training, allowing it to switch between <think> and <nothink> modes depending on whether a task needs explicit reasoning or direct perception-based output.
  • Microsoft’s reported benchmarks show strong performance for its size, including results on AI2D, ChartQA, MathVista, OCRBench, and ScreenSpot-v2, which supports its positioning as a compact but capable vision-language reasoning model.

Check out the Paper, Repo and Model Weights.

The post Microsoft Releases Phi-4-Reasoning-Vision-15B: A Compact Multimodal Model for Math, Science, and GUI Understanding appeared first on MarkTechPost.

