QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs

May 30, 2025

Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. This development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts.

The challenge of long-form reasoning for AI

Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human “slow thinking,” where they develop sophisticated strategies to tackle complex tasks.

However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts (e.g., 120,000 tokens) remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. “This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments,” the developers of QwenLong-L1 write in their paper.

The researchers formalize these challenges into the concept of “long-context reasoning RL.” Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to retrieve and ground relevant information from lengthy inputs accurately. Only then can they generate chains of reasoning based on this incorporated information. 

Training models for this through RL is tricky and often results in inefficient learning and unstable optimization processes. Models struggle to converge on good solutions or lose their ability to explore diverse reasoning paths.

QwenLong-L1: A multi-stage approach

QwenLong-L1 is a reinforcement learning framework designed to help LRMs transition from proficiency with short texts to robust generalization across long contexts. The framework enhances existing short-context LRMs through a carefully structured, multi-stage process, sketched in code after the list:

Warm-up Supervised Fine-Tuning (SFT): The model first undergoes an SFT phase, where it is trained on examples of long-context reasoning. This stage establishes a solid foundation, enabling the model to ground information accurately from long inputs. It helps develop fundamental capabilities in understanding context, generating logical reasoning chains, and extracting answers.

Curriculum-Guided Phased RL: At this stage, the model is trained through multiple phases, with the target length of the input documents gradually increasing. This systematic, step-by-step approach helps the model stably adapt its reasoning strategies from shorter to progressively longer contexts. It avoids the instability often seen when models are abruptly trained on very long texts.

Difficulty-Aware Retrospective Sampling: The final training stage incorporates challenging examples from the preceding training phases, ensuring the model continues to learn from the hardest problems. This prioritizes difficult instances and encourages the model to explore more diverse and complex reasoning paths.
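
To make the three stages concrete, here is a minimal Python sketch of the training schedule. The helpers `sft_step`, `rl_step`, and `difficulty`, the phase token limits, and the 10% retention ratio are illustrative assumptions, not details taken from the paper:

```python
import random

# Placeholders for the real training machinery:
def sft_step(model, example):
    return model  # one supervised update on a long-context reasoning trace

def rl_step(model, example):
    return model  # one RL update (e.g., a GRPO-style policy update)

def difficulty(model, example):
    return random.random()  # hardness score; higher = harder (e.g., 1 - mean reward)

def train(model, sft_data, rl_data,
          phase_max_tokens=(20_000, 60_000, 120_000)):  # assumed schedule
    # Stage 1: warm-up SFT grounds the model on long inputs.
    for ex in sft_data:
        model = sft_step(model, ex)

    hard_pool = []  # hard examples carried forward between phases
    for max_tokens in phase_max_tokens:
        # Stage 2: curriculum-guided phased RL — each phase admits
        # progressively longer input documents.
        phase = [ex for ex in rl_data if ex["n_tokens"] <= max_tokens]
        # Stage 3: difficulty-aware retrospective sampling — replay the
        # hardest examples from earlier phases alongside the new ones.
        for ex in phase + hard_pool:
            model = rl_step(model, ex)
        ranked = sorted(phase, key=lambda ex: difficulty(model, ex), reverse=True)
        hard_pool = ranked[: max(1, len(ranked) // 10)]  # keep hardest ~10% (assumed)
    return model
```

The structural ideas to note are the length-based curriculum and the hard-example pool carried across phases; the exact update rules and ratios would come from the paper and released code.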

Figure: The QwenLong-L1 training process (Source: arXiv)

Beyond this structured training, QwenLong-L1 also uses a distinct reward system. While training for short-context reasoning tasks often relies on strict rule-based rewards (e.g., a correct answer in a math problem), QwenLong-L1 employs a hybrid reward mechanism. This combines rule-based verification, which ensures precision by checking for strict adherence to correctness criteria, with an “LLM-as-a-judge.” The judge model compares the semantics of the generated answer with the ground truth, allowing for more flexibility and better handling of the diverse ways correct answers can be expressed in long, nuanced documents.
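
A hedged sketch of such a hybrid reward is below. Taking the maximum of the two signals lets either strict correctness or judged equivalence grant full reward; the token-overlap judge is only a self-contained stand-in for a real LLM judge call:

```python
def rule_reward(pred: str, gold: str) -> float:
    # Strict rule-based verification: exact match after light normalization.
    return float(pred.strip().lower() == gold.strip().lower())

def judge_reward(pred: str, gold: str) -> float:
    # Stand-in for an LLM-as-a-judge call. A real implementation would
    # prompt a judge model to score semantic equivalence; crude token
    # overlap keeps this sketch runnable end to end.
    p, g = set(pred.lower().split()), set(gold.lower().split())
    return len(p & g) / max(len(g), 1)

def hybrid_reward(pred: str, gold: str) -> float:
    # Assumption: combine by taking the max of the two signals.
    return max(rule_reward(pred, gold), judge_reward(pred, gold))

# Partial credit for a semantically close but non-identical answer:
print(hybrid_reward("net income rose 12%", "Net income increased by 12%"))
```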

Putting QwenLong-L1 to the test

The Alibaba team evaluated QwenLong-L1 using document question-answering (DocQA) as the primary task. This scenario is highly relevant to enterprise needs, where AI must understand dense documents to answer complex questions. 

Experimental results across seven long-context DocQA benchmarks showed QwenLong-L1’s capabilities. Notably, the QwenLong-L1-32B model (based on DeepSeek-R1-Distill-Qwen-32B) achieved performance comparable to Anthropic’s Claude-3.7 Sonnet Thinking and outperformed models like OpenAI’s o3-mini and Qwen3-235B-A22B. The smaller QwenLong-L1-14B model also outperformed Google’s Gemini 2.0 Flash Thinking and Qwen3-32B.

Figure: Benchmark results on long-context DocQA (Source: arXiv)

An important finding relevant to real-world applications is how RL training results in the model developing specialized long-context reasoning behaviors. The paper notes that models trained with QwenLong-L1 become better at “grounding” (linking answers to specific parts of a document), “subgoal setting” (breaking down complex questions), “backtracking” (recognizing and correcting their own mistakes mid-reasoning), and “verification” (double-checking their answers).

For instance, while a base model might get sidetracked by irrelevant details in a financial document or get stuck in a loop of over-analyzing unrelated information, the QwenLong-L1 trained model demonstrated an ability to engage in effective self-reflection. It could successfully filter out these distractor details, backtrack from incorrect paths, and arrive at the correct answer.

Techniques like QwenLong-L1 could significantly expand the utility of AI in the enterprise. Potential applications include legal tech (analyzing thousands of pages of legal documents), finance (deep research on annual reports and financial filings for risk assessment or investment opportunities) and customer service (analyzing long customer interaction histories to provide more informed support). The researchers have released the code for the QwenLong-L1 recipe and the weights for the trained models.
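
For readers who want to try the release, a minimal loading sketch with Hugging Face transformers follows. The repository id is an assumption drawn from the project’s published checkpoints; confirm it on the model page before running:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tongyi-Zhiwen/QwenLong-L1-32B"  # assumed published checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs `accelerate`
)

document = "<paste a long filing or contract here>"
prompt = (
    "Read the document and answer the question.\n\n"
    f"{document}\n\nQuestion: What are the main risk factors?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt:
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```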
