TradePoint.io
New AI framework autonomously optimizes training data, architectures and algorithms — outperforming human baselines

April 27, 2026
in AI & Technology

AI R&D runs on a cycle of hypothesis, experiment, and analysis — each step demanding substantial manual engineering effort. A new framework from researchers at SII-GAIR aims to close that bottleneck by automating the full optimization loop for training data, model architectures, and learning algorithms.

That framework, ASI-EVOLVE, developed by researchers at the Generative Artificial Intelligence Research Lab (SII-GAIR), is designed as an agentic system for AI-for-AI research: it runs a continuous “learn-design-experiment-analyze” cycle to automate the optimization of the foundational AI stack.


In experiments, this self-improvement loop autonomously discovered novel designs that significantly outperformed state-of-the-art human baselines. The system generated novel language model architectures, improved pretraining data pipelines to boost benchmark scores by over 18 points, and designed highly efficient reinforcement learning algorithms. 

For enterprise teams running repeated optimization cycles on their AI systems, the framework offers a path to reducing manual engineering overhead while matching or exceeding the performance of human-designed baselines.

The data and design bottleneck

Engineering teams can only explore a tiny fraction of the vast possible design space for AI models at any given time. Executing experimental workflows requires costly manual effort and frequent human intervention. And the insights gained from these expensive cycles are often siloed as individual intuition or experience, making it difficult to systematically preserve and transfer that knowledge to future projects or across different teams. These constraints fundamentally limit the pace and scale of AI innovation.

AI has made incredible strides in scientific discovery, ranging from specialized tools like AlphaFold solving discrete biological problems to agentic systems answering basic scientific questions. However, current frameworks still struggle with open-ended AI innovation and are mostly limited to narrow optimization within very specific constraints.

Advancing core AI capabilities is far more complex. It requires modifying large interdependent codebases, running compute-heavy experiments that consume tens to hundreds of GPU hours, and analyzing multi-dimensional feedback from training dynamics. 

“Existing frameworks have not yet demonstrated that AI can operate effectively in this regime in a unified way, nor that it can generate meaningful advances across the three foundational pillars of AI development rather than within a single narrowly scoped setting,” the researchers write.

How ASI-EVOLVE learns to research

To overcome the limitations of manual R&D, ASI-EVOLVE operates in a continuous loop of prior knowledge, hypothesis generation, experimentation, and refinement. The system retrieves relevant knowledge and historical experience from existing databases, designs a candidate program representing its next hypothesis, runs experiments to obtain evaluation signals, and distills the outcomes into reusable, human-readable lessons that it feeds back into its knowledge base.

ASI-EVOLVE framework (source: arXiv)
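
In schematic form, the cycle the researchers describe could look like the following Python sketch. Everything here, from the function names to the toy objective, is illustrative and not drawn from the released ASI-EVOLVE code; the real system proposes code changes and runs GPU experiments, whereas this toy optimizes a single number.

```python
import random

def propose(knowledge, history):
    """Researcher (toy): perturb the best past 'design', seeded by prior knowledge."""
    best = max(history, key=lambda rec: rec["score"],
               default={"design": knowledge["prior"]})
    return best["design"] + random.uniform(-0.5, 0.5)

def run_experiment(design):
    """Engineer (toy): evaluate the candidate; stand-in objective peaks at 3.0."""
    return -(design - 3.0) ** 2

def analyze(design, score, history):
    """Analyzer (toy): distill the raw result into a reusable record."""
    improved = not history or score > max(rec["score"] for rec in history)
    return {"design": design, "score": score,
            "lesson": "direction helped" if improved else "direction hurt"}

def evolve(iterations=50, seed=0):
    random.seed(seed)
    knowledge = {"prior": 0.0}  # Cognition Base: pre-loaded starting point
    history = []                # Database: persistent experiment memory
    for _ in range(iterations):
        design = propose(knowledge, history)   # learn + design
        score = run_experiment(design)         # experiment
        history.append(analyze(design, score, history))  # analyze
    return max(history, key=lambda rec: rec["score"])
```

Even in this toy form, the loop shows the key property the paper emphasizes: each iteration's analysis is stored and shapes where the next hypothesis is drawn from.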

There are two key components that drive ASI-EVOLVE. The “Cognition Base” acts as the system’s foundational domain expertise. To speed up the search process, the system is pre-loaded with human knowledge, task-relevant heuristics, and known pitfalls extracted from existing literature. This steers the exploration toward promising directions right from the first iteration. 

The second component is the “Analyzer,” which tackles the complex, multi-dimensional feedback from the experiments. It processes raw training logs, benchmark results, and efficiency traces, distilling them into compact, actionable insights and causal analyses.

Several other complementary modules bring the framework together. A “Researcher” agent reviews prior knowledge from the cognition base and past experimental results to generate new hypotheses, either proposing localized code modifications or writing new programs. 

The “Engineer” component runs the actual experiments. Because AI training trials are incredibly costly, the Engineer is equipped with efficiency measures such as wall-clock limits and quick early-rejection tests that filter out flawed candidate programs before they consume excessive GPU hours. 
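
A minimal sketch of this kind of triage, with hypothetical names and stand-in time budgets, might look like:

```python
import time

def run_with_guardrails(candidate, quick_test, full_run,
                        quick_budget_s=1.0, full_budget_s=10.0):
    """Run a cheap smoke test first; only fund the full experiment if it passes.

    `quick_test` and `full_run` are caller-supplied callables; the second
    argument budgets are stand-ins for the paper's wall-clock limits.
    """
    start = time.monotonic()
    try:
        ok = quick_test(candidate)
    except Exception:
        return {"status": "rejected", "reason": "quick test crashed"}
    if not ok:
        return {"status": "rejected", "reason": "quick test failed"}
    if time.monotonic() - start > quick_budget_s:
        return {"status": "rejected", "reason": "quick test too slow"}

    # Candidate survived triage: spend the real compute budget.
    result = full_run(candidate, deadline=start + full_budget_s)
    return {"status": "completed", "result": result}
```

The point of the pattern is that crashed or obviously flawed candidates cost seconds of CPU time rather than hours of GPU time.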

Finally, the “Database” serves as the system’s persistent memory, storing the code, research motivations, raw results, and the Analyzer’s final reports for every iteration, ensuring that insights compound systematically over time.

By unifying these components, ASI-EVOLVE ensures that an AI agent systematically learns from complex, real-world experimental feedback without requiring constant human intervention. 

While previous frameworks are designed to evolve candidate solutions, “ASI-EVOLVE evolves cognition itself,” the researchers write. “Accumulated experience and distilled insights are continuously stored and retrieved to inform future exploration, ensuring that the system grows not only in the quality of its solutions but in its capacity to reason about where to search next.”

ASI-EVOLVE in action

In their experiments, the researchers showed that ASI-EVOLVE can successfully improve data curation, model architectures, and learning algorithms to create better AI systems.

For real-world enterprise applications, high-quality data is a persistent bottleneck. When tasked with designing category-specific cleaning strategies for massive pretraining corpora, ASI-EVOLVE inspected data samples and diagnosed quality issues like HTML artifacts and formatting inconsistencies. The system autonomously formulated custom curation rules, discovering that systematic cleaning combined with domain-aware preservation rules is far more effective than aggressive filtering. 
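
The paper's actual curation rules are not reproduced in the article, but the idea of systematic cleaning plus domain-aware preservation can be illustrated with a toy rule set (all patterns and thresholds here are invented for illustration):

```python
import re

HTML_TAG = re.compile(r"<[^>]+>")
HTML_ENTITY = re.compile(r"&(?:amp|lt|gt|nbsp|quot);")

def looks_like_code(text):
    """Toy domain detector: code-heavy documents keep their unusual formatting."""
    return sum(text.count(tok) for tok in ("def ", "{", "};", "import ")) >= 2

def clean_document(text):
    """Return cleaned text, or None to drop the document entirely."""
    # Systematic cleaning: strip HTML artifacts and collapse whitespace.
    cleaned = HTML_TAG.sub(" ", text)
    cleaned = HTML_ENTITY.sub(" ", cleaned)
    cleaned = re.sub(r"[ \t]+", " ", cleaned).strip()
    if looks_like_code(text):
        # Domain-aware preservation: exempt code from aggressive prose filters
        # that would wrongly discard it.
        return cleaned
    # Aggressive filter for ordinary prose: drop near-empty fragments.
    if len(cleaned.split()) < 5:
        return None
    return cleaned
```

The contrast the system reportedly discovered is exactly this one: cleaning rules applied everywhere, but destructive filters gated by a per-domain exemption.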

In benchmark tests, 3B-parameter models trained on the AI-curated data saw an average score boost of nearly 4 points over models trained on raw data. The gains were highest in knowledge-intensive tasks, with performance increasing by over 18 points on Massive Multitask Language Understanding (MMLU), an LLM benchmark that covers tasks across STEM, humanities, and social sciences.

ASI-EVOLVE’s discovered optimizations for datasets, architectures, and algorithms (source: arXiv)

Beyond data, the system proved highly capable at neural architecture design. Across 1,773 autonomous exploration rounds, it generated 105 novel linear attention architectures that surpassed DeltaNet, a highly efficient human-designed baseline. To achieve these results, ASI-EVOLVE developed multi-scale routing mechanisms that dynamically adjust the model’s computational budget based on the specific content of the input.
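
The article does not detail the routing mechanism itself, but the general idea of a content-dependent compute split can be sketched as a softmax gate over attention scales (the gate logits would come from a learned function of the input, which is omitted here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def allocate_budget(gate_logits, total_flops=100):
    """Toy multi-scale router: turn per-scale gate logits into a compute split.

    Rounding means the allocations may not sum exactly to total_flops;
    a real router would reconcile the remainder.
    """
    weights = softmax(gate_logits)
    return [round(w * total_flops) for w in weights]
```

With uniform logits every scale receives an equal share; as the gate favors one scale for a given input, that scale's share of the budget grows at the others' expense.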

Finally, in reinforcement learning algorithm design, ASI-EVOLVE discovered novel optimization mechanisms. It designed algorithms that outperformed the competitive GRPO baseline on complex mathematical reasoning benchmarks such as AMC32 and AIME24. One successful variant invented a “Budget-Constrained Dynamic Radius” that keeps model updates within a defined budget, effectively stabilizing training on noisy data.
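
The article does not specify how the “Budget-Constrained Dynamic Radius” works internally. One plausible reading, sketched here purely for illustration, is a trust-region-style step whose radius shrinks as a cumulative update budget is spent:

```python
def constrained_update(params, grads, lr=0.1, budget=1.0, spent=0.0):
    """Clip the step so its L2 norm never exceeds the remaining budget.

    This is an illustration of the general idea, not the algorithm from
    the paper: the allowed radius dynamically shrinks as updates accumulate,
    which caps how far noisy gradients can move the model in total.
    """
    step = [lr * g for g in grads]
    norm = sum(s * s for s in step) ** 0.5
    radius = max(budget - spent, 0.0)  # radius shrinks as budget is spent
    if norm > radius and norm > 0:
        step = [s * radius / norm for s in step]
        norm = radius
    new_params = [p - s for p, s in zip(params, step)]
    return new_params, spent + norm
```

Under this reading, the stabilizing effect on noisy data comes from the hard cap: a single outlier batch cannot produce an update larger than whatever budget remains.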

What this means for enterprise AI

Enterprise AI workflows constantly require optimizations to existing systems, from fine-tuning open-source models on proprietary data to making small changes to architectures and algorithms. The computational resources and engineering hours such efforts demand are usually immense and beyond the reach of most organizations, so many are left running unoptimized versions of standard AI models.

The research team says the framework is designed so enterprises can integrate proprietary domain knowledge into the cognition repository and allow the autonomous loop to iterate on internal AI systems.

The research team has open-sourced the ASI-EVOLVE code, making the foundational framework available for developers and product builders. 

