
Google PM open-sources Always On Memory Agent, ditching vector databases for LLM-driven persistent memory

March 6, 2026
in AI & Technology
Reading Time: 5 mins read

Google senior AI product manager Shubham Saboo has turned one of the thorniest problems in agent design into an open-source engineering exercise: persistent memory.


This week, he published an open-source “Always On Memory Agent” on the official Google Cloud Platform GitHub page under a permissive MIT license, which allows commercial use.

It was built with Google’s Agent Development Kit (ADK), introduced in spring 2025, and Gemini 3.1 Flash-Lite, a low-cost model Google introduced on March 3, 2026 as the fastest and most cost-efficient model in its Gemini 3 series.

The project serves as a practical reference implementation for something many AI teams want but few have productionized cleanly: an agent system that can ingest information continuously, consolidate it in the background, and retrieve it later without relying on a conventional vector database.

For enterprise developers, the release matters less as a product launch than as a signal about where agent infrastructure is headed.

The repo packages a view of long-running autonomy that is increasingly attractive for support systems, research assistants, internal copilots and workflow automation. It also brings governance questions into sharper focus as soon as memory stops being session-bound.

What the repo appears to do — and what it does not clearly claim

The repo appears to use a multi-agent internal architecture, with specialist components handling ingestion, consolidation and querying.

But the supplied materials do not clearly establish a broader claim that this is a shared memory framework for multiple independent agents.

That distinction matters. ADK as a framework supports multi-agent systems, but this specific repo is best described as an always-on memory agent, or memory layer, built with specialist subagents and persistent storage.

Even at this narrower level, it addresses a core infrastructure problem many teams are actively working through.

The architecture favors simplicity over a traditional retrieval stack

According to the repository, the agent runs continuously, ingests files or API input, stores structured memories in SQLite, and performs scheduled memory consolidation every 30 minutes by default.

A local HTTP API and Streamlit dashboard are included, and the system supports text, image, audio, video and PDF ingestion. The repo frames the design with an intentionally provocative claim: “No vector database. No embeddings. Just an LLM that reads, thinks, and writes structured memory.”
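The core loop the repo describes — continuous ingestion into SQLite plus a scheduled consolidation pass — can be sketched in a few dozen lines. This is an illustrative stand-in, not the repo's code: `MemoryStore` and `merge_fn` are hypothetical names, and in the actual project an LLM performs the merge that `merge_fn` fakes here.

```python
import json
import sqlite3
import time

class MemoryStore:
    """Minimal sketch of an LLM-writable structured memory store in SQLite."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            " id INTEGER PRIMARY KEY,"
            " topic TEXT, content TEXT, created REAL)"
        )

    def ingest(self, topic, content):
        # In the real agent, an LLM would first extract structure from raw input.
        self.db.execute(
            "INSERT INTO memories (topic, content, created) VALUES (?, ?, ?)",
            (topic, json.dumps(content), time.time()),
        )
        self.db.commit()

    def consolidate(self, merge_fn):
        # Scheduled pass (the repo defaults to every 30 minutes): group rows
        # by topic and let merge_fn (a stand-in for the LLM) rewrite each group
        # into a single consolidated memory.
        rows = self.db.execute("SELECT topic, content FROM memories").fetchall()
        grouped = {}
        for topic, content in rows:
            grouped.setdefault(topic, []).append(json.loads(content))
        self.db.execute("DELETE FROM memories")
        for topic, items in grouped.items():
            self.ingest(topic, merge_fn(items))
```

In the described design, a scheduler would call `consolidate` on a timer while the ingestion path stays live, which is what makes the agent "always on."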

That design choice is likely to draw attention from developers managing cost and operational complexity. Traditional retrieval stacks often require separate embedding pipelines, vector storage, indexing logic and synchronization work.

Saboo’s example instead leans on the model to organize and update memory directly. In practice, that can simplify prototypes and reduce infrastructure sprawl, especially for smaller or medium-memory agents. It also shifts the performance question from vector search overhead to model latency, memory compaction logic and long-run behavioral stability.

Flash-Lite gives the always-on model some economic logic

That is where Gemini 3.1 Flash-Lite enters the story.

Google says the model is built for high-volume developer workloads at scale and priced at $0.25 per 1 million input tokens and $1.50 per 1 million output tokens.

The company also says Flash-Lite is 2.5 times faster than Gemini 2.5 Flash in time to first token and delivers a 45% increase in output speed while maintaining similar or better quality.

On Google’s published benchmarks, the model posts an Elo score of 1432 on Arena.ai, 86.9% on GPQA Diamond and 76.8% on MMMU Pro. Google positions those characteristics as a fit for high-frequency tasks such as translation, moderation, UI generation and simulation.

Those numbers help explain why Flash-Lite is paired with a background-memory agent. A 24/7 service that periodically re-reads, consolidates and serves memory needs predictable latency and low enough inference cost to avoid making “always on” prohibitively expensive.
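The economics are easy to check against Google's published prices. The token volumes below are illustrative assumptions, not figures from the repo, but they show why a 30-minute consolidation cadence stays cheap at Flash-Lite rates:

```python
# Back-of-envelope cost for an always-on consolidation loop at the published
# Flash-Lite prices: $0.25 per 1M input tokens, $1.50 per 1M output tokens.
INPUT_PRICE = 0.25 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 1.50 / 1_000_000  # dollars per output token

def daily_cost(runs_per_day, input_tokens_per_run, output_tokens_per_run):
    per_run = (input_tokens_per_run * INPUT_PRICE
               + output_tokens_per_run * OUTPUT_PRICE)
    return runs_per_day * per_run

# Assumed workload: 48 passes/day (every 30 minutes), each re-reading ~20k
# tokens of stored memory and emitting ~2k tokens of rewritten memory.
cost = daily_cost(48, 20_000, 2_000)  # ≈ $0.38/day
```

Under those assumptions, continuous background consolidation costs well under a dollar a day; the figure scales linearly with memory size, which is where the "break down once memory stores become much larger" critique below bites.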

Google’s ADK documentation reinforces the broader story. The framework is presented as model-agnostic and deployment-agnostic, with support for workflow agents, multi-agent systems, tools, evaluation and deployment targets including Cloud Run and Vertex AI Agent Engine. That combination makes the memory agent feel less like a one-off demo and more like a reference point for a broader agent runtime strategy.

The enterprise debate is about governance, not just capability

Public reaction shows why enterprise adoption of persistent memory will not hinge on speed or token pricing alone.

Several responses on X highlighted exactly the concerns enterprise architects are likely to raise. Franck Abe called Google ADK and 24/7 memory consolidation “brilliant leaps for continuous agent autonomy,” but warned that an agent “dreaming” and cross-pollinating memories in the background without deterministic boundaries becomes “a compliance nightmare.”

ELED made a related point, arguing that the main cost of always-on agents is not tokens but “drift and loops.”

Those critiques go directly to the operational burden of persistent systems: who can write memory, what gets merged, how retention works, when memories are deleted, and how teams audit what the agent learned over time.
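Two of those governance questions — retention and auditability — have simple mechanical answers that the materials do not describe for this repo. A hypothetical sketch, not drawn from the project: every write lands in an append-only audit log, and a retention pass expires old memories while recording the deletion.

```python
import sqlite3
import time

# Illustrative governance primitives: a TTL-based retention window and an
# append-only audit trail of every memory write and expiry.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, created REAL);
CREATE TABLE audit_log (ts REAL, action TEXT, memory_id INTEGER);
""")

def write_memory(content):
    cur = db.execute(
        "INSERT INTO memories (content, created) VALUES (?, ?)",
        (content, time.time()),
    )
    db.execute("INSERT INTO audit_log VALUES (?, 'write', ?)",
               (time.time(), cur.lastrowid))
    return cur.lastrowid

def enforce_retention(ttl_seconds):
    # Delete memories older than the TTL and log each expiry for audit.
    cutoff = time.time() - ttl_seconds
    expired = db.execute(
        "SELECT id FROM memories WHERE created < ?", (cutoff,)).fetchall()
    for (mid,) in expired:
        db.execute("DELETE FROM memories WHERE id = ?", (mid,))
        db.execute("INSERT INTO audit_log VALUES (?, 'expire', ?)",
                   (time.time(), mid))
    return len(expired)
```

Nothing here is hard to build; the critics' point is that a background agent that "dreams" needs these boundaries enforced by the runtime, not left to convention.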

Another reaction, from Iffy, challenged the repo’s “no embeddings” framing, arguing that the system still has to chunk, index and retrieve structured memory, and that it may work well for small-context agents but break down once memory stores become much larger.

That criticism is technically important. Removing a vector database does not remove retrieval design; it changes where the complexity lives.

For developers, the tradeoff is less about ideology than fit. A lighter stack may be attractive for low-cost, bounded-memory agents, while larger-scale deployments may still demand stricter retrieval controls, more explicit indexing strategies and stronger lifecycle tooling.
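Iffy's point can be made concrete: even without embeddings, something has to answer queries over the memory store. One lightweight option in the same SQLite-only spirit — assumed here for illustration, not taken from the repo — is SQLite's built-in FTS5 full-text index:

```python
import sqlite3

# Keyword retrieval over structured memories using SQLite's FTS5 extension
# (bundled with most Python builds). A sketch of "retrieval without a vector
# database," not the repo's actual implementation.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memory_fts USING fts5(topic, content)")
db.executemany(
    "INSERT INTO memory_fts (topic, content) VALUES (?, ?)",
    [
        ("project", "User prefers weekly status reports on Fridays"),
        ("infra", "Staging cluster migrated to Cloud Run in February"),
    ],
)
# Implicit AND across terms; results ranked by FTS5's built-in relevance.
rows = db.execute(
    "SELECT topic, content FROM memory_fts WHERE memory_fts MATCH ? "
    "ORDER BY rank",
    ("cloud run",),
).fetchall()
```

Keyword search misses paraphrases that embeddings would catch, which is exactly the kind of retrieval-design decision that does not disappear when the vector database does.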

ADK broadens the story beyond a single demo

Other commenters focused on developer workflow. One asked for the ADK repo and documentation and wanted to know whether the runtime is serverless or long-running, and whether tool-calling and evaluation hooks are available out of the box.

Based on the supplied materials, the answer is effectively both: the memory-agent example itself is structured like a long-running service, while ADK more broadly supports multiple deployment patterns and includes tools and evaluation capabilities.

The always-on memory agent is interesting on its own, but the larger message is that Saboo is trying to make agents feel like deployable software systems rather than isolated prompts. In that framing, memory becomes part of the runtime layer, not just an add-on feature.

What Saboo has shown — and what he has not

What Saboo has not shown yet is just as important as what he’s published.

The provided materials do not include a direct Flash-Lite versus Anthropic Claude Haiku benchmark for agent loops in production use.

They also do not lay out enterprise-grade compliance controls specific to this memory agent, such as deterministic policy boundaries, retention guarantees, segregation rules or formal audit workflows.

And while the repo appears to use multiple specialist agents internally, the materials do not clearly prove a larger claim about persistent memory shared across multiple independent agents.

For now, the repo reads as a compelling engineering template rather than a complete enterprise memory platform.

Why this matters now

Still, the release lands at the right time. Enterprise AI teams are moving beyond single-turn assistants and into systems expected to remember preferences, preserve project context and operate across longer horizons.

Saboo’s open-source memory agent offers a concrete starting point for that next layer of infrastructure, and Flash-Lite gives the economics some credibility.

But the strongest takeaway from the reaction around the launch is that continuous memory will be judged on governance as much as capability.

That is the real enterprise question behind Saboo’s demo: not whether an agent can remember, but whether it can remember in ways that stay bounded, inspectable and safe enough to trust in production.

Credit: Source link
