Memories.ai
With UK roots and APAC founders from Meta's Reality Labs, Memories.ai began as a low-profile AI lab building what it calls the world's first Large Visual Memory Model for analyzing millions of hours of video.

Region
UK
Sector
AI Infra
Stage
Seed
Funding
Seed Round
Memories.ai: The Video AI Pioneer Building Human-Like Visual Recall
Excerpt
With UK roots and APAC founders from Meta's Reality Labs, Memories.ai began as a low-profile AI lab building what it calls the world's first Large Visual Memory Model for analyzing millions of hours of video. Now out of stealth and backed by Susa Ventures, Samsung Next, and other U.S. VCs, it is primed for co-investment, targeting untapped markets in security and media with scalable, privacy-focused tech.
Project and Product
Memories.ai represents a quiet revolution in AI's approach to visual data, shifting from short-term video processing to long-term, human-like memory retention. Founded in stealth and emerging publicly in July 2025, the company has developed what it calls the world's first Large Visual Memory Model (LVMM), enabling AI to "see, remember, and understand" vast video archives with virtually unlimited context. While mainstream models such as GPT-4o and Gemini falter beyond a few hours of footage because of context-window limits, Memories.ai's platform handles up to 10 million hours, making it well suited to industries drowning in video data.
The project's genesis stems from the founders' frustration with existing AI's "amnesia" in visual domains. As Dr. Shawn Shen noted in a TechCrunch interview, "All top AI companies... are focused on producing end-to-end models. Those capabilities are good, but these models often have limitations around understanding video context beyond one or two hours." Inspired by human visual memory, which sifts through vast context without overload, Memories.ai built a layered system: compression to strip noise and retain essentials, indexing for natural-language searchability, and aggregation for insightful summaries and reports.
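To make the three layers concrete, here is a minimal sketch of how compression, indexing, and aggregation might compose. Every function here is an illustrative stand-in; the actual Memories.ai components are learned models, not these toy heuristics.

```python
# Illustrative sketch only: the real layers are learned models, not
# the toy rules below.

def compress(frames: list[str]) -> list[str]:
    """Strip noise: keep only a sparse set of essential moments."""
    return frames[::30]  # toy rule: retain one frame in thirty

def index(moments: list[str]) -> dict[str, str]:
    """Make retained moments addressable for natural-language search."""
    return {f"moment_{i}": m for i, m in enumerate(moments)}

def aggregate(hits: list[str]) -> str:
    """Fold retrieved moments into a single readable summary."""
    return " -> ".join(hits)

frames = [f"frame_{i}" for i in range(300)]
memory = index(compress(frames))
print(aggregate([memory["moment_0"], memory["moment_5"]]))
```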
At its core, the product is a cloud-based AI platform offering tools like Video Chat, Clip Search, Video to Text, Audio to Text, Agent Video Creator, and Agent Video Marketer. Users upload videos, and the LVMM processes them into searchable "memory atoms": compact embeddings capturing key elements such as who, what, when, and where. This allows queries such as "Show me all unattended bags in the terminal" for security footage or "What's the viral cosmetics trend?" across a million TikTok videos. The company reports that the system outperforms competitors by up to 20 points on benchmarks such as MVBench and NextQA, with ultra-low hallucinations thanks to its memory-centric design.
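As a rough illustration of what a "memory atom" could contain, the sketch below defines a record with the who/what/when/where fields the description mentions. The schema and field names are hypothetical, not the actual Memories.ai data model.

```python
from dataclasses import dataclass

# Hypothetical "memory atom" record; field names are illustrative,
# not the actual Memories.ai schema.
@dataclass
class MemoryAtom:
    video_id: str            # which source video the atom came from
    start_s: float           # segment start time, in seconds
    end_s: float             # segment end time, in seconds
    entities: list[str]      # the "who": people and objects detected
    caption: str             # the "what": short activity description
    location: str            # the "where": scene or camera label
    embedding: list[float]   # compact vector used for retrieval

atom = MemoryAtom(
    video_id="terminal-cam-07",
    start_s=3121.0,
    end_s=3148.5,
    entities=["bag"],
    caption="unattended bag left near gate B",
    location="terminal_b",
    embedding=[0.12, -0.48, 0.91],  # toy values; real embeddings are larger
)
print(atom.caption)
```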
How it works: videos are ingested and compressed on-device or in the cloud, reducing the data to essential insights without storing raw footage indefinitely, which addresses privacy concerns. A Query Model translates natural language into vectors, a Retrieval Model fetches relevant atoms, and agents like Full-Modal Caption and Reconstruction stitch them into coherent responses. This modular approach keeps compute low, since the heavy lifting happens at ingest rather than at query time, enabling scalability for enterprises.
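The split between expensive ingest and cheap queries can be sketched as a standard vector-retrieval loop. Assuming retrieval is cosine similarity over atom embeddings (the description names the components but not the scoring method), it might look like this:

```python
import numpy as np

def ingest(clips: list[str]) -> np.ndarray:
    """Heavy lifting, done once: compress each clip into an embedding."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(clips), 64))  # stand-in for a learned encoder

def retrieve(query_vec: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    """Cheap at query time: rank stored atoms by cosine similarity."""
    sims = (index @ query_vec) / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec)
    )
    return np.argsort(sims)[::-1][:k]  # indices of the top-k atoms

index = ingest([f"clip_{i}" for i in range(1000)])    # offline, once per archive
query_vec = np.random.default_rng(1).normal(size=64)  # stand-in for the Query Model
print(retrieve(query_vec, index))                     # fast per-query lookup
```

The design point is that the encoder runs once per clip at ingest, so each query costs only a similarity scan over compact vectors.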
Target users span enterprises and prosumers. For security firms, it enables real-time threat detection, human re-identification, and slip-and-fall alerts across camera networks. Media and marketing teams use it to analyze trends, draft stories, and auto-edit videos, turning raw footage into shareable highlights. Prosumers get tools like TikTok Roast (playful video critiques) and Playground (experimental features). Early adopters include innovative teams worldwide; Samsung Next's Sam Campbell praised the approach: "One thing we liked about Memories.ai is that it could do a lot of on-device computing... unlocking better security applications."
Pricing is accessible: a free tier with 500 credits/month for basic features; Plus at $20/month for 5,000 credits; and Enterprise with custom credits and advanced tools like Video Scriptor. Credits buy extra processing, with bundles starting at $9.20 for 2,000. Subscriptions auto-renew and are non-refundable, but paid plans allow credit rollover.
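For comparison, the effective per-credit cost at the published price points works out as follows (simple arithmetic on the figures above; check memories.ai for current pricing):

```python
plans = {
    "Plus subscription": (20.00, 5_000),  # $/month, credits/month
    "smallest bundle":   (9.20,  2_000),  # $, credits
}
for name, (usd, credits) in plans.items():
    print(f"{name}: ${usd / credits * 1000:.2f} per 1,000 credits")
# Plus subscription: $4.00 per 1,000 credits
# smallest bundle:   $4.60 per 1,000 credits
```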
On X, the product shines in creative demos: summarizing long videos like Tiki on TikTok or extracting quirky moments (e.g., "fridge cigarette" cravings from footage). Blog entries detail the journey from research to the LVMM, emphasizing human-like recall for AI agents, robots, and self-driving tech. Future plans include shared drives for seamless syncing, AI assistants that contextualize users' lives via photos and smart glasses, and training humanoid robots on complex tasks.
Competitors like mem0 and Letta offer memory layers but lack robust video support; Twelve Labs and Google focus on video understanding but cap out at shorter contexts. Memories.ai's edge: entity-centric graphs for consistent identities across hours of footage, reinforcement learning for better queries, and on-device processing for privacy.
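The entity-centric-graph claim can be pictured as observations from different hours of footage attaching to a single persistent entity node. The structure below is a guess at the general shape, not the company's actual design.

```python
from collections import defaultdict

# Each entity gets one persistent node; sightings from any camera or hour
# attach to it, keeping identity consistent across the archive.
entity_graph: dict[str, list[dict]] = defaultdict(list)

def observe(entity_id: str, video_id: str, t_s: float, caption: str) -> None:
    """Attach a new sighting to the entity's single persistent node."""
    entity_graph[entity_id].append(
        {"video": video_id, "t_s": t_s, "caption": caption}
    )

# The same person re-identified hours apart resolves to one identity:
observe("person_042", "cam_03", 120.0, "enters lobby with a red bag")
observe("person_042", "cam_11", 9840.0, "exits garage, no bag")
print(len(entity_graph["person_042"]), "sightings, one identity")
```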
With an $8M seed round oversubscribed from an initial $4M target, the project is building toward broader adoption. Early buzz on X, where launch announcements and use-case posts garner thousands of views and engagements, points to organic growth. In a market where 97% of U.S. retailers plan to increase AI spending in 2025, Memories.ai addresses the "blue ocean" of long-context visual intelligence. As a UK-originated venture with global ambitions, it's a cross-border story of turning research into scalable AI for everyday and enterprise challenges.
Team
Memories.ai's team is a compact, research-heavy group embodying APAC-born entrepreneurship's blend of technical prowess and understated ambition. Co-founders Dr. Shawn Shen and Ben (Enmin) Zhou, both alumni of Meta's Reality Labs, bring deep expertise in AI and video tech from their time at one of the world's leading labs.
Dr. Shawn Shen, a Cambridge PhD in Computer Science, leads as the visionary force. His dissertation under Professor Per Ola Kristensson focused on next-generation input techniques and interactive AI in mixed reality (MR), proposing machine learning to augment human abilities in immersive environments. His accolades include full scholarships such as the Jardine Scholarship and a Trinity Bursary, and a first-place ranking in his college's admissions. At Meta, he was a research scientist, contributing to projects like Make-A-Video and helping develop the Llama models. Now a lecturer at the University of Bristol (joint with the Bristol Vision Institute and Interaction Group), he balances academia with entrepreneurship and is recruiting PhD students in immersive tech. His X posts reveal a candid, meme-loving persona, sharing product teasers and insights like "a visual AI assistant... built with @memories_ai." His obsession with video intelligence, as Susa Ventures' Misha Gordon-Rowe noted, drives the company's focus: "Shen is a highly technical founder... pushing boundaries of video understanding."
Ben Zhou, the engineering backbone, served as a machine learning engineer at Meta, specializing in scalable AI systems. His Crunchbase profile highlights founding and investing roles and a background in APAC tech ecosystems. Zhou complements Shen's research flair with practical implementation, turning prototypes into robust platforms.
The core team, which appears to be small and distributed, includes engineers and researchers with Meta backgrounds, focused on AI perception and memory. Recent X posts advertise a remote part-time role for a "LUCI Beta Program" teammate to gather user feedback, signaling lean operations with room to grow. Funding from U.S.-led Susa Ventures and Samsung Next provides cross-border credibility, with plans to expand the team using the $8M.
The duo's APAC heritage (Shen from China via a UK education, Zhou similarly) infuses the project with an efficient, innovative ethos: building on limited resources for global impact. As Fenomstalent notes, they bring "technical depth, product design intuition, and emotional intelligence." This team isn't hype-driven; they're researchers solving real problems, making Memories.ai a credible hidden gem for investors seeking global talent.
Why It's US-Ready
Memories.ai's US-readiness is evident in its strategic funding, market alignment, and technical design, positioning it as an ideal co-investment for American VCs and institutions eyeing cross-border AI. Launched in July 2025 with immediate U.S. availability via its web platform, the project taps into America's video-saturated economy, where the security and media sectors generate petabytes of data annually. Backed by Samsung Next, with a round led by San Francisco-based Susa Ventures, it's already embedded in U.S. ecosystems, with Wilson Sonsini advising on the $8M raise.
Technically, the platform supports on-device processing, aligning with U.S. privacy priorities such as the CCPA by reducing cloud reliance and enabling secure home applications, as Samsung Next's Campbell highlighted. An English-first interface, global payment options (credit cards and more), and App Store potential facilitate adoption. Early traction on X, with posts viewed by tens of thousands, indicates buzz in tech hubs like the Bay Area.
Investment-wise, its profile offers high upside in the "long-context visual intelligence" niche, a gap Susa identified. APAC founders deliver cost-efficient innovation (e.g., efficient models on standard hardware), while U.S. partners like Fusion Fund provide market savvy.
Culturally, it resonates with American individualism: tools for personal video archives and creative prosumer workflows. Future use cases like robot training and self-driving recall fit U.S. innovation hubs. In an AI-bubble-wary market, Memories.ai's credibility (oversubscribed funding, Meta pedigree, and real-world benchmarks) makes it a safe bet for bridging UK and APAC ingenuity with U.S. demand.
This analysis is based on publicly available information and company disclosures. For investment decisions, please conduct thorough due diligence and consult with qualified financial advisors.
US Readiness Assessment
Readiness Level: High
This startup demonstrates strong readiness for US market entry with established compliance, market strategy, and operational infrastructure.
Key Strengths
- ✓ World's first Large Visual Memory Model, handling 10M+ hours of video with virtually unlimited context
- ✓ Up to 20-point performance improvement on MVBench/NextQA benchmarks, with ultra-low hallucinations
- ✓ On-device processing for privacy compliance and scalable enterprise deployment
Company Information
Founders
Shawn Shen, Ben Zhou