Zilin Xiao (MrZilinXiao): @arankomatsuzaki, many thanks!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:

* [Zero-shot Multimodal Document Retrieval via Cross-modal Question Generation](https://huggingface.co/papers/2508.17079) (2025)
* [Recurrence Meets Transformers for Universal Multimodal Retrieval](https://huggingface.co/papers/2509.08897) (2025)
* [LexSemBridge: Fine-Grained Dense Representation Enhancement through Token-Aware Embedding Augmentation](https://huggingface.co/papers/2508.17858) (2025)
* [SPANER: Shared Prompt Aligner for Multimodal Semantic Representation](https://huggingface.co/papers/2508.13387) (2025)
* [On The Role of Pretrained Language Models in General-Purpose Text Embeddings: A Survey](https://huggingface.co/papers/2507.20783) (2025)
* [EVENT-Retriever: Event-Aware Multimodal Image Retrieval for Realistic Captions](https://huggingface.co/papers/2509.00751) (2025)
* [Multimodal Representation Learning Conditioned on Semantic Relations](https://huggingface.co/papers/2508.17497) (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2025-09-24T01:34:19.750Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6583813428878784},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2509.18095","authors":[{"_id":"68d22a351ca7156988a8edfa","user":{"_id":"67eb818141abf40cd87ab303","avatarUrl":"/avatars/8fd19f5fcac50946be63d55d265e68b0.svg","isPro":false,"fullname":"Zilin Xiao","user":"MrZilinXiao","type":"user"},"name":"Zilin Xiao","status":"claimed_verified","statusLastChangedAt":"2025-09-23T10:05:31.739Z","hidden":false},{"_id":"68d22a351ca7156988a8edfb","name":"Qi Ma","hidden":false},{"_id":"68d22a351ca7156988a8edfc","name":"Mengting Gu","hidden":false},{"_id":"68d22a351ca7156988a8edfd","name":"Chun-cheng Jason Chen","hidden":false},{"_id":"68d22a351ca7156988a8edfe","name":"Xintao Chen","hidden":false},{"_id":"68d22a351ca7156988a8edff","name":"Vicente Ordonez","hidden":false},{"_id":"68d22a351ca7156988a8ee00","name":"Vijai Mohan","hidden":false}],"publishedAt":"2025-09-22T17:59:42.000Z","submittedOnDailyAt":"2025-09-23T03:34:46.809Z","title":"MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late\n Interaction","submittedOnDailyBy":{"_id":"67eb818141abf40cd87ab303","avatarUrl":"/avatars/8fd19f5fcac50946be63d55d265e68b0.svg","isPro":false,"fullname":"Zilin Xiao","user":"MrZilinXiao","type":"user"},"summary":"Universal multimodal embedding models have achieved great success in\ncapturing semantic relevance between queries and candidates. However, current\nmethods either condense queries and candidates into a single vector,\npotentially limiting the expressiveness for fine-grained information, or\nproduce too many vectors that are prohibitively expensive for multi-vector\nretrieval. In this work, we introduce MetaEmbed, a new framework for multimodal\nretrieval that rethinks how multimodal embeddings are constructed and\ninteracted with at scale. During training, a fixed number of learnable Meta\nTokens are appended to the input sequence. At test-time, their last-layer\ncontextualized representations serve as compact yet expressive multi-vector\nembeddings. Through the proposed Matryoshka Multi-Vector Retrieval training,\nMetaEmbed learns to organize information by granularity across multiple\nvectors. 
As a result, we enable test-time scaling in multimodal retrieval,\nwhere users can balance retrieval quality against efficiency demands by\nselecting the number of tokens used for indexing and retrieval interactions.\nExtensive evaluations on the Massive Multimodal Embedding Benchmark (MMEB) and\nthe Visual Document Retrieval Benchmark (ViDoRe) confirm that MetaEmbed\nachieves state-of-the-art retrieval performance while scaling robustly to\nmodels with 32B parameters.","upvotes":7,"discussionId":"68d22a351ca7156988a8ee01","ai_summary":"MetaEmbed, a new framework for multimodal retrieval, uses learnable Meta Tokens to provide compact yet expressive multi-vector embeddings, enabling scalable and efficient retrieval performance.","ai_keywords":["MetaEmbed","Meta Tokens","multimodal embeddings","Matryoshka Multi-Vector Retrieval","Massive Multimodal Embedding Benchmark","Visual Document Retrieval Benchmark"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6465be400e6c7618f61633c6","avatarUrl":"/avatars/7bf32a6b7639119bacbc39f64aa0fcfb.svg","isPro":false,"fullname":"VisLang","user":"vislang","type":"user"},{"_id":"67eb818141abf40cd87ab303","avatarUrl":"/avatars/8fd19f5fcac50946be63d55d265e68b0.svg","isPro":false,"fullname":"Zilin Xiao","user":"MrZilinXiao","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"64d4615cf8082bf19b916492","avatarUrl":"/avatars/8e1b59565ec5e4b31090cf1b911781b9.svg","isPro":false,"fullname":"wongyukim","user":"wongyukim","type":"user"},{"_id":"619efa9e24a474e0d95bd5be","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/619efa9e24a474e0d95bd5be/XuyOHUmc49EAIUdY4FhZA.png","isPro":false,"fullname":"Adhi Setiawan","user":"adhisetiawan","type":"user"},{"_id":"63c6cb6a50cc81901da65e15","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63c6cb6a50cc81901da65e15/t4LN1BPCFlwbSJ9GD9YDd.jpeg","isPro":true,"fullname":"Théo Pomies","user":"theopomies","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

MetaEmbed, a new framework for multimodal retrieval, uses learnable Meta Tokens to provide compact yet expressive multi-vector embeddings, enabling scalable and efficient retrieval performance.

Abstract
Universal multimodal embedding models have achieved great success in
capturing semantic relevance between queries and candidates. However, current
methods either condense queries and candidates into a single vector,
potentially limiting the expressiveness for fine-grained information, or
produce too many vectors that are prohibitively expensive for multi-vector
retrieval. In this work, we introduce MetaEmbed, a new framework for multimodal
retrieval that rethinks how multimodal embeddings are constructed and
interacted with at scale. During training, a fixed number of learnable Meta
Tokens are appended to the input sequence. At test-time, their last-layer
contextualized representations serve as compact yet expressive multi-vector
embeddings. Through the proposed Matryoshka Multi-Vector Retrieval training,
MetaEmbed learns to organize information by granularity across multiple
vectors. As a result, we enable test-time scaling in multimodal retrieval,
where users can balance retrieval quality against efficiency demands by
selecting the number of tokens used for indexing and retrieval interactions.
Extensive evaluations on the Massive Multimodal Embedding Benchmark (MMEB) and
the Visual Document Retrieval Benchmark (ViDoRe) confirm that MetaEmbed
achieves state-of-the-art retrieval performance while scaling robustly to
models with 32B parameters.
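For intuition, here is a minimal sketch of the flexible late interaction the abstract describes: each query and candidate keeps a small set of Meta Token vectors, and at test time only a prefix of them is used, trading retrieval quality against efficiency. The function name, tensor shapes, and MaxSim-style aggregation below are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def late_interaction_score(query_vecs: torch.Tensor,
                           cand_vecs: torch.Tensor,
                           num_query_tokens: int,
                           num_cand_tokens: int) -> torch.Tensor:
    """Score every query against every candidate with a MaxSim-style late interaction.

    query_vecs: (B_q, M, D) last-layer states of the query Meta Tokens
    cand_vecs:  (B_c, M, D) last-layer states of the candidate Meta Tokens
    num_*_tokens: how many Meta Token vectors to actually use at test time
                  (the quality-vs-efficiency knob described in the abstract).
    """
    # Keep only a prefix of the Meta Token vectors; fewer vectors means a
    # smaller index and cheaper scoring.
    q = F.normalize(query_vecs[:, :num_query_tokens], dim=-1)
    c = F.normalize(cand_vecs[:, :num_cand_tokens], dim=-1)

    # Pairwise token-level cosine similarities: (B_q, B_c, M_q, M_c).
    sim = torch.einsum("qmd,cnd->qcmn", q, c)

    # For each query vector take its best-matching candidate vector, then sum
    # over query vectors to get a (B_q, B_c) score matrix.
    return sim.max(dim=-1).values.sum(dim=-1)


# Example usage: 4 queries vs. 100 candidates, 16 Meta Tokens stored, only 4 used.
scores = late_interaction_score(torch.randn(4, 16, 768), torch.randn(100, 16, 768),
                                num_query_tokens=4, num_cand_tokens=4)
print(scores.shape)  # torch.Size([4, 100])
```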
Meta Superintelligence Labs presents MetaEmbed: Scalable multimodal retrieval
• Flexible late interaction via Meta Tokens
• Test-time scaling: trade off retrieval accuracy vs. efficiency
• SOTA on MMEB + ViDoRe, robust up to 32B models
• Matryoshka training → coarse-to-fine multi-vector embeddings (sketched below)
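To make the coarse-to-fine idea concrete, below is a hedged sketch of a Matryoshka-style multi-vector training objective: the same in-batch contrastive loss is applied at several Meta Token prefix lengths, so the first vectors learn coarse information and later ones add finer detail. The prefix lengths, temperature, loss weighting, and MaxSim-style scoring are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def matryoshka_contrastive_loss(query_vecs: torch.Tensor,
                                cand_vecs: torch.Tensor,
                                prefix_lengths=(1, 2, 4, 8, 16),
                                temperature: float = 0.05) -> torch.Tensor:
    """query_vecs, cand_vecs: (B, M, D) Meta Token embeddings of paired examples."""
    labels = torch.arange(query_vecs.size(0), device=query_vecs.device)
    losses = []
    for m in prefix_lengths:
        # Use only the first m Meta Token vectors at this granularity level.
        q = F.normalize(query_vecs[:, :m], dim=-1)
        c = F.normalize(cand_vecs[:, :m], dim=-1)
        # Late-interaction (MaxSim-style) scores between every query and every
        # candidate in the batch: (B, B).
        scores = torch.einsum("qmd,cnd->qcmn", q, c).max(dim=-1).values.sum(dim=-1)
        # In-batch contrastive loss: the i-th candidate is the positive for the i-th query.
        losses.append(F.cross_entropy(scores / temperature, labels))
    # Average across granularities so every prefix length learns to retrieve well.
    return torch.stack(losses).mean()


# Example usage with random tensors: batch of 8 pairs, 16 Meta Tokens, dim 768.
loss = matryoshka_contrastive_loss(torch.randn(8, 16, 768), torch.randn(8, 16, 768))
print(loss.item())
```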