arxiv:2504.16072

Describe Anything: Detailed Localized Image and Video Captioning

Published on Apr 22, 2025 · Submitted by Long (Tony) Lian on Apr 23, 2025

Authors: Long Lian, Yifan Ding, Yunhao Ge, Sifei Liu, Hanzi Mao, Boyi Li, Marco Pavone, Ming-Yu Liu, Trevor Darrell, Adam Yala, Yin Cui
Abstract

Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), a model designed for detailed localized captioning (DLC). DAM preserves both local details and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with its broader context. To tackle the scarcity of high-quality DLC data, we propose a semi-supervised learning (SSL)-based data pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and expands to unlabeled web images using SSL. We introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.

AI-generated summary

The Describe Anything Model (DAM) leverages a focal prompt and a localized vision backbone to achieve detailed localized captioning, outperforming existing models on various benchmarks through a semi-supervised data pipeline.
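To make the two architectural ideas concrete: the focal prompt pairs a global view of the image with a high-resolution crop centered on the queried region, each accompanied by the region's binary mask, and the localized vision backbone then encodes both views. The snippet below is a minimal sketch of that input construction in PyTorch; the function name, the context_scale heuristic, and the 4-channel image-plus-mask packing are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def build_focal_prompt(image: torch.Tensor, mask: torch.Tensor,
                       context_scale: float = 3.0, out_size: int = 384):
    """Illustrative focal-prompt construction (not the official code).

    image: float tensor (3, H, W) in [0, 1]; mask: binary tensor (H, W).
    Returns a global view and a high-resolution focal crop, each packed
    as image + mask channels with shape (4, out_size, out_size).
    """
    _, H, W = image.shape

    # Tight bounding box of the queried region.
    ys, xs = torch.nonzero(mask, as_tuple=True)
    y0, y1, x0, x1 = ys.min().item(), ys.max().item(), xs.min().item(), xs.max().item()

    # Enlarge the box so the crop keeps some surrounding context.
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    half = max(y1 - y0, x1 - x0) * context_scale / 2.0
    top, bottom = max(0, int(cy - half)), min(H, int(cy + half) + 1)
    left, right = max(0, int(cx - half)), min(W, int(cx + half) + 1)

    def pack(img_4d, msk_4d):
        # Resize to the encoder input size and concatenate the mask channel.
        img = F.interpolate(img_4d, size=(out_size, out_size),
                            mode="bilinear", align_corners=False)[0]
        msk = F.interpolate(msk_4d.float(), size=(out_size, out_size), mode="nearest")[0]
        return torch.cat([img, msk], dim=0)  # (4, out_size, out_size)

    global_view = pack(image[None], mask[None, None])
    focal_view = pack(image[None, :, top:bottom, left:right],
                      mask[None, None, top:bottom, left:right])
    return global_view, focal_view
```

In the paper's framing, the global view preserves scene context while the focal crop keeps the fine details of the target legible to the vision encoder.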

Community

Paper author · Paper submitter · edited Apr 23

We’re excited to introduce the Describe Anything Model (DAM), a powerful multimodal large language model (MLLM) that generates detailed descriptions for user-defined regions in images or videos, specified with points, boxes, scribbles, or masks (see the mask-conversion sketch below).

Hugging Face Demo (super cool): https://huggingface.co/spaces/nvidia/describe-anything-model-demo
Code: https://github.com/NVlabs/describe-anything
Project Page (with a 3-minute video): https://describe-anything.github.io
Models, Datasets, and Benchmark: https://huggingface.co/collections/nvidia/describe-anything-680825bb8f5e41ff0785834c

(attached image: image.png)
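Point, box, and scribble prompts are typically turned into a binary mask before captioning, for example with a promptable segmenter such as SAM. The sketch below covers only that conversion step using Meta's segment-anything package; the checkpoint path, the helper name, and wiring the resulting mask into DAM's own inference scripts are assumptions for illustration, not the repository's documented API.

```python
from typing import Optional

import numpy as np
from segment_anything import SamPredictor, sam_model_registry  # pip install segment-anything

# Load SAM once; the checkpoint filename is the standard ViT-H weights file.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def region_mask_from_prompt(image_rgb: np.ndarray,
                            points: Optional[np.ndarray] = None,
                            box: Optional[np.ndarray] = None) -> np.ndarray:
    """Convert a point or box prompt into a binary region mask.

    image_rgb: (H, W, 3) uint8 RGB image.
    points:    (N, 2) array of (x, y) foreground clicks, or None.
    box:       (4,) array as (x0, y0, x1, y1), or None.
    """
    predictor.set_image(image_rgb)
    labels = np.ones(len(points), dtype=np.int64) if points is not None else None
    masks, scores, _ = predictor.predict(
        point_coords=points,
        point_labels=labels,
        box=box,
        multimask_output=True,
    )
    # Keep the highest-scoring candidate mask as the region to describe.
    return masks[np.argmax(scores)].astype(np.uint8)
```

The resulting (H, W) mask, together with the original image, is the kind of localized query the demo and inference scripts expect; see the Code link above for the actual entry points.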


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- URECA: Unique Region Caption Anything (2025): https://huggingface.co/papers/2504.05305
- Fine-Grained Video Captioning through Scene Graph Consolidation (2025): https://huggingface.co/papers/2502.16427
- Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting (2025): https://huggingface.co/papers/2504.05541
- Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models (2025): https://huggingface.co/papers/2504.15271
- The Devil is in the Distributions: Explicit Modeling of Scene Content is Key in Zero-Shot Video Captioning (2025): https://huggingface.co/papers/2503.23679
- OmniDiff: A Comprehensive Benchmark for Fine-grained Image Difference Captioning (2025): https://huggingface.co/papers/2503.11093
- Get In Video: Add Anything You Want to the Video (2025): https://huggingface.co/papers/2503.06268

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 5
Datasets citing this paper: 2
Spaces citing this paper: 6
Collections including this paper: 15
