arxiv:2502.06145

Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance

Published on Feb 10, 2025 · Submitted by LiHu on Feb 13, 2025
Authors:
Li Hu, Guangyuan Wang, Zhen Shen, Xin Gao, Dechao Meng, Lian Zhuo, Peng Zhang, Bang Zhang, Liefeng Bo

Abstract

AI-generated summary: The proposed Animate Anyone 2 model generates character animations that consider environmental context and object interactions, using a shape-agnostic mask and pose modulation strategy.

Recent character image animation methods based on diffusion models, such as Animate Anyone, have made significant progress in generating consistent and generalizable character animations. However, these approaches fail to produce reasonable associations between characters and their environments. To address this limitation, we introduce Animate Anyone 2, aiming to animate characters with environment affordance. Beyond extracting motion signals from source video, we additionally capture environmental representations as conditional inputs. The environment is formulated as the region that excludes the characters, and our model generates characters to populate these regions while maintaining coherence with the environmental context. We propose a shape-agnostic mask strategy that more effectively characterizes the relationship between character and environment. Furthermore, to enhance the fidelity of object interactions, we leverage an object guider to extract features of interacting objects and employ spatial blending for feature injection. We also introduce a pose modulation strategy that enables the model to handle more diverse motion patterns. Experimental results demonstrate the superior performance of the proposed method.
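The abstract's core formulation — condition the model on the environment as the frame region that excludes the character, using a mask that does not trace the exact silhouette — can be illustrated with a toy sketch. The paper does not specify the shape-agnostic mask strategy at this level of detail, so the random bounding-box enlargement below, and every name in it, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def shape_agnostic_mask(char_mask: np.ndarray, max_offset: int = 8, seed: int = 0) -> np.ndarray:
    """Replace a per-frame character segmentation mask with a coarser region
    mask that no longer traces the exact silhouette.

    Hypothetical approximation of the paper's shape-agnostic mask: the
    silhouette is swapped for a randomly enlarged bounding region, so the
    mask outline cannot leak the character's precise shape.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(char_mask)
    if ys.size == 0:  # no character in this frame
        return np.zeros_like(char_mask)
    # Bounding box of the character, enlarged by an independent random margin per side.
    y0 = max(ys.min() - rng.integers(0, max_offset + 1), 0)
    y1 = min(ys.max() + rng.integers(0, max_offset + 1) + 1, char_mask.shape[0])
    x0 = max(xs.min() - rng.integers(0, max_offset + 1), 0)
    x1 = min(xs.max() + rng.integers(0, max_offset + 1) + 1, char_mask.shape[1])
    out = np.zeros_like(char_mask)
    out[y0:y1, x0:x1] = 1
    return out

# The environment condition is then the frame with this region blanked out:
frame = np.ones((64, 64, 3), dtype=np.float32)      # toy video frame
char = np.zeros((64, 64), dtype=np.uint8)
char[20:40, 25:35] = 1                              # toy character silhouette
region = shape_agnostic_mask(char)
environment = frame * (1 - region)[..., None]       # region excluding the character
```

Conditioning on `environment` rather than on a silhouette-tight cutout is, plausibly, what allows the generated character's shape to deviate from the mask boundary while staying coherent with the surrounding scene.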

Community

Paper author Paper submitter (Hookszdp):
https://humanaigc.github.io/animate-anyone-2/

mrfakename:
Very cool! Any plans to release the code and models?

aliok:
That's great! When will you release it?

EVAzqr:
Cool, so when will you release the code and models?

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation (2025)
* Joint Learning of Depth and Appearance for Portrait Image Animation (2025)
* VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control (2025)
* DriveEditor: A Unified 3D Information-Guided Framework for Controllable Object Editing in Driving Scenes (2024)
* 3D Object Manipulation in a Single Image using Generative Models (2025)
* AniDoc: Animation Creation Made Easier (2024)
* Edit as You See: Image-guided Video Editing via Masked Motion Modeling (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2502.06145 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2502.06145 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2502.06145 in a Space README.md to link it from this page.

Collections including this paper 4
