arxiv:2404.03653

CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching

Published on Apr 4, 2024
· Submitted by AK on Apr 5, 2024
#2 Paper of the day
Authors: Dongzhi Jiang, Guanglu Song, Xiaoshi Wu, Renrui Zhang, Dazhong Shen, Zhuofan Zong, Yu Liu, Hongsheng Li

Abstract

CoMat, an end-to-end diffusion model fine-tuning strategy with an image-to-text concept matching mechanism, improves text-to-image alignment in SDXL without additional data.

AI-generated summary

Diffusion models have demonstrated great success in the field of text-to-image generation. However, alleviating the misalignment between the text prompts and images is still challenging. The root reason behind the misalignment has not been extensively investigated. We observe that the misalignment is caused by inadequate token attention activation. We further attribute this phenomenon to the diffusion model's insufficient condition utilization, which is caused by its training paradigm. To address the issue, we propose CoMat, an end-to-end diffusion model fine-tuning strategy with an image-to-text concept matching mechanism. We leverage an image captioning model to measure image-to-text alignment and guide the diffusion model to revisit ignored tokens. A novel attribute concentration module is also proposed to address the attribute binding problem. Without any image or human preference data, we use only 20K text prompts to fine-tune SDXL to obtain CoMat-SDXL. Extensive experiments show that CoMat-SDXL significantly outperforms the baseline model SDXL in two text-to-image alignment benchmarks and achieves state-of-the-art performance.
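The abstract's concept-matching idea can be sketched as a toy objective (the function name, probabilities, and example prompt below are illustrative, not the paper's implementation): a frozen captioning model scores each prompt token against the generated image, and tokens the image ignored dominate the negative log-likelihood, so fine-tuning is steered back toward those concepts.

```python
import math

def concept_matching_loss(token_probs):
    # Toy reduction of CoMat-style image-to-text concept matching:
    # token_probs[i] stands in for a frozen captioner's p(prompt token i | generated image).
    # The negative log-likelihood grows sharply for tokens the image ignored,
    # which (in the real method, backpropagated through SDXL) pushes the
    # diffusion model to revisit those concepts.
    return -sum(math.log(p) for p in token_probs)

# Hypothetical prompt "a red cube on a blue ball":
# every concept rendered vs. one concept ("blue") ignored by the image.
aligned_probs = [0.9, 0.8, 0.85]
misaligned_probs = [0.9, 0.8, 0.05]

# The single ignored token dominates the objective.
assert concept_matching_loss(misaligned_probs) > concept_matching_loss(aligned_probs)
```

The real training loop is more involved (it also includes the attribute concentration module and a fidelity-preservation term), but the comparison above captures why a captioner-based score surfaces exactly the tokens whose attention activation was inadequate.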

Community

This one looks exciting. I certainly hope that they keep to "release training code in April" as stated on the project site:

https://github.com/CaraJ7/CoMat

Paper author

Thanks for your interest!
We are currently organizing the training code. Hopefully, the code will be released within April as stated.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Object-Conditioned Energy-Based Attention Map Alignment in Text-to-Image Diffusion Models](https://huggingface.co/papers/2404.07389) (2024)
* [TextCraftor: Your Text Encoder Can be Image Quality Controller](https://huggingface.co/papers/2403.18978) (2024)
* [Training-free Subject-Enhanced Attention Guidance for Compositional Text-to-image Generation](https://huggingface.co/papers/2405.06948) (2024)
* [Getting it Right: Improving Spatial Consistency in Text-to-Image Models](https://huggingface.co/papers/2404.01197) (2024)
* [Towards Better Text-to-Image Generation Alignment via Attention Modulation](https://huggingface.co/papers/2404.13899) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers)

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Solving Misalignment in Text-to-Image AI: CoMat Explained!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2404.03653 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2404.03653 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2404.03653 in a Space README.md to link it from this page.

Collections including this paper 17
