arxiv:2205.11487

Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

Published on May 23, 2022
Authors: Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi
Abstract

We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment. See https://imagen.research.google/ for an overview of the results.

AI-generated summary

Imagen is a text-to-image diffusion model that leverages large transformer language models for enhanced photorealism and alignment with text, outperforming other models like VQ-GAN+CLIP and DALL-E 2 on new benchmarks.
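
As a concrete illustration of the paper's key design choice, the sketch below shows how a frozen, text-only T5 encoder produces per-token embeddings that can condition an image diffusion model. This is a minimal sketch using the Hugging Face transformers library, not Imagen's implementation; the "t5-small" checkpoint, the example prompt, and the diffusion-model call in the final comment are illustrative stand-ins.

```python
# Minimal sketch: conditioning embeddings from a frozen text-only T5 encoder,
# as the abstract describes. Not Imagen's actual code; the diffusion model
# itself is elided.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # stand-in model size
encoder = T5EncoderModel.from_pretrained("t5-small")
encoder.eval()  # the text encoder stays frozen; only the diffusion model trains

prompt = "A photo of a corgi riding a skateboard."  # hypothetical prompt
tokens = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no gradients flow into the frozen encoder
    text_embeddings = encoder(**tokens).last_hidden_state  # (1, seq_len, d_model)

# These per-token embeddings would be attended to by the diffusion U-Net, e.g.
# image = diffusion_model.sample(cond=text_embeddings)  # hypothetical API
print(text_embeddings.shape)  # torch.Size([1, seq_len, 512]) for t5-small
```

The abstract's scaling result suggests that swapping "t5-small" for a larger checkpoint (e.g. "t5-xxl") improves sample fidelity and image-text alignment more than enlarging the diffusion model itself.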

Community

Cutting-Edge Photorealistic Text-to-Image Models Explained

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper 195

Datasets citing this paper 1

Spaces citing this paper 2,838

Collections including this paper 2
