arxiv:2204.06125

Hierarchical Text-Conditional Image Generation with CLIP Latents

Published on Apr 13, 2022
Authors:
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen

Abstract

AI-generated summary

A two-stage model using contrastive models like CLIP for embedding text captions generates diverse, photorealistic images with explicit control over semantics and style, using diffusion models for decoding and either autoregressive or diffusion models for the prior.

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
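
The two-stage pipeline described in the abstract (a prior that maps a text caption to a CLIP image embedding, followed by a decoder that turns that embedding into pixels) can be illustrated with a minimal PyTorch sketch. The module names, linear layers, and dimensions below are placeholders chosen for illustration, not the architecture from the paper; the actual prior and decoder are diffusion models (an autoregressive prior is also explored).

import torch
import torch.nn as nn

CLIP_DIM = 512   # assumed CLIP embedding width (illustrative only)
IMG_SIZE = 64    # assumed base decoder resolution (illustrative only)

class ToyPrior(nn.Module):
    """Stand-in for the prior: text embedding -> predicted CLIP image embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_DIM, CLIP_DIM), nn.GELU(),
            nn.Linear(CLIP_DIM, CLIP_DIM),
        )

    def forward(self, text_emb):
        return self.net(text_emb)

class ToyDecoder(nn.Module):
    """Stand-in for the decoder: CLIP image embedding -> RGB image tensor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(CLIP_DIM, 3 * IMG_SIZE * IMG_SIZE)

    def forward(self, img_emb):
        return self.net(img_emb).view(-1, 3, IMG_SIZE, IMG_SIZE)

def generate(text_emb, prior, decoder):
    img_emb = prior(text_emb)   # stage 1: caption embedding -> CLIP image embedding
    return decoder(img_emb)     # stage 2: image embedding -> pixels

if __name__ == "__main__":
    prior, decoder = ToyPrior(), ToyDecoder()
    fake_text_emb = torch.randn(1, CLIP_DIM)  # stand-in for a real CLIP text embedding
    image = generate(fake_text_emb, prior, decoder)
    print(image.shape)  # torch.Size([1, 3, 64, 64])

Because the decoder is conditioned only on the image embedding, re-running the second stage with different noise (in the real diffusion decoder) yields variations of an image that keep its semantics and style while varying non-essential details, which is the behaviour described in the abstract.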

Community

Exploring Hierarchical Text-Conditional Image Generation with CLIP Latents

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 3

Datasets citing this paper 1

Spaces citing this paper 6

Collections including this paper 8
