arxiv:2302.08113

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation

Published on Feb 16, 2023
Authors: Omer Bar-Tal, Lior Yariv, Yaron Lipman, Tali Dekel
Abstract

Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image and fast adaptation to new tasks still remain an open challenge, currently addressed mostly by costly and lengthy re-training and fine-tuning, or by ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation using a pre-trained text-to-image diffusion model, without any further training or fine-tuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high-quality and diverse images that adhere to user-provided controls, such as a desired aspect ratio (e.g., panorama) and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. Project webpage: https://multidiffusion.github.io

AI-generated summary

MultiDiffusion is a unified framework that uses a pre-trained diffusion model for controllable and diverse image generation without re-training, achieved through an optimization task that combines multiple diffusion processes.
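To make the core idea concrete, here is a minimal sketch of the fusion step for the panorama case; `eps_model`, `prompt_emb`, and the window/stride values are illustrative assumptions, not the authors' reference implementation:

```python
# A minimal sketch of MultiDiffusion's fusion step, assuming a pretrained
# window-level denoiser `eps_model(crop, t, prompt_emb)`; names, shapes,
# and window/stride values here are illustrative, not the authors' code.
import torch

def multidiffusion_step(latent, t, eps_model, prompt_emb,
                        window=64, stride=32):
    """One reverse-diffusion step on a wide latent (e.g., a panorama).

    Each overlapping window is denoised independently by the pretrained
    model; the per-window predictions are then fused by per-pixel averaging,
    which is the closed-form solution of the least-squares objective that
    binds the diffusion paths together (with uniform pixel weights).
    """
    _, _, H, W = latent.shape
    fused = torch.zeros_like(latent)   # sum of per-window predictions
    count = torch.zeros_like(latent)   # number of windows covering each pixel

    # For simplicity this assumes the window positions tile the latent;
    # real implementations add an edge-aligned window when they do not.
    for top in range(0, max(H - window, 0) + 1, stride):
        for left in range(0, max(W - window, 0) + 1, stride):
            crop = latent[:, :, top:top + window, left:left + window]
            eps = eps_model(crop, t, prompt_emb)  # per-window prediction
            fused[:, :, top:top + window, left:left + window] += eps
            count[:, :, top:top + window, left:left + window] += 1

    return fused / count.clamp(min=1)  # per-pixel average over all windows
```

Because overlapping windows are reconciled at every denoising step, neighboring crops converge to consistent content and the final wide image is seam-free; the same fusion view extends to segmentation masks and bounding boxes by replacing the uniform counts with per-region pixel weights. For the panorama use case, the diffusers library ships a pipeline based on this paper, StableDiffusionPanoramaPipeline.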

Community

Controlled Image Generation Without Re-training! Discover MultiDiffusion

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 1

Datasets citing this paper 0

No datasets link this paper yet.

Cite arxiv.org/abs/2302.08113 in a dataset README.md to link it from this page.
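For example, a dataset card could link the paper with a single line; the wording below is a purely illustrative README excerpt:

```markdown
<!-- somewhere in your dataset's README.md -->
Panoramas in this dataset were generated with
[MultiDiffusion](https://arxiv.org/abs/2302.08113) (Bar-Tal et al., 2023).
```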

Spaces citing this paper 7

Collections including this paper 1
