\n","updatedAt":"2024-02-23T01:21:46.737Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6838977336883545},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2402.13763","authors":[{"_id":"65d757049103d35b77b0b8ea","name":"Sifei Li","hidden":false},{"_id":"65d757049103d35b77b0b8eb","user":{"_id":"63228c7c15b7beab57c9ab0b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663208563439-noauth.jpeg","isPro":false,"fullname":"Yuxin Zhang","user":"ZHazel","type":"user"},"name":"Yuxin Zhang","status":"claimed_verified","statusLastChangedAt":"2025-06-05T12:42:51.908Z","hidden":false},{"_id":"65d757049103d35b77b0b8ec","name":"Fan Tang","hidden":false},{"_id":"65d757049103d35b77b0b8ed","name":"Chongyang Ma","hidden":false},{"_id":"65d757049103d35b77b0b8ee","name":"Weiming dong","hidden":false},{"_id":"65d757049103d35b77b0b8ef","name":"Changsheng Xu","hidden":false}],"publishedAt":"2024-02-21T12:38:48.000Z","submittedOnDailyAt":"2024-02-22T11:45:32.921Z","title":"Music Style Transfer with Time-Varying Inversion of Diffusion Models","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"With the development of diffusion models, text-guided image style transfer\nhas demonstrated high-quality controllable synthesis results. However, the\nutilization of text for diverse music style transfer poses significant\nchallenges, primarily due to the limited availability of matched audio-text\ndatasets. Music, being an abstract and complex art form, exhibits variations\nand intricacies even within the same genre, thereby making accurate textual\ndescriptions challenging. This paper presents a music style transfer approach\nthat effectively captures musical attributes using minimal data. We introduce a\nnovel time-varying textual inversion module to precisely capture\nmel-spectrogram features at different levels. During inference, we propose a\nbias-reduced stylization technique to obtain stable results. Experimental\nresults demonstrate that our method can transfer the style of specific\ninstruments, as well as incorporate natural sounds to compose melodies. 
Samples\nand source code are available at https://lsfhuihuiff.github.io/MusicTI/.","upvotes":11,"discussionId":"65d757049103d35b77b0b914","ai_summary":"A method for music style transfer captures musical attributes using minimal data with a novel time-varying textual inversion module and bias-reduced stylization technique.","ai_keywords":["diffusion models","music style transfer","mel-spectrogram features","time-varying textual inversion","bias-reduced stylization"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"61868ce808aae0b5499a2a95","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg","isPro":true,"fullname":"Sylvain Filoni","user":"fffiloni","type":"user"},{"_id":"635964636a61954080850e1d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/635964636a61954080850e1d/0bfExuDTrHTtm8c-40cDM.png","isPro":false,"fullname":"William Lamkin","user":"phanes","type":"user"},{"_id":"6527e89a8808d80ccff88b7a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/CuGNmF1Et8KMQ0mCd1NEJ.jpeg","isPro":true,"fullname":"Hafedh Hichri","user":"not-lain","type":"user"},{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"61e7c06064d3c6c929057bee","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61e7c06064d3c6c929057bee/QxULx1EA1bgmjXxupQX4B.jpeg","isPro":false,"fullname":"蓋瑞王","user":"gary109","type":"user"},{"_id":"65676a0a461af93fca9f2329","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65676a0a461af93fca9f2329/-CB4C1C6yLM4gRU2K5gsS.jpeg","isPro":false,"fullname":"Juan Delgadillo","user":"juandelgadillo","type":"user"},{"_id":"633b71b47af633cbcd0671d8","avatarUrl":"/avatars/6671941ced18ae516db6ebfbf73e239f.svg","isPro":false,"fullname":"juand4bot","user":"juandavidgf","type":"user"},{"_id":"64d0d6d80b71aea8be8759e8","avatarUrl":"/avatars/2aad898b34a940a6aa4368526aa20d84.svg","isPro":false,"fullname":"Yoonjae Jeong","user":"hybris75","type":"user"},{"_id":"64c24ecb28cc6b6edf517cb9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c24ecb28cc6b6edf517cb9/xEr-j6aOC_KlJ6H1e77XK.jpeg","isPro":false,"fullname":"Angelina Patrihina","user":"lina-pat","type":"user"},{"_id":"641b754d1911d3be6745cce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/DxjZG1XT4H3ZHF7qHxWxk.jpeg","isPro":true,"fullname":"atayloraerospace","user":"Taylor658","type":"user"},{"_id":"663ccbff3a74a20189d4aa2e","avatarUrl":"/avatars/83a54455e0157480f65c498cd9057cf2.svg","isPro":false,"fullname":"Nguyen Van Thanh","user":"NguyenVanThanhHust","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

A music style transfer method that captures musical attributes from minimal data, using a novel time-varying textual inversion module and a bias-reduced stylization technique.

Abstract
With the development of diffusion models, text-guided image style transfer has demonstrated high-quality, controllable synthesis results. However, the utilization of text for diverse music style transfer poses significant challenges, primarily due to the limited availability of matched audio-text datasets. Music, being an abstract and complex art form, exhibits variations and intricacies even within the same genre, thereby making accurate textual descriptions challenging. This paper presents a music style transfer approach that effectively captures musical attributes using minimal data. We introduce a novel time-varying textual inversion module to precisely capture mel-spectrogram features at different levels. During inference, we propose a bias-reduced stylization technique to obtain stable results. Experimental results demonstrate that our method can transfer the style of specific instruments, as well as incorporate natural sounds to compose melodies. Samples and source code are available at https://lsfhuihuiff.github.io/MusicTI/.
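The abstract does not give implementation details, but the core idea of a time-varying textual inversion module can be sketched in code. The following is a minimal illustration, not the authors' implementation: the class and parameter names (TimeVaryingTokenEmbedding, embed_dim, time_dim) are hypothetical, and it assumes a PyTorch-style setup in which a learned placeholder-token embedding is conditioned on the diffusion timestep, so that coarse denoising steps and fine denoising steps can target mel-spectrogram features at different levels.

```python
# Minimal sketch (hypothetical names, not the paper's code) of a
# time-varying textual inversion module: instead of learning one fixed
# pseudo-token embedding, a small network maps the diffusion timestep t
# to a token embedding, letting each denoising step attend to a
# different level of mel-spectrogram detail.
import torch
import torch.nn as nn

class TimeVaryingTokenEmbedding(nn.Module):
    def __init__(self, embed_dim: int = 768, time_dim: int = 128,
                 num_timesteps: int = 1000):
        super().__init__()
        # Learnable per-timestep features, projected by an MLP into the
        # text encoder's token-embedding space.
        self.time_embed = nn.Embedding(num_timesteps, time_dim)
        self.proj = nn.Sequential(
            nn.Linear(time_dim, embed_dim),
            nn.SiLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) integer timesteps -> (batch, embed_dim) embeddings
        return self.proj(self.time_embed(t))

# Usage: at each denoising step, the timestep-dependent embedding would
# replace the placeholder token's embedding in the prompt before it is
# fed to the diffusion model's cross-attention.
module = TimeVaryingTokenEmbedding()
t = torch.randint(0, 1000, (4,))
print(module(t).shape)  # torch.Size([4, 768])
```

In a full pipeline, this embedding would be optimized against the style reference's mel-spectrograms and then substituted into the text-encoder output at every denoising step; the bias-reduced stylization the paper applies at inference is not specified in the abstract and is therefore not sketched here.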