arxiv:2503.16430

Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation

Published on Mar 20, 2025 · Submitted by Yuqing Wang on Mar 24, 2025

Authors: Yuqing Wang, Zhijie Lin, Yao Teng, Yuanzhi Zhu, Shuhuai Ren, Jiashi Feng, Xihui Liu

Code: https://github.com/YuqingWang1029/TokenBridge

Abstract

TokenBridge addresses the dilemma between discrete and continuous token representations in autoregressive visual generation by using post-training quantization to preserve visual details while maintaining modeling simplicity.

AI-generated summary

Autoregressive visual generation models typically rely on tokenizers to compress images into tokens that can be predicted sequentially. A fundamental dilemma exists in token representation: discrete tokens enable straightforward modeling with standard cross-entropy loss, but suffer from information loss and tokenizer training instability; continuous tokens better preserve visual details, but require complex distribution modeling, complicating the generation pipeline. In this paper, we propose TokenBridge, which bridges this gap by maintaining the strong representation capacity of continuous tokens while preserving the modeling simplicity of discrete tokens. To achieve this, we decouple discretization from the tokenizer training process through post-training quantization that directly obtains discrete tokens from continuous representations. Specifically, we introduce a dimension-wise quantization strategy that independently discretizes each feature dimension, paired with a lightweight autoregressive prediction mechanism that efficiently models the resulting large token space. Extensive experiments show that our approach achieves reconstruction and generation quality on par with continuous methods while using standard categorical prediction. This work demonstrates that bridging discrete and continuous paradigms can effectively harness the strengths of both approaches, providing a promising direction for high-quality visual generation with simple autoregressive modeling. Project page: https://yuqingwang1029.github.io/TokenBridge.
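The dimension-wise, post-training quantization idea described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's released code: the helper name `quantize_dimensionwise`, the bin count (16 levels), and the clipping range are assumptions chosen for the example. The point is only that each scalar feature dimension is discretized independently against a shared 1-D codebook, turning one continuous token into a tuple of small categorical indices.

```python
import numpy as np

def quantize_dimensionwise(z, num_levels=16, lo=-2.5, hi=2.5):
    """Independently map every scalar feature of a continuous token
    to the nearest of `num_levels` uniformly spaced bin centers.
    (Hypothetical helper; bin count and range are illustrative.)"""
    centers = np.linspace(lo, hi, num_levels)    # shared 1-D codebook
    z = np.clip(z, lo, hi)                       # keep values inside the bin range
    # nearest bin index per scalar entry -> one categorical id per dimension
    ids = np.abs(z[..., None] - centers).argmin(axis=-1)
    return ids, centers[ids]                     # discrete ids, dequantized values

# 4 tokens with 16-dimensional continuous features
z = np.random.randn(4, 16)
ids, z_hat = quantize_dimensionwise(z)
print(ids.shape)  # (4, 16): one small categorical index per dimension
```

Because each dimension is quantized on its own, the effective token space is `num_levels ** dim`, far too large for a single softmax; this is why the paper pairs the scheme with a lightweight autoregressive predictor over the per-dimension indices rather than one flat categorical distribution.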

Community

Fangyuan Yu

Interesting work! Looking forward to the code release!

·
Yuqing Wang (paper author, paper submitter)

Thanks for your interest! We will release the code as soon as possible!

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* V2Flow: Unifying Visual Tokenization and Large Language Model Vocabularies for Autoregressive Image Generation (2025) — https://huggingface.co/papers/2503.07493
* Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction (2025) — https://huggingface.co/papers/2503.16194
* Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens (2025) — https://huggingface.co/papers/2503.08377
* HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models (2025) — https://huggingface.co/papers/2503.11513
* Frequency Autoregressive Image Generation with Continuous Tokens (2025) — https://huggingface.co/papers/2503.05305
* Unified Autoregressive Visual Generation and Understanding with Continuous Tokens (2025) — https://huggingface.co/papers/2503.13436
* NFIG: Autoregressive Image Generation with Next-Frequency Prediction (2025) — https://huggingface.co/papers/2503.07076

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 4
