
arxiv:2505.09723

EnerVerse-AC: Envisioning Embodied Environments with Action Condition

Published on May 14
· Submitted by Siyuan on May 16
Authors: Yuxin Jiang, Shengcong Chen, Siyuan Huang, Liliang Chen, Pengfei Zhou, Yue Liao, Xindong He, Chiming Liu, Hongsheng Li, Maoqing Yao, Guanghui Ren
Abstract

EnerVerse-AC, an action-conditional world model, enables realistic robotic inference and testing by simulating future actions and observations, thereby reducing costs and improving generalization in dynamic settings.

AI-generated summary

Robotic imitation learning has advanced from solving static tasks to addressing dynamic interaction scenarios, but testing and evaluation remain costly and challenging due to the need for real-time interaction with dynamic environments. We propose EnerVerse-AC (EVAC), an action-conditional world model that generates future visual observations based on an agent's predicted actions, enabling realistic and controllable robotic inference. Building on prior architectures, EVAC introduces a multi-level action-conditioning mechanism and ray map encoding for dynamic multi-view image generation while expanding training data with diverse failure trajectories to improve generalization. As both a data engine and evaluator, EVAC augments human-collected trajectories into diverse datasets and generates realistic, action-conditioned video observations for policy testing, eliminating the need for physical robots or complex simulations. This approach significantly reduces costs while maintaining high fidelity in robotic manipulation evaluation. Extensive experiments validate the effectiveness of our method. Code, checkpoints, and datasets can be found at <https://annaj2178.github.io/EnerverseAC.github.io>.
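The core idea in the abstract can be illustrated with a minimal sketch: a world model that, conditioned on an agent's action, predicts the next visual observation, so that a planned action sequence can be rolled out into predicted frames without a physical robot. The class below is a toy stand-in (simple integrator dynamics rendered into a small image), not EVAC's actual diffusion architecture; all names here are hypothetical.

```python
import numpy as np

class ToyActionConditionalWorldModel:
    """Toy stand-in for an action-conditional world model: maintains a latent
    state and renders a predicted observation after each action."""

    def __init__(self, img_size: int = 32):
        self.img_size = img_size
        # Latent state: a 2-D position, starting at the image center.
        self.pos = np.array([img_size // 2, img_size // 2], dtype=float)

    def step(self, action: np.ndarray) -> np.ndarray:
        # Integrate the action (a 2-D displacement) into the latent state,
        # then "render" an observation: a single bright pixel at that position.
        self.pos = np.clip(self.pos + action, 0, self.img_size - 1)
        obs = np.zeros((self.img_size, self.img_size), dtype=np.float32)
        r, c = self.pos.astype(int)
        obs[r, c] = 1.0
        return obs

def rollout(model, actions):
    """Roll a planned action sequence through the world model, returning the
    predicted observation after each action."""
    return [model.step(a) for a in actions]

if __name__ == "__main__":
    model = ToyActionConditionalWorldModel()
    plan = [np.array([1.0, 0.0])] * 3  # three unit steps along the row axis
    frames = rollout(model, plan)
    print(len(frames))
```

The real model replaces the integrator with learned dynamics over multi-view video, but the interface (state in, action in, predicted observation out) is the part that makes closed-loop policy testing possible.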

Community

Paper submitter
edited May 16

Project Page: https://annaj2178.github.io/EnerverseAC.github.io/
Open-Sourced Code: https://github.com/AgibotTech/EnerVerse-AC

Overview:

image.png

Application:

EVAC can serve both as a policy evaluator and as a data engine: it scores candidate policies by rendering their action-conditioned rollouts, and it augments human-collected trajectories into larger, more diverse datasets.

leaf-fail.gif
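The data-engine use can be sketched as follows: take a recorded action trajectory, perturb it with small noise, and replay each variant through the world model to synthesize diverse episodes. The dynamics function below is a toy stand-in for the learned model, and every name here is hypothetical, not EVAC's API.

```python
import numpy as np

def toy_world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Toy stand-in for an action-conditioned predictor: next state = state + action."""
    return state + action

def augment_trajectory(actions, n_variants=5, noise_scale=0.05, seed=0):
    """Replay noisy copies of a recorded action sequence through the world
    model, yielding n_variants synthetic state trajectories."""
    rng = np.random.default_rng(seed)
    episodes = []
    for _ in range(n_variants):
        state = np.zeros(2)
        episode = []
        for a in actions:
            noisy_a = a + rng.normal(scale=noise_scale, size=a.shape)
            state = toy_world_model(state, noisy_a)
            episode.append(state.copy())
        episodes.append(episode)
    return episodes

if __name__ == "__main__":
    recorded = [np.array([0.1, 0.0])] * 10  # a straight-line reach
    synthetic = augment_trajectory(recorded, n_variants=3)
    print(len(synthetic), len(synthetic[0]))
```

In the paper's setting the replay step would render full action-conditioned video observations rather than 2-D states, which is what lets one human demonstration fan out into many visually distinct training episodes, including failure cases.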

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 3
