Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception
Junyang Wang, Haiyang Xu, Jiabo Ye, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, Jitao Sang
AI-generated summary

Mobile-Agent, a vision-centric multimodal large language model agent for mobile devices, autonomously performs complex tasks without relying on system-specific customizations and achieves high accuracy and adaptability.
Mobile device agents based on Multimodal Large Language Models (MLLMs) are becoming a popular application. In this paper, we introduce Mobile-Agent, an autonomous multi-modal mobile device agent. Mobile-Agent first leverages visual perception tools to accurately identify and locate both the visual and textual elements within an app's front-end interface. Based on the perceived visual context, it then autonomously plans and decomposes the complex operation task and navigates mobile apps through operations step by step. Unlike previous solutions that rely on apps' XML files or mobile system metadata, Mobile-Agent operates in a vision-centric way, which allows greater adaptability across diverse mobile operating environments and eliminates the need for system-specific customizations. To assess the performance of Mobile-Agent, we introduce Mobile-Eval, a benchmark for evaluating mobile device operations. Based on Mobile-Eval, we conducted a comprehensive evaluation of Mobile-Agent. The experimental results indicate that Mobile-Agent achieves remarkable accuracy and completion rates. Even with challenging instructions, such as multi-app operations, Mobile-Agent can still complete the requirements. Code and model will be open-sourced at https://github.com/X-PLUG/MobileAgent.
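
To make the perceive–plan–act loop described in the abstract concrete, here is a minimal, hypothetical sketch of a vision-centric agent driver. It is not the Mobile-Agent implementation: screenshot capture and input replay use standard ADB commands, while the grounding-and-planning call (`plan_next_action`) is a stub standing in for the MLLM plus visual perception tools the paper describes; all function and class names are illustrative placeholders.

```python
# Hypothetical sketch of a vision-centric perceive -> plan -> act loop.
# Screenshots and input events go through ADB (real commands); the model
# call is left unimplemented so no prompts or APIs are fabricated.
import subprocess
from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # "tap" | "type" | "done"
    x: int = 0
    y: int = 0
    text: str = ""


def capture_screenshot(path: str = "screen.png") -> str:
    """Pull the current screen from the connected device via ADB."""
    png = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         check=True, capture_output=True).stdout
    with open(path, "wb") as f:
        f.write(png)
    return path


def execute(action: Action) -> None:
    """Replay the chosen action on the device with ADB input events."""
    if action.kind == "tap":
        subprocess.run(["adb", "shell", "input", "tap",
                        str(action.x), str(action.y)], check=True)
    elif action.kind == "type":
        # ADB's `input text` expects spaces escaped as %s.
        subprocess.run(["adb", "shell", "input", "text",
                        action.text.replace(" ", "%s")], check=True)


def plan_next_action(screenshot: str, instruction: str,
                     history: list[str]) -> Action:
    """Placeholder for the MLLM step: given the screenshot, the user
    instruction, and the operation history, ground the relevant UI element
    and return the next action. Plug in your own model here."""
    raise NotImplementedError("attach an MLLM with visual grounding")


def run(instruction: str, max_steps: int = 20) -> None:
    """Iterate: screenshot, ask the planner, act, until done or step cap."""
    history: list[str] = []
    for _ in range(max_steps):
        shot = capture_screenshot()
        action = plan_next_action(shot, instruction, history)
        if action.kind == "done":
            break
        execute(action)
        history.append(f"{action.kind} {action.x},{action.y} {action.text}")


# Example (requires a connected device and a planner implementation):
# run("Open the settings app and turn on dark mode")
```

The key design point this sketch mirrors is that the loop consumes only pixels and emits only generic input events, so nothing in it depends on app XML or system metadata; all app-specific knowledge lives in the planning step.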