About Me
Hello, this is WANG, Xingbo (王星博). I am currently a postdoctoral associate working with Prof. Fei Wang at the Weill Medical College of Cornell University. I also closely collaborate with Prof. Tanzeem Choudhury at Cornell Tech. Previously, I received my PhD from the CSE department of HKUST, where I was advised by Prof. Huamin Qu. Before that, I obtained my B.S. degree from Wuhan University.
My research interests lie at the intersection of human-computer interaction (HCI), data visualization (VIS), natural language processing (NLP), and multimodal machine learning. I design and develop interactive systems and visual interfaces for human-centered AI, which
- adopt data-driven approaches to help model experts understand and diagnose AI in downstream tasks;
- promote human-AI interaction to augment human intelligence in decision-making, learning, and creation.
Moreover, I investigate and analyze the multimodality of human communication (e.g., engagement, humor, emotion) in videos for real-world applications.
Current Projects
- LLMs for precision and personalized health (e.g., sleep, mental health)
- Collaborative human-AI decision making and creativity support
- Interpretability and hallucinations of LLMs
- Multimodal analysis of EHR data, video data
I welcome opportunities for research collaboration. If you're interested, drop me an email; I'm happy to explore potential research projects or ideas together.
News
- Jul. 2024 📢 : Two co-authored papers with SYSU teams on LLMs for (1) supporting teachers in preparing lesson plans and (2) proactive peer support are accepted to UIST 2024.
- Jun. 2024 📢 : I will serve as an Associate Chair for CSCW 2025. Thanks for the invitation!
- May. 2024 📢 : One paper about multimodal vocabulary learning via story retelling is accepted to DIS 2024.
- Feb. 2024 📢 : One scoping review paper about LLM interpretability and applications in biomedicine is published in iScience.
- Jan. 2024 📢 : One viewpoint paper about LLMs in healthcare is published in the Journal of General Internal Medicine (JGIM).
- Dec. 2023 📢 : I will serve as an Associate Chair for CHI LBW 2024. Thanks for the invitation!
- Nov. 2023 📢 : Built a web app for personalized health data analysis: link.
- Aug. 2023 📢 : Built a web app for interactive visual data analysis, where you can chat with your data through natural language: link.
- Jul. 2023 🎉 : Four papers (one about explaining commonsense knowledge and reasoning capabilities of NLP models, and three about multimodal data analysis) are accepted to IEEE VIS 2023 @ Melbourne. 👏 More details to come later.
- Jul. 2023 🎉 : One paper about story co-writing for vocabulary learning is conditionally accepted to ACM UIST 2023 @ San Francisco. 👏
- Jun. 2023 📢 : I started my postdoc @ Weill Cornell Medicine.
- Mar. 2023 📢 : Our MultiViz paper is accepted to CHI LBW 23. 👏
- Feb. 2023 📢 : I gave a talk at Weill Cornell Medicine titled "Unleash the Power of Data-driven AI through Explanations and Interactions".
- Feb. 2023 📢 : One paper about multimodal satisfaction analysis is accepted to IEEE TVCG. 👏 (Anchorage: Visual Analysis of Satisfaction in Customer Service Videos via Anchor Events)
- Jan. 2023 📢 : One paper about interpretability analysis of multimodal models is accepted to ICLR 2023. 👏 (MultiViz: Towards Visualizing and Understanding Multimodal Models)
- Jan. 2023 📢 : One paper about explanation and diagnosis of NLI is accepted to IEEE TVCG. 👏 (XNLI: Explaining and Diagnosing NLI-based Visual Data Analysis)
- Jan. 2023 📢 : One paper about shortcut exploration in NLP datasets is accepted to IEEE TVCG. 👏 (ShortcutLens: A Visual Analytics Approach for Exploring Shortcuts in Natural Language Understanding Dataset)
- Dec. 2022 Three papers receive minor revision at IEEE TVCG.
- Jul. 2022 📢 : One poster about a pre-screening tool for students with specific learning disabilities is accepted by ASSETS 2022 👏
- Jul. 2022: I will serve as a student volunteer captain at IEEE VIS 2022.
- Jan. 2022 📢 : One paper about persuasive strategies in online discussion is accepted with minor revision by CSCW 2022 👏 (Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion)
- Jan. 2022 📢 : One paper about multimodal gesture analysis receives minor revision at IEEE TVCG 👏 (GestureLens: Visual Analysis of Gestures in Presentation Videos)
- Oct. 2021 We release the project pages of DeHumor and M2Lens. Welcome to check them out.
- Sep. 2021 Our paper "M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis" receives a Best Paper Honorable Mention!
- Jul. 2021 I present "M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis" @China VIS 2020, Wuhan!
- Jul. 2021 I visit the IDG group of Zhejiang University; thanks to Prof. Wu for his hospitality!
- Jul. 2021 Our XAI paper "M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis" is accepted by IEEE VIS 2021, Virtual! See you online!
- Jul. 2021 Our paper "DeHumor: Visual Analytics for Decomposing Humor" is finally accepted by IEEE TVCG, 2021! Stay tuned!
- Jan. 2021 Our team won the Silver Award of the 6th China International College Students' "Internet+" Innovation and Entrepreneurship Competition. Announcement
- Dec. 2020 Our team won the Silver Award of the 12th "Challenge Cup" National College Student Business Plan Competition. Announcement
- Jun. 2020 I passed the PhD Qualifying Exam (PQE). And now I am a PhD Candidate!
- Jun. 2020 🎉 Our project won First Prize at the 6th Hong Kong University Student Innovation and Entrepreneurship Competition!
- May. 2020 Welcome to check out the online presentation of VoiceCoach for CHI 2020.
- Dec. 2019 Our paper "VoiceCoach: Interactive Evidence-based Training for Voice Modulation Skills in Public Speaking" has been conditionally accepted by CHI 2020.
- Jul. 2019 Our paper "EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos" is accepted by IEEE VIS 2019 @Vancouver, Canada!
- Jun. 2019 Our project FLY healthtech wins the Student Team Award @HKUST-SINO One Million Dollar Entrepreneurship Competition 2019!
- Aug. 2018 I join Vislab @HKUST.
- Jun. - Aug. 2018 I visit the IDG group @ZJU.
- Jun. 2018 I graduate from WHU.
Publications
User interfaces for human-AI interactions and human-data interactions
ComPeer: A Generative Conversational Agent for Proactive Peer Support
Tianjian Liu, Hongzheng Zhao, Yuheng Liu, Xingbo Wang, Zhenhui Peng
The ACM Symposium on User Interface Software and Technology (UIST 2024) (to appear).
LessonPlanner: Assisting Novice Teachers to Prepare Pedagogy-Driven Lesson Plans with Large Language Models
Haoxiang Fan, Guanzheng Chen, Xingbo Wang, Zhenhui Peng
The ACM Symposium on User Interface Software and Technology (UIST 2024) (to appear).
CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models
Xingbo Wang, Renfei Huang, Zhihua Jin, Tianqing Fang, and Huamin Qu
IEEE Transactions on Visualization and Computer Graphics (VIS 2023).
PDF | arXiv | IEEEXplore | 💻/🌐 Project Page & Demo | Video | Slides
Storyfier: Exploring Vocabulary Learning Support with Text Generation Models
Xingbo Wang*, Zhenhui Peng*, Qiushi Han, Junkai Zhu, Xiaojuan Ma, Huamin Qu.
The ACM Symposium on User Interface Software and Technology (UIST 2023).
VideoPro: A Visual Analytics Approach for Interactive Video Programming
Jianben He, Xingbo Wang, Kam Kwai Wong, Xijie Huang, Changjian Chen, Zixin Chen, Fengjie Wang, Min Zhu, Huamin Qu.
IEEE Transactions on Visualization and Computer Graphics (VIS 2023).
PDF | arXiv | IEEEXplore
PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation
Yingchaojie Feng, Xingbo Wang, Kam Kwai Wong, Sijia Wang, Yuhong Lu, Minfeng Zhu, Baicheng Wang, Wei Chen.
IEEE Transactions on Visualization and Computer Graphics (VIS 2023).
XNLI: Explaining and Diagnosing NLI-based Visual Data Analysis
Yingchaojie Feng, Xingbo Wang, Bo Pan, Kam Kwai Wong, Yi Ren, Shi Liu, Zihan Yan, Yuxin Ma, Huamin Qu, and Wei Chen.
IEEE Transactions on Visualization and Computer Graphics, 2023.
PDF | arXiv | IEEEXplore | Slides
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis
Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, Huamin Qu.
IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS'21).
🏆 Best Paper Honorable Mention
PDF | arXiv | IEEEXplore | 💻 Project Page | 🌐 Demo
Selected Awards & Honors
2021 Best Paper Honorable Mention, IEEE VIS 2021
2021 Silver Award, the 6th China International College Students’ “Internet+” Innovation and Entrepreneurship Competition
2020 Silver Award, the 12th “Challenge Cup” National College Student Business Plan Competition
2020 SENG Academic Award for Continuing PhD Students 2019 - 2020, HKUST
2020 First Prize, the 6th Hong Kong University Student Innovation and Entrepreneurship Competition Project: Early Screening and Training Platform for Neurodevelopmental Disorder
2019 Student Team Award, HKUST-SINO One Million Dollar Entrepreneurship Competition Project: Early Screening and Training Platform for Neurodevelopmental Disorder
2018 Role Models of WHU (4/408 National Scholarship Winners)
2017 China National Scholarship (1.6%)
2016 Wangzhizhuo Innovative Talents Scholarship (2%, Highest Major Scholarship & Invited Talk)
2015 China National Scholarship (2%)
2015 National Second Prize, The Chinese Mathematics Competitions (CMC)