Research

Our lab conducts research at the intersection of security, privacy, machine learning, and human-computer interaction. Our work spans three major research directions.

AI Security

We investigate the security and trustworthiness of machine learning models, including adversarial attacks, data poisoning, backdoor attacks on model merging, and environmental injection attacks on AI agents. Our work addresses threats across the ML pipeline — from training-time poisoning to deployment-time adversarial manipulation — and explores defenses such as automated vulnerability repair.

Data Privacy

We study privacy risks and compliance in software systems, including GDPR enforcement, personal information disclosure in online communities, location privacy in recommendation systems, and user perceptions of data collection in IoT and public WiFi environments. Our research combines automated program analysis with empirical user studies to advance privacy protection.

System Security

We analyze the security of software systems, including voice-controlled platforms, IoT ecosystems, authentication protocols, smart home automations, and extended reality (XR). Our work spans vulnerability discovery in voice assistants, permission analysis, OAuth security verification, WebAssembly runtime fuzzing, and usable security tools for end users.


AI Security

  • EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage. ICLR 2025.

  • BadMerging: Backdoor Attacks Against Model Merging. CCS 2024.

  • Chimera: Creating Digitally Signed Fake Photos by Fooling Image Recapture and Deepfake Detectors. USENIX Security 2025.

  • SoK: Towards Effective Automated Vulnerability Repair. USENIX Security 2025.

  • Conditional Supervised Contrastive Learning for Fair Text Classification. EMNLP Findings 2022.

  • Model-Targeted Poisoning Attacks with Provable Convergence. ICML 2021.

  • Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries. USENIX Security 2020.

Data Privacy

  • Breaking the Illusion: Automated Reasoning of GDPR Consent Violations. IEEE S&P (Oakland) 2026.

  • Location-Enhanced Information Flow Analysis for Smart Home Automations. PETS 2026.

  • Free WiFi is not ultimately free: Privacy Perceptions of Users regarding City-wide WiFi Services. PETS 2025.

  • Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation. SIGKDD 2024.

  • CHKPLUG: Checking GDPR Compliance of WordPress Plugins via Cross-language Code Property Graph. NDSS 2023.

  • SenRev: Measurement of Personal Information Disclosure in Online Health Communities. PETS 2023.

  • Exploring Smart Commercial Building Occupants' Perceptions and Notification Preferences of IoT Data Collection. EuroS&P 2023.

  • Birthday, Name and Bifacial-security: Understanding Passwords of Chinese Web Users. USENIX Security 2019.

System Security

  • From Perception to Protection: A Developer-Centered Study of Security and Privacy Threats in Extended Reality (XR). NDSS 2026.

  • Waltzz: WebAssembly Runtime Fuzzing with Stack-Invariant Transformation. USENIX Security 2025.

  • AuthSaber: Automated Safety Verification of OpenID Connect Programs. CCS 2024.

  • Alexa, is the skill always safe? Uncover Lenient Skill Vetting Process and Protect User Privacy at Run Time. ICSE 2024.

  • Towards Real-time Voice Interaction Data Collection Monitoring and Ambient Light Privacy Notification for Voice-controlled Services. USEC 2024.

  • Towards Usable Security Analysis Tools for Trigger-Action Programming. SOUPS 2023.

  • Your Microphone Array Retains Your Identity: A Robust Voice Liveness Detection System for Smart Speakers. USENIX Security 2022.

  • SkillBot: Identifying Risky Content for Children in Alexa Skills. ACM TOIT 2022.

  • VerHealth: Vetting Medical Voice Applications through Policy Enforcement. UbiComp 2021.

  • Read Between the Lines: An Empirical Measurement of Sensitive Applications of Voice Personal Assistant Systems. WWW 2020.

  • TKPERM: Cross-platform Permission Knowledge Transfer to Detect Overprivileged Third-party Applications. NDSS 2020.

  • OAuthLint: An Empirical Study on OAuth Bugs in Android Applications. ASE 2019.

  • Poster: Attack the Dedicated Short-Range Communication for Connected Vehicles. IEEE S&P (Oakland) 2019.

  • Side Channel Attacks in GPU-Virtualization-Based Computation-Offload Systems. SafeThings 2019.