SPIES Lab, Computer Science and Engineering

Texas A&M University College of Engineering

AI Spies News

News: Security and Accessibility Gaps in Web Authentication for Blind and Visually Impaired Users

Posted on June 30, 2025 by Jimmy Dani

College Station, TX — June 2025

This news story was fully generated by AI: the text using GPT-4.5 and the image using GPT-4o, with necessary review and corrections by the SPIES researchers.


In groundbreaking research presented at the ACM Web Conference 2025 (WWW), researchers from Texas A&M University’s Security and Privacy in Emerging Computing and Networking Systems (SPIES) lab have highlighted significant vulnerabilities and accessibility challenges in two-factor (2FA) and passwordless authentication methods for blind and visually impaired users relying on screen readers.

[Illustration: a blind user with a laptop and phone running a screen reader, facing push-notification and phishing risks]

The study, titled “Broken Access: On the Challenges of Screen Reader Assisted Two-Factor and Passwordless Authentication,” reveals how commonly used authentication methods (OTP-based 2FA from Google, Microsoft, and Duo; phone-call 2FA; push notifications; and FIDO-based MFA) often fail to accommodate the specific needs of blind and visually impaired individuals. Through a systematic evaluation using the team’s newly developed Authentication Workflows Accessibility Review and Evaluation (AWARE) framework, the researchers found numerous critical security issues, including susceptibility to phishing, notification fatigue, and concurrent login attacks.

“Our goal was to expose overlooked gaps in the current authentication landscape that disproportionately affect blind and visually impaired users,” said Md Mojibur Rahman Redoy Akanda, lead author and PhD student working with Dr. Nitesh Saxena. “Despite being promoted as secure and usable, many real-world 2FA and passwordless systems are simply not designed with accessibility in mind.”

Key findings highlight how imprecise instructions and insufficient accessibility considerations significantly increase vulnerability for visually impaired users. Specifically, the researchers identified critical conflicts between simultaneous authentication steps (such as receiving OTP codes via phone call) and screen reader audio prompts, leading to confusion and potential security breaches. They also found that screen readers mispronounce numeric OTPs, reading them as a single continuous number rather than as distinct digits, and that users struggle to manage authentication prompts when running screen readers on both a smartphone and a PC at the same time.
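One markup-level mitigation for the OTP mispronunciation problem, sketched below purely as an illustration rather than as a fix prescribed in the paper, is to give assistive technology a digit-by-digit rendering of the code while keeping the compact form visible on screen. The helper name formatOtpForScreenReader and the sr-only CSS utility class are assumptions for this sketch:

```typescript
// Illustrative sketch (not from the paper): render a one-time passcode so
// screen readers announce distinct digits instead of one large number.
// "formatOtpForScreenReader" and the "sr-only" class are assumed names.
function formatOtpForScreenReader(otp: string): string {
  // Space-separated digits are spoken one at a time ("4 8 2 9 1 7").
  const spokenDigits = otp.split("").join(" ");
  return [
    // The compact code stays visible but is hidden from screen readers.
    `<span aria-hidden="true">${otp}</span>`,
    // A visually hidden span (via an assumed "sr-only" utility class)
    // carries the digit-by-digit version that screen readers announce.
    `<span class="sr-only">One-time code: ${spokenDigits}</span>`,
  ].join("");
}

console.log(formatOtpForScreenReader("482917"));
// <span aria-hidden="true">482917</span><span class="sr-only">One-time code: 4 8 2 9 1 7</span>
```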

“This research opens up a much-needed conversation at the intersection of accessibility and cybersecurity,” said Dr. Nitesh Saxena, Director of the SPIES Lab at Texas A&M University. “We hope these findings will guide system designers, developers, and policymakers to adopt more inclusive authentication practices—making secure access a right, not a privilege.”

This research underscores the urgent need for developers to implement clearer authentication workflows and better integration of accessibility standards. The SPIES team offers concrete recommendations for enhancing security and usability, such as explicit instructions, automated phishing detection, and optimized communication between authentication interfaces and screen readers.
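One way to realize the “optimized communication” recommendation on the web, shown here only as a hedged sketch of a standard ARIA technique and not as the paper’s specific design, is to announce each authentication step through a live region so a screen reader speaks it as soon as it appears. The function name announceAuthStep is an assumption:

```typescript
// Illustrative sketch: announce an authentication step through an ARIA
// live region so a screen reader speaks it immediately.
// "announceAuthStep" and the element id are hypothetical names.
function announceAuthStep(message: string): void {
  let region = document.getElementById("auth-announcer");
  if (!region) {
    region = document.createElement("div");
    region.id = "auth-announcer";
    // "assertive" interrupts current speech, which fits a time-limited
    // step such as approving a push notification.
    region.setAttribute("aria-live", "assertive");
    region.setAttribute("role", "alert");
    // Visually hidden but still exposed to assistive technology.
    region.style.position = "absolute";
    region.style.width = "1px";
    region.style.height = "1px";
    region.style.overflow = "hidden";
    document.body.appendChild(region);
  }
  region.textContent = message;
}

announceAuthStep("A sign-in request was sent to your phone. Approve it there, then return to this page.");
```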

The findings presented at WWW ’25 are a pivotal step toward ensuring digital authentication methods are secure and inclusive for all users, particularly the visually impaired.

The full paper is available at https://doi.org/10.1145/3696410.3714579.

Citation:
Md Mojibur Rahman Redoy Akanda, Ahmed Tanvir Mahdad, and Nitesh Saxena. 2025. Broken Access: On the Challenges of Screen Reader Assisted Two-Factor and Passwordless Authentication. In Proceedings of the ACM Web Conference 2025 (WWW ’25), April 28–May 2, 2025, Sydney, NSW, Australia. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3696410.3714579

Read more stories like this on AI Spies News.

Follow us on Medium.

Filed Under: AI Spies News

AI Spies News — BPSniff (IEEE S&P 2025) Paper News Story

Posted on May 12, 2025 by nsaxena

New Study Uncovers Privacy Risks: VR Headsets Can Secretly Monitor Your Blood Pressure

College Station, TX — May 2025

This news story was fully generated by AI: the text using GPT-4o and the image using GPT, with necessary review and corrections by the SPIES researchers.


A team of researchers from Temple University, Texas A&M University, Rutgers University, and the New Jersey Institute of Technology has uncovered a serious privacy vulnerability in consumer virtual reality (VR) headsets. The study reveals that built-in motion sensors, typically used to enhance immersive VR experiences, can be covertly exploited to continuously infer users’ blood pressure without their knowledge or consent. The full findings are being presented at the 2025 IEEE Symposium on Security and Privacy (S&P), one of the leading conferences in cybersecurity and privacy research.

The attack, dubbed BPSniff, demonstrates that blood-pressure-related vibrations—specifically ballistocardiogram (BCG) signals generated by blood flow—can be detected by high-frequency motion sensors embedded in devices like Meta Quest and Meta Quest 2. By analyzing these subtle physiological movements, attackers can estimate both systolic and diastolic blood pressure with a level of accuracy comparable to clinical-grade devices.

Unlike traditional health monitoring systems that require user calibration or consent, BPSniff bypasses both. The research shows that malicious apps or web-based scripts can access motion sensor data from VR headsets without explicit permissions. This allows adversaries to passively collect highly sensitive biometric data in real time, raising alarms about user surveillance in metaverse environments.
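To illustrate why this attack surface exists, the sketch below shows how a web script can passively accumulate motion readings with no prompt on many platforms. It is only an approximation of the setting studied: browsers rate-limit these events well below what the headset-resident attack uses, and iOS Safari gates the API behind DeviceMotionEvent.requestPermission().

```typescript
// Minimal sketch of passive motion collection by a web script. Browser
// "devicemotion" events are rate-limited (typically ~60 Hz), far below
// the high-frequency headset sensors the paper exploits, so this only
// illustrates the permission model, not the full attack.
const samples: { t: number; x: number; y: number; z: number }[] = [];

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const a = event.accelerationIncludingGravity;
  if (a && a.x !== null && a.y !== null && a.z !== null) {
    // Subtle periodic components of these readings are what carry
    // ballistocardiogram (BCG) signals caused by blood flow.
    samples.push({ t: event.timeStamp, x: a.x, y: a.y, z: a.z });
  }
});
```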

BPSniff utilizes advanced machine learning models, combining variational autoencoders (VAE) and long short-term memory (LSTM) networks, to reconstruct blood flow patterns from sensor data. These reconstructions are then used to estimate blood pressure continuously, achieving mean errors of just 1.75 mmHg (systolic) and 1.34 mmHg (diastolic)—well within FDA and AAMI medical standards.
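The paper’s actual pipeline couples a variational autoencoder with LSTM networks; the sketch below substitutes a much simpler plain-LSTM regressor in TensorFlow.js just to make the input and output shape of the problem concrete. The window length, layer sizes, and variable names are assumptions, not values from the paper:

```typescript
import * as tf from "@tensorflow/tfjs";

// Simplified stand-in for the paper's VAE+LSTM pipeline: a plain LSTM
// regressor that maps a window of 3-axis motion samples to a
// [systolic, diastolic] estimate in mmHg. The window length, layer
// sizes, and names here are illustrative assumptions, not paper values.
const WINDOW = 256;   // motion samples per window (assumed)
const CHANNELS = 3;   // x, y, z acceleration

const model = tf.sequential();
model.add(tf.layers.lstm({ units: 64, inputShape: [WINDOW, CHANNELS] }));
model.add(tf.layers.dense({ units: 32, activation: "relu" }));
model.add(tf.layers.dense({ units: 2 })); // [systolic, diastolic]
model.compile({ optimizer: "adam", loss: "meanSquaredError" });

// Training would need paired motion windows and cuff-measured labels:
// await model.fit(motionWindows, bpLabels, { epochs: 50 });
model.summary();
```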

The researchers tested the attack across multiple use cases, including various physical postures, headset models, and user movements. Even with noise introduced by normal VR activity like gaming or walking, BPSniff remained effective. The system’s robustness was further confirmed through an eight-week longitudinal study with 37 participants.

The implications are broad and alarming. Unauthorized access to blood pressure data can reveal information about a person’s health status, stress levels, emotional states, and reactions to stimuli—potentially enabling manipulation, discrimination, or psychological profiling. This threat escalates when combined with identity linkage from other data sources, opening the door to highly personalized and invasive surveillance.

To mitigate the risk, the researchers advocate for stronger privacy controls on motion sensor access, including real-time usage monitoring, permission-based frameworks, and AI-driven auditing tools within VR platforms. As the metaverse grows into a space for entertainment, collaboration, and even healthcare, this study highlights the urgent need to secure embedded sensors against misuse.

The full paper appears in the proceedings of the 2025 IEEE Symposium on Security and Privacy (see citation below).

Citation:
Zhengkun Ye, Ahmed Tanvir Mahdad, Yan Wang, Cong Shi, Yingying Chen, and Nitesh Saxena. 2025. BPSniff: Continuously Surveilling Private Blood Pressure Information in the Metaverse via Unrestricted Inbuilt Motion Sensors. In 2025 IEEE Symposium on Security and Privacy (SP). IEEE, 4356–4374.

Read more stories like this on AI Spies News.

Follow us on Medium.

Filed Under: AI Spies News
