Most current AI models are based on static datasets, limiting their adaptability and real-time diagnostic potential. To address this gap, researchers have developed a novel proof-of-concept deep learning model that leverages real-time data to assist in diagnosing nystagmus — a condition characterized by involuntary, rhythmic eye movements often linked to vestibular or neurological disorders.

The platform allows patients to record their eye movements using a smartphone, securely upload the video to a cloud-based system, and receive remote diagnostic analysis from vestibular and balance experts — all without leaving their home.

At the heart of this innovation is a deep learning framework that uses real-time facial landmark tracking to analyze eye movements. The AI system automatically maps 468 facial landmarks and evaluates slow-phase velocity — a key metric for identifying nystagmus intensity, duration, and direction. It then generates intuitive graphs and reports that can easily be interpreted by audiologists and other clinicians during virtual consultations.
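The slow-phase velocity metric can be illustrated with a simplified sketch. This is a hypothetical illustration, not the study's published method: it assumes a horizontal eye-position trace (in degrees) has already been extracted from the tracked landmarks, excludes fast-phase (saccadic) resets with a velocity threshold, and summarizes the remaining slow drift. The function name, threshold, and synthetic sawtooth signal are all illustrative choices.

```python
import numpy as np

def slow_phase_velocity(positions, fs, fast_thresh=100.0):
    """Estimate slow-phase velocity (deg/s) from a horizontal
    eye-position trace.

    positions   : eye position samples in degrees
    fs          : sampling rate in Hz
    fast_thresh : speeds above this (deg/s) are treated as
                  fast-phase (saccadic) resets and excluded

    Returns the median velocity of the remaining slow-phase
    samples, a rough proxy for nystagmus slow-phase velocity.
    """
    positions = np.asarray(positions, dtype=float)
    velocity = np.diff(positions) * fs            # sample-to-sample deg/s
    slow = np.abs(velocity) < fast_thresh         # mask out fast phases
    if not slow.any():
        return 0.0
    return float(np.median(velocity[slow]))

# Synthetic sawtooth nystagmus: 5 deg/s slow drift with abrupt resets,
# sampled at 60 Hz (a plausible smartphone video frame rate).
fs = 60.0
t = np.arange(0, 2, 1 / fs)
pos = np.mod(5.0 * t, 2.0)
spv = slow_phase_velocity(pos, fs)  # recovers the 5 deg/s drift
```

The sign of the returned value indicates beat direction, and its magnitude tracks intensity, which is consistent with how the article describes slow-phase velocity being used.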

Researchers are also experimenting with a wearable headset equipped with deep learning capabilities to detect nystagmus in real time. Early tests in controlled environments have shown promise, although improvements are still needed to address challenges such as sensor noise and variability among individual users. (Image credit: Florida Atlantic University)

For more information, visit here.




This article first appeared in the November 2025 issue of Medical Design Briefs Magazine (Vol. 15 No. 11).

Read more articles from this issue here.

Read more articles from the archives here.