Each year, we look for 5–6 highly motivated master’s students to join our research lab. Our research covers some of the most exciting and challenging topics in mobile computing systems, wearable sensing, and Edge AI. We have a strong track record of conducting cutting-edge, impactful research, with numerous publications in top-tier conferences and journals—many of which are based on, or include significant contributions from, MSc students.
The projects listed below are just some examples that reflect the ongoing research themes in the lab. Students are encouraged to propose their own ideas or initiate discussions for custom projects, as long as they are aligned with the current research themes. We value creativity, independence, and curiosity, and are happy to create and shape new project ideas based on individual interests and background.
NOTES: Before reaching out, please take the time to read the research topics and example projects listed below (and relevant references, if needed). We receive a high volume of inquiries, so generic or non-specific requests will not be considered. To facilitate a productive discussion, your email should clearly state which research direction you are interested in and how your background aligns with the topic. When applying for a project, please also include your CV and a list of completed courses.
We are looking for a motivated student interested in on-device machine learning and Large Language Models (LLMs). This is a hands-on, engineering-oriented project focused on deploying LLMs locally on a range of computing platforms, including desktop GPUs, edge devices, smartphones, and augmented reality headsets.
Objectives. How efficient are large language models when running on everyday devices? This project aims to explore and deploy a range of LLMs across diverse local, edge, and cloud platforms, including desktop GPUs, mobile devices, and embedded systems. Students will conduct a systematic measurement study to evaluate key system-level metrics, such as inference latency, memory footprint, and runtime performance, of LLMs under various deployment configurations. We aim to understand the trade-offs and limitations of on-device LLM execution, and to identify practical challenges, such as resource constraints, thermal throttling, and hardware-software compatibility, that impact their efficient use on real-world devices. The insights from this study will inform the design of future lightweight, optimized LLM solutions for edge intelligence.
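To make the measurement study concrete, the sketch below shows the kind of minimal harness a student might start from. It is illustrative only: it assumes a CUDA-capable desktop GPU and the Hugging Face transformers library, and the model name and prompt are placeholders rather than project choices.

# Minimal sketch of an on-device LLM measurement harness (illustrative only).
# Assumes a CUDA-capable GPU and the Hugging Face `transformers` library;
# the model name and prompt below are placeholders, not project requirements.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any locally runnable causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to("cuda").eval()

prompt = "Edge AI is"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

torch.cuda.reset_peak_memory_stats()
torch.cuda.synchronize()
start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

n_new = output.shape[1] - inputs["input_ids"].shape[1]
print(f"latency: {elapsed:.3f} s  ({n_new / elapsed:.1f} tokens/s)")
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")

On smartphones or headsets, the same metrics would instead be collected with platform-specific runtimes and profiling tools; this desktop sketch only illustrates the shape of the measurements.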
Who should apply? Students with an interest in AI systems, performance analysis, and embedded or mobile platforms are encouraged to apply. Prior experience with Python, Linux, and machine learning concepts is beneficial. Success in this project will require enthusiasm for hands-on experimentation, system implementation, and a strong inclination toward system-level thinking.
An example project is to explore the use of Neural Acoustic Fields (NAFs) for mobile sensing and localization. This is a research-driven project inspired by recent advances in Neural Radiance Fields (NeRFs), extending their principles to acoustic modeling and sensing.
Objectives. One concrete goal is to develop a mobile localization system that estimates the position of a smartphone from the acoustic impulse responses of signals emitted by its speaker, using a NAF-based model. Students will adapt and optimize existing NAF architectures for localization, collect real acoustic data in indoor settings, and evaluate localization performance across various room geometries, noise conditions, and hardware setups. They will also benchmark against conventional methods, highlighting the trade-offs and robustness of NAF-based localization.
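As a rough illustration of the data flow, and not of a full NAF architecture, the sketch below shows a toy PyTorch baseline that regresses a 2-D position from the magnitude spectrum of a recorded room impulse response. All shapes, layer sizes, and the feature choice are assumptions made for illustration.

# Simplified baseline sketch (not a full NAF): regress a 2-D position from
# magnitude-spectrum features of a room impulse response (RIR).
# Shapes and layer sizes are illustrative assumptions, not project specs.
import torch
import torch.nn as nn

class RIRLocalizer(nn.Module):
    def __init__(self, n_freq_bins=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq_bins, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2),  # predicted (x, y) position
        )

    def forward(self, rir_spectrum):
        return self.net(rir_spectrum)

# Toy usage: one synthetic RIR -> magnitude spectrum -> predicted position.
rir = torch.randn(1, 512)             # stand-in for a recorded RIR
spectrum = torch.fft.rfft(rir).abs()  # 512 samples -> 257 frequency bins
position = RIRLocalizer()(spectrum)
print(position.shape)                 # torch.Size([1, 2])

A real NAF would instead learn a continuous field over emitter and listener positions; this baseline only illustrates how impulse-response features can feed a learned localizer.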
Who should apply? Students interested in signal processing, machine learning, and mobile sensing are encouraged to apply. Prior experience with PyTorch, acoustics, or smartphone sensing will be helpful. You should be curious about neural field modeling and enthusiastic about combining theoretical understanding with hands-on experimentation and system development.
We are looking for motivated students to explore the use of event cameras for high-speed eye tracking and cognitive-aware applications. This project aims to leverage the unique advantages of event-based vision sensors to overcome the limitations of traditional frame-based systems and enable ultra-high-speed eye tracking.
Objectives. One direction of this project aims to develop an ultra-fast eye-tracking system using event cameras, capable of capturing rapid eye dynamics such as microsaccades with sub-millisecond latency. Students will build (or contribute to) an end-to-end software pipeline for eye-movement capture, develop signal-processing methods for the event data stream and neural algorithms to extract gaze and motion features, and evaluate the system's performance in controlled and naturalistic settings. A second direction is to explore potential applications in cognitive-state monitoring, emotion detection, attention modeling, and authentication using high-frequency eye-movement signals captured by event cameras.
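For intuition, the sketch below implements one possible early stage of such a pipeline: binning an event stream into fixed-duration frames and taking the activity centroid as a naive stand-in for pupil-center detection. The event format, resolution, and window length are assumptions, not the project design.

# Minimal sketch of one early pipeline stage (illustrative assumption only):
# accumulate an event stream (x, y, t) into fixed-duration binary frames and
# use the activity centroid as a crude pupil-center proxy.
import numpy as np

def events_to_frames(events, width, height, window_us=1000):
    """Bin events into binary frames of `window_us` microseconds each."""
    t0 = events["t"].min()
    bins = ((events["t"] - t0) // window_us).astype(int)
    frames = np.zeros((bins.max() + 1, height, width), dtype=np.uint8)
    frames[bins, events["y"], events["x"]] = 1
    return frames

def activity_centroid(frame):
    """Centroid of active pixels; a naive stand-in for pupil detection."""
    ys, xs = np.nonzero(frame)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Toy usage with synthetic events at an assumed 640x480 sensor resolution.
rng = np.random.default_rng(0)
events = {
    "x": rng.integers(0, 640, 5000),
    "y": rng.integers(0, 480, 5000),
    "t": np.sort(rng.integers(0, 10_000, 5000)),  # microsecond timestamps
}
frames = events_to_frames(events, 640, 480)
print(len(frames), activity_centroid(frames[0]))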
Who should apply? Students with interests in computer vision, signal processing, deep learning, and cognitive systems are encouraged to apply. Experience with Python, PyTorch, computer vision, image signal processing, or working with event-based sensors is a plus.
We are looking for motivated students interested in developing novel human-centric sensing applications using head-mounted platforms, such as wearable eye trackers and AR/VR headsets. This project lies at the intersection of pervasive computing, eye tracking, and human-context sensing, with a strong emphasis on building real-time, context-aware wearable sensing systems.
Objectives. One research direction is to design resource-efficient deep learning methods that can leverage multimodal signals from head-mounted devices, such as eye gaze, head pose, and motion data, to infer rich user states, including attention, cognitive workload, emotional engagement, and fatigue. These inferred states can enable adaptive and personalized applications in AR/VR environments, such as attention-aware interfaces, cognitive load balancing, and emotion-adaptive virtual agents. A key challenge lies in optimizing these models to run in real time and on-device, under the strict computational and energy constraints typical of wearable systems.

Another complementary direction is to investigate these head-mounted systems from a privacy and security perspective, aiming to identify unintended information leakage and system vulnerabilities. Recent research has demonstrated that sensitive attributes such as user identity, mental health indicators, and even PIN codes can be inferred from seemingly innocuous signals like gaze trajectories, blink patterns, or head motion. This line of work will involve conducting empirical evaluations of privacy risks, developing threat models specific to head-worn platforms, and exploring privacy-preserving mechanisms, such as signal obfuscation, differential privacy, or adversarial defenses, to mitigate these risks.
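As a toy illustration of the signal-obfuscation idea mentioned above, the sketch below perturbs a gaze trajectory with calibrated Laplace noise, one simple mechanism from the differential-privacy family. The epsilon and sensitivity values, and the normalized coordinate format, are illustrative assumptions.

# Toy illustration of gaze-signal obfuscation via calibrated Laplace noise.
# Epsilon and sensitivity below are illustrative assumptions, not tuned values.
import numpy as np

def obfuscate_gaze(gaze_xy, epsilon=1.0, sensitivity=1.0, seed=None):
    """Add Laplace noise (scale = sensitivity / epsilon) to each gaze sample.

    gaze_xy: array of shape (n_samples, 2), normalized screen coordinates.
    Smaller epsilon -> more noise -> stronger obfuscation, lower utility.
    """
    rng = np.random.default_rng(seed)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=gaze_xy.shape)
    return np.clip(gaze_xy + noise, 0.0, 1.0)  # keep within screen bounds

# Toy usage: a short synthetic horizontal gaze sweep.
gaze = np.column_stack([np.linspace(0.2, 0.8, 100), np.full(100, 0.5)])
noisy = obfuscate_gaze(gaze, epsilon=0.5, seed=42)
print(np.abs(noisy - gaze).mean())  # average perturbation per coordinate

In practice, the interesting research question is how much noise is needed to defeat a given inference attack while keeping the gaze signal useful for interaction; this sketch only shows the mechanism itself.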
Who should apply? Students with an interest in human-centered computing, multimodal sensing, and privacy-aware machine learning are encouraged to apply. Background or coursework in computer vision, deep learning, or AR/VR systems will be helpful. Experience with tools such as Unity, PyTorch, or eye-tracking hardware (e.g., Pupil Labs, Meta Quest Pro) is a plus. This project is suitable for students who are excited about combining AI, sensing, and HCI, and who are curious about either developing efficient inference models for on-device understanding of user states, or analyzing and mitigating privacy risks in next-generation wearable systems.