MSc Thesis Projects

Example project directions for students interested in joining the lab for research-oriented MSc thesis work in mobile systems, human-centered sensing, wearable AI, edge intelligence, and immersive computing.

We are always looking for highly motivated candidates to join our research lab. We have both lab-level and university-level funding opportunities to support candidates at different levels, including Research Assistant, MSc, PhD, Postdoc, and Research Assistant Professor positions. Our research covers some of the most exciting and challenging topics in mobile systems, human-centric computing, edge AI, and immersive computing (AR/VR). We maintain a strong track record of conducting cutting-edge and impactful research, with publications in top-tier conferences and journals.

The projects listed below are only examples that reflect the broader research themes in the lab. Candidates are encouraged to propose their own ideas or initiate discussions for custom projects, as long as they align well with the current research directions. We value creativity, independence, and curiosity, and we are happy to shape project ideas based on individual interests and background.

Before reaching out, please take the time to read the research directions and example projects below carefully, along with relevant references when needed.

We receive a high volume of inquiries, so generic emails will not be considered. To facilitate a productive discussion, your email should clearly state which research direction interests you and how your background aligns with that topic.

Example Topics

Topic #1

On-device LLM Deployment and System Measurement

We are looking for a motivated student interested in on-device machine learning and large language models (LLMs). This is a hands-on, engineering-oriented project focused on deploying LLMs locally on a range of computing platforms, including desktop GPUs, edge devices, smartphones, and augmented reality headsets.

Objectives

A central question in this project is: how efficient are large language models when running on everyday devices? The goal is to explore and deploy a range of LLMs across diverse local, edge, and cloud-connected platforms, including desktop GPUs, mobile devices, and embedded systems.

Students will conduct a systematic measurement study to evaluate key system-level metrics such as inference latency, memory footprint, runtime performance, and deployment stability under different configurations. The project also aims to understand practical constraints such as limited resources, thermal throttling, and hardware-software compatibility issues that affect real-world LLM deployment.
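
As a minimal illustration of the measurement loop, the sketch below times repeated generation calls and reads the process's peak resident memory. Here `generate` is a hypothetical stand-in for whatever backend is actually deployed (llama.cpp bindings, an ONNX runtime, a phone-side RPC stub); the memory figure is the Unix resident-set peak, not GPU memory, and a real study would add warm-up runs, token-throughput accounting, and thermal monitoring:

```python
import resource
import statistics
import time

def benchmark(generate, prompt, runs=5):
    """Measure per-call latency and peak memory for any text-generation callable.

    `generate` is a placeholder for the deployed backend; swap in the
    real inference call when benchmarking an actual model.
    """
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        generate(prompt)
        latencies.append(time.perf_counter() - start)
    # ru_maxrss is reported in KiB on Linux (bytes on macOS).
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {
        "latency_mean_s": statistics.mean(latencies),
        "latency_p95_s": sorted(latencies)[int(0.95 * (runs - 1))],
        "peak_rss_kib": peak,
    }
```

Running the same harness across desktop, edge, and mobile targets keeps the metrics directly comparable between configurations.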

The resulting insights can help inform the design of future lightweight and optimized LLM solutions for edge intelligence.

Who should apply?

Students with an interest in AI systems, performance analysis, and embedded or mobile platforms are encouraged to apply. Prior experience with Python, Linux, and machine learning concepts is beneficial. This project is particularly suitable for students who enjoy hands-on experimentation, prototyping, and system-level thinking.

Topic #2

Neural Acoustic Fields-based Mobile Sensing and Localization

This project explores the use of Neural Acoustic Fields (NAFs) for mobile sensing and localization. It is a research-driven topic inspired by recent advances in Neural Radiance Fields (NeRFs), extending similar ideas to acoustic modeling and spatial sensing.

Objectives

One example direction is to develop a mobile localization system that estimates the position of a smartphone from acoustic impulse responses emitted by the phone's speaker, using a NAF-based model. Students will adapt and optimize existing NAF architectures for localization, collect real acoustic data in indoor environments, and evaluate performance under different room geometries, noise conditions, and hardware settings.

The project will also involve benchmarking against conventional approaches and analyzing the trade-offs, limitations, and robustness of NAF-based localization pipelines.
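
One conventional baseline to benchmark against is classical time-of-arrival multilateration: take the first-peak time of each measured impulse response, convert it to a range, and solve a linearized least-squares problem for position. A minimal sketch under idealized assumptions (known microphone positions, noiseless timings; all values here are hypothetical):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def localize_toa(anchors, toas):
    """Estimate a 2-D/3-D position from times of arrival at known anchors.

    anchors: (n, k) coordinates of n microphones (n >= k + 1)
    toas:    (n,) first-peak times of each impulse response, in seconds

    Subtracting the first range equation from the rest cancels the
    quadratic |x|^2 term, leaving an ordinary least-squares system.
    """
    anchors = np.asarray(anchors, float)
    d = SPEED_OF_SOUND * np.asarray(toas, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - (d[1:] ** 2 - d[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

A NAF-based pipeline would replace the hand-picked first-peak feature with a learned model of the full impulse response, which is where gains under reverberation and noise are expected.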

Who should apply?

Students interested in signal processing, machine learning, and mobile sensing are encouraged to apply. Prior experience with PyTorch, acoustics, or smartphone sensing will be helpful. This topic is suitable for students who are curious about neural field modeling and enthusiastic about combining theoretical understanding with hands-on system development and experimentation.

Topic #3

Event Camera-based Eye Tracking and Applications

We are looking for motivated students to explore the use of event cameras for high-speed eye tracking and cognitive-aware applications. This project aims to leverage the unique advantages of event-based vision sensors to overcome the limitations of traditional frame-based eye-tracking systems and enable ultra-high-speed tracking.

Objectives

One direction of this project is to develop an ultra-fast eye-tracking system using event cameras, capable of capturing rapid eye dynamics such as microsaccades with sub-millisecond latency. Students will build or contribute to an end-to-end software pipeline for eye movement capture, develop signal processing methods for event-based data streams, and design neural algorithms to extract gaze and motion features.
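
To give a sense of the data involved: an event camera emits a sparse stream of (timestamp, x, y, polarity) tuples rather than frames. A minimal, hypothetical pre-processing step is to slice the stream into ~1 ms windows and take the event centroid of each window as a crude position estimate; real pipelines would fit a pupil ellipse or run a learned model instead:

```python
import numpy as np

def event_centroids(events, window_us=1000):
    """Slice an event stream into fixed time windows and return the
    per-window centroid of event coordinates.

    events: (n, 4) array of (t_us, x, y, polarity), timestamps sorted.
    With ~1 ms windows this yields a 1 kHz position estimate, well above
    typical frame-based eye-tracker rates.
    """
    events = np.asarray(events, float)
    t = events[:, 0]
    bins = ((t - t[0]) // window_us).astype(int)
    centroids = []
    for b in range(bins.max() + 1):
        sel = events[bins == b]
        if len(sel):
            centroids.append(sel[:, 1:3].mean(axis=0))
        else:
            # Empty window: no events fired, mark as missing.
            centroids.append(np.full(2, np.nan))
    return np.array(centroids)
```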

A second direction is to explore potential applications in cognitive state monitoring, emotion detection, attention modeling, and authentication using high-frequency eye movement signals captured by event-based sensors.

Who should apply?

Students with interests in computer vision, signal processing, deep learning, and cognitive systems are encouraged to apply. Experience with Python, PyTorch, image processing, computer vision, or event-based sensors is a plus.

Topic #4

Adaptive and Secure Human-centric Applications for AR/VR Headsets

We are looking for motivated students interested in developing novel human-centric sensing applications using head-mounted platforms, such as wearable eye trackers and AR/VR headsets. This topic lies at the intersection of pervasive computing, eye tracking, and human-context sensing, with a strong emphasis on building real-time and context-aware wearable systems.

Objectives

One research direction is to design resource-efficient deep learning methods that leverage multimodal signals from head-mounted devices, such as eye gaze, head pose, and motion data, to infer rich user states including attention, cognitive workload, emotional engagement, and fatigue. These inferred states can support adaptive and personalized applications in AR/VR, such as attention-aware interfaces, cognitive load balancing, and emotion-adaptive virtual agents.

Another complementary direction is to investigate these systems from a privacy and security perspective. Recent research has shown that sensitive attributes such as user identity, mental health indicators, and even PIN codes can sometimes be inferred from signals like gaze trajectories, blink patterns, or head motion. This direction involves empirical evaluation of privacy risks, development of threat models for head-worn platforms, and exploration of privacy-preserving mechanisms such as signal obfuscation, differential privacy, or adversarial defenses.
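
As a minimal illustration of the signal-obfuscation idea, the sketch below adds Laplace noise to a gaze trajectory; the `epsilon`/`sensitivity` parameterization follows the standard Laplace mechanism from differential privacy, and the values here are purely illustrative, not calibrated for any real threat model:

```python
import numpy as np

def obfuscate_gaze(gaze, epsilon=1.0, sensitivity=2.0, rng=None):
    """Add Laplace noise to a gaze trajectory.

    gaze: (n, 2) gaze points normalized to [-1, 1]
    Scale = sensitivity / epsilon (Laplace mechanism); smaller epsilon
    means stronger obfuscation. The goal is to degrade biometric
    features such as saccade signatures while keeping coarse attention
    regions usable for adaptive interfaces.
    """
    rng = np.random.default_rng(rng)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=np.shape(gaze))
    return np.clip(np.asarray(gaze, float) + noise, -1.0, 1.0)
```

Evaluating such a defense means measuring both sides of the trade-off: how much an attacker's inference accuracy drops, and how much the legitimate application's utility degrades at the same noise level.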

Who should apply?

Students with an interest in human-centered computing, multimodal sensing, and privacy-aware machine learning are encouraged to apply. Background or coursework in computer vision, deep learning, AR/VR systems, or HCI will be helpful. Experience with Unity, PyTorch, or eye-tracking hardware such as Pupil Labs or Meta Quest Pro is a plus. This project is especially suitable for students excited about combining AI, sensing, and HCI in next-generation wearable systems.
