Robotics engineer passionate about autonomous systems, computer vision, and building intelligent machines that interact with the real world.
I’m an M.Sc. student in Robotics Engineering with a strong focus on building real-world systems. I’m driven by the challenge of turning complex ideas into working hardware, from power distribution and embedded communication to intelligent control and machine learning.
My work spans embedded systems, power electronics, sensor integration, and applied computer vision. I enjoy architecting complete systems, designing PCBs, integrating distributed nodes, and debugging them until they’re robust and reliable.
What motivates me most is bridging hardware and software: taking a concept from schematic and code all the way to a functioning robotic platform. I’m particularly interested in system integration, intelligent automation, and applied AI in robotics.
This project focused on designing and integrating the full electrical and communication backbone of a bionic humanoid arm capable of intent-driven and adaptive grasping. The system combined EMG-based intent detection, computer vision, and distributed embedded control to enable intelligent manipulation. I designed and implemented a four-rail power distribution unit (12V, 7.4V, 6V, 5V) supporting ten actuators and five distributed ESP32 nodes, and helped architect a CAN-bus communication network connected to a Jetson AGX Orin for high-level processing. A major part of the work involved system-level integration, power integrity, and debugging physical-layer communication issues, including root-cause analysis of failures related to power-domain interactions and reverse current propagation. This project provided hands-on experience in real robotic system architecture, embedded communication, and hardware reliability engineering.
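To give a flavor of the kind of embedded communication work involved, here is a minimal sketch of packing an actuator command into the fixed 8-byte CAN data field. The field names, units, and layout here are hypothetical, invented for illustration; the actual frame format used on the arm's bus is not described above.

```python
import struct

# Hypothetical 8-byte CAN payload for one actuator command:
#   node_id (uint8), actuator_id (uint8), target angle (int16, 0.1-degree units),
#   current limit (uint16, mA), flags (uint16). Little-endian byte order.
FRAME_FMT = "<BBhHH"  # struct.calcsize(FRAME_FMT) == 8, the classic CAN data length

def pack_actuator_cmd(node_id: int, actuator_id: int,
                      angle_deg: float, max_current_ma: int,
                      flags: int = 0) -> bytes:
    """Encode one actuator command into an 8-byte CAN data field."""
    return struct.pack(FRAME_FMT, node_id, actuator_id,
                       round(angle_deg * 10), max_current_ma, flags)

def unpack_actuator_cmd(payload: bytes):
    """Decode the 8-byte data field back into engineering units."""
    node_id, actuator_id, angle_raw, max_current_ma, flags = struct.unpack(FRAME_FMT, payload)
    return node_id, actuator_id, angle_raw / 10, max_current_ma, flags
```

A fixed binary layout like this keeps every command inside a single classic CAN frame, which avoids fragmentation logic on the ESP32 nodes and keeps bus latency predictable.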
The goal of this project was to develop an affordable wearable glove capable of recognizing American Sign Language (ASL) alphabet gestures using low-cost sensing alternatives to traditional flex sensors. I built the full physical prototype, including soldering, sensor assembly, and wearable integration. The glove combined LED–photodiode bend sensors, Hall effect sensors, and an accelerometer to capture finger motion and hand orientation. Throughout development, I supported iterative hardware debugging and optimization under tight budget and time constraints. The result was a functional, low-cost sensing system demonstrating how accessible hardware can be used for gesture recognition and assistive technologies.
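One recurring step in a sensor glove like this is mapping raw, per-finger ADC readings onto a common scale before classification, since every LED–photodiode pair has different flat and fully-bent readings. The function below is a generic calibration sketch of that idea, not the glove's actual firmware; the calibration endpoints are assumed to come from a quick per-user calibration routine.

```python
def normalize_bend(raw: int, flat_raw: int, bent_raw: int) -> float:
    """Map a raw ADC reading onto [0, 1] using per-finger calibration endpoints.

    flat_raw and bent_raw are the readings captured with the finger fully
    straight and fully bent during calibration.
    """
    span = bent_raw - flat_raw
    if span == 0:
        return 0.0  # degenerate calibration; avoid division by zero
    value = (raw - flat_raw) / span
    return min(max(value, 0.0), 1.0)  # clamp noise outside the calibrated range
```

Normalizing each channel this way means the downstream gesture classifier sees consistent features regardless of sensor-to-sensor variation or how tightly the glove fits.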
This project investigated age-related bias in facial emotion recognition (FER) systems and explored training strategies to improve performance on elderly faces. We merged three datasets (FACES, RAF-DB, and Tsinghua) into a unified dataset of 16,399 images and structured it into training (70%), validation (15%), and test (15%) splits. Through diverse dataset fine-tuning and iterative experimentation, I helped improve validation accuracy from a 75.15% baseline to 97.6%. I also set up the full machine learning workflow for the POSTER++ model, including data preprocessing, training pipelines, and experiment tracking, enabling reliable evaluation and reproducibility across the team. This work highlighted the importance of dataset diversity in building fair and robust AI systems.
I'm always interested in discussing robotics, collaborating on projects, or exploring opportunities. Feel free to reach out!