This page contains a non-exhaustive selection of projects I have worked on, grouped by institution and time period.
For context on my early entrepreneurship experience during my PhD, see the short section on venture formation.
Amazon (2023-Present)
At Amazon, I have led the development of multimodal agentic AI assistants that translate large language model capabilities into real-world, task-oriented products across healthcare and commerce.
A patient-facing, multimodal assistant designed to support text- and voice-based interactions in primary care, integrating language, audio, and structured workflows under safety and regulatory constraints.
Applied research and prototyping of video-based, multimodal AI experiences that combine speech, vision, and structured interaction flows to support natural, task-oriented use.
A browser-based agentic system that helps customers complete purchases across third-party websites using multimodal perception, reasoning, and tool-enabled actions.
Continuous, production-scale measurement of accuracy, policy compliance, and user trust for Amazon’s generative shopping assistant, replacing manual review with automated evaluation and actionable quality signals across millions of model responses per week.
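To make the evaluation work above more concrete, here is a minimal sketch (in Python) of what rubric-based automated grading of assistant responses can look like. The rubric dimensions, the `judge` callable, and the helper names are illustrative assumptions, not the production system.

```python
# Illustrative sketch (not the production pipeline): rubric-based automated
# evaluation of assistant responses, producing per-response quality scores
# that can replace manual review and be aggregated into periodic signals.

from dataclasses import dataclass
from statistics import mean
from typing import Callable

RUBRIC = ("factual_accuracy", "policy_compliance", "helpfulness")  # assumed dimensions


@dataclass
class Judgement:
    response_id: str
    scores: dict[str, float]  # each rubric dimension scored in [0, 1]


def evaluate_response(response_id: str, prompt: str, response: str,
                      judge: Callable[[str], dict[str, float]]) -> Judgement:
    """Ask a judge model (any callable returning per-dimension scores) to grade one response."""
    instruction = (
        "Grade the assistant response on each rubric dimension from 0 to 1.\n"
        f"Rubric: {', '.join(RUBRIC)}\n"
        f"User prompt: {prompt}\n"
        f"Assistant response: {response}"
    )
    return Judgement(response_id, judge(instruction))


def weekly_signal(judgements: list[Judgement]) -> dict[str, float]:
    """Aggregate per-response scores into dashboard-level quality signals."""
    return {dim: mean(j.scores[dim] for j in judgements) for dim in RUBRIC}
```

In a setup like this, per-response `Judgement` records would be aggregated over large volumes of traffic to yield the accuracy, compliance, and trust signals described above.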
Tempus AI (2023)
At Tempus, my work focused on developing applied machine learning systems for clinical decision support, with an emphasis on computer vision models for quantitative analysis of pathology data used in oncology workflows.
Google (2019-2023)
A population health management initiative under Google Care Studio focused on using machine learning to identify high-risk outpatients and support proactive care through closed-loop prediction and measurement workflows.
A large-scale clinical data infrastructure and analytics initiative focused on aggregating, standardizing, and enabling analysis of population-scale EHR data to support clinical workflows and machine learning in healthcare.
MIT (2014-2019)
At MIT, my PhD research focused on applying AI and multimodal sensing to the objective measurement of human pain, combining physiological signals, computer vision, and affective computing. In parallel, I pursued several additional projects in human-centered AI.
A low-cost, citizen-science platform for measuring and analyzing air pollution particles using do-it-yourself atomic force microscopes and human-in-the-loop image analysis.
A large-scale NIH-funded study combining multimodal sensing and machine learning to model sleep, stress, and mental health dynamics in real-world social networks.
We investigated the use of electrodermal activity (EDA), heart rate variability (HRV), and facial expression analysis as candidate endpoints for deriving quantitative pain scores; a simplified feature-extraction sketch appears at the end of this page.
Co-founded an early-stage startup through the Antler accelerator, incorporated and based in Singapore, focused on applying AI, computer vision, and IoT to workforce and task management in the construction industry.
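As referenced in the pain-measurement item above, here is a minimal sketch of extracting simple EDA and HRV features and regressing self-reported pain ratings on them. The thresholds, window handling, and model choice are assumptions for illustration, not the study's actual methods.

```python
# Illustrative sketch only: toy feature extraction from EDA and inter-beat
# intervals (IBIs), combined into a quantitative pain score with a simple
# regressor. Thresholds and the model choice are assumed, not the study's pipeline.

import numpy as np
from sklearn.linear_model import Ridge


def eda_features(eda: np.ndarray, fs: float = 4.0) -> np.ndarray:
    """Tonic skin-conductance level plus a crude count of phasic peaks (SCRs)."""
    tonic = eda.mean()
    phasic = np.diff(eda)
    # A peak is a rise above an assumed 0.02 uS/sample threshold followed by a drop.
    scr_count = int(np.sum((phasic[:-1] > 0.02) & (phasic[1:] <= 0)))
    duration_s = len(eda) / fs
    return np.array([tonic, scr_count / duration_s])  # level, SCRs per second


def hrv_features(ibi_ms: np.ndarray) -> np.ndarray:
    """Standard time-domain HRV: mean inter-beat interval and RMSSD (both in ms)."""
    rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))
    return np.array([ibi_ms.mean(), rmssd])


def fit_pain_model(eda_windows, ibi_windows, reported_pain):
    """Regress self-reported pain ratings (e.g., 0-10 scale) on physiological features."""
    X = np.array([np.concatenate([eda_features(e), hrv_features(i)])
                  for e, i in zip(eda_windows, ibi_windows)])
    model = Ridge(alpha=1.0).fit(X, np.asarray(reported_pain))
    return model  # model.predict(new_X) yields continuous pain-score estimates
```

In practice, facial expression features and richer signal processing would be added, but the basic pattern of mapping multimodal physiological features to a continuous pain estimate is the same.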