Personalized Automatic Estimation of Self-Reported Pain Intensity from Facial Expressions
I delivered an oral presentation at the Computer Vision and Pattern Recognition (CVPR) 2017 Workshop on Deep Affective Learning and Context Modeling, where I presented our work on personalized estimation of self-reported pain intensity from facial expressions. The project introduced a two-stage machine learning framework that combines recurrent neural networks with a personalized Hidden Conditional Random Field model to estimate Visual Analog Scale (VAS) pain scores from facial landmarks.
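For readers who want a concrete picture of the first stage, here is a minimal sketch in PyTorch. It assumes 2D coordinates for 66 facial landmarks per frame (the format used in the UNBC-McMaster annotations); the class name, layer sizes, and training details are illustrative, not the exact configuration from the paper.

```python
# Sketch of a stage-1 model: a bidirectional LSTM that regresses a
# frame-level pain score from a sequence of facial landmarks.
# Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class PSPIEstimator(nn.Module):
    """Bidirectional LSTM that predicts one PSPI score per frame."""
    def __init__(self, n_landmarks: int = 66, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=2 * n_landmarks,   # (x, y) per landmark
            hidden_size=hidden_size,
            bidirectional=True,
            batch_first=True,
        )
        self.head = nn.Linear(2 * hidden_size, 1)  # one score per frame

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        # landmarks: (batch, frames, 2 * n_landmarks)
        features, _ = self.lstm(landmarks)
        return self.head(features).squeeze(-1)     # (batch, frames)

# Example: a batch of 4 sequences, 100 frames each, 66 (x, y) landmarks.
model = PSPIEstimator()
pspi = model(torch.randn(4, 100, 132))  # -> shape (4, 100)
```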
As described in our paper, the first stage employed a bidirectional long short-term memory (LSTM) network to estimate frame-level Prkachin and Solomon Pain Intensity (PSPI) scores from sequences of facial landmarks. These estimates were then fed into a personalized sequence model that incorporates an individual facial expressiveness score, which quantifies how strongly each person tends to express pain relative to their self-reported ratings.
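To make the personalization idea concrete, the sketch below summarizes one plausible way such an expressiveness score could be computed: the ratio of a person's typical peak facial response to their typical self-report. This particular formula is an assumption for illustration, not necessarily the paper's exact definition, and the PSPI sequences are taken to come from a stage-1 model like the one above.

```python
# Hedged sketch of a per-person facial expressiveness score: how strongly
# the face registers pain relative to what the person reports. The ratio
# used here is an illustrative assumption, not the paper's exact formula.
import numpy as np

def expressiveness_score(pspi_sequences, vas_ratings, eps=1e-6):
    """Ratio of mean per-sequence peak PSPI to mean self-reported VAS,
    computed over one person's training sequences."""
    peaks = [float(np.max(seq)) for seq in pspi_sequences]  # peak response per sequence
    return float(np.mean(peaks)) / (float(np.mean(vas_ratings)) + eps)

# Example: two sequences of stage-1 PSPI estimates and their VAS labels.
seqs = [np.array([0.0, 1.5, 3.0, 2.0]), np.array([0.5, 4.0, 1.0])]
score = expressiveness_score(seqs, vas_ratings=[6.0, 8.0])  # low score = stoic expresser
```

In a setup like this, the score would be supplied to the second-stage sequence model as an extra input so that the same observed facial activity can map to different VAS predictions for stoic versus expressive individuals.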
In the presentation, I described how this personalized approach improves VAS prediction by accounting for large inter-subject differences in facial expressiveness. Evaluated on the UNBC-McMaster Shoulder Pain dataset, the method demonstrated clear advantages over non-personalized baselines, particularly for individuals whose observed facial responses deviate substantially from their self-reported pain levels.
