
Stereoscopic facial imaging for pain assessment using rotational offset microlens arrays based structured illumination

Abstract

Conventional pain assessment methods such as patients’ self-reporting limit the possibility of easy pain monitoring, even though pain plays an important role in clinical practice. Here we report a pain assessment method based on a 3D face reading camera assisted by dot-pattern illumination. The face reading camera module (FRCM) consists of a stereo camera and a dot projector, which allow quantitative measurement of facial expression changes without subjective human judgement. The rotational offset microlens arrays (roMLAs) in the dot projector form a uniform, dense dot pattern on the human face. The dot projection facilitates evaluating three-dimensional changes of facial expression by improving the 3D reconstruction of non-textured facial surfaces. In addition, the FRCM provides consistent pain ratings from 3D data, regardless of head movement. This pain assessment method can provide a new guideline for precise, real-time, and continuous pain monitoring.

Introduction

Clinical practice requires quantitative and precise assessment of pain because the degree of pain reflects the treatment outcome of chronic diseases [1] and is essential for monitoring non-verbal patients [2]. Repeated pain monitoring has poor sustainability due to the heavy workload of hospital nurses, even though regular pain evaluation improves treatment outcomes [3]. Pain rating methods such as the Numerical Rating Scale (NRS) and the Visual Analog Scale (VAS) often require self-reporting or regular visits by a medical team, leading to a limited application range [4,5,6,7]. The Wong-Baker Faces Pain Rating Scale (WBS) is widely utilized for facial-expression-based pain assessment of non-communicative patients such as children or people with intellectual disabilities [8]. The WBS estimates pain intensity by comparing reference images with the patient’s facial expression. However, conventional methods involve long assessment times, confined applications, or unsuitability for continuous monitoring [9].

The Facial Action Coding System (FACS), proposed in 1978, defines specific facial muscle movements as action units (AUs), classifying changes in muscle movement by facial expression. A well-trained expert directly encodes the facial expression by examining still images or video of a person [10]. The Prkachin and Solomon pain intensity (PSPI) utilizes several AUs associated with pain and serves as the ground truth of pain intensity [11, 12]. Ambiguous and subtle movements of a human face are subjectively classified by human raters for AU-based pain assessment [13]. Several previous works have also demonstrated 2D facial-expression-based pain assessment because facial expression change is closely related to pain intensity [9, 14]. However, environmental disturbances such as lighting, makeup, or absent eyebrows hinder precise assessment when only 2D information of the face is available [6, 15, 16]. Three-dimensional facial image acquisition further improves pain assessment but still faces technical issues for precise assessment [17, 18]. As an alternative, bioelectrical signals such as the electrocardiograph and electromyograph contribute to pain monitoring, but they still require a clear explanation of the relationship between bioelectrical signals and pain, and must overcome high invasiveness [3, 19, 20]. The distance between a stereo camera and a subject is often estimated from binocular disparity [21], while non-textured surfaces such as a human face lower the 3D reconstruction quality [22]. Recently, structured-illumination-driven stereo cameras have substantially improved binocular disparity estimation for non-textured samples [23].

Fig. 1

Schematic illustrations of quantitative pain rating using a face reading camera module (FRCM). a Concept of 3D face imaging with the FRCM. A stereo camera obtains depth information of the face; NIR dot projection enhances the performance of stereo matching. b Comparison of conventional pain metrics and pain assessment using the FRCM. The FRCM measures pain intensity from changes in facial expression, while conventional methods generally depend on self-reporting

Here we report a 3D face reading camera module (FRCM) for pain rating (Fig. 1). The FRCM consists of a stereo camera and a dot projector to read facial expressions in three dimensions for numerical analysis of pain. The dot projector contains rotational offset microlens arrays (roMLAs) as a diffractive optical element (DOE), which allow structured illumination in a compact stereoscopic imaging system [24, 25]. The roMLAs provide dense dot patterns of high uniformity and high contrast on the human face and efficiently improve precise 3D reconstruction for quantifying small facial expression changes, as well as reducing the influence of head movements under pain.

Fig. 2

Rotational offset microlens arrays (roMLAs) and FRCM. a Fully packaged FRCM. b The main components of the FRCM. A dot projector consists of a laser diode, a collimating lens, and a DOE. c Fabrication process of roMLAs. d SEM images of roMLAs

Results and discussion

The FRCM comprises a stereo camera (oCamS-1CGN-U, Withrobot, Korea) for obtaining image disparity and a dot projector for enhancing image quality (Fig. 2a, b). The dot projector contains a laser diode (L785P090, Thorlabs, U.S.), a collimating lens (A390TM-B, Thorlabs, U.S.), and roMLAs as a DOE (Fig. 2b). Near-infrared light (785 nm) from the laser diode makes the dot pattern invisible to the human eye, preventing discomfort during pain assessment. The FRCM obtains clear images within a 50–100 cm distance range, and the illumination intensity at 50 cm from the dot projector is \(10\ {\upmu }\text{W}/\text{c}{\text{m}}^{2}\). The dot projector meets the safety level of Laser Class 1 defined by the International Electrotechnical Commission (IEC 60825-1 Ed. 3.0), which ensures the safety of eyes and skin under dot projection. The roMLAs were microfabricated by forming two hexagonal microlens arrays with a rotational offset angle on both sides of a glass wafer (Fig. 2c) [24]. The microlenses of the roMLAs have high lens curvature and high fill factor to obtain a large field of view for diffraction. In particular, the rotational offset between the microlens arrays is set to 13.25° to create dot patterns in a hexagonal arrangement with high contrast and density. Figure 2d shows scanning electron micrographs of one side of the fabricated DOE.
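The densification effect of the rotational offset can be illustrated with a simple far-field model: each periodic hexagonal array diffracts light into a hexagonal grid of orders, and a cascaded double-sided element produces dots at the pairwise sums of the two grids’ directions. The Python sketch below is illustrative only; the lattice vectors, order count, and rounding tolerance are assumptions, not the device’s actual design values.

```python
# Sketch of why a rotational offset between two stacked hexagonal
# microlens arrays densifies the projected dot pattern: the combined
# far field places dots at sums of the two arrays' diffraction orders.
import numpy as np

def hex_orders(n, theta_deg):
    """Diffraction-order directions of a hexagonal lattice rotated by theta."""
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3) / 2])
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    pts = [rot @ (i * a1 + j * a2)
           for i in range(-n, n + 1) for j in range(-n, n + 1)]
    return np.array(pts)

front = hex_orders(3, 0.0)    # first array, unrotated
back = hex_orders(3, 13.25)   # second array, 13.25 deg rotational offset

# All pairwise sums of the two order grids, deduplicated.
dots = np.unique(
    np.round((front[:, None, :] + back[None, :, :]).reshape(-1, 2), 6),
    axis=0)

# The offset fills in positions between the single-array orders,
# so the combined pattern contains far more distinct dots.
print(len(front), len(dots))
```

With a zero offset the pairwise sums collapse back onto the original lattice, which is why the rotation is what buys the extra dot density.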

The 3D reconstruction of a human face is substantially improved by using roMLA-driven structured light illumination (Fig. 3). The non-textured surface of a target subject often produces distorted or void portions in the reconstructed image due to matching errors in the stereo matching algorithm. The dot projection of the FRCM facilitates accurate depth estimation by adding texture to the subject’s skin, enabling the capture of facial expression changes. The FRCM estimates feature depth precisely, within a fraction of a millimeter.
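The underlying depth estimation follows the standard stereo relation Z = fB/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch, with an assumed focal length and baseline rather than the actual calibration of the oCamS-1CGN-U:

```python
# Minimal depth-from-disparity sketch for a rectified stereo pair.
# Focal length and baseline below are illustrative placeholders.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Z = f * B / d; zero (failed-match) disparities map to infinity."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_m / d

focal_px = 700.0    # focal length in pixels (assumed)
baseline_m = 0.12   # stereo baseline in meters (assumed)

# On a textureless cheek the matcher may return no disparity (0),
# leaving holes in the point cloud; dot projection restores
# matchable texture so valid disparities are found.
disparities = np.array([120.0, 105.0, 0.0])
print(depth_from_disparity(disparities, focal_px, baseline_m))
```

Because depth is inversely proportional to disparity, a sub-pixel disparity error at these working distances translates into a sub-millimeter depth error, consistent with the resolution claimed above.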

Fig. 3

3D point cloud reconstruction obtained from the FRCM. a 3D face imaging without dot projection often results in incorrect stereo matching. Depth information acquisition fails or is obtained incorrectly in some parts of a non-textured surface. b Dot projection enhances the stereo matching results on non-textured surfaces

The numerical pain assessment was performed using the Mahalanobis distance of geometric features related to pain. Figure 4a shows how the Mahalanobis distance behaves in a two-dimensional space. This metric suits quantitative pain assessment because the weight of each variable should be set differently according to its importance. Several distance features were selected for the pain intensity metric to measure the change of facial expression under pain (Fig. 4b). The distance features are designed to reflect facial expressions with a significant relationship to pain, such as brow lowering and upper lip raising [9, 18]. The weight factor of each feature was set to a larger value as the number of corresponding pain-related AUs increases [11].
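The feature extraction step can be sketched as eight Euclidean distances between 3D facial landmarks. The landmark names and toy coordinates below are hypothetical stand-ins, not the paper’s actual landmark set:

```python
# Hedged sketch of the distance-feature step: eight Euclidean distances
# between 3D facial landmarks (eyebrow-mouth, eyebrow-eye, mouth height,
# mouth width, eye-mouth), following Fig. 4b's feature definitions.
import numpy as np

def feature_vector(lm):
    """lm maps landmark name -> 3D point; returns (x1..x8) in meters."""
    def d(a, b):
        return float(np.linalg.norm(np.asarray(lm[a]) - np.asarray(lm[b])))
    return np.array([
        d("brow_l", "mouth_l"),   # x1: eyebrow to mouth (left)
        d("brow_r", "mouth_r"),   # x2: eyebrow to mouth (right)
        d("brow_l", "eye_l"),     # x3: eyebrow to eye (left)
        d("brow_r", "eye_r"),     # x4: eyebrow to eye (right)
        d("lip_top", "lip_bot"),  # x5: mouth height
        d("mouth_l", "mouth_r"),  # x6: mouth width
        d("eye_l", "mouth_l"),    # x7: eye to mouth (left)
        d("eye_r", "mouth_r"),    # x8: eye to mouth (right)
    ])

# Toy landmark coordinates (meters) for a roughly face-shaped layout.
lm = {
    "brow_l": (-0.03, 0.05, 0.0),  "brow_r": (0.03, 0.05, 0.0),
    "eye_l":  (-0.03, 0.03, 0.0),  "eye_r":  (0.03, 0.03, 0.0),
    "mouth_l": (-0.02, -0.04, 0.0), "mouth_r": (0.02, -0.04, 0.0),
    "lip_top": (0.0, -0.03, 0.0),   "lip_bot": (0.0, -0.05, 0.0),
}
features = feature_vector(lm)
print(features.shape)  # (8,)
```

Computing these distances from the 3D point cloud, rather than from 2D pixel positions, is what makes the features insensitive to head rotation in the experiment below.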

Fig. 4

a Contour lines of the Mahalanobis distance in 2D space. b Distance features for pain rating. x1, x2: from eyebrow to mouth; x3, x4: from eyebrow to eye; x5: mouth height; x6: mouth width; x7, x8: from eye to mouth

The pain rating was performed while rotating a plaster cast (Bust of Marcus Vipsanius Agrippa), whose head size is similar to that of a real person, in order to verify tolerance to head motion. The pain intensities obtained without depth information were calculated simultaneously to show the rotational effect of the head. Pain rating using facial features extracted from 2D images depends significantly on the rotation angle, even though the facial expression of the cast remains constant. The experimental results clearly indicate that pain intensities from 3D images are consistent, unlike those from 2D images (Fig. 5a). This substantial reduction of motion artifacts is very important for quantitative and precise pain assessment [26]. The evaluation of facial expression change was also successfully demonstrated using the FRCM. A volunteer acted three different facial expressions representing neutral, moderate pain, and strong pain. The facial expression for strong pain shows a high pain intensity value (Fig. 5b); this value is continuous, unlike conventional methods such as the NRS and WBS, which indicate discrete pain values. Moreover, human subjective judgement is not involved in the pain assessment process using the FRCM.

Fig. 5

Measured pain rating with the FRCM. a Comparison of the pain rating obtained from 3D and 2D imaging depending on the rotation angle of a plaster cast. The pain intensities obtained from 3D images (mean: 0.047, std.: 0.012) are more consistent than those from 2D images (mean: 0.196, std.: 0.078). b Measured pain intensity obtained from 3D images of human facial expressions. The strong pain expressions exhibit the highest value

Conclusions

The dot pattern illumination based on roMLAs plays an important role in accurate 3D reconstruction of human facial expression. The numerical pain analysis was successfully performed using the 3D geometric features extracted by the FRCM. The FRCM achieves consistent pain intensities regardless of face direction thanks to 3D face reading. The facial image acquisition, feature extraction, and analysis were performed as separate steps in the pain assessment process; this procedure can be further performed in real time by employing a facial landmark detection algorithm. The roMLA-driven structured illumination opens up the potential for precise 3D imaging to improve automatic toolkits for facial expression analysis, thanks to its simple and compact implementation. The experimental results clearly imply that 3D facial imaging based on dot projection overcomes the weaknesses of conventional pain assessment such as motion artifacts, long assessment times, and subjective decisions. This new face reading camera can provide a new direction for facile, real-time pain monitoring in advanced clinical applications.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AUs: Action units

DOE: Diffractive optical element

FACS: Facial action coding system

FRCM: Face reading camera module

NRS: Numerical rating scale

PSPI: Prkachin and Solomon pain intensity

roMLAs: Rotational offset microlens arrays

VAS: Visual analog scale

WBS: Wong-Baker faces pain rating scale

References

  1. Pathak A, Sharma S, Mark P (2018) The utility and validity of pain intensity rating scales for use in developing countries. Pain Rep 3:5


  2. Gélinas C, Fillion L, Puntillo KA (2009) Item selection and content validity of the Critical-Care Pain Observation Tool for non-verbal adults. J Adv Nurs 65(1):203–216

  3. Lucey P et al (2012) Painful monitoring: automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database. Image Vis Comput 30(3):197–205

  4. Beltramini A, Milojevic K, Pateron D (2017) Pain assessment in newborns, infants, and children. Pediatr Ann 46:e387–e395

  5. Thong ISK (2018) The validity of pain intensity measures: what do the NRS, VAS, VRS, and FPS-R measure? Scand J Pain 18(1):99–107

  6. Saeijs RWJJ, Tjon a Ten WE, de With PHN (2016) Dense-HOG-based 3D face tracking for infant pain monitoring. In: 2016 IEEE International Conference on Image Processing (ICIP). IEEE

  7. Alghadir AH et al (2018) Test–retest reliability, validity, and minimum detectable change of visual analog, numerical rating, and verbal rating scales for measurement of osteoarthritic knee pain. J Pain Res 11:851

  8. Cote CJ, Lerman J, David Todres I (2012) A Practice of Anesthesia for Infants and Children E-Book: Expert Consult: Online and Print. Elsevier Health Sciences

  9. Werner P, Niese R (2014) Comparative learning applied to intensity rating of facial expressions of pain. Int J Pattern Recognit Artif Intell 28:1451008

  10. Ekman P, Friesen WV (1978) Facial action coding system: Investigator’s guide. Consulting Psychologists Press

  11. Prkachin KM (1992) The consistency of facial expressions of pain: a comparison across modalities. Pain 51(3):297–306


  12. Lucey P et al (2011) Painful data: The UNBC-McMaster shoulder pain expression archive database. In: Face and Gesture

  13. Hamm J et al (2011) Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders. J Neurosci Methods 200(2):237–256


  14. Happy SL, Routray A (2015) Automatic facial expression recognition using features of salient facial patches. IEEE Trans Affect Comput 6(1):1–12

  15. Tsalakanidou F (2010) Real-time 2D+3D facial action and expression recognition. Pattern Recogn 43:1763–1775

  16. Jeni LA, Cohn JF, Kanade T (2017) Dense 3d face alignment from 2d video for real-time use. Image Vis Comput 58:13–24


  17. Werner P, Niese R (2012) Pain recognition and intensity rating based on comparative learning. In: 19th IEEE International Conference on Image Processing. IEEE

  18. Werner P et al (2016) Automatic pain assessment with facial activity descriptors. IEEE Trans Affect Comput 8(3):286–299

  19. Olugbade TA et al (2015) Pain level recognition using kinematics and muscle activity for physical rehabilitation in chronic pain. In: International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE

  20. Gruss S et al (2015) Pain intensity recognition rates via biopotential feature patterns with support vector machines. PLoS ONE 10:e0140330


  21. Tippetts B et al (2016) Review of stereo vision algorithms and their suitability for resource-limited systems. J Real-Time Image Proc 11(1):5–25


  22. Shi C et al (2015) High-accuracy stereo matching based on adaptive ground control points. IEEE Trans Image Process 24(4):1412–1423


  23. Pribanic T, Obradovic N, Salvi J (2012) Stereo computation combining structured light and passive stereo matching. Opt Commun 285(6):1017–1022


  24. Yang S-P et al (2020) Rotational offset microlens arrays for highly efficient structured pattern projection. Adv Opt Mater 8:2000395

  25. Jung H, Jeong K-H (2015) Monolithic polymer microlens arrays with high numerical aperture and high packing density. ACS Appl Mater Interfaces 7(4):2160–2165

  26. Hodges PW, Smeets RJ (2015) Interaction between pain, movement, and physical activity: short-term benefits, long-term consequences, and targets for treatment. Clin J Pain 31(2):97–107


Acknowledgements

Not applicable.

Funding

This work was financially supported by the National Research Foundation, funded by the Ministry of Science and ICT (2021R1A2B5B03002428), and Technology Innovation Program (20012464), funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

Author information

Authors and Affiliations

Authors

Contributions

JMK and KHJ conceived the idea. JMK performed the experiments and analyzed data. SPY designed and fabricated rotational offset microlens arrays. JMK and KHJ wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ki-Hun Jeong.

Ethics declarations

Ethics approval and consent to participate

All experiments involving a human subject were approved by KAIST IRB (KH2020-105).

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Pain intensity metric

Let \({\varvec{x}}_{\varvec{N}}=\left({n}_{1},{n}_{2},\ldots ,{n}_{8}\right)\) denote the measured feature values of the neutral face, and let \(\varvec{x}=\left({x}_{1},{x}_{2},\ldots ,{x}_{8}\right)\) denote the measured values of a painful facial expression. The pain intensity is calculated by \( {\rm{f}}\left( {\varvec{x}} \right) = \sqrt {{{\left( {{{\varvec{x}}_{\varvec{N}}} - {\varvec{x}}} \right)}^{{\rm T}}}{\varvec{S}}\left( {{{\varvec{x}}_{\varvec{N}}} - {\varvec{x}}} \right)} \), where \({\varvec{S}}\) denotes an eight-by-eight matrix representing the importance and correlation of each feature, defined by

$$ {\varvec{S}} = \left[ {\begin{array}{*{20}{c}} {{s_{11}}}&0&0& \cdots &0\\ 0&{{s_{22}}}&0& \cdots &0\\ 0&0&{{s_{33}}}& \cdots &0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots &{{s_{88}}} \end{array}} \right]$$

where \({s}_{ii}={c}_{i}^{2}/{n}_{i}^{2}\) such that \(\left({c}_{1}, {c}_{2}, \dots , {c}_{8}\right)=(0.5, 0.5, 0.35, 0.35, 0.4, 0.4, 0.2, 0.2)\).
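Since S is diagonal, the metric reduces to a diagonally weighted distance and can be transcribed directly. The sketch below follows the appendix definitions; only the neutral feature values in the usage example are made-up placeholders:

```python
# Transcription of the appendix pain intensity metric:
# f(x) = sqrt((x_N - x)^T S (x_N - x)) with s_ii = c_i^2 / n_i^2.
import numpy as np

# Weight coefficients c_1..c_8 as given in the appendix.
C = np.array([0.5, 0.5, 0.35, 0.35, 0.4, 0.4, 0.2, 0.2])

def pain_intensity(x_neutral, x):
    """Weighted distance between neutral and current feature vectors."""
    x_neutral = np.asarray(x_neutral, dtype=float)
    x = np.asarray(x, dtype=float)
    s = (C / x_neutral) ** 2   # diagonal entries s_ii = c_i^2 / n_i^2
    diff = x_neutral - x
    return float(np.sqrt(diff @ (s * diff)))

# A neutral face scores exactly zero; any deviation scores positive.
neutral = np.array([1.0] * 8)   # placeholder neutral feature values
print(pain_intensity(neutral, neutral))            # 0.0
print(pain_intensity(neutral, neutral * 0.9) > 0)  # True
```

Normalizing each squared difference by the neutral value n_i makes the metric dimensionless, so features of different physical scale (mouth width vs. brow-eye distance) contribute comparably.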

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Kwon, JM., Yang, SP. & Jeong, KH. Stereoscopic facial imaging for pain assessment using rotational offset microlens arrays based structured illumination. Micro and Nano Syst Lett 9, 11 (2021). https://doi.org/10.1186/s40486-021-00139-y


Keywords

  • Pain assessment
  • 3D face imaging
  • Stereo imaging
  • Microlens arrays
  • Structured light
  • Optical MEMS