Date of Completion

12-16-2015

Embargo Period

6-12-2016

Keywords

Smartphone; breathing; respiratory rate; tidal volume; acoustical; optical; respiratory sounds; time-frequency analysis; fractal analysis; signal processing

Major Advisor

Dr. Ki Chon

Associate Advisor

Dr. Kazunori Hoshino

Associate Advisor

Dr. Sabato Santaniello

Associate Advisor

Dr. Sonia Charleston-Villalobos

Field of Study

Biomedical Engineering

Degree

Doctor of Philosophy

Open Access

Open Access

Abstract

Respiratory rate (RR) and tidal volume (VT) are two parameters that a breathing monitor should provide. Several techniques have been developed for monitoring these parameters in clinical and research settings. However, because they were designed for such settings, they employ specialized devices that do not translate easily to everyday use due to their high costs, need for skilled operators, or limited mobility. Hence, there is still a lack of portable monitoring devices that can noninvasively estimate RR and VT on a daily basis for the general population. Among VT and RR monitoring methods, respiratory acoustical and noncontact optical approaches have provided promising results. It is now recognized that more information beyond these parameters can be extracted from respiratory sounds via their digital analysis, which in turn involves the classification of their breath phases. Such classification is trivial when using an external reference like airflow/volume from spirometry, but the availability of such a reference generally cannot be taken for granted outside clinical and research settings. In this dissertation, we explored the feasibility of developing a portable, accurate, and easy-to-use breathing monitoring device based on a smartphone. Smartphones are an enticing option for developing health applications due to their ubiquity and their software and hardware capabilities. First, we tested the reliability of acquiring real tracheal sounds using smartphones. Then, we tracked RR at each time instant (IRR) and estimated VT from the smartphone-acquired tracheal sounds. Next, we estimated these parameters directly on the smartphone via a noncontact optical monitoring approach able to track the temporal variations of the volumetric changes produced while breathing. Finally, we explored the automatic breath-phase classification of smartphone-acquired respiratory sounds directly on a smartphone.
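As an illustration of the acoustical IRR idea summarized above (not code from the dissertation itself), a respiratory rate can be recovered from the slow amplitude envelope of a breath-sound recording by locating its dominant low-frequency spectral peak. The synthetic envelope, sampling rate, and Welch spectral method below are assumptions chosen for this sketch:

```python
import numpy as np
from scipy.signal import welch

fs = 100.0  # envelope sampling rate in Hz (assumed for this sketch)
t = np.arange(0, 60, 1 / fs)

# Synthetic amplitude envelope of a tracheal sound recording:
# breathing at 0.25 Hz, i.e. 15 breaths per minute
rr_hz = 0.25
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * rr_hz * t)

# Welch power spectral density of the zero-mean envelope;
# respiration appears as a peak well below 1 Hz
f, pxx = welch(envelope - envelope.mean(), fs=fs, nperseg=2048)

# Dominant spectral peak converted to breaths per minute
rr_est = f[np.argmax(pxx)] * 60.0
```

With these settings the estimate lands within the spectral resolution of the true rate (about 15 breaths/min); a real system would apply the same idea over short sliding windows of the sound envelope to obtain an instantaneous rate.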
The results obtained in this research regarding the acquisition of tracheal sounds via smartphones, the estimation of IRR and VT via acoustical and optical approaches, and the automatic classification of respiratory breath phases directly on the smartphone help provide the basis for developing a portable, easy-to-use, and reliable breathing monitoring system that would expand the monitoring options available on a daily basis to the general population.