Chewing Detection on Low Power Embedded Systems

Abstract

Analyzing food consumption patterns can provide valuable insights into the development of obesity and eating disorders. Detecting and quantifying chewing strokes is essential for such an analysis. One approach to food intake analysis evaluates the chewing sounds generated during eating. These sounds were recorded by microphones placed at the user's outer ear canal. In addition to achieving high accuracy, the algorithms used must be efficient enough to operate on low-power embedded ear-worn hardware. Three algorithms for automated chewing detection were evaluated on two datasets: the first contains food intake sounds from the consumption of three types of food, and the second contains environmental noise. The data were labeled manually by identifying the visual and acoustic characteristics of mastication sounds. All algorithms achieved a precision of over 80% on the dataset consisting only of chewing sounds. Finally, an efficient solution was developed to distinguish between speech and chewing sounds.
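
To illustrate the kind of processing that fits on low-power ear-worn hardware, the following C sketch counts chewing strokes as short-term energy bursts in the microphone signal. This is a minimal, hypothetical example for orientation only; the frame length, sample rate, thresholds, and refractory period are assumptions and do not correspond to any of the three algorithms evaluated in this work.

```c
/*
 * Illustrative sketch only: a short-term-energy threshold detector for
 * chewing strokes, of the kind that could run on a low-power MCU.
 * FRAME_LEN, ENERGY_THRESH, and MIN_GAP_FRAMES are hypothetical values,
 * not parameters taken from the evaluated algorithms.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define FRAME_LEN      256      /* samples per analysis frame (~16 ms at 16 kHz) */
#define ENERGY_THRESH  500000u  /* hypothetical mean-square energy threshold     */
#define MIN_GAP_FRAMES 8        /* refractory period between counted strokes     */

/* Mean squared amplitude of one frame of 16-bit audio samples. */
static uint32_t frame_energy(const int16_t *frame, size_t n)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)frame[i] * (int32_t)frame[i];
    return (uint32_t)(acc / n);
}

/* Count chewing strokes as energy bursts separated by a refractory gap. */
uint32_t count_chewing_strokes(const int16_t *audio, size_t n_samples)
{
    uint32_t strokes = 0;
    uint32_t frames_since_stroke = MIN_GAP_FRAMES;

    for (size_t off = 0; off + FRAME_LEN <= n_samples; off += FRAME_LEN) {
        bool burst = frame_energy(audio + off, FRAME_LEN) > ENERGY_THRESH;

        if (burst && frames_since_stroke >= MIN_GAP_FRAMES) {
            strokes++;               /* new stroke detected            */
            frames_since_stroke = 0; /* start the refractory period    */
        } else {
            frames_since_stroke++;
        }
    }
    return strokes;
}
```

A fixed-point, frame-by-frame structure of this kind avoids floating-point arithmetic and buffering of the full recording, which is what makes such detectors candidates for low-power embedded execution; distinguishing chewing from speech or environmental noise, as addressed in this work, requires additional features beyond raw signal energy.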