Pratool Bharti

University of South Florida, USA

Title: HuMAn: complex activity recognition with multi-modal multi-positional body sensing

Abstract

Current state-of-the-art wearable systems in the literature cannot distinguish many fine-grained and/or complex human activities that appear similar but differ in vital contextual details, such as lying on the floor vs. lying on a bed vs. lying on a sofa. This paper fills this gap by proposing a novel system, called HuMAn, that recognizes and classifies complex at-home activities of humans with wearable sensing. Specifically, HuMAn makes such classifications feasible by leveraging selective multi-modal sensor suites from wearable devices, and it enriches the sensed information available for activity classification by carefully placing the wearable devices across multiple positions on the human body. The HuMAn system consists of the following components: practical feature-set extraction from specifically selected multi-modal sensor suites; a novel two-level structured classification algorithm that improves accuracy by leveraging sensors in multiple body positions; and improved refinement in the classification of complex activities with minimal external infrastructure support (e.g., only a few Bluetooth beacons used for location context). The proposed system is evaluated with 10 users in real home environments. Experimental results demonstrate that HuMAn can detect a total of 21 complex at-home activities with a high degree of accuracy. Under the same-user evaluation strategy, the average classification accuracy is as high as 97% across all 21 activities; under 10-fold cross-validation the average accuracy is 94%, and under leave-one-out cross-validation it is 76%.
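The abstract does not include implementation details, so the following is only a minimal illustrative sketch of the two-level, multi-position idea it describes: one first-level classifier per body position produces class probabilities, which are then fused with a beacon-derived location feature by a second-level classifier. Everything here is an assumption for illustration, not the paper's actual method: the position names, feature dimensions, the choice of random forests, and the synthetic data are all hypothetical.

```python
# Hypothetical sketch of a two-level, multi-position activity classifier.
# All names and dimensions are illustrative; synthetic data stands in for
# real wearable sensor features and Bluetooth-beacon location context.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

POSITIONS = ["wrist", "waist", "ankle"]   # assumed on-body placements
N_FEATURES = 12                           # per-position feature vector size
N_ACTIVITIES = 21                         # as reported in the abstract
N_SAMPLES = 600

# Synthetic per-position feature matrices, beacon-derived room ids, labels.
X_pos = {p: rng.normal(size=(N_SAMPLES, N_FEATURES)) for p in POSITIONS}
location = rng.integers(0, 4, size=N_SAMPLES)
y = rng.integers(0, N_ACTIVITIES, size=N_SAMPLES)

train = slice(0, 480)
test = slice(480, N_SAMPLES)

# Level 1: one classifier per body position, each emitting class probabilities.
probs_train, probs_test = [], []
for p in POSITIONS:
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_pos[p][train], y[train])
    probs_train.append(clf.predict_proba(X_pos[p][train]))
    probs_test.append(clf.predict_proba(X_pos[p][test]))

# Level 2: fuse per-position probabilities with the location context
# (a stacking-style refinement standing in for the paper's second level).
Z_train = np.hstack(probs_train + [location[train, None]])
Z_test = np.hstack(probs_test + [location[test, None]])
level2 = RandomForestClassifier(n_estimators=100, random_state=0)
level2.fit(Z_train, y[train])

print(f"held-out accuracy on synthetic data: {level2.score(Z_test, y[test]):.2f}")
```

Note that the paper's leave-one-out result (76%) corresponds to training on nine users and testing on the held-out tenth; with scikit-learn that would be a LeaveOneGroupOut split over user ids rather than the simple random holdout used in this sketch.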