Unsupervised learning spatio-temporal features for human activity recognition from RGB-D video data

Chen, G., Zhang, F., Giuliani, M., Buckl, C. and Knoll, A. (2013) Unsupervised learning spatio-temporal features for human activity recognition from RGB-D video data. In: Herrmann, G., Pearson, M., Lenz, A., Bremner, P., Spiers, A. and Leonards, U., eds. (2013) Social Robotics. Bristol, UK: Springer International Publishing, pp. 341-350. ISBN 9783319026749 Available from: http://eprints.uwe.ac.uk/31035

Full text not available from this repository

Publisher's URL: http://dx.doi.org/10.1007/978-3-319-02675-6_34

Abstract/Description

Being able to recognize human activities is essential for several applications, including social robotics. Recently developed commodity depth sensors open up new possibilities for dealing with this problem. Existing techniques extract hand-tuned features, such as HOG3D or STIP, from video data; these features do not adapt easily to new modalities. In addition, because depth video data is of low quality due to sensor noise, a question arises: does depth video provide extra information for activity recognition? To address this issue, we propose an unsupervised learning approach that applies equally to RGB and depth video data. We further employ a multiple kernel learning (MKL) classifier to take into account combinations of the different modalities. We show that the low-quality depth video is discriminative for activity recognition, and we demonstrate that our approach achieves performance superior to state-of-the-art approaches on two challenging RGB-D activity recognition datasets.
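The abstract's MKL step combines per-modality similarity measures into a single kernel. A minimal sketch of that idea, assuming the standard MKL form of a convex combination of per-modality Gram matrices (the feature vectors, kernel choice, and weights below are illustrative, not taken from the paper):

```python
# Hedged sketch: combine per-modality kernels as in multiple kernel
# learning (MKL). All features and weights here are toy values.

def linear_kernel(X):
    """Gram matrix of a linear kernel over a list of feature vectors."""
    return [[sum(a * b for a, b in zip(x, y)) for y in X] for x in X]

def combine_kernels(kernels, weights):
    """Convex combination K = sum_m beta_m * K_m (standard MKL form)."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

# Hypothetical learned codes for three video clips, one row per clip
rgb_feats   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
depth_feats = [[0.5, 0.5], [0.2, 0.8], [0.9, 0.1]]

K_rgb   = linear_kernel(rgb_feats)
K_depth = linear_kernel(depth_feats)
# Illustrative modality weights summing to 1; in MKL these are learned
K_comb  = combine_kernels([K_rgb, K_depth], [0.6, 0.4])
```

The combined matrix remains a valid (symmetric, positive semidefinite) kernel, so it can be fed to any kernel classifier such as an SVM with a precomputed kernel.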

Item Type:Book Section
Uncontrolled Keywords:activity recognition, unsupervised learning, depth video
Faculty/Department:Faculty of Environment and Technology > Department of Engineering Design and Mathematics
ID Code:31035
Deposited By: Dr M. Giuliani
Deposited On:21 Feb 2017 16:17
Last Modified:21 Feb 2017 16:17