[1] S. Angerbauer, A. Palmanshofer, S. Selinger, and M. Kurz, “Comparing Human Activity Recognition Models Based on Complexity and Resource Usage,” Appl. Sci., vol. 11, no. 18, 2021.
[2] P. Turaga, R. Chellappa, V. S. Subrahmanian, and O. Udrea, “Machine recognition of human activities: A survey,” IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 11, pp. 1473–1488, 2008.
[3] Z. Hussain, M. Sheng, and W. E. Zhang, “Different Approaches for Human Activity Recognition: A Survey,” pp. 1–28, 2019, doi: 10.1016/j.jnca.2020.102738.
[4] L. Chen, J. Hoey, C. D. Nugent, D. J. Cook, and Z. Yu, “Sensor-based activity recognition,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 42, no. 6, pp. 790–808, 2012, doi: 10.1109/TSMCC.2012.2198883.
[5] D. Bouchabou, S. M. Nguyen, C. Lohr, B. Leduc, and I. Kanellos, “A survey of human activity recognition in smart homes based on IoT sensors algorithms: Taxonomies, challenges, and opportunities with deep learning,” Sensors, vol. 21, no. 18, Sep. 2021, doi: 10.3390/s21186037.
[6] L. Minh Dang, K. Min, H. Wang, M. Jalil Piran, C. Hee Lee, and H. Moon, “Sensor-based and vision-based human activity recognition: A comprehensive survey,” Pattern Recognit., vol. 108, Dec. 2020, doi: 10.1016/j.patcog.2020.107561.
[7] C. Kim and W. Lee, “Human Activity Recognition by the Image Type Encoding Method of 3-Axial Sensor Data,” Appl. Sci., vol. 13, no. 8, 2023, doi: 10.3390/app13084961.
[8] M. M., A. A. B., M. Y., and R. B. Gopaluni, “A Vision-based Deep Learning Platform for Human Motor Activity Recognition,” in Proc. Int. Conf. Modern Circuits and Systems Technologies (MOCAST), 2023, pp. 1–4, doi: 10.1109/MOCAST57943.2023.10176420.
[9] L. Xia, C. Chen, and J. K. Aggarwal, “View Invariant Human Action Recognition Using Histograms of 3D Joints,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012.
[10] M. G. Morshed, T. Sultana, A. Alam, and Y. K. Lee, “Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities,” Sensors, vol. 23, no. 4. MDPI, Feb. 01, 2023, doi: 10.3390/s23042182.
[11] C. Schüldt, I. Laptev, and B. Caputo, “Recognizing human actions: A local SVM approach,” in Proceedings - International Conference on Pattern Recognition, 2004, vol. 3, pp. 32–36, doi: 10.1109/ICPR.2004.1334462.
[12] J. Liu, A. Shahroudy, M. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, “NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 10, pp. 2684–2701, 2020.
[13] Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), Jun. 13–18, 2010.
[14] W. Li, Y. Wong, A.-A. Liu, Y. Li, Y.-T. Su, and M. Kankanhalli, “Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking,” in Proc. IEEE Winter Conf. Applications of Computer Vision (WACV), 2017, doi: 10.1109/WACV.2017.28.
[15] Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2012.
[16] S. Singh, S. A. Velastin, and H. Ragheb, “MuHAVi: A multicamera human action video dataset for the evaluation of action recognition methods,” in Proceedings - IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS 2010, 2010, pp. 48–55, doi: 10.1109/AVSS.2010.63.
[17] K. K. Reddy and M. Shah, “Recognizing 50 human action categories of web videos,” Mach. Vis. Appl., vol. 24, no. 5, pp. 971–981, 2013, doi: 10.1007/s00138-012-0450-4.
[18] Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Jun. 7–12, 2015.
[19] W. Kay et al., “The Kinetics Human Action Video Dataset,” arXiv:1705.06950, May 2017. [Online]. Available: http://arxiv.org/abs/1705.06950.
[20] Proc. IEEE Int. Conf. Computer Vision (ICCV), Barcelona, Spain, Nov. 6–13, 2011.
[21] Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Jun. 20–25, 2009.
[22] M. G. Morshed, T. Sultana, A. Alam, and Y. K. Lee, “Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities,” Sensors, vol. 23, no. 4, pp. 1–40, 2023.
[23] M. G. Amin and B. Erol, “Understanding Deep Neural Networks Performance for Radar-based Human Motion Recognition,” in Proc. IEEE Radar Conf., 2018, pp. 1461–1465, doi: 10.1109/RADAR.2018.8378780.
[24] C. C. Vu, “Human Motion Recognition by Textile Sensors Based on Machine Learning Algorithms,” Sensors, vol. 18, no. 9, 2018, doi: 10.3390/s18093109.
[25] F. H. K. Zaman, H. Ali, A. A. Shafie, and Z. I. Rizman, “Efficient Human Motion Detection with Adaptive Background for Vision-Based Security System,” Int. J. Adv. Sci. Eng. Inf. Technol., vol. 7, no. 3, 2017, doi: 10.18517/ijaseit.7.3.1329.
[26] Y. Shao, S. Guo, L. Sun, and W. Chen, “Human Motion Classification Based on Range Information with Deep Convolutional Neural Network,” in Proc. Int. Conf. Information Science and Control Engineering (ICISCE), 2017, pp. 1520–1524, doi: 10.1109/ICISCE.2017.317.
[27] S. U. Park, J. H. Park, M. A. Al-masni, M. A. Al-antari, Z. Uddin, and T. Kim, “A Depth Camera-based Human Activity Recognition via Deep Learning Recurrent Neural Network for Health and Social Care Services,” Procedia Comput. Sci., vol. 100, pp. 78–84, 2016, doi: 10.1016/j.procs.2016.09.126.
[28] M. Gong and Y. Shu, “Real-time Detection and Motion Recognition of Human Moving Objects Based on Deep Learning and Multi-scale Feature Fusion in Video,” IEEE Access, vol. 8, 2020, doi: 10.1109/ACCESS.2020.2971283.
[29] F. Zhang, T. Y. Wu, J. S. Pan, G. Ding, and Z. Li, “Human motion recognition based on SVM in VR art media interaction environment,” 2019.
[30] D. Hendry, K. Chai, A. Campbell, L. Hopper, P. O’Sullivan, and L. Straker, “Development of a Human Activity Recognition System for Ballet Tasks,” Sport. Med. - Open, vol. 6, no. 1, 2020, doi: 10.1186/s40798-020-0237-5.
[31] V. C. Mariani and S. Coelho, “Video-Based Human Activity Recognition Using Deep Learning Approaches,” pp. 1–15, 2023.
[32] V. Chavan, “Real-Time Deep Learning Approach for Pedestrian Detection and Suspicious Activity Recognition,” Procedia Comput. Sci., vol. 218, pp. 2438–2447, 2023, doi: 10.1016/j.procs.2023.01.219.
[33] K. Guo, P. Wang, P. Shi, and C. He, “A New Partitioned Spatial–Temporal Graph Attention Convolution Network for Human Motion Recognition,” Appl. Sci., 2023.
[34] J. Park, W. Lim, D. Kim, and J. Lee, “GTSNet: Flexible architecture under budget constraint for real-time human activity recognition from wearable sensor,” Eng. Appl. Artif. Intell., vol. 124, p. 106543, 2023, doi: 10.1016/j.engappai.2023.106543.
[35] H. Bilen, B. Fernando, E. Gavves, and A. Vedaldi, “Action Recognition with Dynamic Image Networks,” IEEE Trans. Pattern Anal. Mach. Intell., 2018, doi: 10.1109/TPAMI.2017.2769085.
[36] T. Tan, “DCNN-Based Elderly Activity Recognition Using Binary Sensors,” in Proc. Int. Conf. Electrical and Computing Technologies and Applications (ICECTA), 2017, doi: 10.1109/ICECTA.2017.8252040.
[37] A. M. Helmi et al., “Human activity recognition using marine predators algorithm with deep learning,” Future Gener. Comput. Syst., 2023. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0167739X23000134#preview-section-snippets.
[38] V. Jain, G. Gupta, M. Gupta, D. Kumar, and U. Ghosh, “Ambient intelligence-based multimodal human action recognition for autonomous systems,” ISA Trans., vol. 132, pp. 94–108, 2023, doi: 10.1016/j.isatra.2022.10.034.
[39] M. Muaaz, A. Chelli, M. W. Gerdes, and M. Pätzold, “Wi-Sense: a passive human activity recognition system using Wi-Fi and convolutional neural network and its integration in health information systems,” pp. 163–175, 2022.
[40] M. Islam, S. Nooruddin, and F. Karray, “Multimodal Human Activity Recognition for Smart Healthcare Applications,” in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics (SMC), 2022, pp. 196–203, doi: 10.1109/SMC53654.2022.9945513.
[41] Z. Gu, T. He, Z. Wang, and Y. Xu, “Device-Free Human Activity Recognition Based on Dual-Channel Transformer Using WiFi Signals,” vol. 2022, 2022.
[42] M. A. Hanif et al., “Smart Devices Based Multisensory Approach for Complex Human Activity Recognition,” Comput. Mater. Contin., 2022, doi: 10.32604/cmc.2022.019815.
[43] S. Khater, M. Hadhoud, and M. B. Fayek, “A novel human activity recognition architecture: using residual inception ConvLSTM layer,” 2022.
[44] R. G. Ramos, J. D. Domingo, E. Zalama, J. Gómez-García-Bermejo, and J. López, “SDHAR-HOME: A Sensor Dataset for Human Activity Recognition at Home,” Sensors, pp. 1–27, 2022.
[45] M. Mohtadifar, M. Cheffena, and A. Pourafzal, “Acoustic- and Radio-Frequency-Based Human Activity Recognition,” Sensors, 2022.
[46] “Daily Human Activity Recognition Using Non-Intrusive Sensors,” Sensors, pp. 1–19, 2021.
[47] M. Lu, Y. Hu, and X. Lu, “Driver action recognition using deformable and dilated faster R-CNN with optimized region proposals,” 2019.
[48] “Real-Time Action Recognition System for Elderly People Using Stereo Depth Camera,” 2021.
[49] R. Vrskova, P. Kamencay, R. Hudec, and P. Sykora, “A New Deep-Learning Method for Human Activity Recognition,” Sensors, 2023.
[50] T. Singh and D. Kumar, “A deeply coupled ConvNet for human activity recognition using dynamic and RGB images,” Neural Comput. Appl., 2020, doi: 10.1007/s00521-020-05018-y.
[51] Z. Wang et al., “Swimming Motion Analysis and Posture Recognition Based on Wearable Inertial Sensors,” pp. 3371–3376, 2019.
[52] A. Ullah, K. Muhammad, W. Ding, V. Palade, and I. U. Haq, “Efficient activity recognition using lightweight CNN and DS-GRU network for surveillance applications,” Appl. Soft Comput., vol. 103, 2021, doi: 10.1016/j.asoc.2021.107102.
[53] R. Amerineni et al., “Fusion Models for Generalized Classification of Multi-Axial Human Movement: Validation in Sport Performance,” 2021.
[54] T. Shanableh, “ViCo-MoCo-DL: Video Coding and Motion Compensation Solutions for Human Activity Recognition Using Deep Learning,” IEEE Access, vol. 11, pp. 73971–73981, 2023, doi: 10.1109/ACCESS.2023.3296252.
[55] G. Kim, H. Yoo, and K. Chung, “SlowFast Based Real-Time Human Motion Recognition with Action Localization,” Comput. Syst. Sci. Eng., 2023, doi: 10.32604/csse.2023.041030.
[56] S. Li and Y. Liu, “Human motion recognition based on Nano-CMOS Image sensor,” Math. Biosci. Eng., vol. 20, pp. 10135–10152, 2023, doi: 10.3934/mbe.2023444.
[57] X. Yang, W. Luo, and X. An, “passive RFID and multi-model fusion,” in Proc. SPIE, 2023, doi: 10.1117/12.2685530.
[58] L. Rock, “Human Actions Recognition Based on 3D Deep Neural Network,” in Proc. Int. Conf. New Trends in Information and Communications Technology Applications (NTICT), 2017, doi: 10.1109/NTICT.2017.7976123.
[59] F. Al-Azzo and A. M. Taqi, “3D Human Action Recognition using Hu Moment Invariants and Euclidean Distance Classifier,” vol. 8, no. 4, pp. 13–21, 2017.