Human Activity Recognition using Smartphones


Leeds Beckett University



DISSERTATION



Human Activity Recognition using Smartphones

School of Built Environment, Engineering, and Computing

Leeds Beckett University






Student Name:

Student ID:

<December, 2021>

List of Figures



Chapter 1: Introduction

The world continues to grow in wealth and economic output, but one thing that keeps declining alongside this success is health. Health problems are numerous, and they need to be highlighted together with attempts to resolve them. Until that happens, a practical step is a system that studies an individual's health in detail, makes the person aware of problems that may arise, and continuously estimates and tracks their condition.

Machine learning (ML) is one such technology that is growing rapidly and continues to expand; even as this is written, new research on ML and artificial intelligence (AI) is under way. This growth is pushing the technology into sectors it had not previously reached (Chen et al. 2021). One such sector is healthcare, where ML systems and frameworks open up many possibilities, in particular tracking a person's health through Human Activity Recognition (HAR) using smartphones.

Background

The growing involvement of technology in almost every field naturally raises the question of its use in the health sector. Today it is essential to understand the technology in use and how work can be allocated to it, because assigning such functions to well-chosen technology combines technical capability with social benefit. This is the motivation for the project: to take an evaluative approach that can identify and assess health problems where they exist.

Figure 1: Assessment of Parkinson by ML

(Source: Zhan et al. 2018)

Approaches in this field are validated in practice through the estimates the system provides. Such a system can not only support better management of health issues but also raise an alert whenever a threat is suspected (Farnham et al. 2018). In this way a link is formed between the technology and the biological signals of the body, giving continuous remote access to the patient or to a suspected problem.

Dedicated patient devices could be built, but the idea here is to track the health of the general population, which would help to maintain a healthy society. This requirement is met by adding various health attributes to the system and establishing confidence in the values that describe a person's wellness. The deliberate approach is to make everyone part of this tracking, using smartphones to deliver consistent updates on a person's health.

Problem Statement

A cause that helps many people is a cause served well, and this shapes how any prospective service should be viewed. Physical and mental health problems appear to be rising day by day, a trend that is well recognised in both social and economic terms. Many of these issues can be resolved in their own ways, but people who live alone or who are elderly can face serious problems when there is no one to care for them, which makes careful and consistent evaluation all the more important (Bulbul et al. 2018).

The focus of the project is therefore to create a system that tracks a person's health and sends alerts in an emergency, using nothing more than an ordinary smartphone and its sensors. This improves access to an individual's health data and opens it up for study. Because the system is based on ML it is capable of making decisions: it can send alerts when required so that problems are avoided in time, and it may also support the diagnosis of some conditions through HAR.

Aim and Objectives

Aim: The aim of the project is to track a person's health through Human Activity Recognition, using smartphones as the medium.

Objectives:

  • To develop an understanding of the data so that a person's health can be studied openly.

  • To build a platform that applies the concepts of ML to study a person's health and its fluctuations.

  • To design a framework that uses Human Activity Recognition to track human health.

  • To assess health using the sensors that already exist in smartphones.

  • To present a tool for healthcare and support that works on ML and is based on a simple smartphone.

Rationale

The rationale for the chosen topic centres on serving a better cause within society. Using a smartphone increases the accessibility of the technology, and sensors such as GPS and the gyroscope provide continuity by capturing people's location and position. In an emergency the system can deliver a message to the nearest healthcare provider, saving time. These factors, together with training on HAR data, make the system accurate and conceptually rich.

Chapter 2: Literature Review

This section is a comprehensive literature review that highlights different concepts of human activity and the technologies used to deploy human activity recognition on smartphones. Human activities are recognised using technologies such as machine learning, artificial intelligence, deep learning, and advanced sensors (Almaslukh et al. 2018). These technologies are deployed in wearables, tools, and devices and are then controlled through smartphones on the basis of the data that is collected, observed, and analysed. The need for human activity detection attracts a growing number of users, particularly from the healthcare sector, partly because of the impact of the recent Covid-19 pandemic. Every process, service, and comfort has been invented for human use, and such inventions are meaningless without the people they serve. Each person therefore needs appropriate care, security, and health services so that they stay up to date on their health status and can take appropriate action in time.

With this understanding of social needs and special care, the health industry is focusing on implementing human activity recognition systems on portable devices such as smartphones, tablets, and other smart tools. Implementing such a system and providing better services requires several specifications to be measured and analysed. The literature below is therefore presented in an understandable and approachable way for further use (Taylor et al. 2020).

2.1 Methods of Human Activity Recognition

Developers have identified and implemented multiple advanced technologies and methods for recognising human activities according to their needs. They have researched broadly how the features of these technologies can be used to observe human data, generate notifications, and send them to users so that the most significant action can be taken in time. These methods and technologies are integrated with one another to improve accuracy and speed, and are then embedded in smartphones to make them easier for people to use. HAR systems are implemented using video-based and sensor-based models.

The sensor-based model covers both wearable-sensor and smartphone-sensor approaches, which use devices, equipment, and tools to monitor the surrounding environment as well as overall human behaviour. This study focuses in particular on deep learning methods, which enable HAR to monitor fine-grained movement patterns and make better use of poorly labelled sensor datasets (Zhou et al. 2020). Because sensor datasets are collected from several types of wearable equipment, a semi-supervised learning model is considered so that a large number of unlabelled samples can be combined with a small labelled set to maximise HAR accuracy in a healthcare environment. Compared and contrasted with relevant existing research, the contribution of this study can be summarised in the following points-

  • A semi-supervised deep learning model is to be developed with automatic data labelling and a Long Short-Term Memory (LSTM) based classification model, which can make effective use of the large volume of data to train the classifier and improve HAR accuracy (Mekruksavanich et al. 2020).

  • A smart auto-labelling scheme based on a Deep Q-Network (DQN) is created with a distance-based reward rule, which addresses the shortage of labelled data and improves learning efficiency.

  • An LSTM-based classifier is developed with a multi-sensor data mechanism, which is then used to handle hierarchical motion data and to detect fine-grained patterns from the large set of acquired attributes (a minimal code sketch of such a classifier follows this list).
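To make the LSTM-based classification idea above concrete, a minimal sketch in Python is given below. It is only an illustration under assumed settings, not the model of any cited study: it assumes fixed windows of 128 timesteps with six sensor channels (tri-axial accelerometer and gyroscope), six activity classes, and random stand-in data in place of a real dataset.

# Minimal LSTM classifier sketch for windowed smartphone sensor data.
# Assumptions (illustrative only): 128-timestep windows, 6 sensor channels,
# 6 activity classes, and random arrays instead of a real labelled dataset.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, CHANNELS, N_CLASSES = 128, 6, 6

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    layers.LSTM(64, return_sequences=True),   # learns temporal motion patterns
    layers.LSTM(32),                          # condenses the window to one vector
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays stand in for labelled sensor windows.
X = np.random.randn(256, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

In a semi-supervised setting of the kind described above, the labelled portion of the data would train this classifier while the auto-labelling scheme assigns provisional labels to the remaining windows.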

Machine learning algorithms are also widely used to design and develop accurate and efficient HAR systems. Common approaches include support vector machines, k-nearest neighbours (KNN), random forests, and neural networks, applied in particular to real-time cases.
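As an illustration of these classical approaches, the short Python sketch below compares a support vector machine, k-nearest neighbours, and a random forest using cross-validation. The feature vectors and labels are synthetic placeholders; on a real HAR dataset the inputs would be the per-window features described in Section 2.2.

# Hedged sketch: comparing classical classifiers on synthetic HAR-style features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))     # 20 features per window (e.g. mean, std per axis)
y = rng.integers(0, 6, size=300)   # 6 activity labels (walking, sitting, ...)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random forest", RandomForestClassifier(n_estimators=100))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validation accuracy {scores.mean():.2f}")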

2.2 Human Activity Recognition techniques using smartphones

The HAR procedure follows a defined set of stages, from data extraction through to activity classification. The process involves collecting raw datasets acquired from different wearable tools, devices, sensors, and equipment. To identify the main techniques for designing a smartphone-based HAR system, a systematic procedure has to be followed, with the smartphone configured to work with the devices, tools, and sensors used to deploy activity recognition conveniently. These techniques are-

  1. Data Extraction: The first step is to collect raw data from the different sensors and wearable devices and verify it on the smartphone. Various sensors embedded in or paired with the smartphone acquire health indicators such as blood pressure, temperature, and heart rate. The extracted raw data must then be presented in a reliable form so that the overall condition can be computed.

  2. Data Pre-processing: This step uses Python within the designed system to clean the raw data and make it useful for further analysis.

  3. Segmentation of data: Data segmentation splits the continuous sensor stream into manageable windows so that the exact motion occurring during an activity can be measured.

  4. Feature acquisition: The main role of feature acquisition is to extract the most relevant features from the segmented data while reducing the number of data variables passed to the classification algorithm. In this way raw data is transformed into a useful source of information that improves classification accuracy, helping the system to obtain accurate activity recognition results dynamically (Fan and Gao, 2021).

  5. Classification: This step trains and tests the chosen methods and algorithms; the parameters of the classification model are estimated during the training and testing procedures (Subasi et al. 2019). It shows that smartphones are able to acquire, store, maintain, share, and explore the data and return results in a short time. This concept-based development influences data availability, pre-processing, and evaluation, as shown in the figure below; a brief code sketch of the pipeline follows the figure.


Figure 2: Techniques of HAR using Smartphones

(Source: Author)
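A minimal Python sketch of the pipeline shown in Figure 2 is given below. It assumes a continuous three-axis accelerometer stream; the window length of 128 samples, the 50% overlap, and the simple time-domain feature set are illustrative choices rather than prescriptions from the literature, and the labels are random placeholders.

# Sketch of the HAR pipeline: segmentation -> feature acquisition -> classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(signal, window=128, step=64):
    # Data segmentation: slice the raw stream into overlapping windows.
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def extract_features(windows):
    # Feature acquisition: simple time-domain statistics per window and axis.
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

stream = np.random.randn(5000, 3)            # stands in for extracted raw data
X = extract_features(segment(stream))
y = np.random.randint(0, 6, size=len(X))     # placeholder activity labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)   # classification step
print("training accuracy:", clf.score(X, y))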

2.3 Challenges in Human Activity Recognition using smartphones

Smartphone-based human activity recognition systems face some high-level challenges, including (1) managing the extensive information that these devices can generate while maintaining their long-term resilience, and (2) a lack of knowledge about how a user's data relates to the defined motions.

In addition, several of the machine learning approaches considered for HAR introduce major technical challenges of their own, while others arise from computer vision, real-life applications, and dedicated methods. Some categories of issues that need to be addressed are-

  • Feature Extraction- Activity recognition relies on a classification mechanism, so it shares a challenge common to other classification problems: feature extraction. For sensor-based activity detection, feature extraction is especially complex because of inter-activity similarity; different activities such as running, lying, walking, and sitting can produce similar attribute values.

  • Annotation Scarcity- Training and evaluating the proposed techniques requires large annotated datasets, which are time-consuming and expensive to collect and label. Annotation scarcity is therefore a significant issue for smartphone-based activity recognition, and class imbalance is a further complication.

  • Interpretability- Unlike text or images, sensor data is not directly readable by humans, and it inevitably contains a large amount of noise. Effective recognition solutions must therefore offer some interpretability over this data.

  • Data Segmentation- Composite activities, meaning sets of multiple activities performed together, make data segmentation difficult, and accurate recognition depends heavily on the segmentation approach used (Chen et al. 2021).

In addition, other major challenges in human activity recognition include distribution discrepancy, composite activities, concurrent activities, multi-occupant activities, computational cost, privacy issues, smartphone battery limitations caused by the use of multiple sensors, and the deployment of position-aware solutions within smartphones.

2.4 Human Activity Recognition Framework

This is one of the most important sections of the literature review, as it contributes directly to the results: a specific human activity recognition model is discussed comprehensively to show how the HAR model will be designed and embedded in smartphones (Subasi et al. 2019). The model will be developed specifically to recognise human activities, provide health information, and safeguard safety. As discussed in the proposal, a robust model will be designed using machine learning and its relevant algorithms, with the Python programming language used to develop the automated HAR system for healthcare and to capture health indicators such as temperature and heart rate. Accelerometer and gyroscope sensors are used to monitor and measure human activities. The model will also show how the sensors generate alerts and communicate with the smartphone over Bluetooth, and it will cover orientation and positioning analysis that generates data on each human action, including walking, driving, running, and sitting (Garcia-Gonzalez et al. 2020).
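As a purely illustrative sketch of how such a framework could turn classifier output into an alert, the Python fragment below watches consecutive window predictions and calls a notify_caregiver() function. The activity labels, the persistence threshold, and notify_caregiver() itself are hypothetical placeholders for the alerting mechanism described above, not part of any cited design.

# Hedged sketch of the alerting step of the proposed framework.
from collections import deque

ABNORMAL = {"fall", "prolonged_inactivity"}    # assumed alert-worthy labels

def notify_caregiver(message):
    # Hypothetical stand-in for the smartphone's Bluetooth/network notification.
    print("ALERT:", message)

def monitor(predictions, persistence=3):
    # Raise an alert only when an abnormal activity persists across windows.
    recent = deque(maxlen=persistence)
    for label in predictions:
        recent.append(label)
        if len(recent) == persistence and all(p in ABNORMAL for p in recent):
            notify_caregiver(f"Sustained '{label}' activity detected")
            recent.clear()

monitor(["walking", "fall", "fall", "fall", "sitting"])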

2.5 Smartphone sensors for HAR

Current smartphones are among the most useful tools for HAR because of the multiple sensors they contain, such as the Global Positioning System (GPS), gyroscope, step detector, accelerometer, step counter, barometer, and, in some devices, blood pressure and temperature sensors. These sensors can be used for tasks such as step counting, heart rate and temperature measurement, and activity recognition. The most common and relevant sensors are discussed below to give an overview of their usefulness in accurate human activity recognition-

  1. Accelerometer Sensor: This sensor is embedded in smartphones to measure acceleration, that is, the change in an object's velocity. It provides three-axis readings (x, y, and z) from which the acceleration of a person can be estimated; however, classifiers must be applied to the raw accelerometer data to infer activities such as running, walking, or sitting accurately. In the overview below, the x-axis captures the lateral motion of the smartphone, the y-axis captures its vertical motion, and the z-axis captures movement into and out of the plane described by the x and y axes (Ogbuabor and La 2018).

Figure 3: Smartphone's Accelerometer axes

(Source: Ogbuabor and La 2018)

This sensor is used in many ways across wearables, smartphones, and other electronic devices, and when controlled properly it generates accurate data that keeps users aware of their health status in good time. The accelerometer is consequently used to detect, monitor, and track vital signals. Deploying these sensors in smartphones is straightforward, and they already support settings such as screen orientation changes and shake detection. The main benefits of accelerometer sensors when deployed in smartphones are listed below (a short sketch of typical raw-data preparation follows the list)-

  • High impedance

  • High sensitivity

  • High frequency

  • Built-in signal conditioning in order to measure the capacitance
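To illustrate how raw three-axis readings are typically prepared before such inference, the Python sketch below computes the acceleration magnitude and separates a slowly varying gravity estimate from the body-motion component using a simple moving average. The synthetic samples and the filter length are assumed values for illustration only.

# Sketch: preparing raw 3-axis accelerometer samples for activity inference.
import numpy as np

# Synthetic x, y, z readings in m/s^2 with gravity roughly on the z-axis.
samples = np.random.randn(500, 3) + np.array([0.0, 0.0, 9.81])

magnitude = np.linalg.norm(samples, axis=1)    # overall motion intensity

def moving_average(x, k=25):
    # Crude low-pass filter: estimates the slowly varying gravity component.
    return np.convolve(x, np.ones(k) / k, mode="same")

gravity = moving_average(magnitude)
body_motion = magnitude - gravity              # linear (body) acceleration
print("mean body motion:", float(body_motion.mean()))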

  2. Gyroscope Sensor: This sensor measures changes in the smartphone's orientation by detecting its pitch, roll, and yaw movements about the x, y, and z axes respectively, as illustrated in the figure below-

Figure 4: Smartphone’s Gyroscope axes

(Source: Ogbuabor and La 2018)

Moreover, this sensor can measure and maintain the angular velocity and orientation of a particular object. Gyroscopes are more advanced than accelerometers in that they measure the tilt and lateral orientation of an object effectively. A mechanical gyroscope works by conserving angular momentum: a spinning wheel or rotor is mounted on pivots, and the frame that allows the rotor to turn about a specific axis is known as a gimbal. In practice the gyroscope is combined with an accelerometer to measure an object's orientation in three-dimensional space (Ogbuabor and La 2018).
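A common way of combining the two sensors is a complementary filter, in which the integrated gyroscope rate tracks fast orientation changes and the accelerometer's gravity direction corrects long-term drift. The Python sketch below estimates the pitch angle under assumed inputs; the 50 Hz sample interval, the 0.98 blending factor, and the synthetic readings are illustrative assumptions, not values taken from the cited work.

# Sketch: complementary filter fusing gyroscope and accelerometer for pitch.
import math

def complementary_filter(gyro_rates, acc_samples, dt=0.02, alpha=0.98):
    pitch = 0.0
    history = []
    for rate, (ay, az) in zip(gyro_rates, acc_samples):
        gyro_pitch = pitch + rate * dt        # integrate angular velocity
        acc_pitch = math.atan2(ay, az)        # gravity-based pitch estimate
        pitch = alpha * gyro_pitch + (1 - alpha) * acc_pitch
        history.append(pitch)
    return history

gyro = [0.01] * 100                            # slow forward tilt (rad/s)
acc = [(0.5, 9.8)] * 100                       # device held roughly level
print("final pitch estimate (rad):", round(complementary_filter(gyro, acc)[-1], 3))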



References

Almaslukh, B., Artoli, A.M. and Al-Muhtadi, J., 2018. A robust deep learning approach for position-independent smartphone-based human activity recognition. Sensors, 18(11), p.3726. http://dx.doi.org/10.3390/s18113726

Bulbul, E., Cetin, A. and Dogru, I.A., 2018, October. Human activity recognition using smartphones. In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1-6). IEEE. https://www.researchgate.net/publication/329559026

Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z. and Liu, Y., 2021. Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities. ACM Computing Surveys (CSUR), 54(4), pp.1-40. https://doi.org/10.1145/3447744

Fan, C. and Gao, F., 2021. Enhanced human activity recognition using wearable sensors via a hybrid feature selection method. Sensors, 21(19), p.6434. https://dx.doi.org/10.3390%2Fs21196434

Farnham, A., Ziegler, S., Blanke, U., Stone, E., Hatz, C. and Puhan, M.A., 2018. Does the DOSPERT scale predict risk-taking behaviour during travel? A study using smartphones. Journal of Travel Medicine, 25(1), p.tay064. https://doi.org/10.1093/jtm/tay064

Garcia-Gonzalez, D., Rivero, D., Fernandez-Blanco, E. and Luaces, M.R., 2020. A public domain dataset for real-life human activity recognition using smartphone sensors. Sensors, 20(8), p.2200. http://dx.doi.org/10.3390/s20082200

Mekruksavanich, S., Jitpattanakul, A., Youplao, P. and Yupapin, P., 2020. Enhanced hand-oriented activity recognition based on smartwatch sensor data using LSTMs. Symmetry, 12(9), p.1570. https://doi.org/10.3390/sym12091570

Ogbuabor, G. and La, R., 2018, February. Human activity recognition for healthcare using smartphones. In Proceedings of the 2018 10th international conference on machine learning and computing (pp. 41-46). https://doi.org/10.1145/3195106.3195157

Subasi, A., Fllatah, A., Alzobidi, K., Brahimi, T. and Sarirete, A., 2019. Smartphone-based human activity recognition using bagging and boosting. Procedia Computer Science, 163, pp.54-61. https://doi.org/10.1016/j.procs.2019.12.086

Taylor, W., Shah, S.A., Dashtipour, K., Zahid, A., Abbasi, Q.H. and Imran, M.A., 2020. An intelligent non-invasive real-time human activity recognition system for next-generation healthcare. Sensors, 20(9), p.2653. http://dx.doi.org/10.3390/s20092653

Zhan, A., Mohan, S., Tarolli, C., Schneider, R.B., Adams, J.L., Sharma, S., Elson, M.J., Spear, K.L., Glidden, A.M., Little, M.A. and Terzis, A., 2018. Using smartphones and machine learning to quantify Parkinson disease severity: the mobile Parkinson disease score. JAMA Neurology, 75(7), pp.876-880. https://dx.doi.org/10.1001%2Fjamaneurol.2018.0809

Zhou, X., Liang, W., Kevin, I., Wang, K., Wang, H., Yang, L.T. and Jin, Q., 2020. Deep-learning-enhanced human activity recognition for Internet of healthcare things. IEEE Internet of Things Journal, 7(7), pp.6429-6438. https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/2650589/HAR4IoHT_final_2.pdf?sequence=2
