Human Activity Recognition Case Study

The objective is to write a formal analysis report that addresses the objectives set forth in this final case analysis document. A sample template for the final report is provided; it contains the minimum requirements for the report, including the following sections: Introduction, Analysis and Results, Methodology, Limitations, and Conclusion.


You are required to follow the report template and explain your findings, including the final models run for the task.

Case Analysis 4: Human Activity Recognition Machine Learning

Introduction

We all wear smart devices capable of countless feats, such as tracking our biometrics, giving us directions, and reminding us to be active. How does the technology recognize that we are inactive, for example sitting or lying down? Smart wearable devices such as smartwatches and phones contain two key types of sensors capable of measuring our orientation and motion relative to the ground. Airplanes have been equipped with such technology since their inception. The two types of sensors are the accelerometer and the gyroscope: the accelerometer measures triaxial acceleration (and the estimated body acceleration), while the gyroscope measures triaxial angular velocity. The following picture illustrates the aforementioned axes and measurements.

[Figure: smartphone accelerometer and gyroscope axes; source: Micromachines, "Activity Recognition Using Fusion of Low-Cost Sensors on a Smartphone for Mobile Navigation Application"]

These smart devices constantly measure these metrics and send the information to remote cloud servers. That data is then pipelined through machine learning algorithms to identify the type of motion a person may be performing. This final case study is the analysis and application of classification algorithms on a large dataset on human activity collected in a lab setting. You are given more than 500 dimensions of data collected and derived from sensors while people performed six different kinds of activities: WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, and LAYING.

[Figure: illustration of the six activities; source: "Human Activity Recognition Using Smartphones Sensor Data" by Xiaoshan Sun, Medium]

You may watch the video of the experiment here:

Data Details:

The experiments were carried out on volunteers within an age bracket of 19-48 years. Each person performed six activities (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) while wearing a smartphone (Samsung Galaxy S II) on the waist. Using the phone's embedded accelerometer and gyroscope, 3-axial linear acceleration and 3-axial angular velocity were captured at a constant rate of 50 Hz. The experiments were video-recorded so the data could be labeled manually.

There are more than 10,000 rows in the data, where each row represents one instance of one of the six activities. The activities are coded 1 through 6 in the y column and named in the Activity column of the dataset. There are 560+ predictor columns derived from the sensors, each measuring some metric or characteristic of the activity being performed. A minimal loading sketch is given below.
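For orientation only, here is a minimal sketch (in Python with pandas, which the case does not prescribe) of loading the feature matrix and labels and inspecting their shapes. The file names X_train.csv and y_train.csv and the Activity column name are assumptions; adjust them to the files actually provided with the case.

    # Minimal loading/inspection sketch. File names and column names are
    # assumptions; adjust them to match the files supplied with the case.
    import pandas as pd

    X = pd.read_csv("X_train.csv")              # 560+ sensor-derived predictors
    y = pd.read_csv("y_train.csv")["Activity"]  # activity label for each row

    print(X.shape)           # expect roughly (10000+, 560+)
    print(y.value_counts())  # number of rows per activity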

The main objective is to capture the hidden signal in the feature space (predictors) by using machine learning classification algorithms to map the predictors to the activity column. Make sure to reduce the dimensionality first, as 560+ predictors are too many, and then try various classification algorithms on the reduced dimensions; a sketch of this workflow follows.
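As one possible starting point, not a prescribed solution, the sketch below shows this workflow in scikit-learn: standardize the predictors, reduce them with PCA, and cross-validate a classifier, using a Pipeline so the dimensionality reduction is fit only on the training folds. The choices of 50 principal components and logistic regression are illustrative assumptions; the case expects you to compare several classifiers and report the final models you settle on.

    # Dimension-reduction + classification sketch (assumed choices:
    # StandardScaler, PCA with 50 components, logistic regression).
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    pipe = Pipeline([
        ("scale", StandardScaler()),    # put all sensor features on one scale
        ("pca", PCA(n_components=50)),  # compress the 560+ predictors
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    # 5-fold cross-validated accuracy on the six-activity task
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(scores.mean(), scores.std())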

Report your findings in formal report form.
