---
title: "Challenge 3"
author: "Animesh Sengupta"
description: "Tidy Data: Pivoting"
date: "08/17/2022"
format:
html:
toc: true
code-fold: true
code-copy: true
code-tools: true
categories:
- challenge_3
- Animesh Sengupta
- us_hh
---
```{r}
#| label: setup
#| warning: false
#| message: false
library(tidyverse)
library(stringr)
library(readxl)
knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)
```
## Challenge Overview
Today's challenge is to:
1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
2. identify what needs to be done to tidy the current data
3. anticipate the shape of pivoted data
4. pivot the data into tidy format using `pivot_longer`
## Challenge
For this challenge we choose the USA Households data. The following sections show how the data is loaded, processed, and pivoted into a tidier format.
## Processing
```{r}
#| label: data-loading
US_household_data <- read_excel(
  "../posts/_data/USA Households by Total Money Income, Race, and Hispanic Origin of Householder 1967 to 2019.xlsx",
  skip = 5, n_max = 353,
  col_names = c("Year", "Number", "Total",
                "pd_<15000", "pd_15000-24999", "pd_25000-34999",
                "pd_35000-49999", "pd_50000-74999", "pd_75000-99999",
                "pd_100000-149999", "pd_150000-199999", "pd_>200000",
                "median_income_estimate", "median_income_moe",
                "mean_income_estimate", "mean_income_moe"))
```
### Data Preprocessing
```{r}
#| label: data-processing
US_processed_data <- US_household_data %>%
  rowwise() %>% # ensure the following operation runs row-wise
  mutate(Race = case_when(
    is.na(Number) ~ Year # header rows carry the race label in the Year column
  )) %>%
  ungroup() %>% # stop row-wise operation
  fill(Race, .direction = "down") %>% # propagate each race label to its data rows
  filter(!is.na(Number)) %>% # drop the header rows themselves
  rowwise() %>%
  mutate(
    Year = strsplit(Year, " ")[[1]][1], # keep only the year, dropping footnote markers
    Race = ifelse(grepl("[0-9]", Race, perl = TRUE),
                  strsplit(Race, " \\s*(?=[^ ]+$)", perl = TRUE)[[1]][1], # strip trailing year range
                  Race)
  )
#head(US_processed_data,10)
```
### Briefly describe the data
The US household data provides insight into the income statistics of households in the USA. The data set contains the following features: `r colnames(US_processed_data)`. Of these columns, the percent-distribution income brackets (the `pd_` columns) can be pivoted longer. This transformation can be done using the `pivot_longer()` method.
## Anticipate the End Result
The first step in pivoting the data is to try to come up with a concrete vision of what the end product *should* look like - that way you will know whether or not your pivoting was successful.
One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.
Suppose you have a dataset with $n$ rows and $k$ variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting $k-3$ variables into a longer format where the $k-3$ variable names will move into the `names_to` variable and the current values in each of those columns will move into the `values_to` variable. Therefore, we would expect $n * (k-3)$ rows in the pivoted dataframe!
### Example: find current and future data dimensions
Let's see if this works with a simple example.
```{r}
#| tbl-cap: Example
df<-tibble(country = rep(c("Mexico", "USA", "France"),2),
year = rep(c(1980,1990), 3),
trade = rep(c("NAFTA", "NAFTA", "EU"),2),
outgoing = rnorm(6, mean=1000, sd=500),
incoming = rlogis(6, location=1000,
scale = 400))
df
#existing rows/cases
nrow(df)
#existing columns/cases
ncol(df)
#expected rows/cases
nrow(df) * (ncol(df)-3)
# expected columns
3 + 2
```
Our simple example has $n = 6$ rows and $k - 3 = 2$ variables being pivoted, so we expect a new dataframe to have $n * 2 = 12$ rows x $3 + 2 = 5$ columns.
### Challenge: Describe the final dimensions
We apply the same arithmetic to the US household data: $n = 340$ rows, $k = 17$ columns, and 9 `pd_` columns to pivot.
```{r}
#| label: data-dimensions
# existing rows and columns
nrow(US_processed_data)
ncol(US_processed_data)
# pivoting 9 columns into 2 (a names column and a values column)
expected_columns <- ncol(US_processed_data) - 9 + 2
expected_rows <- nrow(US_processed_data) * 9
expected_columns
expected_rows
```
Any additional comments?
Our dataset initially has 17 columns; after pivoting the nine percent-distribution columns into a names column and a values column, it will come down to 10 columns and $340 \times 9 = 3060$ rows.
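As a quick check of this arithmetic, we can pivot a placeholder frame of the same shape. The column names below are made up; only the dimensions matter.

```{r}
#| label: toy-dimension-check
library(tidyr)

# Toy frame with the same shape as the processed data:
# 340 rows, 8 identifier columns, and 9 "pd_" columns to pivot.
toy <- as.data.frame(matrix(1, nrow = 340, ncol = 17))
names(toy) <- c(paste0("id_", 1:8), paste0("pd_", 1:9))

toy_long <- pivot_longer(toy, cols = starts_with("pd_"),
                         names_to = "income_range",
                         values_to = "percent_distribution")
dim(toy_long) # 3060 rows, 10 columns, matching the calculation above
```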
## Pivot the Data
Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a "sanity" check.
### Example
```{r}
#| tbl-cap: Pivoted Example
df <- pivot_longer(df, cols = c(outgoing, incoming),
                   names_to = "trade_direction",
                   values_to = "trade_value")
df
```
Yes, once it is pivoted long, our resulting data are $12 \times 5$ - exactly what we expected!
### Challenge: Pivot the Chosen Data
The pivoted data will have two new columns:
1. `income_range`: the income bracket, taken from the original column names (with the `pd_` prefix stripped)
2. `percent distribution`: the percent of households falling in that income range
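Before applying this to the full data, the effect of `names_prefix` can be sketched on a one-row mock (the values below are copied from the 2019 "ALL RACES" row; the frame is otherwise made up):

```{r}
#| label: names-prefix-sketch
library(tidyr)

# One-row mock of the processed data, keeping only two pd_ columns.
mock <- data.frame(Year = "2019", Race = "ALL RACES",
                   `pd_<15000` = 9.1, `pd_>200000` = 10.3,
                   check.names = FALSE)

pivot_longer(mock, cols = starts_with("pd_"),
             names_to = "income_range",
             values_to = "percent distribution",
             names_prefix = "pd_")
# income_range comes out as "<15000" and ">200000": the prefix is stripped
```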
```{r}
#| label: pivoting
US_pivot_data<-US_processed_data%>%
pivot_longer(
cols = starts_with("pd"),
names_to = "income_range",
values_to = "percent distribution",
names_prefix="pd_"
)
head(US_pivot_data,10)
dim(US_pivot_data)
```
Our pivoted data has been properly processed, and its dimensions match the expected row and column counts. The challenge is successfully completed.