# Challenge 3
Janhvi Joshi
November 5, 2022
## Challenge Overview

Today’s challenge is to:

1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
2. identify what needs to be done to tidy the current data
3. anticipate the shape of pivoted data
4. pivot the data into tidy format using `pivot_longer`
## Read in data

Read in one (or more) of the following datasets, using the correct R package and command.

- animal_weights.csv ⭐
- eggs_tidy.csv ⭐⭐ or organiceggpoultry.xls ⭐⭐⭐
- australian_marriage\*.xls ⭐⭐⭐
- USA Households\*.xlsx ⭐⭐⭐⭐
- sce_labor_chart_data_public.xlsx 🌟🌟🌟🌟🌟
```
# A tibble: 120 × 6
   month      year large_half_dozen large_dozen extra_large_half_dozen extra_l…¹
   <chr>     <dbl>            <dbl>       <dbl>                  <dbl>     <dbl>
 1 January    2004             126         230                   132       230
 2 February   2004             128.        226.                  134.      230
 3 March      2004             131         225                   137       230
 4 April      2004             131         225                   137       234.
 5 May        2004             131         225                   137       236
 6 June       2004             134.        231.                  137       241
 7 July       2004             134.        234.                  137       241
 8 August     2004             134.        234.                  137       241
 9 September  2004             130.        234.                  136.      241
10 October    2004             128.        234.                  136.      241
# … with 110 more rows, and abbreviated variable name ¹extra_large_dozen
```
```
    month                year      large_half_dozen  large_dozen
 Length:120         Min.   :2004   Min.   :126.0    Min.   :225.0
 Class :character   1st Qu.:2006   1st Qu.:129.4    1st Qu.:233.5
 Mode  :character   Median :2008   Median :174.5    Median :267.5
                    Mean   :2008   Mean   :155.2    Mean   :254.2
                    3rd Qu.:2011   3rd Qu.:174.5    3rd Qu.:268.0
                    Max.   :2013   Max.   :178.0    Max.   :277.5
 extra_large_half_dozen extra_large_dozen
 Min.   :132.0          Min.   :230.0
 1st Qu.:135.8          1st Qu.:241.5
 Median :185.5          Median :285.5
 Mean   :164.2          Mean   :266.8
 3rd Qu.:185.5          3rd Qu.:285.5
 Max.   :188.1          Max.   :290.0
```
### Briefly describe the data

This dataset has 120 rows and 6 columns and describes the prices of different types of eggs from 2004 to 2013. Four types of eggs are described: large_half_dozen, large_dozen, extra_large_half_dozen, and extra_large_dozen. I chose to tidy this dataset because it currently stores the price of two egg sizes (large and extra large), each in both dozen and half-dozen quantities, spread across four separate columns. It would be better to have one row per egg size and quantity. This would make it easier to analyze how prices changed over the years for each egg size and quantity.
## Anticipate the End Result

The first step in pivoting the data is to try to come up with a concrete vision of what the end product *should* look like - that way you will know whether or not your pivoting was successful.

One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.

Suppose you have a dataset with $n$ rows and $k$ variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting $k-3$ variables into a longer format where the $k-3$ variable names will move into the `names_to` variable and the current values in each of those columns will move into the `values_to` variable. Therefore, we would expect $n * (k-3)$ rows in the pivoted dataframe!
### Example: find current and future data dimensions

Let's see if this works with a simple example.
```
# A tibble: 6 × 5
  country  year trade outgoing incoming
  <chr>   <dbl> <chr>    <dbl>    <dbl>
1 Mexico   1980 NAFTA     910.    2026.
2 USA      1990 NAFTA    1546.    2008.
3 France   1980 EU       1082.    1436.
4 Mexico   1990 NAFTA    1575.     432.
5 USA      1980 NAFTA     909.    1283.
6 France   1990 EU        655.    1787.
```

```
[1] 6
[1] 5
[1] 12
[1] 5
```
Our simple example has $n = 6$ rows and $k - 3 = 2$ variables being pivoted, so we expect a new dataframe to have $n * 2 = 12$ rows × $3 + 2 = 5$ columns.
### Challenge: Describe the final dimensions

After pivoting the table, I expect the dataset to have month, year, egg size, egg quantity, and cost as columns. The resulting dataset will be four times longer (120 → 480 rows), while the number of columns will drop from 6 to 5.
```
[1] 120
[1] 6
[1] 480
```
Any additional comments?
## Pivot the Data

Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a “sanity” check.
### Example

```
# A tibble: 12 × 5
   country  year trade trade_direction trade_value
   <chr>   <dbl> <chr> <chr>                 <dbl>
 1 Mexico   1980 NAFTA outgoing               910.
 2 Mexico   1980 NAFTA incoming              2026.
 3 USA      1990 NAFTA outgoing              1546.
 4 USA      1990 NAFTA incoming              2008.
 5 France   1980 EU    outgoing              1082.
 6 France   1980 EU    incoming              1436.
 7 Mexico   1990 NAFTA outgoing              1575.
 8 Mexico   1990 NAFTA incoming               432.
 9 USA      1980 NAFTA outgoing               909.
10 USA      1980 NAFTA incoming              1283.
11 France   1990 EU    outgoing               655.
12 France   1990 EU    incoming              1787.
```
Yes, once it is pivoted long, our resulting data are $12 \times 5$ - exactly what we expected!
### Challenge: Pivot the Chosen Data

As expected, the resulting data is four times longer (120 → 480 rows), and the number of columns has been reduced by one, from 6 to 5. We now have a single record per egg size and quantity in each row, which makes the data easier to understand for future analysis.
```
# A tibble: 480 × 5
   month     year Size        Quantity    Cost
   <chr>    <dbl> <chr>       <chr>      <dbl>
 1 January   2004 large       half_dozen  126
 2 January   2004 large       dozen       230
 3 January   2004 extra_large half_dozen  132
 4 January   2004 extra_large dozen       230
 5 February  2004 large       half_dozen  128.
 6 February  2004 large       dozen       226.
 7 February  2004 extra_large half_dozen  134.
 8 February  2004 extra_large dozen       230
 9 March     2004 large       half_dozen  131
10 March     2004 large       dozen       225
# … with 470 more rows
```
Any additional comments? Yes, another improvement could be to average the cost of each size and quantity combination for each month, for better trend analysis.
---
title: "Challenge 3"
author: "Janhvi Joshi"
description: "Tidy Data: Pivoting"
date: "11/05/2022"
format:
html:
toc: true
code-fold: true
code-copy: true
code-tools: true
categories:
- challenge_3
- animal_weights
- eggs
- australian_marriage
- usa_households
- sce_labor
---
```{r}
#| label: setup
#| warning: false
#| message: false
library(tidyverse)
knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)
```
## Challenge Overview
Today's challenge is to:
1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
2. identify what needs to be done to tidy the current data
3. anticipate the shape of pivoted data
4. pivot the data into tidy format using `pivot_longer`
## Read in data
Read in one (or more) of the following datasets, using the correct R package and command.
- animal_weights.csv ⭐
- eggs_tidy.csv ⭐⭐ or organiceggpoultry.xls ⭐⭐⭐
- australian_marriage\*.xls ⭐⭐⭐
- USA Households\*.xlsx ⭐⭐⭐⭐
- sce_labor_chart_data_public.xlsx 🌟🌟🌟🌟🌟
```{r}
eggs_tidy <- read_csv('_data/eggs_tidy.csv')
eggs_tidy
```
```{r}
summary(eggs_tidy)
```
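The 120 rows are consistent with monthly data over a decade; a quick check (a sketch, assuming `eggs_tidy` as read above) multiplies the distinct months by the distinct years:

```{r}
# 12 distinct months × 10 distinct years = 120 rows
n_distinct(eggs_tidy$month) * n_distinct(eggs_tidy$year)
```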
### Briefly describe the data
This dataset has 120 rows and 6 columns and describes the prices of different types of eggs from 2004 to 2013. Four types of eggs are described: large_half_dozen, large_dozen, extra_large_half_dozen, and extra_large_dozen. I chose to tidy this dataset because it currently stores the price of two egg sizes (large and extra large), each in both dozen and half-dozen quantities, spread across four separate columns. It would be better to have one row per egg size and quantity. This would make it easier to analyze how prices changed over the years for each egg size and quantity.
## Anticipate the End Result
The first step in pivoting the data is to try to come up with a concrete vision of what the end product *should* look like - that way you will know whether or not your pivoting was successful.
One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.
Suppose you have a dataset with $n$ rows and $k$ variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting $k-3$ variables into a longer format where the $k-3$ variable names will move into the `names_to` variable and the current values in each of those columns will move into the `values_to` variable. Therefore, we would expect $n * (k-3)$ rows in the pivoted dataframe!
### Example: find current and future data dimensions
Let's see if this works with a simple example.
```{r}
#| tbl-cap: Example
df <- tibble(
  country = rep(c("Mexico", "USA", "France"), 2),
  year = rep(c(1980, 1990), 3),
  trade = rep(c("NAFTA", "NAFTA", "EU"), 2),
  outgoing = rnorm(6, mean = 1000, sd = 500),
  incoming = rlogis(6, location = 1000, scale = 400)
)
df
#existing rows/cases
nrow(df)
#existing columns/cases
ncol(df)
#expected rows/cases
nrow(df) * (ncol(df)-3)
# expected columns
3 + 2
```
Our simple example has $n = 6$ rows and $k - 3 = 2$ variables being pivoted, so we expect a new dataframe to have $n * 2 = 12$ rows × $3 + 2 = 5$ columns.
### Challenge: Describe the final dimensions
After pivoting the table, I expect the dataset to have month, year, egg size, egg quantity, and cost as columns. The resulting dataset will be four times longer (120 → 480 rows), while the number of columns will drop from 6 to 5.
```{r}
#existing rows/cases
nrow(eggs_tidy)
#existing columns/cases
ncol(eggs_tidy)
#expected rows/cases
nrow(eggs_tidy) * (ncol(eggs_tidy)-2)
```
Any additional comments?
## Pivot the Data
Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a "sanity" check.
### Example
```{r}
#| tbl-cap: Pivoted Example
df <- pivot_longer(df, cols = c(outgoing, incoming),
                   names_to = "trade_direction",
                   values_to = "trade_value")
df
```
Yes, once it is pivoted long, our resulting data are $12 \times 5$ - exactly what we expected!
### Challenge: Pivot the Chosen Data
As expected, the resulting data is four times longer (120 → 480 rows), and the number of columns has been reduced by one, from 6 to 5. We now have a single record per egg size and quantity in each row, which makes the data easier to understand for future analysis.
```{r}
eggs_longer <- eggs_tidy %>%
  pivot_longer(cols = contains("large"),
               names_to = c("Size", "Quantity"),
               # names_sep = "_" would split extra_large_half_dozen into four
               # pieces; a regex pattern keeps the size and quantity intact
               names_pattern = "(.*large)_(.*dozen)",
               values_to = "Cost")
eggs_longer
```
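As with the example above, the pivoted dimensions can be verified programmatically; here is a quick sanity check (a sketch, assuming the `eggs_tidy` and `eggs_longer` objects from the chunks above):

```{r}
# sanity check: 4 value columns pivoted -> 4x the rows; 2 id vars + 3 new = 5
stopifnot(nrow(eggs_longer) == nrow(eggs_tidy) * 4,
          ncol(eggs_longer) == 5)
```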
Any additional comments?
Yes, another improvement could be to average the cost of each size and quantity combination for each month, for better trend analysis.
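As a sketch of that idea (assuming the `eggs_longer` tibble from the pivot above; `avg_cost` is a name chosen here for illustration), the averaging could be done with `dplyr`:

```{r}
# hypothetical follow-up: average cost per size/quantity combination by month
avg_cost <- eggs_longer %>%
  group_by(month, Size, Quantity) %>%
  summarise(avg_cost = mean(Cost), .groups = "drop")
avg_cost
```

Each row of `avg_cost` would then give the mean price of one size/quantity combination in a given calendar month, averaged across 2004-2013.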