---
title: "Challenge 3"
author: "Prasann Desai"
description: "Tidy Data: Pivoting"
date: "6/30/2023"
format:
html:
toc: true
code-fold: true
code-copy: true
code-tools: true
categories:
- challenge_3
- animal_weights
- Prasann Desai
---
```{r}
#| label: setup
#| warning: false
#| message: false
library(tidyverse)
knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)
```
## Challenge Overview
Today's challenge is to:
1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
2. identify what needs to be done to tidy the current data
3. anticipate the shape of pivoted data
4. pivot the data into tidy format using `pivot_longer`
## Read in data
Read in one (or more) of the following datasets, using the correct R package and command.
- animal_weights.csv ⭐
- eggs_tidy.csv ⭐⭐ or organiceggpoultry.xls ⭐⭐⭐
- australian_marriage\*.xls ⭐⭐⭐
- USA Households\*.xlsx ⭐⭐⭐⭐
- sce_labor_chart_data_public.xlsx 🌟🌟🌟🌟🌟
```{r}
# Read in the animal weights dataset from a csv file
animal_weights <- read_csv("_data/animal_weight.csv")
```
```{r}
animal_weights
```
### Briefly describe the data
Describe the data, and be sure to comment on why you are planning to pivot it to make it "tidy"
Response:
From the output above, we can see that although the dataset has 17 columns, it really contains only 3 distinct features: IPCC Area (a dimension), animal category (a dimension), and weight (a measure). Judging by the file name and the underlying values, we can make a fair guess that the dataset records typical animal weights for different regions of the world.
We want to make it "tidy" because it is undesirable to have a separate weight column for each animal category. If someone wishes to add a new animal category in the future, the dataset would need another column, and calculating aggregate measures across categories requires listing every column name explicitly, which is not a scalable design.
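To illustrate the aggregation point, here is a minimal sketch (assuming only the tibble loaded above; `avg_weight` is just an illustrative name) comparing the average weight per IPCC Area computed in the current wide layout versus a long layout:

```{r}
# Wide layout: every animal column has to be swept up explicitly per row
animal_weights %>%
  rowwise() %>%
  mutate(avg_weight = mean(c_across(!`IPCC Area`))) %>%
  ungroup() %>%
  select(`IPCC Area`, avg_weight)

# Long layout (pivoted on the fly just for this comparison):
# a single grouped summary covers every animal category
animal_weights %>%
  pivot_longer(!`IPCC Area`, names_to = "animal_category", values_to = "weight") %>%
  group_by(`IPCC Area`) %>%
  summarise(avg_weight = mean(weight))
```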
## Anticipate the End Result
The first step in pivoting the data is to try to come up with a concrete vision of what the end product *should* look like - that way you will know whether or not your pivoting was successful.
One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.
Suppose you have a dataset with $n$ rows and $k$ variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting $k-3$ variables into a longer format where the $k-3$ variable names will move into the `names_to` variable and the current values in each of those columns will move into the `values_to` variable. Therefore, we would expect $n * (k-3)$ rows in the pivoted dataframe!
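More generally, if $j$ of the $k$ variables identify a case ($j$ is just shorthand introduced here, not part of the challenge template), the pivoted data should have $n \times (k - j)$ rows and $j + 2$ columns. For the animal weights data read in above, $n = 9$, $k = 17$, and a single column (`IPCC Area`) identifies a case, so we would expect $9 \times (17 - 1) = 144$ rows and $1 + 2 = 3$ columns.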
### Example: find current and future data dimensions
Let's see if this works with a simple example.
```{r}
#| tbl-cap: Example
df <- tibble(country = rep(c("Mexico", "USA", "France"), 2),
             year = rep(c(1980, 1990), 3),
             trade = rep(c("NAFTA", "NAFTA", "EU"), 2),
             outgoing = rnorm(6, mean = 1000, sd = 500),
             incoming = rlogis(6, location = 1000, scale = 400))
df

# existing rows/cases
nrow(df)
# existing columns/variables
ncol(df)
# expected rows/cases after pivoting
nrow(df) * (ncol(df) - 3)
# expected columns after pivoting
3 + 2
```
Our simple example has $n = 6$ rows and $k - 3 = 2$ variables being pivoted, so we expect the new dataframe to have $n \times 2 = 12$ rows and $3 + 2 = 5$ columns.
### Challenge: Describe the final dimensions
Document your work here.
Response: In the pivoted data, I expect 16 rows for each IPCC Area (one per animal category). Since there are 9 areas, the final dimensions of the pivoted data should be 144 rows and 3 columns (`IPCC Area`, `animal_category`, `weight`).
```{r}
# existing rows/cases
nrow(animal_weights)
# existing columns/variables
ncol(animal_weights)
# expected rows/cases after pivoting (only `IPCC Area` identifies a case)
nrow(animal_weights) * (ncol(animal_weights) - 1)
# expected columns after pivoting
1 + 2
```
Any additional comments?
## Pivot the Data
Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a "sanity" check.
### Example
```{r}
#| tbl-cap: Pivoted Example
df <- pivot_longer(df, cols = c(outgoing, incoming),
                   names_to = "trade_direction",
                   values_to = "trade_value")
df
```
Yes, once it is pivoted long, our resulting data are $12 \times 5$ - exactly what we expected!
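As an extra guard, we can turn that comparison into code (a small sketch; it simply errors if the pivoted dimensions ever drift from the calculation above):

```{r}
# Programmatic sanity check: the pivoted example should be 12 x 5
stopifnot(nrow(df) == 12, ncol(df) == 5)
dim(df)
```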
### Challenge: Pivot the Chosen Data
Document your work here. What will a new "case" be once you have pivoted the data? How does it meet requirements for tidy data?
```{r}
# Pivot every animal column (everything except `IPCC Area`) into long format
animal_weights_pivoted <- pivot_longer(animal_weights,
                                       cols = !`IPCC Area`,
                                       names_to = "animal_category",
                                       values_to = "weight")
```
```{r}
# Viewing the pivoted dataset
animal_weights_pivoted
```
Each case in the pivoted dataset is a unique combination of IPCC Area and animal category. The dataframe is tidy because each variable (`IPCC Area`, `animal_category`, `weight`) is its own column, each observation is its own row, there is no duplicated data, and no information was lost or added during the pivot.
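One way to back up the "no information lost" claim is a round-trip check (a quick sketch; `animal_weights_roundtrip` is just a name introduced here): pivoting the long data back to wide should reproduce the original tibble.

```{r}
# Round-trip check: widen the pivoted data and compare against the original
animal_weights_roundtrip <- animal_weights_pivoted %>%
  pivot_wider(names_from = animal_category, values_from = weight)
all.equal(animal_weights, animal_weights_roundtrip)
```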
Any additional comments?