Tidy Data: Pivoting

Author: Hunter Major
Published: June 8, 2023
Categories: challenge_3, animal_weights

Code
library(tidyverse) # also loads dplyr, readr, and tidyr
library(readxl)    # for reading Excel files, if needed

knitr::opts_chunk$set(echo = TRUE, warning = FALSE, message = FALSE)

Challenge Overview

Today’s challenge is to:

  1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables)
  2. identify what needs to be done to tidy the current data
  3. anticipate the shape of pivoted data
  4. pivot the data into tidy format using pivot_longer

Read in data

Read in one (or more) of the following datasets, using the correct R package and command.

  • For challenge 3, I’ll be reading in data from animal_weight.csv ⭐
Code
# read in the dataset and store it in an object called "weights"
weights <- read.csv("_data/animal_weight.csv")

I stored the imported data in an object called “weights” for convenience.
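
As an aside, read.csv() converts characters such as spaces and hyphens in the original column headers into dots, which is why the columns appear below with names like Cattle...dairy. If the original headers were wanted as-is, a minimal alternative (the weights_raw name here is just for illustration) would be readr::read_csv(), which leaves column names unmodified:

Code
# read_csv() keeps the original column names instead of converting
# them to syntactic names the way read.csv() does
weights_raw <- readr::read_csv("_data/animal_weight.csv")
names(weights_raw)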

Code
summary(weights)
  IPCC.Area         Cattle...dairy  Cattle...non.dairy   Buffaloes    
 Length:9           Min.   :275.0   Min.   :110        Min.   :295.0  
 Class :character   1st Qu.:275.0   1st Qu.:173        1st Qu.:380.0  
 Mode  :character   Median :400.0   Median :330        Median :380.0  
                    Mean   :425.4   Mean   :298        Mean   :370.6  
                    3rd Qu.:550.0   3rd Qu.:391        3rd Qu.:380.0  
                    Max.   :604.0   Max.   :420        Max.   :380.0  
 Swine...market  Swine...breeding Chicken...Broilers Chicken...Layers
 Min.   :28.00   Min.   : 28.0    Min.   :0.9        Min.   :1.8     
 1st Qu.:28.00   1st Qu.: 28.0    1st Qu.:0.9        1st Qu.:1.8     
 Median :45.00   Median :180.0    Median :0.9        Median :1.8     
 Mean   :39.22   Mean   :116.4    Mean   :0.9        Mean   :1.8     
 3rd Qu.:50.00   3rd Qu.:180.0    3rd Qu.:0.9        3rd Qu.:1.8     
 Max.   :50.00   Max.   :198.0    Max.   :0.9        Max.   :1.8     
     Ducks        Turkeys        Sheep           Goats           Horses     
 Min.   :2.7   Min.   :6.8   Min.   :28.00   Min.   :30.00   Min.   :238.0  
 1st Qu.:2.7   1st Qu.:6.8   1st Qu.:28.00   1st Qu.:30.00   1st Qu.:238.0  
 Median :2.7   Median :6.8   Median :48.50   Median :38.50   Median :377.0  
 Mean   :2.7   Mean   :6.8   Mean   :39.39   Mean   :34.72   Mean   :315.2  
 3rd Qu.:2.7   3rd Qu.:6.8   3rd Qu.:48.50   3rd Qu.:38.50   3rd Qu.:377.0  
 Max.   :2.7   Max.   :6.8   Max.   :48.50   Max.   :38.50   Max.   :377.0  
     Asses         Mules         Camels        Llamas   
 Min.   :130   Min.   :130   Min.   :217   Min.   :217  
 1st Qu.:130   1st Qu.:130   1st Qu.:217   1st Qu.:217  
 Median :130   Median :130   Median :217   Median :217  
 Mean   :130   Mean   :130   Mean   :217   Mean   :217  
 3rd Qu.:130   3rd Qu.:130   3rd Qu.:217   3rd Qu.:217  
 Max.   :130   Max.   :130   Max.   :217   Max.   :217  
Code
dim(weights)
[1]  9 17

Briefly describe the data

This dataset provides information about animal weights by type of animal and the region of the world in which they are found. From running the summary() and dim() functions, I determined that the dataset contains 9 rows and 17 columns. The IPCC Area column contains character values listing global regions, while all of the other columns, organized by animal type, contain numeric values: the weights of the animals. IPCC stands for the Intergovernmental Panel on Climate Change, a panel within the United Nations, so this dataset was likely sourced from research undertaken by that body.

From the summary() output we can also see that some animal types have more than one column: cattle (dairy and non-dairy), swine (market and breeding), and chicken (broilers and layers). Because each animal type currently has its own column(s), each animal type is treated as its own variable. In other words, the overarching category of animal type is not currently a variable, and neither is animal weight. To tidy the data, I’ll pivot it so that animal weight becomes its own variable that can be sorted alongside animal type and IPCC Area/global region.
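
For reference, the full set of column names (as cleaned up by read.csv() and shown in the summary above) can be listed directly:

Code
# one identifier column plus 16 animal-type columns
names(weights)
 [1] "IPCC.Area"          "Cattle...dairy"     "Cattle...non.dairy"
 [4] "Buffaloes"          "Swine...market"     "Swine...breeding"  
 [7] "Chicken...Broilers" "Chicken...Layers"   "Ducks"             
[10] "Turkeys"            "Sheep"              "Goats"             
[13] "Horses"             "Asses"              "Mules"             
[16] "Camels"             "Llamas"            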

Anticipate the End Result

The first step in pivoting the data is to come up with a concrete vision of what the end product should look like; that way, you will know whether or not your pivoting was successful.

One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.

Suppose you have a dataset with \(n\) rows and \(k\) variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting \(k-3\) variables into a longer format where the \(k-3\) variable names will move into the names_to variable and the current values in each of those columns will move into the values_to variable. Therefore, we would expect \(n \times (k-3)\) rows in the pivoted dataframe!

Example: find current and future data dimensions

Let’s see if this works with a simple example.

Code
# set a seed so the random draws below are reproducible
set.seed(123)

df <- tibble(country = rep(c("Mexico", "USA", "France"), 2),
             year = rep(c(1980, 1990), 3),
             trade = rep(c("NAFTA", "NAFTA", "EU"), 2),
             outgoing = rnorm(6, mean = 1000, sd = 500),
             incoming = rlogis(6, location = 1000,
                               scale = 400))
df

Example

Code
# existing rows/cases
nrow(df)
[1] 6
Code
# existing columns/variables
ncol(df)
[1] 5
Code
#expected rows/cases
nrow(df) * (ncol(df)-3)
[1] 12
Code
# expected columns 
3 + 2
[1] 5

Our simple example has \(n = 6\) rows and \(k - 3 = 2\) variables being pivoted, so we expect the new dataframe to have \(n \times 2 = 12\) rows and \(3 + 2 = 5\) columns.

Challenge: Describe the final dimensions

As stated previously, the original version of the dataset has 9 rows and 17 columns/variables. IPCC Area will remain a column/variable, whereas the other 16 variables will be pivoted into two new columns/variables: animal type (character values) and weights (numeric values). So \(n = 9\) rows and \(k = 17\) variables, and I’ll be pivoting \(k - 1 = 16\) variables. The tidy version of the dataset should therefore have \(9 \times 16 = 144\) rows and \(1 + 2 = 3\) columns/variables: the original IPCC Area column plus the two new ones.

Code
# original rows
nrow(weights)
[1] 9
Code
# original columns
ncol(weights)
[1] 17
Code
# expected rows
nrow(weights) * (ncol(weights)-1)
[1] 144
Code
# expected columns
1 + 2
[1] 3

The calculations confirm that the pivoted dataset should have 144 rows and 3 columns, as expected.

Pivot the Data

Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a “sanity” check.

Example

Code
df <- pivot_longer(df, cols = c(outgoing, incoming),
                   names_to = "trade_direction",
                   values_to = "trade_value")
df

Pivoted Example
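
Checking the dimensions directly as a quick sanity check:

Code
# the pivoted example should have 12 rows and 5 columns
dim(df)
[1] 12  5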

Yes, once it is pivoted long, our resulting data are \(12 \times 5\): exactly what we expected!

Challenge: Pivot the Chosen Data

Document your work here. What will a new “case” be once you have pivoted the data? How does it meet requirements for tidy data?

Code
# pivot the 16 animal-type columns into a long format, stored as weights_long
weights_long <- pivot_longer(weights, cols = Cattle...dairy:Llamas,
                             names_to = "animal_type",
                             values_to = "weights")
weights_long
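
Checking the pivoted data against the expected dimensions:

Code
# sanity check: we expect 144 rows and 3 columns
dim(weights_long)
[1] 144   3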

The final dataset has 3 columns and 144 rows, which aligns with my original expectations. Before, a row/case listed the IPCC Area along with the weights of all 16 animal types included in the study. Now a row/case gives the weight of one animal type in one IPCC Area/global region. This dataset now better adheres to Wickham’s conditions for tidy data. First, each variable has a distinct column: weight values, for example, are no longer spread across multiple columns. Second, each observation has its own row: reading across a single row, we can see, for example, that in the Indian Subcontinent, sheep have a weight value of 28.0. Lastly, each value has its own cell, which I believe was true of both the original version and the tidy version I created.
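
As a final illustration of why the tidy layout is useful, a hypothetical lookup (assuming the region is stored exactly as “Indian Subcontinent”) retrieves the sheep observation mentioned above as a single row:

Code
# with tidy data, one observation (region x animal type) is one row
filter(weights_long,
       IPCC.Area == "Indian Subcontinent",
       animal_type == "Sheep")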