Challenge 3

challenge_3
Tidy Data: Pivoting
Author

Will Munson

Published

August 17, 2022

Code
library(tidyverse)

knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)

Challenge Overview

Today’s challenge is to:

  1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
  2. identify what needs to be done to tidy the current data
  3. anticipate the shape of pivoted data
  4. pivot the data into tidy format using pivot_longer

Read in data

Read in one (or more) of the following datasets, using the correct R package and command.

  • animal_weights.csv ⭐
  • eggs_tidy.csv ⭐⭐ or organicpoultry.xls ⭐⭐⭐
  • australian_marriage*.xlsx ⭐⭐⭐
  • USA Households*.xlsx ⭐⭐⭐⭐
  • sce_labor_chart_data_public.csv 🌟🌟🌟🌟🌟
Code
animal_weight<-read_csv("_data/animal_weight.csv",
                        show_col_types = FALSE)

Briefly describe the data

Describe the data, and be sure to comment on why you are planning to pivot it to make it “tidy”

Okay, so in this data each value is identified by both its row AND its column. The observed variables (animal type and weight) don't get columns of their own; instead, each animal type appears as a column header, and the weights sit underneath as values. What we need to do is reorganize this dataset so that each variable gets a single column. There should be three columns instead of 17.
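To make that wide layout concrete, here is a tiny hand-built mock (not the real CSV; the Indian Subcontinent values are taken from the table further below, and only two of the sixteen animal columns are shown) illustrating how the animal types live in the column headers rather than in a variable:

```r
library(tidyverse)

# Mock of the wide layout (not the real file): one identifier column,
# then one column per animal type -- variable names hiding in the headers.
wide <- tibble(
  `IPCC Area`      = "Indian Subcontinent",
  `Cattle - dairy` = 275,
  Buffaloes        = 295
)

# The "Animal Type" variable only exists here as column names
names(wide)
```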

Anticipate the End Result

The first step in pivoting the data is to try to come up with a concrete vision of what the end product should look like - that way you will know whether or not your pivoting was successful.

One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.

Suppose you have a dataset with \(n\) rows and \(k\) variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting \(k-3\) variables into a longer format where the \(k-3\) variable names will move into the names_to variable and the current values in each of those columns will move into the values_to variable. Therefore, we would expect \(n * (k-3)\) rows in the pivoted dataframe!

Example: find current and future data dimensions

Let's see if this works with a simple example.

Code
df<-tibble(country = rep(c("Mexico", "USA", "France"),2),
           year = rep(c(1980,1990), 3), 
           trade = rep(c("NAFTA", "NAFTA", "EU"),2),
           outgoing = rnorm(6, mean=1000, sd=500),
           incoming = rlogis(6, location=1000, 
                             scale = 400))
df
# A tibble: 6 × 5
  country  year trade outgoing incoming
  <chr>   <dbl> <chr>    <dbl>    <dbl>
1 Mexico   1980 NAFTA    1677.     466.
2 USA      1990 NAFTA    1291.    1340.
3 France   1980 EU        544.    3441.
4 Mexico   1990 NAFTA     659.    1271.
5 USA      1980 NAFTA    1081.     505.
6 France   1990 EU        634.    1045.
Code
#existing rows/cases
nrow(df)
[1] 6
Code
#existing columns/variables
ncol(df)
[1] 5
Code
#expected rows/cases
nrow(df) * (ncol(df)-3)
[1] 12
Code
# expected columns 
3 + 2
[1] 5

Our simple example has \(n = 6\) rows and \(k - 3 = 2\) variables being pivoted, so we expect the new dataframe to have \(n * 2 = 12\) rows x \(3 + 2 = 5\) columns.

Challenge: Describe the final dimensions

Document your work here.

Code
nrow(animal_weight)
[1] 9
Code
ncol(animal_weight)
[1] 17
Code
#expected rows after pivoting
nrow(animal_weight)*(ncol(animal_weight)-1)
[1] 144

Any additional comments? There are way too many columns in the original dataset. Let's change this so we only get three of them.

Pivot the Data

Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a “sanity” check.

Example

Code
df<-pivot_longer(df, cols = c(outgoing, incoming),
                 names_to="trade_direction",
                 values_to = "trade_value")
df
# A tibble: 12 × 5
   country  year trade trade_direction trade_value
   <chr>   <dbl> <chr> <chr>                 <dbl>
 1 Mexico   1980 NAFTA outgoing              1677.
 2 Mexico   1980 NAFTA incoming               466.
 3 USA      1990 NAFTA outgoing              1291.
 4 USA      1990 NAFTA incoming              1340.
 5 France   1980 EU    outgoing               544.
 6 France   1980 EU    incoming              3441.
 7 Mexico   1990 NAFTA outgoing               659.
 8 Mexico   1990 NAFTA incoming              1271.
 9 USA      1980 NAFTA outgoing              1081.
10 USA      1980 NAFTA incoming               505.
11 France   1990 EU    outgoing               634.
12 France   1990 EU    incoming              1045.

Yes, once it is pivoted long, our resulting data are \(12x5\) - exactly what we expected!
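The eyeball check can also be written as assertions. Here is a sketch that rebuilds the example with a fixed seed (the seed value is arbitrary, chosen only so the sketch is reproducible) and verifies the predicted dimensions programmatically:

```r
library(tidyverse)

set.seed(1)  # arbitrary seed, just for reproducibility
df <- tibble(country  = rep(c("Mexico", "USA", "France"), 2),
             year     = rep(c(1980, 1990), 3),
             trade    = rep(c("NAFTA", "NAFTA", "EU"), 2),
             outgoing = rnorm(6, mean = 1000, sd = 500),
             incoming = rlogis(6, location = 1000, scale = 400))

long <- pivot_longer(df, cols = c(outgoing, incoming),
                     names_to = "trade_direction",
                     values_to = "trade_value")

# expected: n * (k - 3) rows and 3 + 2 columns
stopifnot(nrow(long) == nrow(df) * (ncol(df) - 3),
          ncol(long) == 3 + 2)
```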

Challenge: Pivot the Chosen Data

Document your work here. What will a new “case” be once you have pivoted the data? How does it meet requirements for tidy data?

Code
animal_weight <- pivot_longer(animal_weight, cols = c(`Cattle - dairy`, `Cattle - non-dairy`, Buffaloes, `Swine - market`, `Swine - breeding`, `Chicken - Broilers`, `Chicken - Layers`, Ducks, Turkeys, Sheep, Goats, Horses, Asses, Mules, Camels, Llamas),
                              names_to = "Animal Type",
                              values_to = "Weight in lb")

animal_weight
# A tibble: 144 × 3
   `IPCC Area`         `Animal Type`      `Weight in lb`
   <chr>               <chr>                       <dbl>
 1 Indian Subcontinent Cattle - dairy              275  
 2 Indian Subcontinent Cattle - non-dairy          110  
 3 Indian Subcontinent Buffaloes                   295  
 4 Indian Subcontinent Swine - market               28  
 5 Indian Subcontinent Swine - breeding             28  
 6 Indian Subcontinent Chicken - Broilers            0.9
 7 Indian Subcontinent Chicken - Layers              1.8
 8 Indian Subcontinent Ducks                         2.7
 9 Indian Subcontinent Turkeys                       6.8
10 Indian Subcontinent Sheep                        28  
# … with 134 more rows
# ℹ Use `print(n = ...)` to see more rows
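As a final sanity check on the arithmetic, the same dimension test can be run on a stand-in tibble with the animal data's shape (9 rows by 17 columns; the column names here are placeholders, since the real CSV isn't reloaded in this sketch):

```r
library(tidyverse)

# Stand-in with animal_weight's shape: 9 rows x (1 id column + 16 animal columns).
# Column names are placeholders, not the real animal types.
dummy <- as_tibble(matrix(runif(9 * 16), nrow = 9,
                          dimnames = list(NULL, paste0("animal_", 1:16))))
dummy <- mutate(dummy, `IPCC Area` = paste("Region", 1:9), .before = 1)

dummy_long <- pivot_longer(dummy, cols = -`IPCC Area`,
                           names_to = "Animal Type",
                           values_to = "Weight")

# 9 * 16 = 144 rows and 3 columns, matching the calculation above
stopifnot(nrow(dummy_long) == 144, ncol(dummy_long) == 3)
```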

Any additional comments?

This was a very valuable lesson to learn when it comes to working with data in R. While it may seem more aesthetically pleasing to look at a dataset where the variables are spread across both the first row and the first column, that layout is not the most convenient shape for analyzing the data.