DACSS 601: Data Science Fundamentals - FALL 2022



Challenge 3

Categories: challenge_3, Aleacia Messiah, australian_marriage, tidyverse, readxl, summarytools
Author: Aleacia Messiah

Published: September 26, 2022

Code
library(tidyverse)
library(readxl)
library(summarytools)

knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)

Challenge Overview

Today’s challenge is to:

  1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
  2. identify what needs to be done to tidy the current data
  3. anticipate the shape of pivoted data
  4. pivot the data into tidy format using pivot_longer

Read in data

Read in one (or more) of the following datasets, using the correct R package and command.

  • animal_weights.csv ⭐
  • eggs_tidy.csv ⭐⭐ or organiceggpoultry.xls ⭐⭐⭐
  • australian_marriage*.xls ⭐⭐⭐
  • USA Households*.xlsx ⭐⭐⭐⭐
  • sce_labor_chart_data_public.xlsx 🌟🌟🌟🌟🌟
Code
# read in the Table 2 sheet of the marriage survey workbook, skipping the 7 header rows
table2 <- read_excel(
  "_data/australian_marriage_law_postal_survey_2017_-_response_final.xls",
  sheet = "Table 2",
  col_names = c("Divisions",
                "Response_Clear_Yes", "Response_Clear_Yes_Percent",
                "Response_Clear_No", "Response_Clear_No_Percent",
                "Response_Clear_Total", "Response_Clear_Total_Percent",
                "delete",
                "Eligible_Response_Clear", "Eligible_Response_Clear_Percent",
                "Eligible_Response_Not_Clear", "Eligible_Response_Not_Clear_Percent",
                "Eligible_Response_Non_Responding", "Eligible_Response_Non_Responding_Percent",
                "Eligible_Response_Total", "Eligible_Response_Total_Percent"),
  skip = 7
)
# view the first 6 rows of Table 2 
head(table2)
# A tibble: 6 × 16
  Divis…¹ Respo…² Respo…³ Respo…⁴ Respo…⁵ Respo…⁶ Respo…⁷ delete Eligi…⁸ Eligi…⁹
  <chr>     <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl> <lgl>    <dbl>   <dbl>
1 New So…      NA    NA        NA    NA        NA      NA NA          NA    NA  
2 Banks     37736    44.9   46343    55.1   84079     100 NA       84079    79.9
3 Barton    37153    43.6   47984    56.4   85137     100 NA       85137    77.8
4 Bennel…   42943    49.8   43215    50.2   86158     100 NA       86158    81  
5 Berowra   48471    54.6   40369    45.4   88840     100 NA       88840    84.5
6 Blaxla…   20406    26.1   57926    73.9   78332     100 NA       78332    75  
# … with 6 more variables: Eligible_Response_Not_Clear <dbl>,
#   Eligible_Response_Not_Clear_Percent <dbl>,
#   Eligible_Response_Non_Responding <dbl>,
#   Eligible_Response_Non_Responding_Percent <dbl>,
#   Eligible_Response_Total <dbl>, Eligible_Response_Total_Percent <dbl>, and
#   abbreviated variable names ¹​Divisions, ²​Response_Clear_Yes,
#   ³​Response_Clear_Yes_Percent, ⁴​Response_Clear_No, …
Code
# remove rows with totals and NAs
table2 <- table2[-c(1, 49:51, 89:91, 122:124, 136:138, 155:157, 163:165, 168:170, 173:184),]
# remove the "delete" column with NAs
table2 <- select(table2, !contains("delete"))
# remove the total and percent columns
table2 <- select(table2, !contains("Total") & !contains("Percent"))
# view a summary of Table 2
dfSummary(table2)
Data Frame Summary  
table2  
Dimensions: 150 x 6  
Duplicates: 0  

----------------------------------------------------------------------------------------------------------------------------------------
No   Variable                           Stats / Values                  Freqs (% of Valid)    Graph                 Valid      Missing  
---- ---------------------------------- ------------------------------- --------------------- --------------------- ---------- ---------
1    Divisions                          1. Adelaide                       1 ( 0.7%)                                 150        0        
     [character]                        2. Aston                          1 ( 0.7%)                                 (100.0%)   (0.0%)   
                                        3. Ballarat                       1 ( 0.7%)                                                     
                                        4. Banks                          1 ( 0.7%)                                                     
                                        5. Barker                         1 ( 0.7%)                                                     
                                        6. Barton                         1 ( 0.7%)                                                     
                                        7. Bass                           1 ( 0.7%)                                                     
                                        8. Batman                         1 ( 0.7%)                                                     
                                        9. Bendigo                        1 ( 0.7%)                                                     
                                        10. Bennelong                     1 ( 0.7%)                                                     
                                        [ 140 others ]                  140 (93.3%)           IIIIIIIIIIIIIIIIII                        

2    Response_Clear_Yes                 Mean (sd) : 52115 (12315.1)     150 distinct values         : :             150        0        
     [numeric]                          min < med < max:                                            : :             (100.0%)   (0.0%)   
                                        19026 < 51782.5 < 89590                                     : : :                               
                                        IQR (CV) : 15259 (0.2)                                    . : : :                               
                                                                                                . : : : : . .                           

3    Response_Clear_No                  Mean (sd) : 32493.2 (8262.8)    150 distinct values         :               150        0        
     [numeric]                          min < med < max:                                            :               (100.0%)   (0.0%)   
                                        14860 < 31653.5 < 57926                                     : :                                 
                                        IQR (CV) : 8274.5 (0.3)                                   : : : :                               
                                                                                              : : : : : : : :   .                       

4    Eligible_Response_Clear            Mean (sd) : 84608.2 (10318.9)   149 distinct values             : .         150        0        
     [numeric]                          min < med < max:                                                : :         (100.0%)   (0.0%)   
                                        34924 < 85726.5 < 120951                                        : :                             
                                        IQR (CV) : 10149 (0.1)                                        : : :                             
                                                                                                    . : : : :   .                       

5    Eligible_Response_Not_Clear        Mean (sd) : 244.6 (55.9)        109 distinct values       :                 150        0        
     [numeric]                          min < med < max:                                          : :               (100.0%)   (0.0%)   
                                        106 < 240 < 377                                         : : :                                   
                                        IQR (CV) : 68.8 (0.2)                                   : : : .                                 
                                                                                              . : : : : :                               

6    Eligible_Response_Non_Responding   Mean (sd) : 21855.1 (4197.5)    149 distinct values         :               150        0        
     [numeric]                          min < med < max:                                          . : :             (100.0%)   (0.0%)   
                                        13092 < 21416.5 < 35841                                 : : : : :                               
                                        IQR (CV) : 5562.2 (0.2)                                 : : : : : .                             
                                                                                              : : : : : : : :   .                       
----------------------------------------------------------------------------------------------------------------------------------------
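Hard-coding the row indices of the totals works, but it breaks silently if the sheet layout ever shifts. A more defensive sketch (shown on a toy stand-in tibble, since the real indices above were verified by hand) filters summary rows out by pattern instead:

```r
library(dplyr)
library(stringr)

# toy stand-in for the raw sheet: division rows mixed with total and blank rows
raw <- tibble(
  Divisions = c("Banks", "Total(b)", NA, "Barton", "Australia"),
  Response_Clear_Yes = c(37736, 84889, NA, 37153, 7817247)
)

# keep only real division rows: drop blanks and any total/national summary labels
clean <- raw %>%
  filter(!is.na(Divisions),
         !str_detect(Divisions, "Total|Australia"))
clean
```

The same filter applied to the real sheet would make the cleanup robust to small changes in row order.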

Briefly describe the data

Describe the data, and be sure to comment on why you are planning to pivot it to make it “tidy”

This dataset comes from the 2017 Australian Marriage Law Postal Survey. Each observation is a Federal Electoral Division, and the variables record counts of clear affirmative responses, clear negative responses, and eligible participants’ clear, not-clear, and non-responding responses. After cleaning there are 150 divisions, and most of the variables have close to 150 distinct values. The data are current as of August 24, 2017; blank responses and additional territories are covered in the workbook’s explanatory notes. Because the five response columns all record the same kind of value (a count of responses of a given type), they can be condensed with a pivot into a single name column and a single value column to make the data tidy.

Anticipate the End Result

The first step in pivoting the data is to try to come up with a concrete vision of what the end product should look like - that way you will know whether or not your pivoting was successful.

One easy way to do this is to think about the dimensions of your current data (tibble, dataframe, or matrix), and then calculate what the dimensions of the pivoted data should be.

Suppose you have a dataset with n rows and k variables. In our example, 3 of the variables are used to identify a case, so you will be pivoting k−3 variables into a longer format where the k−3 variable names will move into the names_to variable and the current values in each of those columns will move into the values_to variable. Therefore, we would expect n∗(k−3) rows in the pivoted dataframe!

Example: find current and future data dimensions

Let’s see if this works with a simple example.

Code
df<-tibble(country = rep(c("Mexico", "USA", "France"),2),
           year = rep(c(1980,1990), 3), 
           trade = rep(c("NAFTA", "NAFTA", "EU"),2),
           outgoing = rnorm(6, mean=1000, sd=500),
           incoming = rlogis(6, location=1000, 
                             scale = 400))
df
# A tibble: 6 × 5
  country  year trade outgoing incoming
  <chr>   <dbl> <chr>    <dbl>    <dbl>
1 Mexico   1980 NAFTA     346.     232.
2 USA      1990 NAFTA    1068.     902.
3 France   1980 EU        766.    1199.
4 Mexico   1990 NAFTA    2090.    1688.
5 USA      1980 NAFTA     523.     298.
6 France   1990 EU       1739.     239.
Code
#existing rows/cases
nrow(df)
[1] 6
Code
#existing columns/variables
ncol(df)
[1] 5
Code
#expected rows/cases
nrow(df) * (ncol(df)-3)
[1] 12
Code
# expected columns 
3 + 2
[1] 5

Our simple example has n=6 rows and k−3=2 variables being pivoted, so we expect a new dataframe to have n∗2=12 rows x 3+2=5 columns.
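The arithmetic above generalizes, so it can be wrapped in a small helper (a sketch; `expected_dims` is a hypothetical name, not part of any package) that returns the expected pivoted dimensions given the number of identifier columns:

```r
# expected dimensions after pivot_longer:
#   rows: each of the n cases repeats once per pivoted column
#   cols: the identifier columns plus names_to plus values_to
expected_dims <- function(n, k, id_cols) {
  c(rows = n * (k - id_cols), cols = id_cols + 2)
}

expected_dims(n = 6, k = 5, id_cols = 3)    # the toy trade example: 12 x 5
expected_dims(n = 150, k = 6, id_cols = 1)  # Table 2: 750 x 3
```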

Challenge: Describe the final dimensions

Document your work here.

Code
# view the number of current rows/observations in Table 2
nrow(table2)
[1] 150
Code
# view the number of current columns/variables in Table 2
ncol(table2)
[1] 6
Code
# find the expected number of rows: one row per division per pivoted response column
nrow(table2) * (ncol(table2)-1)
[1] 750
Code
# find the expected number of columns: 1 identifier column (Divisions) + names_to + values_to
1 + 2
[1] 3

Any additional comments?

Table 2 currently has 150 rows and 6 columns. The pivoted dataset should have 150 × 5 = 750 rows and 3 columns (Divisions, plus the names_to and values_to columns), since the five response columns will be consolidated into one name column and one value column.

Pivot the Data

Now we will pivot the data, and compare our pivoted data dimensions to the dimensions calculated above as a “sanity” check.

Example

Code
df<-pivot_longer(df, cols = c(outgoing, incoming),
                 names_to="trade_direction",
                 values_to = "trade_value")
df
# A tibble: 12 × 5
   country  year trade trade_direction trade_value
   <chr>   <dbl> <chr> <chr>                 <dbl>
 1 Mexico   1980 NAFTA outgoing               346.
 2 Mexico   1980 NAFTA incoming               232.
 3 USA      1990 NAFTA outgoing              1068.
 4 USA      1990 NAFTA incoming               902.
 5 France   1980 EU    outgoing               766.
 6 France   1980 EU    incoming              1199.
 7 Mexico   1990 NAFTA outgoing              2090.
 8 Mexico   1990 NAFTA incoming              1688.
 9 USA      1980 NAFTA outgoing               523.
10 USA      1980 NAFTA incoming               298.
11 France   1990 EU    outgoing              1739.
12 France   1990 EU    incoming               239.

Yes, once it is pivoted long, our resulting data are 12x5 - exactly what we expected!
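This visual check can be automated so the pivot fails loudly if the shape is ever off. A sketch that rebuilds the toy data with deterministic values (rather than random draws) and asserts the expected dimensions:

```r
library(tidyr)
library(tibble)

# deterministic version of the toy trade data
wide <- tibble(country = rep(c("Mexico", "USA", "France"), 2),
               year = rep(c(1980, 1990), 3),
               trade = rep(c("NAFTA", "NAFTA", "EU"), 2),
               outgoing = 1:6,
               incoming = 7:12)

long <- pivot_longer(wide, cols = c(outgoing, incoming),
                     names_to = "trade_direction",
                     values_to = "trade_value")

# sanity check: n * 2 = 12 rows, 3 + 2 = 5 columns
stopifnot(nrow(long) == nrow(wide) * 2, ncol(long) == 5)
```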

Challenge: Pivot the Chosen Data

Document your work here. What will a new “case” be once you have pivoted the data? How does it meet requirements for tidy data?

Code
table2_new <- pivot_longer(table2,
                           cols = c(Response_Clear_Yes, Response_Clear_No,
                                    Eligible_Response_Clear, Eligible_Response_Not_Clear,
                                    Eligible_Response_Non_Responding),
                           names_to = "Type_of_Response",
                           values_to = "No_of_Responses")
table2_new
# A tibble: 750 × 3
   Divisions Type_of_Response                 No_of_Responses
   <chr>     <chr>                                      <dbl>
 1 Banks     Response_Clear_Yes                         37736
 2 Banks     Response_Clear_No                          46343
 3 Banks     Eligible_Response_Clear                    84079
 4 Banks     Eligible_Response_Not_Clear                  247
 5 Banks     Eligible_Response_Non_Responding           20928
 6 Barton    Response_Clear_Yes                         37153
 7 Barton    Response_Clear_No                          47984
 8 Barton    Eligible_Response_Clear                    85137
 9 Barton    Eligible_Response_Not_Clear                  226
10 Barton    Eligible_Response_Non_Responding           24008
# … with 740 more rows

Any additional comments?

Each new observation records, for one division, one type of response (clear yes, clear no, eligible clear, eligible not clear, or eligible non-responding) and the number of responses of that type. The new dataset meets the requirements of tidy data: each variable (division, type of response, number of responses) forms a column, each observation forms a row, and each value sits in its own cell.
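Beyond checking dimensions, a pivot should also preserve every count: the grand total of the long values column must equal the sum of the original wide columns. A sketch of that check on a two-division toy tibble (values taken from the head() output above):

```r
library(tidyr)
library(tibble)

# two divisions from the survey, wide format
wide <- tibble(Divisions = c("Banks", "Barton"),
               Response_Clear_Yes = c(37736, 37153),
               Response_Clear_No = c(46343, 47984))

long <- pivot_longer(wide, cols = -Divisions,
                     names_to = "Type_of_Response",
                     values_to = "No_of_Responses")

# the grand total must survive the reshape unchanged
stopifnot(sum(long$No_of_Responses) ==
            sum(wide$Response_Clear_Yes) + sum(wide$Response_Clear_No))
```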
