Challenge 6

challenge_6
usa_households
Visualizing Time and Relationships
Author

Kekai Liu

Published

April 30, 2023

library(tidyverse)
library(ggplot2)
library(lubridate)

knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)

Challenge Overview

Today’s challenge is to:

  1. read in a data set, and describe the data set using both words and any supporting information (e.g., tables, etc)
  2. tidy data (as needed, including sanity checks)
  3. mutate variables as needed (including sanity checks)
  4. create at least one graph including time (evolution)
  • try to make them “publication” ready (optional)
  • Explain why you choose the specific graph type
  5. Create at least one graph depicting part-whole or flow relationships
  • try to make them “publication” ready (optional)
  • Explain why you choose the specific graph type

R Graph Gallery is a good starting point for thinking about what information is conveyed in standard graph types, and includes example R code.

(be sure to only include the category tags for the data you use!)

Read in data

Read in one (or more) of the following datasets, using the correct R package and command.

  • debt ⭐
  • fed_rate ⭐⭐
  • abc_poll ⭐⭐⭐
  • usa_hh ⭐⭐⭐
  • hotel_bookings ⭐⭐⭐⭐
  • AB_NYC ⭐⭐⭐⭐⭐

The dataset contains mean and median household income from 1967 to 2019 across different race categories: all races, white, white alone, white alone not hispanic, white not hispanic, black, black alone or in combination, black alone, asian alone or in combination, asian alone, asian and pacific islander, hispanic (any race). The dataset also breaks each race category down into income-range percentages: under 15,000; 15,000 to 24,999; 25,000 to 34,999; 35,000 to 49,999; 50,000 to 74,999; 75,000 to 99,999; 100,000 to 149,999; 150,000 to 199,999; and 200,000 and over. A case is a race category in a year.

household <- readxl::read_excel("_data/USA Households by Total Money Income, Race, and Hispanic Origin of Householder 1967 to 2019.xlsx", sheet="tableA2", range="A5:P357")

household2 <- household %>%
  mutate(index = 1:n(), #add index and race columns
         race = case_when(between(index, 2, 56) ~ "all races",
           between(index, 58, 77) ~ "white alone",
           between(index, 79, 113) ~ "white",
           between(index, 115, 134) ~ "white alone not hispanic",
           between(index, 136, 165) ~ "white not hispanic",
           between(index, 167, 186) ~ "black alone or in combination", 
           between(index, 188, 207) ~ "black alone",
           between(index, 209, 243) ~ "black",
           between(index, 245, 264) ~ "asian alone or in combination",
           between(index, 266, 285) ~ "asian alone",
           between(index, 287, 301) ~ "asian and pacific islander",
           between(index, 303, 352) ~ "hispanic (any race)"
         ),
         year = str_sub(`...1`, 1, 4), #remove superscripts from year, there are duplicates
         number_in_thousands = `...2`) %>%
  select(-c(`...1`,`...2`, index)) %>%
  filter(!is.na(Total)) %>%
  select(race, year, everything()) %>% #reorder columns
  distinct(race, year, .keep_all=TRUE) #keep the top duplicate since it is the most recently revised data

Briefly describe the data

Tidy Data (as needed)

Is your data already tidy, or is there work to be done? Be sure to anticipate your end result to provide a sanity check, and document your work here.

The data is not tidy. Each income range has its own percent column. There are ten such columns (nine income ranges plus the Total column), and they can be turned into two columns: one denoting the range and another for the percent value. There are seven columns used to identify a case, so the number of expected rows after this pivot is nrow(household2) * (ncol(household2)-7) = 3240 rows. The data has 17 variables before the pivot and 10 variables to be pivoted into 2 variables, so the number of expected columns is 9. The summary output of household25, the resulting dataset after this pivot, has 3240 rows and 9 columns as expected.

Each income estimate also has separate columns for the mean, its margin of error, the median, and its margin of error. These four columns can be turned into three: one denoting whether the estimate is a mean or a median, one for the estimate value, and one for the margin of error. Each existing row splits into two rows (one for the mean and one for the median), so the number of expected rows after this pivot is nrow(household25) * 2 = 6480 rows (the check below computes the same value as nrow(household25) * (ncol(household25)-7)). The data has 9 variables before the pivot and 4 variables to be pivoted into 3 variables, so the expected number of columns is 8. household35, the resulting dataset after this pivot, has 6480 rows and 8 columns as expected.
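As an additional sanity check, the expected dimensions could also be asserted programmatically once the pivoted objects exist; a minimal sketch, assuming the household2, household25, and household35 objects created below:

#hypothetical sanity check (run after the pivots below); stops with an error if the dimensions are unexpected
stopifnot(nrow(household25) == nrow(household2) * (ncol(household2) - 7),
          ncol(household25) == ncol(household2) - 10 + 2)
stopifnot(nrow(household35) == nrow(household25) * 2,
          ncol(household35) == ncol(household25) - 4 + 3)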

head(household2)
# A tibble: 6 × 17
  race      year  Total Under …¹ $15,0…² $25,0…³ $35,0…⁴ $50,0…⁵ $75,0…⁶ $100,…⁷
  <chr>     <chr> <dbl>    <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
1 all races 2019    100      9.1     8       8.3    11.7    16.5    12.3    15.5
2 all races 2018    100     10.1     8.8     8.7    12      17      12.5    15  
3 all races 2017    100     10       9.1     9.2    12      16.4    12.4    14.7
4 all races 2016    100     10.4     9       9.2    12.3    16.7    12.2    15  
5 all races 2015    100     10.6    10       9.6    12.1    16.1    12.4    14.9
6 all races 2014    100     11.4    10.5     9.6    12.6    16.4    12.1    14  
# … with 7 more variables: `$150,000\r\nto\r\n$199,999` <dbl>,
#   `$200,000 and over` <dbl>, Estimate...13 <dbl>,
#   `Margin of error1 (±)...14` <dbl>, Estimate...15 <chr>,
#   `Margin of error1 (±)...16` <chr>, number_in_thousands <chr>, and
#   abbreviated variable names ¹​`Under $15,000`, ²​`$15,000\r\nto\r\n$24,999`,
#   ³​`$25,000\r\nto\r\n$34,999`, ⁴​`$35,000\r\nto\r\n$49,999`,
#   ⁵​`$50,000\r\nto\r\n$74,999`, ⁶​`$75,000\r\nto\r\n$99,999`, …
str(household2)
tibble [324 × 17] (S3: tbl_df/tbl/data.frame)
 $ race                      : chr [1:324] "all races" "all races" "all races" "all races" ...
 $ year                      : chr [1:324] "2019" "2018" "2017" "2016" ...
 $ Total                     : num [1:324] 100 100 100 100 100 100 100 100 100 100 ...
 $ Under $15,000             : num [1:324] 9.1 10.1 10 10.4 10.6 11.4 11.4 11.4 11.6 11.2 ...
 $ $15,000 to $24,999       : num [1:324] 8 8.8 9.1 9 10 10.5 10.3 10.6 10.2 10.7 ...
 $ $25,000 to $34,999       : num [1:324] 8.3 8.7 9.2 9.2 9.6 9.6 9.5 10.1 10.2 9.4 ...
 $ $35,000 to $49,999       : num [1:324] 11.7 12 12 12.3 12.1 12.6 12.5 12.5 13.1 13.3 ...
 $ $50,000 to $74,999       : num [1:324] 16.5 17 16.4 16.7 16.1 16.4 16.8 17.4 17.2 16.8 ...
 $ $75,000 to $99,999       : num [1:324] 12.3 12.5 12.4 12.2 12.4 12.1 12 12 11.9 12.4 ...
 $ $100,000 to $149,999     : num [1:324] 15.5 15 14.7 15 14.9 14 13.9 13.9 13.8 14.1 ...
 $ $150,000 to $199,999     : num [1:324] 8.3 7.2 7.3 7.2 7.1 6.6 6.7 6.3 6.2 6.3 ...
 $ $200,000 and over         : num [1:324] 10.3 8.8 8.9 8 7.2 6.8 6.9 5.9 5.8 5.9 ...
 $ Estimate...13             : num [1:324] 68703 64324 63761 62898 60987 ...
 $ Margin of error1 (±)...14 : num [1:324] 904 704 552 764 570 ...
 $ Estimate...15             : chr [1:324] "98088" "91652" "91406" "88578" ...
 $ Margin of error1 (±)...16 : chr [1:324] "1042" "914" "979" "822" ...
 $ number_in_thousands       : chr [1:324] "128451" "128579" "127669" "126224" ...
#existing rows/cases
nrow(household2)
[1] 324
#existing columns/cases
ncol(household2)
[1] 17
#expected rows/cases
nrow(household2) * (ncol(household2)-7)
[1] 3240
# expected columns after first pivot
17 - 10 + 2
[1] 9
household25 <- household2 %>% 
  pivot_longer(cols = `Total`:`$200,000 and over`,
               names_to = "income_range",
               values_to = "percent")

print(summarytools::dfSummary(household25, varnumbers = FALSE, plain.ascii = FALSE, style = "grid", graph.magnif = 0.70, valid.col = FALSE), method = 'render', table.classes = 'table-condensed')

Data Frame Summary

household25

Dimensions: 3240 x 9
Duplicates: 0

race [character]: 12 distinct values; most frequent: all races (16.4%), hispanic (any race) (14.8%), black (10.8%), white (10.8%), white not hispanic (9.3%); 0 missing (0.0%)
year [character]: 53 distinct values; each listed year accounts for 2.5% of rows; 0 missing (0.0%)
Estimate...13 [numeric]: mean (sd) 55312.9 (14371.9); min ≤ med ≤ max: 29026 ≤ 55674.5 ≤ 98174; IQR (CV) 22339.2 (0.3); 323 distinct values; 0 missing (0.0%)
Margin of error1 (±)...14 [numeric]: mean (sd) 1134.2 (1001.8); min ≤ med ≤ max: 268 ≤ 797.5 ≤ 6080; IQR (CV) 848.2 (0.9); 295 distinct values; 0 missing (0.0%)
Estimate...15 [character]: 324 distinct values; 0 missing (0.0%)
Margin of error1 (±)...16 [character]: 299 distinct values; most frequent value is "N" (0.9%); 0 missing (0.0%)
number_in_thousands [character]: 323 distinct values; 0 missing (0.0%)
income_range [character]: 10 distinct values, each 10.0% of rows; 0 missing (0.0%)
percent [numeric]: mean (sd) 20 (27.1); min ≤ med ≤ max: 0.1 ≤ 11.9 ≤ 100; IQR (CV) 7.4 (1.4); 255 distinct values; 0 missing (0.0%)

Generated by summarytools 1.0.1 (R version 4.2.2)
2023-04-30

#existing rows/cases
nrow(household25)
[1] 3240
#existing columns/cases
ncol(household25)
[1] 9
#expected rows/cases
nrow(household25) * (ncol(household25)-7)
[1] 6480
# expected columns after second pivot
9 - 4 + 3
[1] 8
household35 <- household25 %>%   
  mutate(median1estimate = `Estimate...13`, median1margin_of_error = `Margin of error1 (±)...14`,
         mean1estimate = as.numeric(`Estimate...15`), mean1margin_of_error = as.numeric(`Margin of error1 (±)...16`)) %>%
  select(-c(`Estimate...13`, `Margin of error1 (±)...14`, `Estimate...15`, `Margin of error1 (±)...16`)) %>%
  pivot_longer(c('median1estimate', 'median1margin_of_error', 'mean1estimate', 'mean1margin_of_error'),
               names_to=c("est_type", ".value"), names_sep = "\\d") #the digit splits each name into est_type ("median"/"mean") and the output column ("estimate"/"margin_of_error")
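The ".value" sentinel in names_to is what makes this work: pivot_longer() splits each column name at the digit, keeps the first piece ("median" or "mean") as the est_type value, and uses the second piece ("estimate" or "margin_of_error") as the name of an output column. A toy sketch of the same pattern (the demo data frame is hypothetical):

#toy illustration of names_to = c("est_type", ".value") with a digit as the separator
demo <- tibble(id = 1, median1estimate = 10, median1moe = 1, mean1estimate = 12, mean1moe = 2)
pivot_longer(demo, -id, names_to = c("est_type", ".value"), names_sep = "\\d")
#returns two rows (est_type = "median" and "mean") with columns id, estimate, moe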

head(household35)
# A tibble: 6 × 8
  race      year  number_in_thousands income_r…¹ percent est_t…² estim…³ margi…⁴
  <chr>     <chr> <chr>               <chr>        <dbl> <chr>     <dbl>   <dbl>
1 all races 2019  128451              "Total"      100   median    68703     904
2 all races 2019  128451              "Total"      100   mean      98088    1042
3 all races 2019  128451              "Under $1…     9.1 median    68703     904
4 all races 2019  128451              "Under $1…     9.1 mean      98088    1042
5 all races 2019  128451              "$15,000\…     8   median    68703     904
6 all races 2019  128451              "$15,000\…     8   mean      98088    1042
# … with abbreviated variable names ¹​income_range, ²​est_type, ³​estimate,
#   ⁴​margin_of_error

Are there any variables that require mutation to be usable in your analysis stream? For example, do you need to calculate new values in order to graph them? Can string values be represented numerically? Do you need to turn any variables into factors and reorder for ease of graphics and visualization?

The pivot procedures did not translate the income ranges into clean strings. The income_range variable contains extraneous characters, the carriage return and newline characters \r and \n, which need to be removed; this can be fixed by using mutate() and case_when() to recode the strings. Also, the mean estimate and its margin of error are character columns because one cell contains "N" as a value. It corresponds to Asian and Pacific Islander in 1987, and the footnote states that a new data processing system was instituted that year. These mean and margin-of-error columns need to be converted to numeric in order to plot them.
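As a side note, instead of spelling out every label in case_when(), the embedded line breaks could also be stripped in one step with stringr (loaded with the tidyverse); a minimal alternative sketch, not the approach used below:

#hypothetical alternative: replace the embedded "\r\n" line breaks inside the labels with a single space
household25 %>%
  mutate(income_range = str_replace_all(income_range, "[\r\n]+", " "))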

Document your work here.

household3 <- household2 %>% 
  pivot_longer(cols = `Total`:`$200,000 and over`,
               names_to = "income_range",
               values_to = "percent") %>%
  mutate(number_in_thousands = as.numeric(case_when(number_in_thousands == "N" ~ "", TRUE ~ number_in_thousands)), #"N" means no data, so coerce it to NA
         median1estimate = `Estimate...13`, median1margin_of_error = `Margin of error1 (±)...14`,
         mean1estimate = as.numeric(`Estimate...15`), mean1margin_of_error = as.numeric(`Margin of error1 (±)...16`),
         income_range = case_when(income_range == "Total" ~ "Total", #strip the embedded \r\n from the income range labels
                                  income_range == "Under $15,000" ~ "Under $15,000",
                                  income_range == "$15,000\r\nto\r\n$24,999" ~ "$15,000 to $24,999",
                                  income_range == "$25,000\r\nto\r\n$34,999" ~ "$25,000 to $34,999",
                                  income_range == "$35,000\r\nto\r\n$49,999" ~ "$35,000 to $49,999",
                                  income_range == "$50,000\r\nto\r\n$74,999" ~ "$50,000 to $74,999",
                                  income_range == "$75,000\r\nto\r\n$99,999" ~ "$75,000 to $99,999",
                                  income_range == "$100,000\r\nto\r\n$149,999" ~ "$100,000 to $149,999",
                                  income_range == "$150,000\r\nto\r\n$199,999" ~ "$150,000 to $199,999",
                                  income_range == "$200,000 and over" ~ "$200,000 and over")) %>%
  select(-c(`Estimate...13`, `Margin of error1 (±)...14`, `Estimate...15`, `Margin of error1 (±)...16`)) %>%
  pivot_longer(c('median1estimate', 'median1margin_of_error', 'mean1estimate', 'mean1margin_of_error'),
               names_to=c("est_type", ".value"), names_sep = "\\d")

# categorize estimate and margin of error into mean and median
#NA's introduced in 1987 for the mean because the cell is "N" - the footnote says there is no data due to the implementation of a new processing system

head(household3)
# A tibble: 6 × 8
  race      year  number_in_thousands income_r…¹ percent est_t…² estim…³ margi…⁴
  <chr>     <chr>               <dbl> <chr>        <dbl> <chr>     <dbl>   <dbl>
1 all races 2019               128451 Total        100   median    68703     904
2 all races 2019               128451 Total        100   mean      98088    1042
3 all races 2019               128451 Under $15…     9.1 median    68703     904
4 all races 2019               128451 Under $15…     9.1 mean      98088    1042
5 all races 2019               128451 $15,000 t…     8   median    68703     904
6 all races 2019               128451 $15,000 t…     8   mean      98088    1042
# … with abbreviated variable names ¹​income_range, ²​est_type, ³​estimate,
#   ⁴​margin_of_error
print(summarytools::dfSummary(household3, varnumbers = FALSE, plain.ascii = FALSE, style = "grid", graph.magnif = 0.70, valid.col = FALSE), method = 'render', table.classes = 'table-condensed')

Data Frame Summary

household3

Dimensions: 6480 x 8
Duplicates: 0

race [character]: 12 distinct values; most frequent: all races (16.4%), hispanic (any race) (14.8%), black (10.8%), white (10.8%), white not hispanic (9.3%); 0 missing (0.0%)
year [character]: 53 distinct values; each listed year accounts for 2.5% of rows; 0 missing (0.0%)
number_in_thousands [numeric]: mean (sd) 45511.2 (40022.8); min ≤ med ≤ max: 1913 ≤ 17322 ≤ 128579; IQR (CV) 74879 (0.9); 322 distinct values; 20 missing (0.3%)
income_range [character]: 10 distinct values, each 10.0% of rows; 0 missing (0.0%)
percent [numeric]: mean (sd) 20 (27.1); min ≤ med ≤ max: 0.1 ≤ 11.9 ≤ 100; IQR (CV) 7.4 (1.4); 255 distinct values; 0 missing (0.0%)
est_type [character]: mean (50.0%), median (50.0%); 0 missing (0.0%)
estimate [numeric]: mean (sd) 63815.7 (19374.2); min ≤ med ≤ max: 29026 ≤ 61126 ≤ 133111; IQR (CV) 25097 (0.3); 644 distinct values; 10 missing (0.2%)
margin_of_error [numeric]: mean (sd) 1257.7 (1148.3); min ≤ med ≤ max: 268 ≤ 876 ≤ 8076; IQR (CV) 900 (0.9); 538 distinct values; 30 missing (0.5%)

Generated by summarytools 1.0.1 (R version 4.2.2)
2023-04-30

Time Dependent Visualization

In order to plot a variable on the y-axis against a time variable on the x-axis, the time variable needs to be a date. A date contains a year, month, and day, so the year variable needs to be converted to a year-month-day format. Converting year to a date also provides the flexibility to change the number of year breaks shown on the x-axis.
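For example, the character year can be mapped to January 1 of that year either with base R (as in the plots below) or with lubridate; both of the following are assumed to produce the same Date:

#two equivalent ways to turn a year into a Date (January 1 of that year)
as.Date(ISOdate(2019, 1, 1))      #base R, used in the plots below
lubridate::make_date(2019, 1, 1)  #lubridate alternative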

A line plot is a good choice for a time-dependent visualization because it clearly displays how values move over time. This is especially true when a plot contains multiple groups: with a scatterplot, even if the groups were color coded, the evolution of the data would not be as easy to follow as with a line plot, which connects the data points.

#median household income of all races over time, x-axis time variable needs to be date in order to specify number of breaks, date variable requires year, month, and day

household3 %>%
  filter(race=="all races", est_type=="median", income_range=="Total") %>%
  ggplot(aes(x=as.Date(ISOdate(year, 1, 1)), y=estimate)) +
  geom_line(size=2, color='red') + #change thickness and color of line
  xlab("") +
  ylab("median household income") +
  labs(title = "Median U.S. Household Income (All Races)") +
  theme(plot.title = element_text(hjust = 0.5, size = 14, color="red"), axis.title.y = element_text(size = 10, color="red")) + # Center ggplot title and change size of title and y-axis title and color
  scale_x_date(date_breaks = "5 year", date_labels = "%Y")

#percent of each race with household income under $15,000 over time (white, black, asian, hispanic, all races)

household3 %>%
  filter(income_range != "Total", est_type=="median", income_range=="Under $15,000", race != "black alone") %>% #keep only one row out of median and mean (as the percent is the same)
  mutate(race = case_when(grepl("white", race, ignore.case = TRUE) ~ "white",
                          grepl("black", race, ignore.case = TRUE) ~ "black",
                          grepl("asian", race, ignore.case = TRUE) ~"asian",
                          TRUE ~ race)) %>% #recode races
  ggplot(aes(x=as.Date(ISOdate(year, 1, 1)), y=percent, group=race, color=race)) +
  geom_line(size=1) + 
  xlab("") +
  ylab("percent of households under $15,000") +
  labs(title = "Percent of U.S. Households under $15,000 by Race") +
  theme(plot.title = element_text(hjust = 0.5, size = 14, color="blue"), axis.title.y = element_text(size = 10, color="blue")) + # Center ggplot title and change size of title and y-axis title and color
  scale_x_date(date_breaks = "5 year", date_labels = "%Y")

Visualizing Part-Whole Relationships

A stacked bar graph is a good way to visualize part-whole relationships. You can split up a whole into its constituent parts and see how each part contributes to the whole. A stacked bar graph is also convenient for comparing multiple part-whole relationships, as you can place bars side-by-side. This enables one to compare distributions such as income distributions across race, gender, occupation, etc.
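The plot below uses position="fill", which rescales every bar to 100% so the distributions are directly comparable across races; position="stack" would instead keep the raw totals. A minimal toy sketch of the difference (the toy data frame is hypothetical):

#toy data, purely illustrative
toy <- tibble(group = rep(c("A", "B"), each = 2),
              part  = rep(c("x", "y"), times = 2),
              value = c(30, 70, 20, 30))
ggplot(toy, aes(x = group, y = value, fill = part)) +
  geom_bar(stat = "identity", position = "fill") #both bars normalized to height 1
#with position = "stack" the bars would keep their raw heights (100 and 50)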

#income distribution of U.S. households in 2019, by race

household3 %>%
  filter(income_range != "Total", est_type=="median", race != "black alone", race != "all races", year==2019) %>% #keep only one row out of median and mean (as the percent is the same)
  mutate(race = case_when(grepl("white", race, ignore.case = TRUE) ~ "white",
                          grepl("black", race, ignore.case = TRUE) ~ "black",
                          grepl("asian", race, ignore.case = TRUE) ~"asian",
                          TRUE ~ race)) %>% #recode races
ggplot(aes(fill=factor(income_range, levels=c('$200,000 and over', '$150,000 to $199,999', '$100,000 to $149,999', '$75,000 to $99,999', '$50,000 to $74,999', '$35,000 to $49,999', '$25,000 to $34,999', '$15,000 to $24,999', 'Under $15,000')), y=percent, x=race, label=percent)) + 
    geom_bar(position="fill", stat="identity") + 
  ylab("percent") +
  labs(title = "Income Distribution of U.S. Households in 2019, by Race") +
  theme(plot.title = element_text(hjust = 0.5, size = 14, color="blue"), axis.title.y = element_text(size = 10, color="blue")) + # Center ggplot title and change size of title and y-axis title and color 
scale_fill_discrete(name = "Income Range", breaks=c('$200,000 and over', '$150,000 to $199,999', '$100,000 to $149,999', '$75,000 to $99,999', '$50,000 to $74,999', '$35,000 to $49,999', '$25,000 to $34,999', '$15,000 to $24,999', 'Under $15,000')) #change legend title and order for items in legend