DACSS 601 Final Project
Author

Miranda Manka

Published

September 2, 2022

Code
library(tidyverse)
library(ggplot2)
library(lubridate)
library(knitr)
library(kableExtra)

knitr::opts_chunk$set(echo = TRUE, warning=FALSE, message=FALSE)

Introduction & Research Questions

I decided to use a data set from the U.S. Department of Transportation on Non-major Safety Events. This data contains the transportation agency name, the location of the event, the type of event, the number of events, the number of injuries, and more variables that are detailed later. I joined it with two other data sets: one called Federal Funding Allocation, which I used for location and population information, and one called Abbreviations, which I used for transit agency name abbreviations.

The research questions that I wanted to answer for this project are:

1. What modes of transit have the most total injuries?
2. Are the number and type of events changing over time?
3. Which agencies have the most total events?
4. What is the proportion of different injury types over time?

I wanted to use these as a way to learn about the data, and provide some interesting insights.

Preparing Data for Use (Reading In, Cleaning, and Joining)

Reading In Data

This is just reading in the 3 different data sets to use. For simplicity, the data sets are labeled as: nmse = Non-major Safety Events, ffa = Federal Funding Allocation, abb = Abbreviations.

Code
nmse = read_csv("_data/MirandaManka_data/non_major_safety_events.csv", show_col_types = FALSE)
ffa = read_csv("_data/MirandaManka_data/Federal_Funding_Allocation.csv", show_col_types = FALSE)
abb = read_csv("_data/MirandaManka_data/abbreviations.csv", show_col_types = FALSE)

Data Cleaning

Non-major Safety Events Data

I changed the variable names to be easier to work with and more uniform (snake_case). Next, I used the month and year variables to create a new date variable (in a date format). Finally, I dropped a variable I won't be using (ntd_id_4, since I'll use ntd_id_5 instead).

Code
nmse = nmse %>%
  rename(ntd_id_5 = `5 Digit NTD ID`, ntd_id_4 = `4 Digit NTD ID`, 
     agency = Agency, mode = Mode, service_type = `Type of Service`, 
     month = Month, year = Year, sft_sec = `Safety/Security`, 
     event_type = `Event Type`, location = Location, 
     location_group = `Location Group`, total_events = `Total Events`, 
     customer_injuries = `Customer Injuries`, worker_injuries = `Worker Injuries`, 
     other_injuries = `Other Injuries`, total_injuries = `Total Injuries`)

nmse = nmse %>% 
  mutate("date" = make_date(year = year, month = month)) %>% 
     relocate(date, .after = year)

nmse = nmse %>%
  select(-c(ntd_id_4))

Federal Funding Allocation Data

This data set has more variables than I need, so I selected just the variables I want to keep and then renamed them. Next, I kept only distinct observations, getting rid of repeated rows (which appear because I dropped most of the variables that had differentiated them). I did this to get the data ready to join to the nmse data, since I want unique observations for my key variable (ntd_id_5). Then I changed ntd_id_5 to numeric; some IDs contained a "-", but those indicated a different agency type that I didn't need to join, so converting them to NA and then filtering out the NA values removed them.

The last step in getting this data ready for the join was to finish narrowing the observations down to unique cases. I filtered out observations where prim_uza_code, prim_uza_name, or prim_uza_pop was NA, because they would not provide any information and some of them duplicated rows that did have the information. Finally, for the remaining cases where ntd_id_5 was still not unique, I looked up which record was the correct one, using the U.S. Department of Transportation's NTD Transit Agency Profiles page (listed in my bibliography), and removed the others by row number.

Code
ffa = ffa %>%
  select(c("5 Digit NTD ID", "Agency", "Primary UZA Code", 
     "Primary UZA Name", "Primary  UZA Population"))

ffa = ffa %>%
  rename(ntd_id_5 = `5 Digit NTD ID`, agency_ffa = Agency, 
     prim_uza_code = `Primary UZA Code`, prim_uza_name = `Primary UZA Name`, 
     prim_uza_pop = `Primary  UZA Population`)

ffa = ffa %>%
  distinct()

ffa = ffa %>% 
  mutate_at("ntd_id_5", as.numeric)

ffa = ffa %>% 
  filter(!is.na(ntd_id_5))

ffa = ffa %>% 
  filter(!is.na(prim_uza_code) & !is.na(prim_uza_name) & !is.na(prim_uza_pop))

ffa = ffa %>% 
  filter(!row_number() %in% c(440, 445, 470, 501, 512, 516, 732, 764, 911, 912, 1124))

ffa = ffa %>% 
  filter(!row_number() %in% c(53, 236, 382, 542, 543, 574, 934))
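
A quick way to confirm that ntd_id_5 is now a unique key (and, if run before the two row-number filters above, to list the duplicates that had to be resolved by hand). This is just a verification sketch, not part of the original cleaning.

Code
# Any ntd_id_5 that still appears more than once; at this point in the
# cleaning this should return zero rows.
ffa %>%
  group_by(ntd_id_5) %>%
  filter(n() > 1) %>%
  ungroup() %>%
  arrange(ntd_id_5)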

Abbreviation Data

I renamed a variable, then trimmed any leading or trailing space from both variables. Since I am joining by strings, I want them to match as closely as possible, so I wanted to make sure stray spaces wouldn't be an issue.

Code
abb = abb %>%
  rename(agency = Full)

abb = abb %>% 
  mutate(Abbreviation = str_trim(Abbreviation, side = "both"))

abb = abb %>% 
  mutate(agency = str_trim(agency, side = "both"))

Data Joining

I joined nmse and ffa using a left join, because I want to keep all of the observations in nmse and only bring in the ffa data where it matches by ntd_id_5. The joined data set has 81857 rows, which is what I expected (the row count of nmse, which shouldn't change), and 20 columns (16 + 5 - 1), so I am good to move forward. Next, I dropped agency_ffa (the ffa copy of agency, which I kept only as a quality check), bringing the total down to 19 columns. Then I trimmed any leading or trailing spaces from agency, just in case, to get it ready for the abbreviation join (since that join is by name). Finally, I did another left join by agency, bringing in the abbreviations that matched. The joined transit data set again has 81857 rows (the row count from the first join, which shouldn't change) and 20 columns (19 + 2 - 1), so I am good to move forward. After this I just have some cleaning to do on the joined transit data set.

Code
join = left_join(nmse, ffa, by = "ntd_id_5")

join = join %>%
  select(-c(agency_ffa))

join = join %>% 
  mutate(agency = str_trim(agency, side = "both"))

transit = left_join(join, abb, by = "agency")
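
A few quick checks (not in the original write-up) that the joins behaved as described, i.e. that the row count never changed and the column counts match the 19 and 20 mentioned above; the anti_join here is only a diagnostic for nmse agencies that found no ffa match.

Code
# Row/column counts: join should be 81857 x 19 (after dropping agency_ffa),
# transit should be 81857 x 20 (before the later cleaning).
dim(join)
dim(transit)

# Diagnostic: nmse agencies whose ntd_id_5 never matched an ffa row.
anti_join(nmse, ffa, by = "ntd_id_5") %>%
  distinct(ntd_id_5, agency)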

Data Cleaning

Transit Data

I renamed and relocated the abbreviation variable to make it easier to work with. I changed some of the abbreviations individually after looking at my data, because I noticed some of the agency name strings didn't quite match up due to slight differences. After this I dropped prim_uza_code because I won't use it.

Code
transit = transit %>%
  rename(abbrev = Abbreviation) %>%
  relocate(abbrev, .after = agency)

transit = transit %>% 
  mutate(abbrev = ifelse(ntd_id_5 == 40034, "MTD", abbrev),
         abbrev = ifelse(ntd_id_5 == 40003, "MATA", abbrev),
         abbrev = ifelse(ntd_id_5 == 60008, "METRO", abbrev),
         abbrev = ifelse(ntd_id_5 == 30034, "MTA", abbrev),
         abbrev = ifelse(ntd_id_5 == 40105, "PRHTA", abbrev),
         abbrev = ifelse(ntd_id_5 == 50003, "KT", abbrev),
         abbrev = ifelse(ntd_id_5 == 40008, "CATS", abbrev),
         abbrev = ifelse(ntd_id_5 == 90002, "DTS", abbrev),
         abbrev = ifelse(ntd_id_5 == 1, "King County Metro", abbrev))

transit = transit %>% 
  select(-c(prim_uza_code))
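
One more check that could be added here (a sketch, not in the original workflow): list the agencies that still have no abbreviation after the manual fixes, which is how any remaining string mismatches between the transit and abbreviation data would show up.

Code
# Agencies with no matched abbreviation; these are either genuinely absent
# from the abbreviation list or still have a string mismatch.
transit %>%
  filter(is.na(abbrev)) %>%
  distinct(agency) %>%
  head(10)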

Describe the Data

The data set I have now, after the joining and cleaning, is made up of 81,857 rows and 19 columns. This data set details different transit records. Each observation contains the transit agency (numerical ID, full agency name, and abbreviation), various categorical variables (mode of service [bus, heavy rail, trolleybus, etc.], type of service [directly operated, purchased, etc.], safety or security, event type [robbery, fire, etc.], location [in transit vehicle, etc.], location group [facility, vehicle, or other], and nearest primary urbanized area name), as well as some numerical variables (month, year, date, total events, customer injuries, worker injuries, other injuries, total injuries, and nearest primary urbanized area population). There are many cases where multiple rows share the same agency, month, and year, but this is due to the different combinations of categorical variables (for example, one row may be agency A, mode bus, month 1, year 2008; the next agency A, mode train, month 1, year 2008). From my data source: "there will be one entry for any transit mode/location with at least one occurrence for the given month".

Code
print(summarytools::dfSummary(transit,
                              varnumbers = FALSE,
                              plain.ascii  = FALSE,
                              style        = "grid",
                              graph.magnif = 0.70,
                              valid.col    = FALSE),
      method = 'render',
      table.classes = 'table-condensed')

Data Frame Summary

transit

Dimensions: 81857 x 19
Duplicates: 17
ntd_id_5 [numeric]: Mean (sd): 47354.4 (28531); min ≤ med ≤ max: 1 ≤ 40097 ≤ 99425; IQR (CV): 39993 (0.6); 612 distinct values; Missing: 0 (0.0%)

agency [character]: Massachusetts Bay Transpo 3211 (3.9%); MTA New York City Transit 2970 (3.6%); Chicago Transit Authority 2679 (3.3%); Los Angeles County Metrop 2636 (3.2%); Southeastern Pennsylvania 2454 (3.0%); Metropolitan Atlanta Rapi 2116 (2.6%); County of Miami-Dade 2003 (2.4%); Washington Metropolitan A 1976 (2.4%); City and County of San Fr 1722 (2.1%); Maryland Transit Administ 1598 (2.0%); [602 others] 58492 (71.5%); Missing: 0 (0.0%)

abbrev [character]: MBTA 3211 (7.7%); NYCT 2970 (7.1%); CTA 2679 (6.4%); LACMTA 2636 (6.3%); SEPTA 2454 (5.9%); MARTA 2116 (5.0%); MTD 2003 (4.8%); WMATA 1976 (4.7%); MTA 1598 (3.8%); GCRTA 1467 (3.5%); [30 others] 18834 (44.9%); Missing: 39913 (48.8%)

mode [character]: MB 45959 (56.1%); HR 12711 (15.5%); DR 9702 (11.9%); LR 8963 (10.9%); FB 1121 (1.4%); TB 952 (1.2%); RB 641 (0.8%); MG 536 (0.7%); SR 402 (0.5%); CB 392 (0.5%); [7 others] 478 (0.6%); Missing: 0 (0.0%)

service_type [character]: DO 67490 (82.4%); PT 14213 (17.4%); TX 154 (0.2%); Missing: 0 (0.0%)

month [numeric]: Mean (sd): 6.4 (3.4); min ≤ med ≤ max: 1 ≤ 6 ≤ 12; IQR (CV): 6 (0.5); 12 distinct values; Missing: 0 (0.0%)

year [numeric]: Mean (sd): 2013.5 (4.4); min ≤ med ≤ max: 2008 ≤ 2013 ≤ 2022; IQR (CV): 8 (0); 15 distinct values; Missing: 0 (0.0%)

date [Date]: min: 2008-01-01; med: 2013-05-01; max: 2022-08-01; range: 14y 7m 0d; 176 distinct values; Missing: 0 (0.0%)

sft_sec [character]: SEC 17345 (21.2%); SFT 64512 (78.8%); Missing: 0 (0.0%)

event_type [character]: Not Otherwise Classified 60484 (73.9%); Fire 4028 (4.9%); Larceny 3611 (4.4%); Non-Aggravated Assault 2712 (3.3%); Robbery 2189 (2.7%); Other Arrests 2183 (2.7%); Trespassing 2178 (2.7%); Vandalism 1500 (1.8%); Fare Evasion 1488 (1.8%); Motor Vehicle Theft 669 (0.8%); [2 others] 815 (1.0%); Missing: 0 (0.0%)

location [character]: Not a securement issue 18196 (22.2%); Boarding / alighting: Wit 8564 (10.5%); Boarding / alighting: Wit 8117 (9.9%); Other 7412 (9.1%); Boarding / alighting: Wit 5840 (7.1%); Revenue facility: transit 5160 (6.3%); Securement issue 5063 (6.2%); In transit vehicle 4888 (6.0%); No Location Specified 4154 (5.1%); Non-revenue facility 3327 (4.1%); [10 others] 11136 (13.6%); Missing: 0 (0.0%)

location_group [character]: Facility 46425 (56.7%); Other 9355 (11.4%); Vehicle 26077 (31.9%); Missing: 0 (0.0%)

total_events [numeric]: Mean (sd): 12.5 (133.3); min ≤ med ≤ max: 1 ≤ 1 ≤ 5386; IQR (CV): 2 (10.7); 699 distinct values; Missing: 0 (0.0%)

customer_injuries [numeric]: Mean (sd): 2.6 (8.3); min ≤ med ≤ max: 0 ≤ 1 ≤ 1385; IQR (CV): 1 (3.1); 121 distinct values; Missing: 13372 (16.3%)

worker_injuries [numeric]: Mean (sd): 0.3 (1.3); min ≤ med ≤ max: 0 ≤ 0 ≤ 36; IQR (CV): 0 (4.8); 37 distinct values; Missing: 13372 (16.3%)

other_injuries [numeric]: Mean (sd): 0.1 (2.4); min ≤ med ≤ max: 0 ≤ 0 ≤ 170; IQR (CV): 0 (22.5); 53 distinct values; Missing: 13372 (16.3%)

total_injuries [numeric]: Mean (sd): 3 (8.8); min ≤ med ≤ max: 1 ≤ 1 ≤ 1385; IQR (CV): 2 (2.9); 129 distinct values; Missing: 13372 (16.3%)

prim_uza_name [character]: New York-Newark, NY-NJ-CT 5932 (7.3%); Los Angeles-Long Beach-An 4520 (5.6%); San Francisco-Oakland, CA 4378 (5.4%); Philadelphia, PA-NJ-DE-MD 3554 (4.4%); Boston, MA-NH-RI 3377 (4.1%); Chicago, IL-IN 3367 (4.1%); Miami, FL 2783 (3.4%); Seattle, WA 2536 (3.1%); Washington, DC-VA-MD 2509 (3.1%); Atlanta, GA 2375 (2.9%); [340 others] 46082 (56.6%); Missing: 444 (0.5%)

prim_uza_pop [numeric]: Mean (sd): 4394923 (4899281); min ≤ med ≤ max: 50996 ≤ 2956746 ≤ 18351295; IQR (CV): 4061831 (1.1); 350 distinct values; Missing: 444 (0.5%)

Generated by summarytools 1.0.1 (R version 4.2.1)
2022-09-02
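
The summary above also reports 17 duplicate rows. As a small aside (not part of the original cleaning), those rows can be pulled out for review like this:

Code
# Every row that has an exact copy elsewhere in the data (both the first
# occurrence and its duplicates), ordered so the matching rows sit together.
transit %>%
  filter(duplicated(transit) | duplicated(transit, fromLast = TRUE)) %>%
  arrange(ntd_id_5, date)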

Visualizations & Tables

What modes of transit have the most total injuries?

Following my research questions, I started by looking at which modes of transit have the most total injuries. The table below shows the top 5 modes of transit by the sum of total injuries. In order, they are MB = Bus, HR = Heavy Rail, LR = Light Rail, DR = Demand Response, and TB = Trolleybus. This may mean that there is something more worrisome about these modes of transit; however, it could simply be that more people take these modes in larger cities, where there are generally more incidents because there are more people. Ridership data would need to be analyzed to draw firmer conclusions.

Code
transit %>%
  select(mode, total_injuries) %>%
  group_by(mode) %>%
  summarise(sum_tot_inj = sum(total_injuries, na.rm = TRUE)) %>%
  arrange(desc(sum_tot_inj)) %>%
  slice(1:5) %>%
  kable(col.names = c("Mode of Transit", "Sum of Total Injuries"), 
     caption = "Sum of Total Injuries by Mode of Transit") %>% 
  kable_minimal()
Sum of Total Injuries by Mode of Transit
Mode of Transit Sum of Total Injuries
MB 102898
HR 72406
LR 13282
DR 12065
TB 1759
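
Ridership isn't in this data set, but one within-data check that could partially control for volume (a sketch, not part of the original analysis) is injuries per reported event by mode.

Code
# Injuries per reported event for each mode; a rough rate that does not
# depend on how often a mode appears in the data.
transit %>%
  group_by(mode) %>%
  summarise(sum_tot_inj = sum(total_injuries, na.rm = TRUE),
            sum_tot_events = sum(total_events, na.rm = TRUE)) %>%
  mutate(injuries_per_event = sum_tot_inj / sum_tot_events) %>%
  arrange(desc(injuries_per_event))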

Are the number and type of events changing over time?

The next question I looked at was whether the number and type of events changed over time. I made a histogram and used facet_wrap to make a separate panel for each event type, then added color to show which events are safety events and which are security events. While looking at the graphs, I noticed something odd: for most event types there is a higher count in the first few years, a bump, and then the count drops off. That was the case for all of the security events but not the safety events, so I wanted to investigate further.

Code
ggplot(transit, aes(x = date, color = sft_sec)) + 
  geom_histogram(bins = 36, aes(fill = sft_sec)) + 
  facet_wrap(~ event_type, nrow = 4) +
  labs(title = "Event Type over Year by Safety or Security Event", x = "Year", 
     y = "Count", fill = guide_legend("Safety or Security"), 
     color = guide_legend("Safety or Security")) +  
  theme_bw() +
  scale_color_brewer(palette = "Paired") +
  scale_fill_brewer(palette = "Paired")

Data Inconsistencies

I created two simple tables to show the inconsistencies I discovered in this data. The first table shows the count of total events by year (each row is one reporting record). The count is 10830 for 2008, 11495 for 2009, and 8776 for 2010, and then it drops dramatically to around 4000 for 2011 and after (and only 2099 for 2022, because the year is still in progress). This is important because it shows that the counts by year vary far more than would be expected, which indicates something is likely going on.

The second is a table of year, event type, and count, and it shows the issue most clearly. I limited it to 2010 and 2011, because those are the years where things change. From 2008 through 2010, the data have all of the event types, but for some reason 2011 and after only have "Fire" and "Not Otherwise Classified Safety Events". The graph above gave an unexpected insight: those two event types make up the "safety" events in the sft_sec variable, and the rest are security. So for some reason, security events stopped being added after 2010. I tried to Google this to see if I could find anything about a change in the reporting system, but I couldn't, so I settled on the conclusion that the data set is simply incomplete.

At this point, it was too late to change data sets. Of course, I wish I had noticed this earlier, but because it was somewhat buried, I missed it during the cleaning stage. So I decided to work with this data as best I can.

Code
transit %>%
  select(total_events, year) %>%
  group_by(year) %>%
  summarize(n=n()) %>%
  kable(col.names = c("Year", "Count"), 
     caption = "Count of Total Events by Year") %>% 
  kable_minimal()
Count of Total Events by Year
Year Count
2008 10830
2009 11495
2010 8776
2011 4131
2012 3993
2013 4159
2014 4399
2015 4651
2016 4737
2017 4697
2018 4904
2019 4776
2020 3988
2021 4222
2022 2099
Code
transit %>%
  select(total_events, year, event_type) %>%
  filter(year == 2010 | year == 2011)  %>%
  group_by(year, event_type) %>%
  summarize(n=n()) %>%
  kable(col.names = c("Year", "Event Type", "Count"), 
     caption = "Count of Event Type by Year for 2010 and 2011") %>% 
  kable_minimal()
Count of Event Type by Year for 2010 and 2011
Year Event Type Count
2010 Burglary 83
2010 Fare Evasion 369
2010 Fire 221
2010 Larceny 968
2010 Motor Vehicle Theft 152
2010 Non-Aggravated Assault 696
2010 Non-Violent Civil Disturbance 60
2010 Not Otherwise Classified Safety Events 4024
2010 Other Arrests 726
2010 Robbery 531
2010 Trespassing 571
2010 Vandalism 375
2011 Fire 260
2011 Not Otherwise Classified Safety Events 3871
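
The mapping between event types and the sft_sec variable can be confirmed directly. This is a quick check supporting the point above that only Fire and Not Otherwise Classified Safety Events are coded as safety.

Code
# Every event type paired with its safety/security code.
transit %>%
  distinct(sft_sec, event_type) %>%
  arrange(sft_sec, event_type)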

Now that I had figured out the problem, and it was too late to turn back, I had to decide what to do about this data inconsistency. I decided to split my data into two parts. The first is a 3-year data set from 2008 to 2010 that is essentially balanced (each year has approximately similar total cases) and more complete, with all of the event types, but short (only a few years of data). The second is an 11-year data set that has more longitudinal coverage but lacks most of the event types. I'm not sure there is a perfect solution, but I thought this would be fine for the purpose of this project. Continuing with the single data set, when the first few years had such different counts from later years (around 11000 compared to 4000), would have made the interpretation less reliable.

Code
transit_2008to2010 = transit %>% 
  filter(year < 2011)

transit_2011to2021 = transit %>% 
  filter(year > 2010 & year < 2022)
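
A short sanity check on the split (not in the original code): the two subsets should cover 2008-2010 and 2011-2021, and together with the partial 2022 data they should account for every row of transit.

Code
# Year ranges of the two subsets.
range(transit_2008to2010$year)
range(transit_2011to2021$year)

# Row accounting: the two subsets plus the 2022 rows should equal nrow(transit).
nrow(transit_2008to2010) + nrow(transit_2011to2021) + sum(transit$year == 2022)
nrow(transit)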

I wanted to re-do the graph from above with the shortened data set to see if it was better. It shows more consistency over the 3 years, and interestingly the not otherwise classified safety events are still by far the most common. This does make me wonder what those might be: what else is being reported? I tried to do some research but didn't find much online. I also find it interesting that, for safety, only fire is listed separately from the not otherwise classified category (those are the only 2 event types that fall under safety in this data), while for security there are 10 different categories. The counts for most of the event types stayed similar across the 3 years.

Code
ggplot(transit_2008to2010, aes(x = date, color = sft_sec)) + 
  geom_histogram(bins = 36, aes(fill = sft_sec)) + 
  facet_wrap(~ event_type, nrow = 4) +
  labs(title = "Event Type over Year by Safety or Security Event", x = "Year", 
     y = "Count", fill = guide_legend("Safety or Security"), 
     color = guide_legend("Safety or Security")) +  
  theme_bw() +
  scale_color_brewer(palette = "Paired") +
  scale_fill_brewer(palette = "Paired")

Which agencies have the most total events?

I wanted to look at the different transit agencies and their numbers of total events. I used the transit_2011to2021 data here because I wanted to make a line chart, so more years of data would help. I started by making a temporary data frame grouped by abbrev (agency name abbreviation) and year, with the sum of total events for each agency-year combination. Next, I made a data frame that removed the rows with NA abbreviations (I am only finding the top 5 agencies here and didn't want NA to count as one) and found the 5 agencies with the highest sum of total events. Then I joined those two data frames, ending up with 55 rows and 3 columns: 11 rows per agency (one for each year), each with that agency's sum of total events for that year, for the top 5 agencies. Finally, I made a line graph with year on the x-axis, the sum of events on the y-axis, and a different colored line for each transit agency.

I think what is really interesting about this visualization is how much the top agency (NYCT, MTA New York City Transit) stands out. It makes sense because New York City has so many people (and a high population density), which can come with more crime and incidents (or, here, non-major safety events). The other top 5 transit agencies shown in the graph are also in big cities (CTA = Chicago, MBTA = Boston, SEPTA = Philadelphia, WMATA = Washington, D.C.), but they are more even and overlap with each other in terms of total events.

Code
temp = transit_2011to2021 %>%
  select(abbrev, year, total_events) %>%
  group_by(abbrev, year) %>%
  summarise(sum_total_events = sum(total_events, na.rm = TRUE))

transit_na_drop = transit_2011to2021 %>% 
  drop_na(abbrev)

top_5 = transit_na_drop %>%
  select(abbrev, total_events) %>%
  group_by(abbrev) %>%
  summarise(sum_total_events = sum(total_events, na.rm = TRUE)) %>%
  arrange(desc(sum_total_events)) %>%
  slice(1:5) %>%
  select(-sum_total_events)

join_for_graph = left_join(top_5, temp)

ggplot(join_for_graph, aes(x = year, y = sum_total_events, color = abbrev)) + 
  geom_line() + 
  labs(title = "Sum of Total Events by Year for Top 5 Transit Agencies", x = "Year", 
     y = "Sum of Total Events", color = guide_legend("Agency Abbreviation")) + 
  theme_bw() +
  scale_color_brewer(palette = "Set2")
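
Since prim_uza_pop was joined in but otherwise goes unused, a possible extension (purely a sketch, not something done in this project) would be to scale each agency's events by the population of its primary urbanized area, which would soften the comparison between NYCT and agencies in smaller cities. Taking the first non-missing population value per agency is a simplifying assumption here.

Code
# Events per 100,000 residents of the agency's primary urbanized area,
# for the agencies with the most total events (assumes one population
# value per agency; first() is used as a simplification).
transit_2011to2021 %>%
  drop_na(abbrev, prim_uza_pop) %>%
  group_by(abbrev) %>%
  summarise(sum_total_events = sum(total_events, na.rm = TRUE),
            prim_uza_pop = first(prim_uza_pop)) %>%
  mutate(events_per_100k = sum_total_events / prim_uza_pop * 100000) %>%
  arrange(desc(sum_total_events)) %>%
  slice(1:5)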

What is the proportion of different injury types over time?

To answer this question, I first pivoted a few columns from the transit_2011to2021 data into a longer format. I used this data since I am looking at multiple years. I only used customer injuries, worker injuries, and other injuries (not total injuries, because I wanted the parts of a whole). Then I made a percent stacked bar chart of injury type over time, with the injury type shown by color. Looking at the graph, there does seem to be a slight increase in the proportion of worker injuries over time. This could be something to investigate in the future, if there is some reason more workers may be getting injured.

Code
long_transit_2011to2021 = transit_2011to2021 %>%
  select(date, customer_injuries:other_injuries) %>%
  pivot_longer(cols = customer_injuries:other_injuries,
               names_to = "injuriy_type", 
               values_to = "count")

ggplot(long_transit_2011to2021, aes(x = date, y = count, fill = injury_type)) +
  geom_bar(position = "fill", stat = "identity") +
  labs(title = "Injury Type Over Time", x = "Date", 
     y = "Percent", fill = guide_legend("Injury Type"), 
     color = guide_legend("Injury Type")) +  
  theme_bw() +
  scale_color_brewer(palette = "Set2") +
  scale_fill_brewer(palette = "Set2")
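
To put a number on the apparent increase in worker injuries, the yearly proportions behind the stacked bars could be computed directly (a small follow-up sketch, not part of the original analysis).

Code
# Share of each injury type per year; filtering to worker_injuries shows
# whether its proportion actually drifts upward.
long_transit_2011to2021 %>%
  group_by(year = year(date), injury_type) %>%
  summarise(total = sum(count, na.rm = TRUE), .groups = "drop_last") %>%
  mutate(prop = total / sum(total)) %>%
  ungroup() %>%
  filter(injury_type == "worker_injuries")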

Reflection

I enjoyed doing this project and I learned a lot along the way. Although I have experience using R, I hadn't really used the tidyverse before, or piping, but after using them for a few weeks for this class and project, I can see how great they are. Since I hadn't used them, there was definitely a lot to adjust to, and some old habits to break. I found them really helpful for data cleaning, though, so it was worth it, and I will definitely be using them in the future.

The most challenging part of this project for me was choosing the data set. I spent a day just looking for data, across a variety of topics and from different sources. I was happy with my data, began cleaning it, joined in some other data, and then started creating visualizations and tables when I realized the problem with the data set (the event types and total events being limited for most years). I continued with the exploration and analysis as best I could with the data I had, using different subsets of the data for different parts of the analysis. Knowing these issues with the data set, I probably would not continue with much further analysis of this data. I wish I had known about the issues and had chosen a different data set, either another one about transportation or something entirely different. I would have liked to do more statistical analysis and uncover more about this data, as well as keep working to make more complicated and interesting graphs. I would also like to learn how to keep the data updated, so that when another month or year of data is added to the source website, I can bring it into RStudio automatically, but that may be a different project to learn (a rough idea is sketched below).
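
On that last point, a rough idea of what automated updating might look like: the Non-major Safety Events data set lives on a Socrata portal, so, assuming the standard Socrata CSV export endpoint for its dataset id (urir-txqm, taken from the URL in the bibliography) and noting that the column names served by the API may differ from the downloaded CSV, something like this could replace the local file read. This is an untested sketch, not something done in this project.

Code
# Hypothetical: read the current data straight from the portal instead of a
# saved CSV. The endpoint pattern and the $limit parameter are assumptions
# about the Socrata API and should be verified before use.
nmse_latest = read_csv(
  "https://data.transportation.gov/resource/urir-txqm.csv?$limit=500000",
  show_col_types = FALSE
)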

Conclusion

My first question asked which mode of transit had the most injuries, and I determined it was the bus, with heavy rail second and light rail third. My second question showed that event type wasn't really changing that much over time, and that for some reason a lot of the events were classified as "not otherwise classified safety events". My last question determined that while the proportion of different injury types isn't changing much over time, there does seem to be a slow increase in the proportion of worker injuries. I thought the most interesting and informative graph was the line graph of the top 5 transit agencies' sum of total events, which showed that NYCT had far more total yearly events than the next closest agency. It was also the hardest to make, and I probably didn't take the most direct or efficient route (although I was proud of the process and that I was able to make it work in the end).

I ended up not using some variables, like the population information I joined in from the Federal Funding Allocation data. Given the data issues, there are still some questions left unanswered, like why is that data missing? I wonder whether the event type distribution would have changed over the years, and whether the number of total events and injuries would have kept increasing. There are also some questions that would be unanswered even with better data, like what statistical differences could be found between different variables. I think there will always be some unanswered questions to think about, and there is always more that could be done.

Bibliography

“Abbreviations.” Department of Transportation, 2013. https://www.fhwa.dot.gov/policy/2013cpr/pdfs/abbreviations.pdf

“Federal Funding Allocation: Department of Transportation - Data Portal.” Data.Transportation.gov, https://data.transportation.gov/Public-Transit/Federal-Funding-Allocation/5x22-djnv/data.

Holtz, Yan. The R Graph Gallery, https://r-graph-gallery.com/.

“Non-Major Safety Events: Department of Transportation - Data Portal.” Data.Transportation.gov, https://data.transportation.gov/dataset/Non-major-Safety-Events/urir-txqm/data.

“NTD Transit Agency Profiles.” NTD Transit Agency Profiles | FTA, https://www.transit.dot.gov/ntd/transit-agency-profiles.

R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

Rolfe, Meredith. DACSS 601 August 2022 Course Blog Challenge Solutions. https://dacss.github.io/DACSS_601_August2022_v2/. Solution posts for challenges 1-8.

Wickham, Hadley, and Garrett Grolemund. R For Data Science: Import, Tidy, Transform, Visualize and Model Data. O’Reilly, 2017.