# Data tidying {#data-tidy}

## Introduction

> "Happy families are all alike; every unhappy family is unhappy in its own way." --- Leo Tolstoy

> "Tidy datasets are all alike, but every messy dataset is messy in its own way." --- Hadley Wickham
In this chapter, you will learn a consistent way to organize your data in R using a system called **tidy data**.
Getting your data into this format requires some work up front, but that work pays off in the long term.
Once you have tidy data and the tidy tools provided by packages in the tidyverse, you will spend much less time munging data from one representation to another, allowing you to spend more time on the data questions you care about.

In this chapter, you'll first learn the definition of tidy data and see it applied to a simple toy dataset.
Then we'll dive into the main tool you'll use for tidying data: pivoting.
Pivoting allows you to change the form of your data without changing any of the values.
We'll finish up with a discussion of usefully untidy data, and how you can create it if needed.
If you particularly enjoy this chapter and want to learn more about the underlying theory, you can read about the history and theoretical underpinnings in the [Tidy Data](https://www.jstatsoft.org/article/view/v059i10) paper published in the Journal of Statistical Software.
### Prerequisites
In this chapter we'll focus on tidyr, a package that provides a bunch of tools to help tidy up your messy datasets.
tidyr is a member of the core tidyverse.
```{r setup, message = FALSE}
library(tidyverse)
```
From this chapter on, we'll suppress the loading message from `library(tidyverse)`.
## Tidy data
You can represent the same underlying data in multiple ways.
The example below shows the same data organised in four different ways.
Each dataset shows the same values of four variables: *country*, *year*, *population*, and *cases*, but each dataset organizes the values in a different way.
```{r}
table1
table2
table3

# Spread across two tibbles
table4a # cases
table4b # population
```
These are all representations of the same underlying data, but they are not equally easy to use.
One of them, `table1`, will be much easier to work with inside the tidyverse because it's tidy.
There are three interrelated rules that make a dataset tidy:

1.  Each variable is a column; each column is a variable.
2.  Each observation is a row; each row is an observation.
3.  Each value is a cell; each cell is a single value.

Figure \@ref(fig:tidy-structure) shows the rules visually.
```{r tidy-structure}
#| echo: FALSE
#| out.width: NULL
#| fig.cap: >
#|   Following three rules makes a dataset tidy: variables are columns,
#|   observations are rows, and values are cells.
#| fig.alt: >
#|   Three panels, each representing a tidy data frame. The first panel
#|   shows that each variable is a column. The second panel shows that each
#|   observation is a row. The third panel shows that each value is
#|   a cell.

knitr::include_graphics("images/tidy-1.png", dpi = 270)
```
Why ensure that your data is tidy?
There are two main advantages:
1. There's a general advantage to picking one consistent way of storing data.
If you have a consistent data structure, it's easier to learn the tools that work with it because they have an underlying uniformity.
2. There's a specific advantage to placing variables in columns because it allows R's vectorised nature to shine.
As you learned in Sections \@ref(mutate) and \@ref(summarise), most built-in R functions work with vectors of values.
That makes transforming tidy data feel particularly natural.
dplyr, ggplot2, and all the other packages in the tidyverse are designed to work with tidy data.
Here are a couple of small examples showing how you might work with `table1`.
```{r fig.width = 5}
#| fig.alt: >
#|   This figure shows the numbers of cases in 1999 and 2000 for
#|   Afghanistan, Brazil, and China, with year on the x-axis and number
#|   of cases on the y-axis. Each point on the plot represents the number
#|   of cases in a given country in a given year. The points for each
#|   country are differentiated from others by color and shape and connected
#|   with a line, resulting in three, non-parallel, non-intersecting lines.
#|   The numbers of cases in China are highest for both 1999 and 2000, with
#|   values above 200,000 for both years. The number of cases in Brazil is
#|   approximately 40,000 in 1999 and approximately 75,000 in 2000. The
#|   numbers of cases in Afghanistan are lowest for both 1999 and 2000, with
#|   values that appear to be very close to 0 on this scale.

# Compute rate per 10,000
table1 |> 
  mutate(
    rate = cases / population * 10000
  )

# Compute cases per year
table1 |> 
  count(year, wt = cases)

# Visualise changes over time
ggplot(table1, aes(year, cases)) + 
  geom_line(aes(group = country), colour = "grey50") + 
  geom_point(aes(colour = country, shape = country)) + 
  scale_x_continuous(breaks = c(1999, 2000))
```
### Exercises
1.  Using prose, describe how the variables and observations are organised in each of the sample tables.

2.  Compute the `rate` for `table2`, and `table4a` + `table4b`.
    You will need to perform four operations:

    a.  Extract the number of TB cases per country per year.
    b.  Extract the matching population per country per year.
    c.  Divide cases by population, and multiply by 10000.
    d.  Store back in the appropriate place.

    Which representation is easiest to work with?
    Which is hardest?
    Why?

3.  Recreate the plot showing change in cases over time using `table2` instead of `table1`.
    What do you need to do first?
## Pivoting
The principles of tidy data might seem so obvious that you wonder if you'll ever encounter a dataset that isn't tidy.
Unfortunately, however, most real data is untidy.
There are two main reasons:

1.  Data is often organised to facilitate some goal other than analysis.
    For example, it's common for data to be structured to make recording it easy.

2.  Most people aren't familiar with the principles of tidy data, and it's hard to derive them yourself unless you spend a *lot* of time working with data.

This means that most real analyses will require at least a little tidying.
You'll begin by figuring out what the underlying variables and observations are.
Sometimes this is easy; other times you'll need to consult with the people who originally generated the data.
Next, you'll **pivot** your data into a tidy form, with variables in the columns and observations in the rows.

tidyr provides two functions for pivoting data: `pivot_longer()`, which makes datasets **longer** by increasing rows and reducing columns, and `pivot_wider()`, which makes datasets **wider** by increasing columns and reducing rows.
`pivot_longer()` is very useful for tidying data; `pivot_wider()` is more useful for making non-tidy data (we'll come back to this in Section \@ref(rectangle-data)), but is occasionally also needed for tidying.

The following sections work through the use of `pivot_longer()` and `pivot_wider()` to tackle a wide range of realistic datasets.
These examples are drawn from `vignette("pivot", package = "tidyr")`, which includes more variations and more challenging problems.
### Data in column names {#billboard}
The `billboard` dataset records the billboard rank of songs in the year 2000:
```{r}
billboard
```
In this dataset, each observation is a song.
We have data about the song and how it has performed over time.
The first three columns, `artist`, `track`, and `date.entered`, are variables.
Then we have 76 columns (`wk1`-`wk76`) that describe the rank of the song in each week.
Here the column names are one variable (the `week`) and the cell values are another (the `rank`).
To tidy this data we need to use `pivot_longer()`.
There are three key arguments:
-   `cols` specifies which columns need to be pivoted (the columns that aren't variables) using the same syntax as `select()`.
    In this case, we could say `!c(artist, track, date.entered)` or `starts_with("wk")`.
-   `names_to` names the variable stored in the column names.
-   `values_to` names the variable stored in the cell values.

This gives the following call:
```{r}
billboard |> 
  pivot_longer(
    cols = starts_with("wk"), 
    names_to = "week", 
    values_to = "rank"
  )
```
What happens if a song is in the top 100 for less than 76 weeks?
You can see that 2 Pac's "Baby Don't Cry" was only in the top 100 for 7 weeks, and all the remaining rows are filled in with missing values.
These `NA`s don't really represent unknown observations; they're forced to exist by the structure of the dataset.
We can ask `pivot_longer()` to get rid of them by setting `values_drop_na = TRUE`:
```{r}
billboard |> 
  pivot_longer(
    cols = starts_with("wk"), 
    names_to = "week", 
    values_to = "rank",
    values_drop_na = TRUE
  )
```
You might also wonder what happens if a song is in the top 100 for more than 76 weeks.
We can't tell from this data, but you might guess that additional columns `wk77`, `wk78`, ... would be added to the dataset.

This data is now tidy, but we could make future computation a bit easier by converting `week` into a number.
We do this by using `mutate()` + `parse_number()`.
You'll learn more about `parse_number()` and friends in Chapter \@ref(data-import).
```{r}
billboard_tidy <- billboard |> 
  pivot_longer(
    cols = starts_with("wk"), 
    names_to = "week", 
    values_to = "rank",
    values_drop_na = TRUE
  ) |> 
  mutate(week = parse_number(week))
billboard_tidy
```
Now we're in a good position to look at the typical course of a song's rank by drawing a plot.
```{r}
#| fig.alt: >
#|   A line plot with week on the x-axis and rank on the y-axis, where
#|   each line represents a song. Most songs appear to start at a high rank,
#|   rapidly accelerate to a low rank, and then decay again. There are
#|   surprisingly few tracks in the region when week is >20 and rank is
#|   >50.

billboard_tidy |> 
  ggplot(aes(week, rank, group = track)) + 
  geom_line(alpha = 1/3) + 
  scale_y_reverse()
```
### How does pivoting work?

Now that you've seen what pivoting can do for you, it's worth taking a little time to gain some intuition for what's happening to the data.
Let's make a very simple dataset to make it easier to see what's happening:
```{r}
df <- tribble(
  ~var, ~col1, ~col2,
  "A",  1,     2,
  "B",  3,     4,
  "C",  5,     6
)
```
Here we'll say there are three variables: `var` (already a column), `names` (stored in the column names), and `values` (stored in the cell values).
So we can tidy it with:
```{r}
df |> 
  pivot_longer(
    cols = col1:col2,
    names_to = "names",
    values_to = "values"
  )
```
How does this transformation take place?
It's easier to see if we take it component by component.
Columns that are already variables need to be repeated, once for each column in `cols`, as shown in Figure \@ref(fig:pivot-variables).
```{r pivot-variables}
#| echo: FALSE
#| out.width: NULL
#| fig.cap: >
#|   Columns that are already variables need to be repeated, once for
#|   each column that is pivoted.

knitr::include_graphics("diagrams/tidy-data/variables.png", dpi = 270)
```
The column names become values in a new variable, whose name is given by `names_to`, as shown in Figure \@ref(fig:pivot-names).
They need to be repeated for each row in the original dataset.
```{r pivot-names}
#| echo: FALSE
#| out.width: NULL
#| fig.cap: >
#|   The column names of pivoted columns become a new column.

knitr::include_graphics("diagrams/tidy-data/column-names.png", dpi = 270)
```
The cell values also become values in a new variable, with a name given by `values_to`.
They are unwound row by row.
Figure \@ref(fig:pivot-values) illustrates the process.
```{r pivot-values}
#| echo: FALSE
#| out.width: NULL
#| fig.cap: >
#|   The number of values is preserved (not repeated), but unwound
#|   row-by-row.

knitr::include_graphics("diagrams/tidy-data/cell-values.png", dpi = 270)
```
### Many variables in column names
A more challenging situation occurs when you have multiple variables crammed into the column names.
For example, take the `who2` dataset:
```{r}
who2
```
This dataset records tuberculosis data collected by the WHO.
There are two columns that are easy to interpret: `country` and `year`.
They are followed by 56 columns like `sp_m_014`, `ep_m_4554`, and `rel_m_3544`.
If you stare at these columns for long enough, you'll notice there's a pattern.
Each column name is made up of three pieces separated by `_`.
The first piece, `sp`/`rel`/`ep`, describes the method used for the diagnosis, the second piece, `m`/`f`, is the gender, and the third piece, `014`/`1524`/`2534`/`3544`/`4554`/`5564`/`65`, is the age range.

So in this case we have six variables: two variables are already columns, three variables are contained in the column names, and one variable is in the cell values.
This requires two changes to our call to `pivot_longer()`: `names_to` gets a vector of column names and `names_sep` describes how to split the variable name up into pieces:
```{r}
who2 |> 
  pivot_longer(
    cols = !(country:year),
    names_to = c("diagnosis", "gender", "age"), 
    names_sep = "_",
    values_to = "count"
  )
```
An alternative to `names_sep` is `names_pattern`, which you can use to extract variables from more complicated naming scenarios, once you've learned about regular expressions in Chapter \@ref(regular-expressions).
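To give a flavor of what that looks like, here's a minimal sketch on a made-up tibble with `who2`-style column names; the tibble and the particular regular expression are illustrative assumptions, not part of the chapter's datasets. Each parenthesised group in the regex becomes one of the variables listed in `names_to`:

```{r}
# An illustrative tibble with who2-style names (made up for this sketch)
df <- tribble(
  ~country,      ~sp_m_014, ~rel_f_3544,
  "Afghanistan",         2,           3
)

# Each regex group captures one piece of the column name:
# the method, a single-letter gender, and the age range
df |> 
  pivot_longer(
    cols = !country,
    names_to = c("diagnosis", "gender", "age"),
    names_pattern = "(.*)_(.)_(.*)",
    values_to = "count"
  )
```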
### Data and variable names in the column headers
The next step up in complexity is when the column names include a mix of variable values and variable names.
For example, take the `household` dataset:
```{r}
household
```
This dataset contains data about five families, with the names and dates of birth of up to two children.
The new challenge in this dataset is that the column names contain the names of two variables (`dob`, `name`) and the values of another (`child`, with values 1 and 2).
We again need to supply a vector to `names_to`, but this time we use the special `".value"` sentinel.
This overrides the usual `values_to` argument and keeps the first component of the column name as a variable name:
```{r}
household |> 
  pivot_longer(
    cols = !family, 
    names_to = c(".value", "child"), 
    names_sep = "_", 
    values_drop_na = TRUE
  ) |> 
  mutate(child = parse_number(child))
```
We again use `values_drop_na = TRUE`, since the shape of the input forces the creation of explicit missing values (e.g. for families with only one child), and `parse_number()` to convert (e.g.) `child1` into 1.
### Widening data
So far we've used `pivot_longer()` to solve the common class of problems where values have ended up in column names.
Next we'll pivot (HA HA) to `pivot_wider()`, which helps when one observation is spread across multiple rows.
This seems to be a less common problem in practice, but it does crop up when dealing with governmental data and arises in a few other places as well.

We'll start with `cms_patient_experience`, a dataset from the Centers of Medicare and Medicaid Services that provides information about patient experiences:
```{r}
cms_patient_experience
```
An observation is an organisation, but each organisation is spread across six rows.
There's one row for each variable, or measure.
We can see the complete set of variables across the whole dataset with `distinct()`:
```{r}
cms_patient_experience |> 
  distinct(measure_cd, measure_title)
```
Neither of these columns makes a particularly great variable name: `measure_cd` doesn't hint at the meaning of the variable and `measure_title` is a long sentence containing spaces.
We'll use `measure_cd` for now.

`pivot_wider()` has the opposite interface to `pivot_longer()`: we need to provide the existing columns that define the values (`values_from`) and the column name (`names_from`):
```{r}
cms_patient_experience |> 
  pivot_wider(
    names_from = measure_cd,
    values_from = prf_rate
  )
```
The output doesn't look quite right, as we still seem to have multiple rows for each organisation.
That's because, by default, `pivot_wider()` will attempt to preserve all the existing columns, including `measure_title`, which has six distinct values.
To fix this problem we need to tell `pivot_wider()` which columns identify each row; in this case those are the variables starting with `"org"`:
```{r}
cms_patient_experience |> 
  pivot_wider(
    id_cols = starts_with("org"),
    names_from = measure_cd,
    values_from = prf_rate
  )
```
### Widening multiple variables
`cms_patient_care` has a similar structure:
```{r}
cms_patient_care
```
Depending on what you want to do next, there are three meaningful ways to widen this data:
```{r}
cms_patient_care |> 
  pivot_wider(
    names_from = type,
    values_from = score
  )

cms_patient_care |> 
  pivot_wider(
    names_from = measure_abbr,
    values_from = score
  )

cms_patient_care |> 
  pivot_wider(
    names_from = c(measure_abbr, type),
    values_from = score
  )
```
We'll come back to this idea in the next section; for different analysis purposes, you may want to consider different things to be variables.

## Untidy data
While I showed a couple of examples of using `pivot_wider()` to make tidy data, its real strength is making **untidy** data.
While that sounds like a bad thing, untidy isn't a pejorative term: there are many untidy data structures that are extremely useful.
Tidy data is a great starting point for most analyses, but it's not the only data format you'll ever need.

The following sections will show a few examples of `pivot_wider()` making usefully untidy data for presenting data to other humans, for multivariate statistics, and for pragmatically solving data manipulation challenges.
### Presenting data to humans
2022-03-17 22:46:11 +08:00
2022-04-30 04:01:43 +08:00
As you've seen, `dplyr::count()` produces tidy data --- it makes one row for each group, with one column for each grouping variable, and one column for the number of observations:
```{r}
diamonds |> 
  count(clarity, color)
```
This is easy to visualize or summarize further, but it's not the most compact form for display.
You can use `pivot_wider()` to create a form more suitable for display to other humans:
```{r}
diamonds |> 
  count(clarity, color) |> 
  pivot_wider(
    names_from = color, 
    values_from = n
  )
```
This display also makes it easy to compare in two directions, horizontally and vertically, much like `facet_grid()`.

Making a compact table is more challenging if you have multiple aggregates.
For example, take this dataset, which summarizes each combination of clarity and color with the mean carat size **and** the number of observations:
```{r}
average_size <- diamonds |> 
  group_by(clarity, color) |> 
  summarise(
    n = n(),
    carat = mean(carat),
    .groups = "drop"
  )
average_size
```
If you copy the same pivoting code from above, you'll only get one value in each row, because both `clarity` and `n` are used to define each row:
```{r}
average_size |> 
  pivot_wider(
    names_from = color, 
    values_from = carat
  )
```
That's because, by default, `pivot_wider()` uses all the unmentioned columns to identify a row in the new dataset.
To get the display you are looking for, you can either `select()` off the variables you don't care about, or use the `id_cols` argument to explicitly define which columns identify each row in the result:
```{r}
average_size |> 
  pivot_wider(
    id_cols = clarity,
    names_from = color, 
    values_from = carat
  )
```
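Equivalently, a sketch of the `select()` route: dropping `n` before pivoting leaves `clarity` as the only unmentioned column, so it alone identifies each row.

```{r}
# Dropping `n` first means pivot_wider() has only `clarity` left
# to identify each row, giving the same table as the id_cols version
average_size |> 
  select(!n) |> 
  pivot_wider(
    names_from = color, 
    values_from = carat
  )
```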
`pivot_wider()` is great for quickly sketching out a table.
For real presentation tables, we highly suggest learning a package like [gt](https://gt.rstudio.com).
gt is similar to ggplot2 in that it provides an extremely powerful grammar for laying out tables.
It takes some work to learn, but the payoff is the ability to make just about any table you can imagine.
### Multivariate statistics
Most classical multivariate statistical methods (like dimension reduction and clustering) require a matrix representation of your data, where each column is a time point, or a location, or a gene, or a species.
Sometimes these formats have substantial performance or space advantages, or sometimes they're just necessary to get closer to the underlying matrix mathematics.

We're not going to cover these statistical methods here, but it is useful to know how to get your data into the form that they need.
For example, let's imagine you wanted to cluster the gapminder data to find countries that had a similar progression of `gdpPercap` over time.
To do this, we need one country in each row, and hence one year in each column:
```{r}
library(gapminder)

col_year <- gapminder |> 
  mutate(gdpPercap = log10(gdpPercap)) |> 
  pivot_wider(
    id_cols = country, 
    names_from = year,
    values_from = gdpPercap
  )
col_year
```
This structure uses a column, `country`, to label each row.
Most classic statistical methods don't want the identifier as an explicit variable, but instead want it in the so-called row names.
We can move `country` out of the columns and into the row names with `column_to_rownames()`:
```{r}
col_year <- col_year |> 
  column_to_rownames("country")

head(col_year)
```
This produces a data frame, because tibbles don't support row names[^data-tidy-1].

[^data-tidy-1]: tibbles don't use row names because they only work for a subset of important cases: when observations can be identified by a single character vector.

We're now ready to cluster with (e.g.) `kmeans()`:
```{r}
cluster <- stats::kmeans(col_year, centers = 6)
```
Extracting the data out of this object into a form you can work with is a challenge we'll need to come back to later in the book, once you've learned more about lists.
But for now, you can get the clustering membership out with this code:
```{r}
cluster_id <- cluster$cluster |> 
  enframe() |> 
  rename(country = name, cluster_id = value)
cluster_id
```
You could then combine this back with the original data using one of the joins you'll learn about in Chapter \@ref(relational-data).
```{r}
gapminder |> left_join(cluster_id)
```
### Pragmatic computation
Finally, sometimes it's just easier to answer a question using untidy data.
For example, if you're interested in just the total number of missing values in `cms_patient_experience`, it's easier to work with the untidy form:
```{r}
cms_patient_experience |> 
  group_by(org_pac_id) |> 
  summarise(
    n_miss = sum(is.na(prf_rate)),
    n = n()
  )
```
This partly comes back to our original definition of tidy data, where I said tidy data has one variable in each column, but I didn't actually define what a variable is (and it's surprisingly hard to do so).
It's totally fine to be pragmatic and to say a variable is whatever makes your analysis easiest.
So if you're stuck figuring out how to do some computation, maybe it's time to switch up the organisation of your data.
For computations involving a fixed number of values (like computing differences or ratios), it's usually easier if the data is in columns; for those with a variable number of values (like sums or means), it's usually easier in rows.
Don't be afraid to untidy, transform, and re-tidy if needed.
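As a small sketch of the fixed-number case (the data here is made up for illustration), computing a year-over-year change is a single subtraction once each year is its own column:

```{r}
# Made-up data: with one column per year, the change between two fixed
# time points is simple column arithmetic
df <- tribble(
  ~country, ~year, ~cases,
  "A",       1999,     10,
  "A",       2000,     12,
  "B",       1999,     20,
  "B",       2000,     19
)
df |> 
  pivot_wider(names_from = year, values_from = cases) |> 
  mutate(change = `2000` - `1999`)
```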