# Joins {#sec-relational-data}
```{r}
#| results: "asis"
#| echo: false
source("_common.R")
status("restructuring")
```
## Introduction
It's rare that a data analysis involves only a single data frame.
Typically you have many data frames, and you must **join** them together to answer the questions that you're interested in.
All the verbs in this chapter use a pair of data frames.
Fortunately this is enough, since you can solve any more complex problem a pair at a time.
You'll learn about two important types of joins in this chapter:
- **Mutating joins** add new variables to one data frame from matching observations in another.
- **Filtering joins** filter observations from one data frame based on whether or not they match an observation in another.

If you're familiar with SQL, the ideas in this chapter should feel familiar, as their realization in dplyr is very similar.
We'll point out any important differences as we go.
Don't worry if you're not familiar with SQL as you'll learn more about it in @sec-import-databases.
### Prerequisites
We will explore relational data from nycflights13 using the join functions from dplyr.
```{r}
#| label: setup
#| message: false
library(tidyverse)
library(nycflights13)
```
## nycflights13 {#sec-nycflights13-relational}
As well as the `flights` data frame that you used in @sec-data-transform, nycflights13 contains four additional related tibbles:
- `airlines` lets you look up the full carrier name from its abbreviated code:
```{r}
airlines
```
- `airports` gives information about each airport, identified by the `faa` airport code:
```{r}
airports
```
- `planes` gives information about each plane, identified by its `tailnum`:
```{r}
planes
```
- `weather` gives the weather at each NYC airport for each hour:
```{r}
weather
```
These datasets are connected as follows:
- `flights` connects to `planes` through a single variable, `tailnum`.
- `flights` connects to `airlines` through the `carrier` variable.
- `flights` connects to `airports` in two ways: through the origin (`origin`) and through the destination (`dest`).
- `flights` connects to `weather` through two variables at the same time: the location (`origin`) and the time (`time_hour`).
One way to show the relationships between the different data frames is with a diagram, as in @fig-flights-relationships.
This diagram is a little overwhelming, but it's simple compared to some you'll see in the wild!
The key to understanding diagrams like this is that you'll solve real problems by working with pairs of data frames.
You don't need to understand the whole thing; you just need to understand the chain of connections between the two data frames that you're interested in.
```{r}
#| label: fig-flights-relationships
#| echo: false
#| out-width: ~
#| fig-cap: >
#|   Connections between all five data frames in the nycflights13 package.
#| fig-alt: >
#| Diagram showing the relationships between airports, planes, flights,
#| weather, and airlines datasets from the nycflights13 package. The faa
#| variable in the airports data frame is connected to the origin and dest
#| variables in the flights data frame. The tailnum variable in the planes
#|   data frame is connected to the tailnum variable in flights. The
#|   time_hour and origin variables in the weather data frame are connected
#|   to the variables with the same name in the flights data frame. And
#|   finally the carrier variable in the airlines data frame is connected
#|   to the carrier variable in the flights data frame. There are no direct
#|   connections between airports, planes, airlines, and weather data
#|   frames.
knitr::include_graphics("diagrams/relational.png", dpi = 270)
```
### Exercises
1. Imagine you wanted to draw (approximately) the route each plane flies from its origin to its destination.
What variables would you need?
What data frames would you need to combine?
2. We forgot to draw the relationship between `weather` and `airports`.
What is the relationship and how should it appear in the diagram?
3. `weather` only contains information for the origin (NYC) airports.
If it contained weather records for all airports in the USA, what additional relation would it define with `flights`?
## Keys
The variables used to connect each pair of data frames are called **keys**.
A key is a variable (or set of variables) that uniquely identifies an observation.
In simple cases, a single variable is sufficient to identify an observation.
For example, each plane is uniquely identified by its `tailnum`.
In other cases, multiple variables may be needed.
For example, to identify an observation in `weather` you need two variables: `time_hour` and `origin`.
There are two types of keys:
- A **primary key** uniquely identifies an observation in its own data frame.
For example, `planes$tailnum` is a primary key because it uniquely identifies each plane in the `planes` data frame.
- A **foreign key** uniquely identifies an observation in another data frame.
For example, `flights$tailnum` is a foreign key because it appears in the `flights` data frame where it matches each flight to a unique plane.
A variable can be both a primary key *and* a foreign key.
For example, `origin` is part of the `weather` primary key, and is also a foreign key for the `airports` data frame.
Once you've identified the primary keys in your data frames, it's good practice to verify that they do indeed uniquely identify each observation.
One way to do that is to `count()` the primary keys and look for entries where `n` is greater than one:
```{r}
planes |>
  count(tailnum) |>
  filter(n > 1)

weather |>
  count(time_hour, origin) |>
  filter(n > 1)
```
Sometimes a data frame doesn't have an explicit primary key and only an unwieldy combination of variables reliably identifies an observation.
For example, to uniquely identify a flight, we need the hour the flight departs, the carrier, and the flight number:
```{r}
flights |>
  count(time_hour, carrier, flight) |>
  filter(n > 1)
```
When starting to work with this data, we had naively assumed that each flight number would only be used once per day: that would make it much easier to communicate problems with a specific flight.
Unfortunately that is not the case, and to form a primary key for `flights` we have to assume that a flight number will never be re-used within an hour.
If a data frame lacks a primary key, it's sometimes useful to add one with `mutate()` and `row_number()`.
That makes it easier to match observations if you've done some filtering and want to check back in with the original data.
This is called a **surrogate key**.
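
For example, here's a minimal sketch of adding a surrogate key to `flights` (the column name `id` is just an illustrative choice):

```{r}
flights |>
  mutate(id = row_number(), .before = 1) |>
  select(id, year, time_hour, carrier, flight)
```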
### Exercises
1. Add a surrogate key to `flights`.
2. The year, month, day, hour, and origin variables almost form a compound key for weather, but there's one hour that has duplicate observations.
Can you figure out what's special about this time?
3. We know that some days of the year are "special", and fewer people than usual fly on them.
How might you represent that data as a data frame?
What would be the primary keys of that data frame?
How would it connect to the existing data frames?
4. Identify the keys in the following datasets.
a. `Lahman::Batting`
b. `babynames::babynames`
c. `nasaweather::atmos`
d. `fueleconomy::vehicles`
e. `ggplot2::diamonds`
(You might need to install some packages and read some documentation.)
5. Draw a diagram illustrating the connections between the `Batting`, `People`, and `Salaries` data frames in the Lahman package.
Draw another diagram that shows the relationship between `People`, `Managers`, `AwardsManagers`.
How would you characterise the relationship between the `Batting`, `Pitching`, and `Fielding` data frames?
## Understanding joins
To help you learn how joins work, we'll start with a visual representation of the two simple tibbles defined below and shown in @fig-join-setup.
The coloured column represents the keys of the two data frames, here literally called `key`.
The grey column represents the "value" column that is carried along for the ride.
In these examples we'll use a single key variable, but the idea generalizes to multiple keys and multiple values.
```{r}
x <- tribble(
  ~key, ~val_x,
  1, "x1",
  2, "x2",
  3, "x3"
)
y <- tribble(
  ~key, ~val_y,
  1, "y1",
  2, "y2",
  4, "y3"
)
```
```{r}
#| label: fig-join-setup
#| echo: false
#| out-width: ~
#| fig-cap: >
#| Graphical representation of two simple tables.
#| fig-alt: >
#| x and y are two data frames with 2 columns and 3 rows each. The first
#| column in each is the key and the second is the value. The contents of
#| these data frames are given in the subsequent code chunk.
knitr::include_graphics("diagrams/join/setup.png", dpi = 270)
```
A join is a way of connecting each row in `x` to zero, one, or more rows in `y`.
@fig-join-setup2 shows each potential match as an intersection of a pair of lines.
If you look closely, you'll notice that we've switched the order of the key and value columns in `x`.
This is to emphasize that joins match based on the key; the other columns are just carried along for the ride.
```{r}
#| label: fig-join-setup2
#| echo: false
#| out-width: ~
#| fig-cap: >
#|   To prepare to show how joins work we create a grid showing every
#|   possible match between the two tibbles.
#| fig-alt: >
#|   x and y data frames placed next to each other, with the key variable
#|   moved up front in y so that the key variable in x and key variable
#|   in y appear next to each other.
knitr::include_graphics("diagrams/join/setup2.png", dpi = 270)
```
In an actual join, matches will be indicated with dots, as in @fig-join-inner.
The number of dots equals the number of matches, which equals the number of rows in the output: a new data frame that contains the key, the `x` values, and the `y` values.
The join shown here is a so-called **inner join**, where the output contains only the rows that appear in both `x` and `y`.
```{r}
#| label: fig-join-inner
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A join showing which rows in the x table match rows in the y table.
#| fig-alt: >
#| Keys 1 and 2 in x and y data frames are matched and indicated with lines
#| joining these rows with dot in the middle. Hence, there are two dots in
#| this diagram. The resulting joined data frame has two rows and 3 columns:
#| key, val_x, and val_y. Values in the key column are 1 and 2, the matched
#| values.
knitr::include_graphics("diagrams/join/inner.png", dpi = 270)
```
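
To make this concrete, we can run the inner join on the two tibbles defined above; only keys 1 and 2 appear in both `x` and `y`, so the output has two rows:

```{r}
x |> inner_join(y, by = "key")
```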
An **outer join** keeps observations that appear in at least one of the data frames.
These joins work by adding an additional "virtual" observation to each data frame.
This observation has a key that matches if no other key matches, and values filled with `NA`.
There are three types of outer joins:
- A **left join** keeps all observations in `x`, as shown in @fig-join-left.
```{r}
#| label: fig-join-left
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the left join. Every row of `x` is
#|   preserved in the output because it can fall back to matching a
#| row of `NA`s in `y`.
#| fig-alt: >
#| Left join: keys 1 and 2 from x are matched to those in y, key 3 is
#| also carried along to the joined result since it's on the left data
#| frame, but key 4 from y is not carried along since it's on the right
#| but not on the left. The result has 3 rows: keys 1, 2, and 3,
#| all values from val_x, and the corresponding values from val_y for
#| keys 1 and 2 with an NA for key 3, val_y.
knitr::include_graphics("diagrams/join/left.png", dpi = 270)
```
- A **right join** keeps all observations in `y`, as shown in @fig-join-right.
```{r}
#| label: fig-join-right
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the right join. Every row of `y` is
#|   preserved in the output because it can fall back to matching a
#| row of `NA`s in `x`.
#| fig-alt: >
#| Keys 1 and 2 from x are matched to those in y, key 4 is
#| also carried along to the joined result since it's on the right data frame,
#| but key 3 from x is not carried along since it's on the left but not on the
#| right. The result is a data frame with 3 rows: keys 1, 2, and 4, all values
#| from val_y, and the corresponding values from val_x for keys 1 and 2 with
#| an NA for key 4, val_x.
knitr::include_graphics("diagrams/join/right.png", dpi = 270)
```
- A **full join** keeps all observations in `x` and `y`, as shown in @fig-join-full.
```{r}
#| label: fig-join-full
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the full join. Every row of `x` and `y`
#| is included in the output because both `x` and `y` have a fallback
#| row of `NA`s.
#| fig-alt: >
#| The result has 4 rows: keys 1, 2, 3, and 4 with all values
#| from val_x and val_y, however key 2, val_y and key 4, val_x are NAs since
#| those keys aren't present in their respective data frames.
knitr::include_graphics("diagrams/join/full.png", dpi = 270)
```
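
To see how the three outer joins differ in practice, here's each of them applied to the same two tibbles:

```{r}
x |> left_join(y, by = "key")
x |> right_join(y, by = "key")
x |> full_join(y, by = "key")
```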
Another way to show how the outer joins differ is with a Venn diagram, as in @fig-join-venn.
This, however, is not a great representation because while it might jog your memory about which rows are preserved, it fails to illustrate what's happening with the columns.
```{r}
#| label: fig-join-venn
#| echo: false
#| out-width: ~
#| fig-cap: >
#| Venn diagrams showing the difference between inner, left, right, and
#| full joins.
#| fig-alt: >
#| Venn diagrams for inner, full, left, and right joins. Each join represented
#| with two intersecting circles representing data frames x and y, with x on
#| the right and y on the left. Shading indicates the result of the join.
#| Inner join: Only intersection is shaded. Full join: Everything is shaded.
#| Left join: Only x is shaded, but not the area in y that doesn't intersect
#| with x. Right join: Only y is shaded, but not the area in x that doesn't
#| intersect with y.
knitr::include_graphics("diagrams/join/venn.png", dpi = 270)
```
## Join columns {#sec-mutating-joins}
Now that you've got the basic idea of joins under your belt, let's use them with the flights data.
We call the four inner and outer joins **mutating joins** because their primary role is to add additional columns to the `x` data frame.
(They also have a secondary impact on the rows, which we'll come back to shortly.)
A mutating join allows you to combine variables from two data frames.
It first matches observations by their keys, then copies across variables from one data frame to the other.
The most commonly used join is the left join: you use this whenever you look up additional data from another data frame, because it preserves the original observations even when there isn't a match.
The left join should be your default join: use it unless you have a strong reason to prefer one of the others.
Like `mutate()`, the join functions add variables to the right, so if you have a lot of variables already, the new variables won't get printed out.
For these examples, we'll make it easier to see what's going on by creating a narrower dataset:
```{r}
flights2 <- flights |>
  select(year, time_hour, origin, dest, tailnum, carrier)
flights2
```
(Remember, when you're in RStudio, you can also use `View()` to avoid this problem.)
Imagine you want to add the full airline name to the `flights2` data.
You can combine the `airlines` and `flights2` data frames with `left_join()`:
```{r}
flights2 |>
left_join(airlines)
```
The result of joining `airlines` to `flights2` is an additional variable: `name`.
This is why we call this type of join a mutating join.
### Join keys
Our join diagrams made an important simplification: that the tables are connected by a single join key, and that key has the same name in both data frames.
In this section, you'll learn how to specify the join keys used by dplyr's joins.
By default, joins will use all variables that appear in both data frames as the join key, the so-called **natural join**.
We saw this above, where joining `flights2` with `airlines` matched on the `carrier` column.
This also works when there's more than one variable required to match rows in the two tables, for example flights and weather:
```{r}
flights2 |>
left_join(weather)
```
This is a useful heuristic, but it doesn't always work.
What happens if we try to join `flights` with `planes`?
```{r}
flights2 |>
left_join(planes)
```
We get a lot of missing matches because both `flights` and `planes` have a `year` column but they mean different things: the year the flight occurred and the year the plane was built.
We only want to join on the `tailnum` column so we need an explicit specification:
```{r}
flights2 |>
left_join(planes, join_by(tailnum))
```
Note that the `year` variables (which appear in both input data frames, but are not constrained to be equal) are disambiguated in the output with a suffix.
You can control this with the `suffix` argument.
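
For instance, here's a quick sketch that overrides the default `.x`/`.y` suffixes (the suffix values are just illustrative choices):

```{r}
flights2 |>
  left_join(planes, join_by(tailnum), suffix = c("_flight", "_plane"))
```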
`join_by(tailnum)` indicates that we want to join using the `tailnum` column in both `x` and `y`.
What happens if the variable name is different?
It turns out that `join_by(tailnum)` is shorthand for `join_by(tailnum == tailnum)`, which is in turn shorthand for `join_by(x$tailnum == y$tailnum)`.
For example, there are two ways to join the `flights2` and `airports` tables: either by `dest` or by `origin`:
```{r}
flights2 |>
left_join(airports, join_by(dest == faa))
flights2 |>
left_join(airports, join_by(origin == faa))
```
In older code you might see a different way of specifying the join keys, using a character vector.
`by = "x"` corresponds to `join_by(x)` and `by = c("a" = "x")` corresponds to `join_by(a == x)`.
We now prefer `join_by()` as it's a more flexible specification that supports many other types of join, as you'll learn in @sec-non-equi-joins.
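
For example, these older-style specifications are equivalent to `join_by(tailnum)` and `join_by(dest == faa)` respectively:

```{r}
#| eval: false
flights2 |>
  left_join(planes, by = "tailnum")
flights2 |>
  left_join(airports, by = c("dest" = "faa"))
```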
### Exercises
1. Compute the average delay by destination, then join on the `airports` data frame so you can show the spatial distribution of delays.
Here's an easy way to draw a map of the United States:
```{r}
#| eval: false
airports |>
semi_join(flights, join_by(faa == dest)) |>
ggplot(aes(lon, lat)) +
borders("state") +
geom_point() +
coord_quickmap()
```
(Don't worry if you don't understand what `semi_join()` does --- you'll learn about it later.)
You might want to use the `size` or `colour` of the points to display the average delay for each airport.
2. Add the location of the origin *and* destination (i.e. the `lat` and `lon`) to `flights`.
Is it easier to rename the columns before or after the join?
3. Is there a relationship between the age of a plane and its delays?
4. What weather conditions make it more likely to see a delay?
5. What happened on June 13 2013?
Display the spatial pattern of delays, and then use Google to cross-reference with the weather.
```{r}
#| eval: false
#| include: false
worst <- filter(flights, !is.na(dep_time), month == 6, day == 13)
worst |>
group_by(dest) |>
summarise(delay = mean(arr_delay), n = n()) |>
filter(n > 5) |>
inner_join(airports, by = c("dest" = "faa")) |>
ggplot(aes(lon, lat)) +
borders("state") +
geom_point(aes(size = n, colour = delay)) +
coord_quickmap()
```
## Join rows
While the most obvious impact of a join is on the columns, joins also affect the number of rows.
A row in `x` can match 0, 1, or \>1 rows in `y`.
Most obviously, `inner_join()` will drop rows from `x` that don't have a match in `y`; that's why we recommend using `left_join()` as your go-to join.
All joins can also increase the number of rows if a row in `x` matches multiple rows in `y`.
It's easy to be surprised by this, so by default equi-joins will warn when it happens.
We'll start by discussing the most important and most common type of join, the many-to-one join.
We'll then discuss the inverse, a one-to-many join.
Next comes the many-to-many join.
And we'll finish off with the one-to-one join, which is relatively uncommon but still useful.
### Many-to-one joins {#sec-join-matches}
A **many-to-one** join arises when many rows in `x` match the same row in `y`, as in @fig-join-many-to-one.
This is a very common type of join because it arises when a key in `x` is a foreign key that matches a primary key in `y`.
```{r}
#| label: fig-join-many-to-one
#| echo: false
#| out-width: ~
#| fig-cap: >
#|   In a many-to-one join, multiple rows in `x` match the same row in `y`.
#| We show the key column in a slightly different position in the output,
#| because the key is usually a foreign key in `x` and a primary key in
#| `y`.
#| fig-alt: >
#|   A diagram describing a left join where one of the data frames (x) has
2022-08-31 23:06:56 +08:00
#| duplicate keys. Data frame x is on the left, has 4 rows and 2 columns
#| (key, val_x), and has the keys 1, 2, 2, and 1. Data frame y is on the
#| right, has 2 rows and 2 columns (key, val_y), and has the keys 1 and 2.
#| Left joining these two data frames yields a data frame with 4 rows
#| (keys 1, 2, 2, and 1) and 3 columns (val_x, key, val_y). All values
#| from x$val_x are carried along, values in y for key 1 and 2 are duplicated.
knitr::include_graphics("diagrams/join/many-to-one.png", dpi = 270)
```
Many-to-one joins naturally arise when you want to supplement one table with the data from another.
There are many cases where this comes up with the flights data.
For example, the following code shows how we might add the carrier name or plane information to the flights dataset:
```{r}
flights2 |>
  left_join(airlines, by = "carrier")

flights2 |>
  left_join(planes, by = "tailnum")
```
### One-to-many joins
A **one-to-many** join is very similar to a many-to-one join, with `x` and `y` swapped, as in @fig-join-one-to-many.
```{r}
#| label: fig-join-one-to-many
#| echo: false
#| out-width: ~
#| fig-cap: >
#|   In a one-to-many join, one row in `x` matches multiple rows in `y`.
#| fig-alt: >
#| TBA
knitr::include_graphics("diagrams/join/one-to-many.png", dpi = 270)
```
Flipping the join from the previous section answers a slightly different question.
Instead of "give me the information about the plane used for this flight", it's more like "tell me all the flights that this plane flew":
```{r}
planes |>
select(tailnum, type, engines) |>
left_join(flights, by = "tailnum")
```
We believe one-to-many joins to be relatively rare and potentially confusing because they can radically increase the number of rows in the output.
For this reason, you'll need to set `multiple = "all"` to avoid the warning:
```{r}
planes |>
select(tailnum, type, engines) |>
left_join(flights, by = "tailnum", multiple = "all")
```
### Many-to-many joins
A **many-to-many** join arises when both data frames have duplicate keys, as in @fig-join-many-to-many.
When duplicated keys match, they generate all possible combinations, the Cartesian product.
```{r}
#| label: fig-join-many-to-many
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A many-to-many join is usually undesired because it produces an
#| explosion of new rows.
#| fig-alt: >
#| Diagram describing a left join where both data frames (x and y) have
#| duplicate keys. Data frame x is on the left, has 4 rows and 2 columns
#| (key, val_x), and has the keys 1, 2, 2, and 3. Data frame y is on the
#| right, has 4 rows and 2 columns (key, val_y), and has the keys 1, 2, 2,
#| and 3 as well. Left joining these two data frames yields a data frame
#| with 6 rows (keys 1, 2, 2, 2, 2, and 3) and 3 columns (key, val_x,
#| val_y). All values from both datasets are included.
knitr::include_graphics("diagrams/join/many-to-many.png", dpi = 270)
```
Many-to-many joins are usually a mistake because you get all possible combinations, increasing the total number of rows.
If you do a many-to-many join in dplyr, you'll get a warning:
```{r}
x3 <- tribble(
~key, ~val_x,
1, "x1",
2, "x2",
2, "x3",
3, "x4"
)
y3 <- tribble(
~key, ~val_y,
1, "y1",
2, "y2",
2, "y3",
3, "y4"
)
x3 |>
left_join(y3, by = "key")
```
Silence the warning by fixing the underlying data, or if you really do want a many-to-many join (which can be useful in some circumstances), set `multiple = "all"`.
```{r}
x3 |>
left_join(y3, by = "key", multiple = "all")
```
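
For instance, if the duplicated keys in `y3` were a data entry mistake, one sketch of a fix is to deduplicate before joining (this assumes that keeping the first value for each key is acceptable):

```{r}
# Keep only the first row for each key; assumes that's the right one to keep
x3 |>
  left_join(y3 |> distinct(key, .keep_all = TRUE), by = "key")
```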
### One-to-one joins
To ensure that an `inner_join()` is a one-to-one join, you need to set two options:
- `multiple = "error"` ensures that every row in `x` matches at most one row in `y`.
- `unmatched = "error"` ensures that every row in `x` matches at least one row in `y`.
One-to-one joins are relatively rare, and usually only come up when something that makes sense as one table has to be split across multiple files for some structural reason.
For example, there may be a very large number of columns, and it's easier to work with subsets spread across multiple files.
Or maybe some of the columns are confidential and can only be accessed by certain people.
For example, think of an employees table --- it's ok for everyone to see the names of their colleagues, but only some people should be able to see their home addresses or salaries.
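
Here's a minimal sketch of such a strict one-to-one join, using two small tables invented for illustration (`public` and `confidential` are hypothetical, not part of nycflights13):

```{r}
#| eval: false
# Hypothetical employee data split into a public and a confidential table
public <- tibble(emp_id = 1:3, name = c("Ada", "Grace", "Hedy"))
confidential <- tibble(emp_id = 1:3, salary = c(50, 60, 70))

public |>
  inner_join(confidential, join_by(emp_id), multiple = "error", unmatched = "error")
```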
## Non-equi joins {#sec-non-equi-joins}
So far we've focused on the so-called "equi-joins", because the joins are defined by equality: the keys in `x` must be equal to the keys in `y` for the rows to match.
This allows us to make an important simplification in both the diagrams and the returned data frames: we only ever include the join key from one table.
We can request that dplyr keep both keys with `keep = TRUE`.
This is shown in the code below and in @fig-inner-both.
```{r}
x |> left_join(y, by = "key", keep = TRUE)
```
```{r}
#| label: fig-inner-both
#| fig-cap: >
#| Inner join showing keys from both `x` and `y`. This is not the
#| default because for equi-joins, the keys are the same so showing
#| both doesn't add anything.
#| echo: false
#| out-width: ~
knitr::include_graphics("diagrams/join/inner-both.png", dpi = 270)
```
This distinction between the keys becomes much more important as we move away from equi-joins because the key values are much more likely to be different.
Because of this, dplyr defaults to showing both keys.
For example, instead of requiring that the `x` and `y` keys be equal, we could request that the key from `x` be greater than or equal to the key from `y`, as in the code below and @fig-join-gte.
```{r}
x |> inner_join(y, join_by(key >= key))
```
```{r}
#| label: fig-join-gte
#| echo: false
#| fig-cap: >
#|   A non-equi join where the `x` key must be greater than or equal to the `y` key.
knitr::include_graphics("diagrams/join/gte.png", dpi = 270)
```
Non-equi join isn't a particularly useful term because it only tells you what the join is not, not what it is. dplyr helps a bit by identifying three useful types of non-equi join:
- **Inequality joins** use `<`, `<=`, `>`, or `>=` instead of `==`.
- **Rolling joins** use `closest()`, e.g. `join_by(closest(x >= y))`.
- **Overlap joins** use `between(x$val, y$lower, y$upper)`, `within(x$lower, x$upper, y$lower, y$upper)`, and `overlaps(x$lower, x$upper, y$lower, y$upper)`.
Each of these is described in more detail in the following sections.
### Inequality joins
Inequality joins are extremely general, so general that it's hard to find meaningful specific use cases.
One small useful technique is to generate all pairs:
```{r}
df <- tibble(id = 1:4, name = c("John", "Simon", "Tracy", "Max"))
df |> left_join(df, join_by(id < id))
```
Here we perform a self-join (i.e. we join a table to itself), then use the inequality join to ensure that we keep only one of the two possible orderings of each pair (e.g. just (a, b), not also (b, a)) and don't match a row to itself.
### Rolling joins
```{r}
#| label: fig-join-following
#| echo: false
#| out-width: ~
#| fig-cap: >
#|   A rolling join is similar to a greater-than-or-equal inequality join
#|   but only matches the closest value.
knitr::include_graphics("diagrams/join/following.png", dpi = 270)
```
Rolling joins are a special type of inequality join where instead of getting *every* row that satisfies the inequality, you get just the closest row.
They're particularly useful when you have two tables of dates that don't perfectly line up and you want to find (e.g.) the closest date in table 1 that comes before (or after) some date in table 2.
You can turn any inequality join into a rolling join by adding `closest()`.
For example `join_by(closest(x <= y))` finds the smallest `y` that's greater than or equal to `x`, and `join_by(closest(x > y))` finds the biggest `y` that's less than `x`.
For example, imagine that you're in charge of office birthdays.
Your company is rather stingy, so instead of having individual parties, you only have a party once each quarter.
Parties are always on a Monday, and you skip the first week of January since a lot of people are on holiday.
The first Monday of Q3 is July 4, so that party has to be pushed back a week.
That leads to the following party days:
```{r}
parties <- tibble(
q = 1:4,
party = lubridate::ymd(c("2022-01-10", "2022-04-04", "2022-07-11", "2022-10-03"))
)
```
Then we have a table of employees along with their birthdays:
```{r}
set.seed(1014)
employees <- tibble(
name = wakefield::name(100),
birthday = lubridate::ymd("2022-01-01") + (sample(365, 100, replace = TRUE) - 1)
)
employees
```
To find out which party each employee will use to celebrate their birthday, we can use a rolling join.
We want to find the closest party that comes before (or on) each employee's birthday, which we can write as `closest(birthday >= party)`:

```{r}
employees |>
  left_join(parties, join_by(closest(birthday >= party)))
```
### Overlap joins
There's one problem with the strategy used for assigning birthday parties above: there's no party preceding the birthdays on January 1-9.
So maybe we'd be better off being explicit about the date ranges that each party spans, and make a special case for those early birthdays:
```{r}
parties <- tibble(
q = 1:4,
party = lubridate::ymd(c("2022-01-10", "2022-04-04", "2022-07-11", "2022-10-03")),
start = lubridate::ymd(c("2022-01-01", "2022-04-04", "2022-07-11", "2022-10-03")),
end = lubridate::ymd(c("2022-04-03", "2022-07-11", "2022-10-02", "2022-12-31"))
)
parties
```
This is a good place to use `unmatched = "error"`, because I want to find out if any employees didn't get assigned a party:
```{r}
employees |>
inner_join(parties, join_by(between(birthday, start, end)), unmatched = "error")
```
We could also flip the question around and ask which employees will celebrate at each party, but before that let's check the data entry: I'm hopelessly bad at it, so I want to make sure that my party periods don't overlap:
```{r}
parties |>
inner_join(parties, join_by(overlaps(start, end, start, end), q < q))
```
We can use a similar idea to find, for each route, pairs of flights that were in the air at the same time:
```{r}
flights2 <- flights |>
mutate(
dep_date_time = lubridate::make_datetime(year, month, day, dep_time %/% 100, dep_time %% 100),
arr_date_time = lubridate::make_datetime(year, month, day, arr_time %/% 100, arr_time %% 100),
arr_date_time = if_else(arr_date_time < dep_date_time, arr_date_time + lubridate::days(1), arr_date_time),
id = row_number()
) |>
select(id, dep_date_time, arr_date_time, origin, dest, carrier, flight)
flights2
flights2 |>
inner_join(flights2, join_by(origin, dest, overlaps(dep_date_time, arr_date_time, dep_date_time, arr_date_time), id < id))
```
### Exercises
1. What's going on with the keys in the following `full_join()`?
```{r}
x |> full_join(y, by = "key")
x |> full_join(y, by = "key", keep = TRUE)
```
## Filtering joins {#sec-filtering-joins}
Filtering joins match observations in the same way as mutating joins, but affect the observations, not the variables.
There are two types:
- `semi_join(x, y)` **keeps** all observations in `x` that have a match in `y`.
- `anti_join(x, y)` **drops** all observations in `x` that have a match in `y`.
Semi-joins are useful for matching filtered summary data frames back to the original rows.
For example, imagine you've found the top ten most popular destinations:
```{r}
top_dest <- flights |>
  count(dest, sort = TRUE) |>
  head(10)
top_dest
```
Now you want to find each flight that went to one of those destinations.
You could construct a filter yourself:
```{r}
flights |>
  filter(dest %in% top_dest$dest)
```
But it's difficult to extend that approach to multiple variables.
For example, imagine that you'd found the 10 days with highest average delays.
How would you construct the filter statement that used `year`, `month`, and `day` to match it back to `flights`?
Instead you can use a semi-join, which connects the two data frames like a mutating join, but instead of adding new columns, only keeps the rows in `x` that have a match in `y`:
```{r}
flights |>
  semi_join(top_dest)
```
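
For example, here's one sketch of the multi-variable case described above: find the ten days with the highest average departure delay, then use a semi-join to keep just the flights on those days:

```{r}
worst_days <- flights |>
  group_by(year, month, day) |>
  summarise(mean_delay = mean(dep_delay, na.rm = TRUE), .groups = "drop") |>
  slice_max(mean_delay, n = 10)

flights |>
  semi_join(worst_days, join_by(year, month, day))
```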
@fig-join-semi shows what a semi-join looks like.
Only the existence of a match is important; it doesn't matter which observation is matched.
This means that filtering joins never duplicate rows like mutating joins do.
```{r}
#| label: fig-join-semi
#| echo: false
#| out-width: null
#| fig-cap: >
#| In a semi-join it only matters that there is a match; otherwise
#| values in `y` don't affect the output.
#| fig-alt: >
#| Diagram of a semi join. Data frame x is on the left and has two columns
#| (key and val_x) with keys 1, 2, and 3. Diagram y is on the right and also
#| has two columns (key and val_y) with keys 1, 2, and 4. Semi joining these
#| two results in a data frame with two rows and two columns (key and val_x),
#| with keys 1 and 2 (the only keys that match between the two data frames).
knitr::include_graphics("diagrams/join/semi.png", dpi = 270)
```
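
To see the difference, compare a mutating join and a filtering join on the duplicated-key tibbles `x3` and `y3` from earlier: the inner join duplicates rows from `x3`, while the semi-join doesn't:

```{r}
x3 |>
  inner_join(y3, by = "key", multiple = "all")
x3 |>
  semi_join(y3, by = "key")
```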
The inverse of a semi-join is an anti-join.
An anti-join keeps the rows that *don't* have a match, as shown in @fig-join-anti.
Anti-joins are useful for diagnosing join mismatches.
For example, when connecting `flights` and `planes`, you might be interested to know that there are many `flights` that don't have a match in `planes`:
```{r}
flights |>
  anti_join(planes, by = "tailnum") |>
  count(tailnum, sort = TRUE)
```
```{r}
#| label: fig-join-anti
#| echo: false
#| out-width: null
#| fig-cap: >
#| An anti-join is the inverse of a semi-join, dropping rows from `x`
#| that have a match in `y`.
#| fig-alt: >
#| Diagram of an anti join. Data frame x is on the left and has two columns
#| (key and val_x) with keys 1, 2, and 3. Diagram y is on the right and also
#| has two columns (key and val_y) with keys 1, 2, and 4. Anti joining these
#| two results in a data frame with one row and two columns (key and val_x),
#| with keys 3 only (the only key in x that is not in y).
knitr::include_graphics("diagrams/join/anti.png", dpi = 270)
```
### Exercises
1. What does it mean for a flight to have a missing `tailnum`?
What do the tail numbers that don't have a matching record in `planes` have in common?
(Hint: one variable explains \~90% of the problems.)
2. Filter flights to only show flights with planes that have flown at least 100 flights.
3. Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.
4. Find the 48 hours (over the course of the whole year) that have the worst delays.
Cross-reference it with the `weather` data.
Can you see any patterns?
5. What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you?
What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?
6. You might expect that there's an implicit relationship between plane and airline, because each plane is flown by a single airline.
Confirm or reject this hypothesis using the tools you've learned above.
## Join problems
The data you've been working with in this chapter has been cleaned up so that you'll have as few problems as possible.
Your own data is unlikely to be so nice, so there are a few things that you should do with your own data to make your joins go smoothly.
1. Start by identifying the variables that form the primary key in each data frame.
You should usually do this based on your understanding of the data, not empirically by looking for a combination of variables that give a unique identifier.
If you just look for variables without thinking about what they mean, you might get (un)lucky and find a combination that's unique in your current data but the relationship might not be true in general.
For example, the altitude and longitude uniquely identify each airport, but they are not good identifiers!
```{r}
airports |> count(alt, lon) |> filter(n > 1)
```
2. Check that none of the variables in the primary key are missing.
If a value is missing then it can't identify an observation!
3. Check that your foreign keys match primary keys in another data frame.
The best way to do this is with an `anti_join()`.
It's common for keys not to match because of data entry errors.
Fixing these is often a lot of work.
If you do have missing keys, you'll need to be thoughtful about your use of inner vs. outer joins, carefully considering whether or not you want to drop rows that don't have a match.
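
For example, here's a quick sketch of checks 2 and 3 applied to the nycflights13 data:

```{r}
# Check 2: are any primary key values missing?
planes |>
  filter(is.na(tailnum))

# Check 3: do any foreign keys fail to match a primary key?
flights |>
  anti_join(planes, join_by(tailnum)) |>
  distinct(tailnum)
```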
Be aware that simply checking the number of rows before and after the join is not sufficient to ensure that your join has gone smoothly.
If you have an inner join with duplicate keys in both data frames, you might get unlucky as the number of dropped rows might exactly equal the number of duplicated rows!