# Joins {#sec-relational-data}
```{r}
#| results: "asis"
#| echo: false
source("_common.R")
status("restructuring")
```
## Introduction
It's rare that a data analysis involves only a single data frame.
Typically you have many data frames, and you must **join** them together to answer the questions that you're interested in.
All the verbs in this chapter use a pair of data frames.
Fortunately this is enough, since you can solve any more complex problem a pair at a time.
You'll learn about two important types of joins in this chapter:
- **Mutating joins** add new variables to one data frame from matching observations in another.
- **Filtering joins** filter observations from one data frame based on whether or not they match an observation in another.
If you're familiar with SQL, you should find the ideas in this chapter familiar, as their realization in dplyr is very similar.
We'll point out any important differences as we go.
Don't worry if you're not familiar with SQL as you'll learn more about it in @sec-import-databases.
### Prerequisites
We'll explore the five related datasets from nycflights13 using the join functions from dplyr.
```{r}
#| label: setup
#| message: false

library(tidyverse)
library(nycflights13)
```
## Keys
The connection between two tables is defined by a pair of keys.
In this section, you'll learn what those terms mean, and how they apply to the datasets in the nycflights13 package.
You'll also learn how to check that your keys are valid, and what to do if your table lacks a key.
### Primary and foreign keys
To understand joins, you need to first understand how two tables might be connected: through a pair of keys, one primary and one foreign.
A **primary key** is a variable (or group of variables) that uniquely identifies an observation.
A **foreign key** is a variable (or group of variables) that corresponds to a primary key in another table, and is used to connect two tables.
Let's make those terms concrete by looking at four other data frames in nycflights13:
- `airlines` lets you look up the full carrier name from its abbreviated code.
Its primary key is the two letter `carrier` code.
```{r}
airlines
```
- `airports` gives information about each airport.
Its primary key is the three letter `faa` airport code.
```{r}
airports
```
- `planes` gives information about each plane.
Its primary key is `tailnum`.
```{r}
planes
```
- `weather` gives the weather at each NYC airport for each hour.
It has a compound primary key; to uniquely identify each observation you need to know both `origin` (the location) and `time_hour` (the time).
```{r}
weather
```
These datasets are all connected to the `flights` data frame because the `tailnum`, `carrier`, `origin`, `dest`, and `time_hour` variables in `flights` match primary keys in the other datasets, making them foreign keys.
- `flights$tailnum` connects to primary key `planes$tailnum`.
- `flights$carrier` connects to primary key `airlines$carrier`.
- `flights$origin` connects to primary key `airports$faa`.
- `flights$dest` connects to primary key `airports$faa`.
- `flights$origin`-`flights$time_hour` connects to primary key `weather$origin`-`weather$time_hour`.
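We can sanity-check these connections directly. As a quick sketch using `%in%` (the join-based tools for this arrive later in the chapter), we count `flights` rows whose foreign key has no match in the corresponding primary key; a count of zero means every value matches:

```{r}
# Flights whose carrier code doesn't appear in airlines
flights |>
  filter(!carrier %in% airlines$carrier) |>
  nrow()

# Flights whose tail number doesn't appear in planes
# (missing tail numbers also count as unmatched here)
flights |>
  filter(!tailnum %in% planes$tailnum) |>
  nrow()
```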
We can also draw these relationships, as in @fig-flights-relationships.
This diagram is a little overwhelming, but it's simple compared to some you'll see in the wild!
The key to understanding diagrams like this is that you'll solve real problems by working with pairs of data frames.
You don't need to understand the whole thing; you just need to understand the chain of connections between the two data frames that you're interested in.
```{r}
#| label: fig-flights-relationships
#| echo: false
#| out-width: ~
#| fig-cap: >
#| Connections between all five data frames in the nycflights13 package.
#| Variables making up a primary key are coloured grey, and are connected
#| to their corresponding foreign keys with arrows.
#| fig-alt: >
#| Diagram showing the relationships between airports, planes, flights,
#| weather, and airlines datasets from the nycflights13 package. The faa
#| variable in the airports data frame is connected to the origin and dest
#| variables in the flights data frame. The tailnum variable in the planes
#| data frame is connected to the tailnum variable in flights. The
#| time_hour and origin variables in the weather data frame are connected
#| to the variables with the same name in the flights data frame. And
#| finally the carrier variables in the airlines data frame is connected
#| to the carrier variable in the flights data frame. There are no direct
#| connections between airports, planes, airlines, and weather data
#| frames.
knitr::include_graphics("diagrams/relational.png", dpi = 270)
```
### Checking primary keys
Now that we've identified the primary keys, it's good practice to verify that they do indeed uniquely identify each observation.
One way to do that is to `count()` the primary keys and look for entries where `n` is greater than one.
This reveals that `planes` and `weather` both look good:
```{r}
planes |>
  count(tailnum) |>
  filter(n > 1)

weather |>
  count(time_hour, origin) |>
  filter(n > 1)
```
You should also check for missing values in your primary keys --- if a value is missing then it can't identify an observation!
```{r}
planes |>
  filter(is.na(tailnum))

weather |>
  filter(is.na(time_hour) | is.na(origin))
```
### Surrogate keys
So far we haven't talked about the primary key for `flights`.
It's not super important here, because there are no data frames that use it as a foreign key, but it's still useful to think about, because it's easier to work with observations if you have some way to uniquely identify them.
There's clearly no one variable or even a pair of variables that uniquely identifies a flight, but we can find three together that work:
```{r}
flights |>
  count(time_hour, carrier, flight) |>
  filter(n > 1)
```
Does that make `time_hour`-`carrier`-`flight` a primary key?
It's certainly a good start, but it doesn't guarantee it.
For example, are altitude and longitude a primary key for `airports`?
```{r}
airports |>
  count(alt, lat) |>
  filter(n > 1)
```
Identifying an airport by its altitude and latitude is clearly a bad idea, and in general it's not possible to know from the data itself whether or not a combination of variables that uniquely identifies each observation is a primary key.
For flights, the combination of `time_hour`, `carrier`, and `flight` seems like a reasonable primary key because it would be really confusing for the airline if there were multiple flights with the same number in the air at the same time.
That said, we might be better off introducing a simple numeric **surrogate** key using the row number:
```{r}
flights2 <- flights |>
  mutate(id = row_number(), .before = 1)
flights2
```
Surrogate keys can be particularly useful when communicating with other humans: it's much easier to tell someone to take a look at flight 2001 than to say: look at UA430, which departed at 9am on 2013-01-03.
### Exercises
1. We forgot to draw the relationship between `weather` and `airports` in @fig-flights-relationships.
What is the relationship and how should it appear in the diagram?
2. `weather` only contains information for the origin (NYC) airports.
If it contained weather records for all airports in the USA, what additional relation would it define with `flights`?
3. The `year`, `month`, `day`, `hour`, and `origin` variables almost form a compound key for `weather`, but there's one hour that has duplicate observations.
Can you figure out what's special about this time?
4. We know that some days of the year are "special" and fewer people than usual fly on them.
How might you represent that data as a data frame?
What would be the primary keys of that data frame?
How would it connect to the existing data frames?
5. Draw a diagram illustrating the connections between the `Batting`, `People`, and `Salaries` data frames in the Lahman package.
Draw another diagram that shows the relationship between `People`, `Managers`, `AwardsManagers`.
How would you characterise the relationship between the `Batting`, `Pitching`, and `Fielding` data frames?
## Basic joins {#sec-mutating-joins}
Now that you understand how data frames are connected via keys, we can start using them to better understand the `flights` dataset.
We'll first show you the mutating joins, so called because their primary role[^joins-1] is to add additional columns to the `x` data frame, just like `mutate()`. You'll then learn about join keys, and finish up with a discussion of the filtering joins, which work like a `filter()` rather than a `mutate()`.
[^joins-1]: They also affect the number of rows; we'll come back to that shortly.
### Mutating joins
A **mutating join** allows you to combine variables from two data frames: it first matches observations by their keys, then copies across variables from one data frame to the other.
Like `mutate()`, the join functions add variables to the right, so if you have a lot of variables already, you won't see the new variables.
For these examples, we'll make it easier to see what's going on by creating a narrower dataset:
```{r}
flights2 <- flights |>
  select(year, time_hour, origin, dest, tailnum, carrier)
flights2
```
(Remember that in RStudio you can also use `View()` to avoid this problem.)
As you'll learn shortly, there are four types of mutating join, but the one that should be your default is `left_join()`.
It preserves the rows in `x` even when there's no match in `y`, filling in the new variables with missing values.
The primary use of `left_join()` is to add in additional metadata.
For example, we can use `left_join()` to add the full airline name to the `flights2` data:
```{r}
flights2 |>
  left_join(airlines)
```
Or we could find out the temperature and wind speed when each plane departed:
```{r}
flights2 |>
  left_join(weather |> select(origin, time_hour, temp, wind_speed))
```
Or what sort of plane was flying:
```{r}
flights2 |>
  left_join(planes |> select(tailnum, type))
```
Note that in each of these cases the number of rows has stayed the same, but we've added new columns to the right.
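You can check that claim about the rows directly. A minimal sketch comparing row counts before and after a join (the counts should be identical because `carrier` is a primary key of `airlines`, so each row of `flights2` matches at most once):

```{r}
nrow(flights2)

flights2 |>
  left_join(airlines) |>
  nrow()
```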
### Specifying join keys
By default, `left_join()` will use all variables that appear in both data frames as the join key, the so called **natural** join.
This is a useful heuristic, but it doesn't always work.
What happens if we try to join `flights` with the complete `planes`?
```{r}
flights2 |>
  left_join(planes)
```
We get a lot of missing matches because both `flights` and `planes` have a `year` column but they mean different things: the year the flight occurred and the year the plane was built.
We only want to join on the `tailnum` column, so we need to switch to an explicit specification:
```{r}
flights2 |>
  left_join(planes, join_by(tailnum))
```
Note that the `year` variables are disambiguated in the output with a suffix.
You can control this with the `suffix` argument.
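For example, a sketch using hypothetical suffixes to make the two `year` columns self-describing (`suffix` takes a character vector of length two, applied to the left and right tables respectively):

```{r}
flights2 |>
  left_join(planes, join_by(tailnum), suffix = c("_flight", "_plane"))
```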
`join_by(tailnum)` is short for `join_by(tailnum == tailnum)`.
This fuller form is important because it's how you specify different join keys in each table.
For example, there are two ways to join the `flights2` and `airports` tables: either by `dest` or by `origin`:
```{r}
flights2 |>
  left_join(airports, join_by(dest == faa))

flights2 |>
  left_join(airports, join_by(origin == faa))
```
In older code you might see a different way of specifying the join keys, using a character vector:
- `by = "x"` corresponds to `join_by(x)`.
- `by = c("a" = "x")` corresponds to `join_by(a == x)`.
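For example, these two calls specify the same join, with the older character-vector form on the first line:

```{r}
#| results: false
flights2 |> left_join(airports, by = c("dest" = "faa"))
flights2 |> left_join(airports, join_by(dest == faa))
```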
Now that it exists, we prefer `join_by()` as it's a more flexible specification that supports more types of join, as you'll learn in @sec-non-equi-joins.
### Filtering joins
As you might guess, the primary action of a **filtering join** is to filter the rows.
There are two types: semi-joins and anti-joins.
**Semi-joins** keep all rows in `x` that have a match in `y`, and are useful for matching filtered summary data frames back to the original rows.
For example, we could use a semi-join to filter the `airports` dataset to show just the origin airports:
```{r}
airports |>
  semi_join(flights2, join_by(faa == origin))
```
Or just the destinations:
```{r}
airports |>
  semi_join(flights2, join_by(faa == dest))
```
**Anti-joins** are the opposite: they return all rows in `x` that don't have a match in `y`.
They're useful for figuring out what's missing.
For example, we can figure out which flights are missing information about the destination airport:
```{r}
flights2 |>
  anti_join(airports, join_by(dest == faa))
```
Or which flights lack metadata about their plane:
```{r}
flights2 |>
  anti_join(planes, join_by(tailnum)) |>
  distinct(tailnum)
```
### Exercises
1. Does every departing flight have corresponding weather data for that hour?
2. Find the 48 hours (over the course of the whole year) that have the worst delays.
Cross-reference it with the `weather` data.
Can you see any patterns?
3. Imagine you've found the top 10 most popular destinations using this code:
```{r}
top_dest <- flights2 |>
  count(dest, sort = TRUE) |>
  head(10)
```
How can you find all flights to those destinations?
4. What does it mean for a flight to have a missing `tailnum`?
What do the tail numbers that don't have a matching record in `planes` have in common?
(Hint: one variable explains \~90% of the problems.)
5. You might expect that there's an implicit relationship between plane and airline, because each plane is flown by a single airline.
Confirm or reject this hypothesis using the tools you've learned above.
6. Add the location of the origin *and* destination (i.e. the `lat` and `lon`) to `flights`.
Is it easier to rename the columns before or after the join?
7. Compute the average delay by destination, then join on the `airports` data frame so you can show the spatial distribution of delays.
Here's an easy way to draw a map of the United States:
```{r}
#| eval: false
airports |>
  semi_join(flights, join_by(faa == dest)) |>
  ggplot(aes(lon, lat)) +
  borders("state") +
  geom_point() +
  coord_quickmap()
```
You might want to use the `size` or `colour` of the points to display the average delay for each airport.
8. What happened on June 13 2013?
Display the spatial pattern of delays, and then use Google to cross-reference with the weather.
```{r}
#| eval: false
#| include: false
worst <- filter(flights, !is.na(dep_time), month == 6, day == 13)
worst |>
  group_by(dest) |>
  summarise(delay = mean(arr_delay), n = n()) |>
  filter(n > 5) |>
  inner_join(airports, by = c("dest" = "faa")) |>
  ggplot(aes(lon, lat)) +
  borders("state") +
  geom_point(aes(size = n, colour = delay)) +
  coord_quickmap()
```
## How do joins work?
Now that you've used a few joins it's time to learn more about how they work, focusing especially on how each row in `x` matches with each row in `y`.
We'll start with a visual representation of the two simple tibbles defined below.
In @fig-join-setup, the coloured column represents the keys of the two data frames, here literally called `key`.
The grey column represents the "value" column that is carried along for the ride.
In these examples we'll use a single key variable, but the idea generalizes to multiple keys and multiple values.
```{r}
x <- tribble(
  ~key, ~val_x,
  1, "x1",
  2, "x2",
  3, "x3"
)
y <- tribble(
  ~key, ~val_y,
  1, "y1",
  2, "y2",
  4, "y3"
)
```
```{r}
#| label: fig-join-setup
#| echo: false
#| out-width: ~
#| fig-cap: >
#| Graphical representation of two simple tables.
#| fig-alt: >
#| x and y are two data frames with 2 columns and 3 rows each. The first
#| column in each is the key and the second is the value. The contents of
#| these data frames are given in the subsequent code chunk.
knitr::include_graphics("diagrams/join/setup.png", dpi = 270)
```
A join is a way of connecting each row in `x` to zero, one, or more rows in `y`.
@fig-join-setup2 shows each potential match as an intersection of a pair of lines.
If you look closely, you'll notice that we've switched the order of the key and value columns in `x`.
This is to emphasize that joins match based on the key; the other columns are just carried along for the ride.
```{r}
#| label: fig-join-setup2
#| echo: false
#| out-width: ~
#| fig-cap: >
#| To prepare to show how joins work we create a grid showing every
#| possible match between the two tibbles.
#| fig-alt: >
#| x and y data frames placed next to each other, with the key variable
#| moved up front in y so that the key variable in x and key variable
#| in y appear next to each other.
knitr::include_graphics("diagrams/join/setup2.png", dpi = 270)
```
In an actual join, matches will be indicated with dots, as in @fig-join-inner.
The number of dots = the number of matches = the number of rows in the output, a new data frame that contains the key, the x values, and the y values.
The join shown here is a so-called **inner join**, where the output contains only the rows that appear in both `x` and `y`.
```{r}
#| label: fig-join-inner
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A join showing which rows in the x table match rows in the y table.
#| fig-alt: >
#| Keys 1 and 2 in x and y data frames are matched and indicated with lines
#| joining these rows with dot in the middle. Hence, there are two dots in
#| this diagram. The resulting joined data frame has two rows and 3 columns:
#| key, val_x, and val_y. Values in the key column are 1 and 2, the matched
#| values.
knitr::include_graphics("diagrams/join/inner.png", dpi = 270)
```
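We can run this join on the `x` and `y` tibbles defined above to confirm the diagram: only keys 1 and 2 appear in both, so the output has two rows:

```{r}
x |> inner_join(y, join_by(key))
```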
An **outer join** keeps observations that appear in at least one of the data frames.
These joins work by adding an additional "virtual" observation to each data frame.
This observation has a key that matches if no other key matches, and values filled with `NA`.
There are three types of outer joins:
- A **left join** keeps all observations in `x`, @fig-join-left.
```{r}
#| label: fig-join-left
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the left join. Every row of `x` is
#| preserved in the output because it can fall back to matching a
#| row of `NA`s in `y`.
#| fig-alt: >
#| Left join: keys 1 and 2 from x are matched to those in y, key 3 is
#| also carried along to the joined result since it's on the left data
#| frame, but key 4 from y is not carried along since it's on the right
#| but not on the left. The result has 3 rows: keys 1, 2, and 3,
#| all values from val_x, and the corresponding values from val_y for
#| keys 1 and 2 with an NA for key 3, val_y.
knitr::include_graphics("diagrams/join/left.png", dpi = 270)
```
- A **right join** keeps all observations in `y`, @fig-join-right.
```{r}
#| label: fig-join-right
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the right join. Every row of `y` is
#| preserved in the output because it can fall back to matching a
#| row of `NA`s in `x`.
#| fig-alt: >
#| Keys 1 and 2 from x are matched to those in y, key 4 is
#| also carried along to the joined result since it's on the right data frame,
#| but key 3 from x is not carried along since it's on the left but not on the
#| right. The result is a data frame with 3 rows: keys 1, 2, and 4, all values
#| from val_y, and the corresponding values from val_x for keys 1 and 2 with
#| an NA for key 4, val_x.
knitr::include_graphics("diagrams/join/right.png", dpi = 270)
```
- A **full join** keeps all observations in `x` and `y`, @fig-join-full.
```{r}
#| label: fig-join-full
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A visual representation of the full join. Every row of `x` and `y`
#| is included in the output because both `x` and `y` have a fallback
#| row of `NA`s.
#| fig-alt: >
#| The result has 4 rows: keys 1, 2, 3, and 4 with all values
#| from val_x and val_y, however key 2, val_y and key 4, val_x are NAs since
#| those keys aren't present in their respective data frames.
knitr::include_graphics("diagrams/join/full.png", dpi = 270)
```
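Again using `x` and `y` from above, we can compare the three outer joins in code and watch where the `NA`s appear:

```{r}
x |> left_join(y, join_by(key))
x |> right_join(y, join_by(key))
x |> full_join(y, join_by(key))
```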
Another way to show how the outer joins differ is with a Venn diagram, @fig-join-venn.
This, however, is not a great representation because while it might jog your memory about which rows are preserved, it fails to illustrate what's happening with the columns.
```{r}
#| label: fig-join-venn
#| echo: false
#| out-width: ~
#| fig-cap: >
#| Venn diagrams showing the difference between inner, left, right, and
#| full joins.
#| fig-alt: >
#| Venn diagrams for inner, full, left, and right joins. Each join represented
#| with two intersecting circles representing data frames x and y, with x on
#| the right and y on the left. Shading indicates the result of the join.
#| Inner join: Only intersection is shaded. Full join: Everything is shaded.
#| Left join: Only x is shaded, but not the area in y that doesn't intersect
#| with x. Right join: Only y is shaded, but not the area in x that doesn't
#| intersect with y.
knitr::include_graphics("diagrams/join/venn.png", dpi = 270)
```
### Row matches
While the most visible impact of a join is on the columns, joins also affect the rows.
To understand what's going on, let's first narrow our focus to the `inner_join()` and think about each row in `x`.
What happens to each row of `x` depends on how many rows it matches in `y`:
- If it doesn't match anything, it's dropped.
- If it matches 1 row, it's kept as is.
- If it matches more than 1 row, it's duplicated once for each match.
@fig-join-match-types illustrates these three possibilities.
```{r}
#| label: fig-join-match-types
#| echo: false
#| out-width: ~
#| fig-cap: >
#| The three key ways a row in `x` can match. `x1` matches
#| one row in `y`, `x2` matches two rows in `y`, `x3` matches
#| zero rows in `y`. Note that while there are three rows in
#| `x` and three rows in the output, there isn't a direct
#| correspondence between the rows.
#| fig-alt: >
#| TBA
knitr::include_graphics("diagrams/join/match-types.png", dpi = 270)
```
In principle, this means that there are no guarantees about the number of rows in the output of an `inner_join()`:
- There might be the same number of rows if every row in `x` matches one row in `y`.
- There might be more rows if some rows in `x` match multiple rows in `y`.
- There might be fewer rows if some rows in `x` match no rows in `y`.
- There might be the same number of rows if the number of multiple matches precisely balances out with the rows that don't match.
This is pretty dangerous, so by default dplyr will warn whenever there are multiple matches:
```{r}
df1 <- tibble(key = c(1, 2, 3), val_x = c("x1", "x2", "x3"))
df2 <- tibble(key = c(1, 2, 2), val_y = c("y1", "y2", "y3"))

df1 |>
  inner_join(df2, join_by(key))
```
This is another reason we recommend `left_join()` --- every row in `x` is guaranteed to match a "virtual" row in `y`, so it'll never drop rows, and you'll always get a warning when it duplicates rows.
You can gain more control with two arguments:
- `unmatched` controls what happens if a row in `x` doesn't match any rows in `y`. It defaults to `"drop"` which will silently drop any unmatched rows.
- `multiple` controls what happens if a row in `x` matches more than one row in `y`. For equi-joins, it defaults to `"warn"` which emits a warning message if there are any multiple matches.
There are two common cases in which you might want to customize these defaults: enforcing a one-to-one mapping or allowing multiple matches.
### One-to-one mapping
Both `unmatched` and `multiple` can take the value `"error"`, which means that the join will fail unless each row in `x` matches exactly one row in `y`:
```{r}
#| error: true
df1 |>
  inner_join(df2, join_by(key), unmatched = "error", multiple = "error")
```
(`unmatched = "error"` is not useful with `left_join()` because, as described above, a `left_join()` always matches a virtual row in `y` filled with missing values.)
### Allow multiple rows
Sometimes it's useful to expand the number of rows in the output.
This often comes about by flipping the direction of the question you're asking.
For example, as we've seen above, it's natural to ask for additional information about the plane that flew each flight:
```{r}
#| results: false
flights2 |>
  left_join(planes, by = "tailnum")
```
But it's also reasonable to ask the opposite question: which flights did each plane perform?
```{r}
plane_flights <- planes |>
  left_join(flights2, by = "tailnum")
```
Since this duplicates rows in `x` (the planes), we need to explicitly say that we're ok with the multiple matches by setting `multiple = "all"`:
```{r}
plane_flights <- planes |>
  left_join(flights2, by = "tailnum", multiple = "all")

plane_flights
```
### Filtering joins
The number of matches is closely connected to how the filtering joins work too.
The semi-join keeps rows in `x` that have one or more matches in `y`, as in @fig-join-semi. The anti-join keeps rows in `x` that don't have a match in `y`, as in @fig-join-anti.
In both cases, only the existence of a match is important; it doesn't matter which observation is matched.
This means that filtering joins never duplicate rows like mutating joins do.
```{r}
#| label: fig-join-semi
#| echo: false
#| out-width: null
#| fig-cap: >
#| In a semi-join it only matters that there is a match; otherwise
#| values in `y` don't affect the output.
#| fig-alt: >
#| Diagram of a semi join. Data frame x is on the left and has two columns
#| (key and val_x) with keys 1, 2, and 3. Diagram y is on the right and also
#| has two columns (key and val_y) with keys 1, 2, and 4. Semi joining these
#| two results in a data frame with two rows and two columns (key and val_x),
#| with keys 1 and 2 (the only keys that match between the two data frames).
knitr::include_graphics("diagrams/join/semi.png")
```
```{r}
#| label: fig-join-anti
#| echo: false
#| out-width: null
#| fig-cap: >
#| An anti-join is the inverse of a semi-join, dropping rows from `x`
#| that have a match in `y`.
#| fig-alt: >
#| Diagram of an anti join. Data frame x is on the left and has two columns
#| (key and val_x) with keys 1, 2, and 3. Diagram y is on the right and also
#| has two columns (key and val_y) with keys 1, 2, and 4. Anti joining these
#| two results in a data frame with one row and two columns (key and val_x),
#| with key 3 only (the only key in x that is not in y).
knitr::include_graphics("diagrams/join/anti.png", dpi = 270)
```
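We can see the no-duplication property with the `df1` and `df2` tibbles from above: even though key 2 appears twice in `df2`, the semi-join returns each matching row of `df1` just once, and the anti-join returns the lone row with no match:

```{r}
df1 |> semi_join(df2, join_by(key))

df1 |> anti_join(df2, join_by(key))
```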
## Non-equi joins {#sec-non-equi-joins}
So far we've discussed **equi-joins**, joins where the keys in `x` must equal the keys in `y` for rows to match.
This allows us to make an important simplification in both the diagrams and the return values of the joins: we only ever include the join key from one table.
We can request that dplyr keep both keys with `keep = TRUE`.
This is shown in the code below and in @fig-inner-both.
```{r}
x |> left_join(y, by = "key", keep = TRUE)
```
```{r}
#| label: fig-inner-both
#| fig-cap: >
#| Inner join showing keys from both `x` and `y`. This is not the
#| default because for equi-joins, the keys are the same so showing
#| both doesn't add anything.
#| echo: false
#| out-width: ~
knitr::include_graphics("diagrams/join/inner-both.png", dpi = 270)
```
This distinction between the keys becomes much more important as we move away from equi-joins because the key values are much more likely to be different.
Because of this, dplyr defaults to showing both keys.
For example, instead of requiring that the `x` and `y` keys be equal, we could request that the key from `x` be greater than or equal to the key from `y`, as in the code below and @fig-join-gte.
```{r}
x |> inner_join(y, join_by(key >= key))
```
```{r}
2022-08-31 23:06:56 +08:00
#| label: fig-join-gte
2022-08-29 23:16:18 +08:00
#| echo: false
#| fig-cap: >
#|   A non-equi join where the `x` key must be greater than or equal to
#|   the `y` key.

knitr::include_graphics("diagrams/join/gte.png", dpi = 270)
```
Non-equi join isn't a particularly useful term because it only tells you what the join is not, not what it is. dplyr helps a bit by identifying four useful types of non-equi join:

- **Cross joins** have no join keys and match every pair of rows.
- **Inequality joins** use `<`, `<=`, `>`, or `>=` instead of `==`.
- **Rolling joins** are like inequality joins, but only match the closest row that satisfies the inequality.
- **Overlap joins** use `between(x$val, y$lower, y$upper)`, `within(x$lower, x$upper, y$lower, y$upper)`, or `overlaps(x$lower, x$upper, y$lower, y$upper)`.

Each of these is described in more detail in the following sections.
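Before diving into the details, here's a rough, non-evaluated sketch of what each type looks like in code. The column names (`id`, `date`, `val`, `lower`, `upper`) are purely illustrative, and `cross_join()` assumes dplyr 1.1.0 or later:

```{r}
#| eval: false
x |> cross_join(y)                                       # every pair of rows
x |> inner_join(y, join_by(id < id))                     # inequality join
x |> inner_join(y, join_by(closest(date >= date)))       # rolling join
x |> inner_join(y, join_by(between(val, lower, upper)))  # overlap join
```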
### Cross joins

A cross join matches every row in `x` with every row in `y`, yielding the Cartesian product of the two tables.
One handy use is generating all combinations, e.g. every pair of names:
```{r}
df <- tibble(name = c("John", "Simon", "Tracy", "Max"))
df |> cross_join(df)
```
This is sometimes called a **self-join** because we're joining a table to itself.
```{r}
#| label: fig-join-cross
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A cross join matches each row in `x` with every row in `y`.
knitr::include_graphics("diagrams/join/cross.png", dpi = 270)
```
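Because every row of `x` matches every row of `y`, a cross join returns `nrow(x) * nrow(y)` rows, so the output can get big quickly. A quick sanity check on the example above (a sketch, assuming dplyr 1.1.0 or later for `cross_join()`):

```{r}
df <- tibble(name = c("John", "Simon", "Tracy", "Max"))
# 4 names crossed with 4 names: 4 * 4 = 16 rows
df |> cross_join(df) |> nrow()
```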
### Inequality joins

Inequality joins are extremely general, so general that it's hard to find specific, meaningful use cases.
One small but useful technique is to generate all pairs:
```{r}
df <- tibble(id = 1:4, name = c("John", "Simon", "Tracy", "Max"))
df |> left_join(df, join_by(id < id))
```
Here we perform a self-join (i.e. we join a table to itself), then use the inequality to keep only one of the two possible orderings of each pair (just (a, b), not also (b, a)) and to avoid matching a row with itself.
```{r}
#| label: fig-cross-lt
#| echo: false
#| out-width: ~
#| fig-cap: >
#| An inequality join where `x` is joined to `y` on rows where the key
#| of `x` is less than the key of `y`.
knitr::include_graphics("diagrams/join/cross-lt.png", dpi = 270)
```
### Rolling joins
Rolling joins are a special type of inequality join where instead of getting *every* row that satisfies the inequality, you get just the closest row.
They're particularly useful when you have two tables of dates that don't perfectly line up, and you want to find (e.g.) the closest date in table 1 that comes before (or after) some date in table 2, as illustrated in @fig-join-following.
```{r}
#| label: fig-join-following
#| echo: false
#| out-width: ~
#| fig-cap: >
#| A following join is similar to a greater-than-or-equal inequality join
#| but only matches the first value.
knitr::include_graphics("diagrams/join/following.png", dpi = 270)
```
You can turn any inequality join into a rolling join by adding `closest()`.
For example, `join_by(closest(x <= y))` finds the smallest `y` that's greater than or equal to `x`, and `join_by(closest(x > y))` finds the biggest `y` that's less than `x`.
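As a minimal sketch (the tiny tables and column names here are purely illustrative):

```{r}
df1 <- tibble(x = c(1, 5, 10))
df2 <- tibble(y = c(2, 4, 8))
# For each x, the largest y that is less than or equal to it;
# x = 1 has no such y, so it gets NA
df1 |> left_join(df2, join_by(closest(x >= y)))
```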
For example, imagine that you're in charge of office birthdays.
Your company is rather stingy, so instead of having individual parties, you only have a party once each quarter.
Parties are always on a Monday, and you skip the first week of January since a lot of people are on holiday.
The first Monday of Q3 2022 is July 4, a public holiday, so that party has to be pushed back a week.
That leads to the following party days:
```{r}
parties <- tibble(
q = 1:4,
party = lubridate::ymd(c("2022-01-10", "2022-04-04", "2022-07-11", "2022-10-03"))
)
```
Then we have a table of employees along with their birthdays:
```{r}
set.seed(1014)
employees <- tibble(
name = wakefield::name(100),
birthday = lubridate::ymd("2022-01-01") + (sample(365, 100, replace = TRUE) - 1)
)
employees
```
To find out which party each employee will use to celebrate their birthday, we can use a rolling join.
We have to frame the question carefully: for each employee we want the most recent party on or before their birthday, which we can express with `closest(birthday >= party)`:
```{r}
employees |>
  left_join(parties, join_by(closest(birthday >= party)))
```
### Overlap joins
There's one problem with the strategy used for assigning birthday parties above: no party precedes the birthdays of anyone born January 1-9.
So it might be better to be explicit about the date range that each party spans, and make a special case for those early birthdays:
```{r}
parties <- tibble(
q = 1:4,
party = lubridate::ymd(c("2022-01-10", "2022-04-04", "2022-07-11", "2022-10-03")),
start = lubridate::ymd(c("2022-01-01", "2022-04-04", "2022-07-11", "2022-10-03")),
end = lubridate::ymd(c("2022-04-03", "2022-07-11", "2022-10-02", "2022-12-31"))
)
parties
```
This is a good place to use `unmatched = "error"` because I want to find out quickly if any employees didn't get assigned a party.
```{r}
employees |>
inner_join(parties, join_by(between(birthday, start, end)), unmatched = "error")
```
We could also flip the question around and ask which employees will celebrate at each party.
This requires explicitly specifying which table each variable comes from, since otherwise `between()` assumes that the first argument comes from `x` and the second and third come from `y`.
```{r}
parties |>
2022-09-06 23:25:59 +08:00
inner_join(employees, join_by(between(y$birthday, x$start, x$end)))
2022-08-31 23:07:10 +08:00
```
Finally, I'm hopelessly bad at data entry, so I also want to check that my party periods don't overlap.
I can perform a self-join with an `overlaps()` condition:
2022-08-29 23:16:18 +08:00
```{r}
2022-09-06 23:25:59 +08:00
parties |>
inner_join(parties, join_by(overlaps(start, end, start, end), q < q))
2022-08-29 23:16:18 +08:00
```
Ooops, there is an overlap: the Q2 `end` date (July 11) collides with the Q3 `start` date, so Q2 should end on July 10 instead.

In other situations you might instead use `within()`, which for each row in `x` finds all rows in `y` where the `x` interval falls within the `y` interval.
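For instance, with a hypothetical `vacations` table we could find vacations that fall entirely inside a single party period (a sketch):

```{r}
vacations <- tibble(
  who = c("Ann", "Bob"),
  start = lubridate::ymd(c("2022-02-01", "2022-06-20")),
  end = lubridate::ymd(c("2022-02-07", "2022-08-01"))
)
# Ann's week sits entirely within Q1; Bob's trip spans the Q2/Q3
# boundary, so an inner join drops him
vacations |> inner_join(parties, join_by(within(start, end, start, end)))
```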
### Exercises
1. Can you explain what's happening with the keys in this equi-join?
Why are they different?
```{r}
x |> full_join(y, by = "key")
x |> full_join(y, by = "key", keep = TRUE)
```