Then you'll learn about a few more useful summaries before we finish up with a comparison of function variants that have similar names and similar actions, but are each designed for a specific use case.
It's surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with `count()`.
This function is great for quick exploration and checks during analysis:
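For example, here's a quick check of the destinations in the `flights` data from nycflights13, which this chapter uses throughout:

```{r}
flights |> count(dest)
```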
(Despite the advice in Chapter \@ref(code-style), I usually put `count()` on a single line because I'm usually using it at the console for a quick check that my calculation is working as expected.)
Transformation functions work well with `mutate()` because their output is the same length as the input.
The vast majority of transformation functions are already built into base R.
It's impractical to list them all, so this section will show the most useful ones.
As an example, while R provides all the trigonometric functions that you might dream of, I don't list them here because they're rarely needed for data science.
These functions don't need a huge amount of explanation because they do what you learned in grade school.
But we need to briefly talk about the **recycling rules** which determine what happens when the left and right hand sides have different lengths.
This is important for operations like `flights |> mutate(air_time = air_time / 60)` because there are 336,776 numbers on the left of `/` but only one on the right.
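In this situation R handles the mismatch by recycling: the shorter vector is repeated until it matches the length of the longer one. You can see this in action outside of `mutate()`:

```{r}
x <- c(1, 2, 10, 20)
x / 5         # 5 is recycled to c(5, 5, 5, 5)
x * c(1, 2)   # c(1, 2) is recycled to c(1, 2, 1, 2)
```

R warns if the length of the longer vector isn't a multiple of the length of the shorter, but it recycles silently when it is.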
These recycling rules are also applied to logical comparisons (`==`, `<`, `<=`, `>`, `>=`, `!=`) and can lead to a surprising result if you accidentally use `==` instead of `%in%` and the data frame has an unfortunate number of rows.
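For example, take this sketch of an attempt to find all flights that departed in January or February:

```{r}
flights |> filter(month == c(1, 2))
```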
Because of the recycling rules, this finds flights in odd-numbered rows that departed in January and flights in even-numbered rows that departed in February.
And unfortunately there's no warning because `flights` has an even number of rows.
Modular arithmetic is the technical name for the type of math you did before you learned about real numbers, i.e. division that yields a whole number and a remainder.
In R, `%/%` does integer division and `%%` computes the remainder:
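```{r}
1:10 %/% 3
1:10 %% 3
```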
We can combine that with the `mean(is.na(x))` trick from Section \@ref(logical-summaries) to see how the proportion of cancelled flights varies over the course of the day.
The results are shown in Figure \@ref(fig:prop-cancelled).
```{r prop-cancelled}
#| fig.cap: >
#|   A line plot with scheduled departure hour on the x-axis, and proportion
#|   of cancelled flights on the y-axis. Cancellations seem to accumulate
#|   over the course of the day until 8pm, very late flights are much
#|   less likely to be cancelled.
#| fig.alt: >
#|   A line plot showing how the proportion of cancelled flights changes over
#|   the course of the day. The proportion starts low at around 0.5% at
#|   6am, then steadily increases over the course of the day until peaking
#|   at 4% at 7pm. The proportion of cancelled flights then drops rapidly
#|   for very late flights.
flights |> 
  group_by(hour = sched_dep_time %/% 100) |> 
  summarise(prop_cancelled = mean(is.na(dep_time)), n = n()) |> 
  filter(hour > 1) |> 
  ggplot(aes(x = hour, y = prop_cancelled)) +
  geom_line(color = "grey50") +
  geom_point(aes(size = n))
```
For example, take compounding interest --- the amount of money you have at `year + 1` is the amount of money you had at `year` multiplied by the interest rate.
That gives a formula like `money = starting * interest ^ year`:
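Here's a sketch with assumed values for the starting amount and interest rate; plotting `money` with a log-transformed y-axis turns the exponential curve into a straight line:

```{r}
starting <- 100    # assumed example values
interest <- 1.05

money <- tibble(
  year = 1:50,
  money = starting * interest ^ year
)

ggplot(money, aes(x = year, y = money)) +
  geom_line() +
  scale_y_log10()
```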
This is a straight line because a little algebra reveals that `log(money) = log(starting) + year * log(interest)`, which matches the pattern for a line, `y = m * x + b`.
This is a useful pattern: if you see a (roughly) straight line after log-transforming the y-axis, you know that there's underlying exponential growth.
If you're log-transforming your data with dplyr you have a choice of three logarithms provided by base R: `log()` (the natural log, base e), `log2()` (base 2), and `log10()` (base 10).
`log2()` is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving; whereas `log10()` is easy to back-transform because (e.g.) 3 on the log scale corresponds to 10\^3 = 1000 on the original scale.
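A quick check of both properties:

```{r}
log2(c(1, 2, 4, 8))   # each doubling adds 1
10 ^ 3                # back-transforming 3 on the log10 scale
```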
`round()` uses what's known as "round half to even" or Banker's rounding: if a number is half way between two integers, it will be rounded to the **even** integer.
This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
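You can see the behavior with a few halves:

```{r}
round(c(0.5, 1.5, 2.5, 3.5))   # 0 2 2 4
```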
If `min_rank()` doesn't do what you need, look at the variants `dplyr::row_number()`, `dplyr::dense_rank()`, `dplyr::percent_rank()`, and `dplyr::cume_dist()`.
See the documentation for details.
```{r}
x <- c(1, 2, 2, 3, 4, NA)   # assumed example vector: ties plus a missing value
df <- data.frame(x = x)
df |> 
  mutate(
    row_number = row_number(x),
    dense_rank = dense_rank(x),
    percent_rank = percent_rank(x),
    cume_dist = cume_dist(x)
  )
```
You can achieve many of the same results by picking the appropriate `ties.method` argument to base R's `rank()`; you'll probably also want to set `na.last = "keep"` to keep `NA`s as `NA`.
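For example, this sketch reproduces `min_rank()` (reusing `x` from the chunk above):

```{r}
rank(x, ties.method = "min", na.last = "keep")
```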
`row_number()` can also be used without any arguments when you're inside a dplyr verb, in which case it gives the number of the current row. This is particularly useful inside `mutate()`; for example, reusing `df` from above:
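```{r}
df |> mutate(id = row_number())
```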
6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave.
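One way to see this kind of structure is with dplyr's `lag()`, which pairs each value with the one before it (a sketch with made-up delays):

```{r}
delays <- c(2, 5, 11, 11, 19, 35)
lag(delays)   # shifts the vector by one position, padding with NA
```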
Don't forget what you learned in Section \@ref(sample-size): whenever creating numerical summaries, it's a good idea to include the number of observations in each group.
The interquartile range `IQR(x)` and median absolute deviation `mad(x)` are robust equivalents that may be more useful if you have outliers.
IQR is `quantile(x, 0.75) - quantile(x, 0.25)`.
`mad()` is derived similarly to `sd()`, but instead of being the average of the squared distances from the mean, it's the median of the absolute differences from the median.
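A small sketch with a made-up vector shows how a single outlier inflates `sd()` while barely moving the robust measures:

```{r}
y <- c(1, 2, 3, 4, 100)
sd(y)    # dominated by the outlier
IQR(y)
mad(y)   # scaled by default so it matches sd() for normally distributed data
```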
Base R provides a powerful tool for extracting subsets of vectors called `[`.
This book doesn't cover `[` until Section \@ref(vector-subsetting) so for now we'll introduce three specialized functions that are useful inside of `summarise()` if you want to extract values at a specified position: `first()`, `last()`, and `nth()`.
For example, we can find the first and last departure for each day:
```{r}
flights |> 
  group_by(year, month, day) |> 
  summarise(
    first_dep = first(dep_time),
    last_dep = last(dep_time)
  )
```
Compared to `[`, these functions allow you to set a `default` value if the requested position doesn't exist (e.g. you're trying to get the 3rd element from a group that only has two elements) and let you use the `order_by` argument to base the ordering on a different variable.
Extracting values at positions is complementary to filtering on ranks.
Filtering gives you all variables, with each observation in a separate row:
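For example, here's a sketch that keeps the whole rows for each day's first and last scheduled departures:

```{r}
flights |> 
  group_by(year, month, day) |> 
  mutate(r = min_rank(desc(sched_dep_time))) |> 
  filter(r %in% c(1, max(r)))
```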
As the names suggest, the summary functions are typically paired with `summarise()`, but they can also be usefully paired with `mutate()`, particularly when you want to do some sort of group standardization.
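For example, a sketch that centers each flight's arrival delay on the average for its destination (the new column name is just illustrative):

```{r}
flights |> 
  group_by(dest) |> 
  mutate(delay_vs_dest = arr_delay - mean(arr_delay, na.rm = TRUE))
```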
- Summary functions take a vector and always return a vector of length 1. Typically used with `summarise()`.
- Cumulative functions take a vector and return a vector of the same length. Used with `mutate()`.
- Paired functions take a pair of vectors and return a vector of the same length (applying the recycling rules if the vectors aren't the same length). Used with `mutate()`. All three are demonstrated below.
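A tiny demo makes the distinction concrete:

```{r}
x <- c(1, 3, 5)
sum(x)      # summary: always length 1
cumsum(x)   # cumulative: same length as the input
x * 2       # paired (2 is recycled): same length as the longer input
```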