# Data import {#sec-data-import}

```{r}
#| echo: false

source("_common.R")
```

## Introduction

Working with data provided by R packages is a great way to learn data science tools, but you want to apply what you've learned to your own data at some point.
In this chapter, you'll learn the basics of reading data files into R.
Specifically, this chapter will focus on reading plain-text rectangular files.
We'll start with practical advice for handling features like column names, types, and missing data.
You will then learn about reading data from multiple files at once and writing data from R to a file.
Finally, you'll learn how to handcraft data frames in R.
### Prerequisites
In this chapter, you'll learn how to load flat files in R with the **readr** package, which is part of the core tidyverse.

```{r}
#| label: setup
#| message: false

library(tidyverse)
```

## Reading data from a file
To begin, we'll focus on the most common rectangular data file type: CSV, which is short for comma-separated values.
Here is what a simple CSV file looks like.
The first row, commonly called the header row, gives the column names, and the following six rows provide the data.
The columns are separated, aka delimited, by commas.

```{r}
#| echo: false
#| message: false
#| comment: ""

read_lines("data/students.csv") |> cat(sep = "\n")
```

@tbl-students-table shows a representation of the same data as a table.

```{r}
#| label: tbl-students-table
#| echo: false
#| message: false
#| tbl-cap: Data from the students.csv file as a table.

read_csv("data/students.csv") |>
  knitr::kable()
```

We can read this file into R using `read_csv()`.
The first argument is the most important: the path to the file.
You can think about the path as the address of the file: the file is called `students.csv` and it lives in the `data` folder.

```{r}
#| message: true

students <- read_csv("data/students.csv")
```

The code above will work if you have the `students.csv` file in a `data` folder in your project.
You can download the `students.csv` file from <https://pos.it/r4ds-students-csv> or you can read it directly from that URL with:
```{r}
#| eval: false
students <- read_csv("https://pos.it/r4ds-students-csv")
```
When you run `read_csv()`, it prints out a message telling you the number of rows and columns of data, the delimiter that was used, and the column specifications (names of columns organized by the type of data the column contains).
It also prints out some information about retrieving the full column specification and how to quiet this message.
This message is an integral part of readr, and we'll return to it in @sec-col-types.
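
If you want to pull that specification back up, or quiet the message for a single call, readr's `spec()` function and `read_csv()`'s `show_col_types` argument do exactly that; a quick sketch (not evaluated here):

```{r}
#| eval: false
# Retrieve the full column specification that read_csv() guessed
spec(students)

# Read the file again without printing the column-types message
read_csv("data/students.csv", show_col_types = FALSE)
```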
### Practical advice
Once you read data in, the first step usually involves transforming it in some way to make it easier to work with in the rest of your analysis.
Let's take another look at the `students` data with that in mind.

```{r}
students
```

In the `favourite.food` column, there are a bunch of food items, and then the character string `N/A`, which should have been a real `NA` that R will recognize as "not available".
This is something we can address using the `na` argument.
By default, `read_csv()` only recognizes empty strings (`""`) in this dataset as `NA`s; we want it to also recognize the character string `"N/A"`.

```{r}
#| message: false

students <- read_csv("data/students.csv", na = c("N/A", ""))

students
```
You might also notice that the `Student ID` and `Full Name` columns are surrounded by backticks.
That's because they contain spaces, breaking R's usual rules for variable names; they're **non-syntactic** names.
To refer to these variables, you need to surround them with backticks, `` ` ``:
```{r}
students |>
rename(
student_id = `Student ID`,
full_name = `Full Name`
)
```
An alternative approach is to use `janitor::clean_names()`, which uses some heuristics to turn them all into snake case at once[^data-import-1].
[^data-import-1]: The [janitor](http://sfirke.github.io/janitor/) package is not part of the tidyverse, but it offers handy functions for data cleaning and works well within data pipelines that use `|>`.

```{r}
#| message: false

students |> janitor::clean_names()
```

Another common task after reading in data is to consider variable types.
For example, `meal_plan` is a categorical variable with a known set of possible values, which in R should be represented as a factor:

```{r}
students |>
  janitor::clean_names() |>
  mutate(meal_plan = factor(meal_plan))
```

Note that the values in the `meal_plan` variable have stayed the same, but the type of variable denoted underneath the variable name has changed from character (`<chr>`) to factor (`<fct>`).
You'll learn more about factors in @sec-factors.
Before you analyze these data, you'll probably want to fix the `age` and `id` columns.
Currently, `age` is a character variable because one of the observations is typed out as `five` instead of a numeric `5`.
We discuss the details of fixing this issue in @sec-import-spreadsheets.

```{r}
students <- students |>
  janitor::clean_names() |>
  mutate(
    meal_plan = factor(meal_plan),
    age = parse_number(if_else(age == "five", "5", age))
  )

students
```

A new function here is `if_else()`, which has three arguments.
The first argument `test` should be a logical vector.
The result will contain the value of the second argument, `yes`, when `test` is `TRUE`, and the value of the third argument, `no`, when it is `FALSE`.
Here we're saying if `age` is the character string `"five"`, make it `"5"`, and if not leave it as `age`.
You will learn more about `if_else()` and logical vectors in @sec-logicals.
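
As a minimal illustration of that logic on a toy vector:

```{r}
# Replace the one spelled-out value, leave everything else unchanged
x <- c("five", "6", "7")
if_else(x == "five", "5", x)
```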
### Other arguments
There are a couple of other important arguments that we need to mention, and they'll be easier to demonstrate if we first show you a handy trick: `read_csv()` can read text strings that you've created and formatted like a CSV file:

```{r}
#| message: false

read_csv(
  "a,b,c
  1,2,3
  4,5,6"
)
```

Usually, `read_csv()` uses the first line of the data for the column names, which is a very common convention.
But it's not uncommon for a few lines of metadata to be included at the top of the file.
You can use `skip = n` to skip the first `n` lines or use `comment = "#"` to drop all lines that start with (e.g.) `#`:

```{r}
#| message: false

read_csv(
  "The first line of metadata
  The second line of metadata
  x,y,z
  1,2,3",
  skip = 2
)

read_csv(
  "# A comment I want to skip
  x,y,z
  1,2,3",
  comment = "#"
)
```

In other cases, the data might not have column names.
You can use `col_names = FALSE` to tell `read_csv()` not to treat the first row as headings and instead label them sequentially from `X1` to `Xn`:

```{r}
#| message: false

read_csv(
  "1,2,3
  4,5,6",
  col_names = FALSE
)
```

Alternatively, you can pass `col_names` a character vector which will be used as the column names:

```{r}
#| message: false

read_csv(
  "1,2,3
  4,5,6",
  col_names = c("x", "y", "z")
)
```

These arguments are all you need to know to read the majority of CSV files that you'll encounter in practice.
(For the rest, you'll need to carefully inspect your `.csv` file and read the documentation for `read_csv()`'s many other arguments.)
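
For example, two of those other arguments are `n_max`, which caps the number of rows read, and `skip_empty_rows`; a sketch (not run):

```{r}
#| eval: false
# Read at most 100 rows, and keep blank lines as rows of missing values
read_csv("data/students.csv", n_max = 100, skip_empty_rows = FALSE)
```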
### Other file types
Once you've mastered `read_csv()`, using readr's other functions is straightforward; it's just a matter of knowing which function to reach for:
- `read_csv2()` reads semicolon-separated files.
These use `;` instead of `,` to separate fields and are common in countries that use `,` as the decimal marker.
- `read_tsv()` reads tab-delimited files.
- `read_delim()` reads in files with any delimiter, attempting to automatically guess the delimiter if you don't specify it.
- `read_fwf()` reads fixed-width files.
You can specify fields by their widths with `fwf_widths()` or by their positions with `fwf_positions()`.
- `read_table()` reads a common variation of fixed-width files where columns are separated by white space.
- `read_log()` reads Apache-style log files.
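
To make the first two of these concrete, here's a minimal sketch using inline strings (the data are made up):

```{r}
#| message: false
# A semicolon-separated file that uses , as the decimal mark
read_csv2("x;y\n1,5;2,3")

# The same data, tab-delimited
read_tsv("x\ty\n1.5\t2.3")
```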
### Exercises
1. What function would you use to read a file where fields were separated with "\|"?
2. Apart from `file`, `skip`, and `comment`, what other arguments do `read_csv()` and `read_tsv()` have in common?
3. What are the most important arguments to `read_fwf()`?
4. Sometimes strings in a CSV file contain commas.
To prevent them from causing problems, they need to be surrounded by a quoting character, like `"` or `'`. By default, `read_csv()` assumes that the quoting character will be `"`.
To read the following text into a data frame, what argument to `read_csv()` do you need to specify?
```{r}
#| eval: false
"x,y\n1,'a,b'"
```
5. Identify what is wrong with each of the following inline CSV files.
What happens when you run the code?
```{r}
#| eval: false
read_csv("a,b\n1,2,3\n4,5,6")
read_csv("a,b,c\n1,2\n1,2,3,4")
read_csv("a,b\n\"1")
read_csv("a,b\n1,2\na,b")
read_csv("a;b\n1;3")
```
6. Practice referring to non-syntactic names in the following data frame by:
a. Extracting the variable called `1`.
b. Plotting a scatterplot of `1` vs. `2`.
c. Creating a new column called `3`, which is `2` divided by `1`.
d. Renaming the columns to `one`, `two`, and `three`.
```{r}
annoying <- tibble(
`1` = 1:10,
`2` = `1` * 2 + rnorm(length(`1`))
)
```
## Controlling column types {#sec-col-types}
A CSV file doesn't contain any information about the type of each variable (i.e. whether it's a logical, number, string, etc.), so readr will try to guess the type.
This section describes how the guessing process works, how to resolve some common problems that cause it to fail, and, if needed, how to supply the column types yourself.
Finally, we'll mention a few general strategies that are useful if readr is failing catastrophically and you need to get more insight into the structure of your file.
### Guessing types
readr uses a heuristic to figure out the column types.
For each column, it pulls the values of 1,000[^data-import-2] rows spaced evenly from the first row to the last, ignoring missing values.
It then works through the following questions:
[^data-import-2]: You can override the default of 1000 with the `guess_max` argument.
- Does it contain only `F`, `T`, `FALSE`, or `TRUE` (ignoring case)? If so, it's a logical.
- Does it contain only numbers (e.g., `1`, `-4.5`, `5e6`, `Inf`)? If so, it's a number.
- Does it match the ISO8601 standard? If so, it's a date or date-time. (We'll return to date-times in more detail in @sec-creating-datetimes).
- Otherwise, it must be a string.
You can see that behavior in action in this simple example:
```{r}
#| message: false
read_csv("
logical,numeric,date,string
TRUE,1,2021-01-15,abc
false,4.5,2021-02-15,def
T,Inf,2021-02-16,ghi
")
```
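
As the footnote mentions, you can base the guess on more rows with `guess_max`; a sketch (not evaluated):

```{r}
#| eval: false
# Use up to 10,000 rows (instead of 1,000) when guessing column types
read_csv("data/students.csv", guess_max = 10000)
```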
This heuristic works well if you have a clean dataset, but in real life, you'll encounter a selection of weird and beautiful failures.
### Missing values, column types, and problems
The most common way column detection fails is that a column contains unexpected values, and you get a character column instead of a more specific type.
One of the most common causes for this is a missing value, recorded using something other than the `NA` that readr expects.
Take this simple one-column CSV file as an example:
```{r}
simple_csv <- "
x
10
.
20
30"
```
If we read it without any additional arguments, `x` becomes a character column:
```{r}
#| message: false
read_csv(simple_csv)
```
In this very small case, you can easily see the missing value `.`.
But what happens if you have thousands of rows with only a few missing values represented by `.`s sprinkled among them?
One approach is to tell readr that `x` is a numeric column, and then see where it fails.
You can do that with the `col_types` argument, which takes a named list where the names match the column names in the CSV file:
```{r}
df <- read_csv(
simple_csv,
col_types = list(x = col_double())
)
```
Now `read_csv()` reports that there was a problem, and tells us we can find out more with `problems()`:
```{r}
problems(df)
```
This tells us that there was a problem in row 3, col 1 where readr expected a double but got a `.`.
That suggests this dataset uses `.` for missing values.
If we then set `na = "."`, the automatic guessing succeeds, giving us the numeric column that we want:
```{r}
#| message: false
read_csv(simple_csv, na = ".")
```
### Column types
readr provides a total of nine column types for you to use:
- `col_logical()` and `col_double()` read logicals and real numbers. They're relatively rarely needed (except as above), since readr will usually guess them for you.
- `col_integer()` reads integers. We seldom distinguish integers and doubles in this book because they're functionally equivalent, but reading integers explicitly can occasionally be useful because they occupy half the memory of doubles.
- `col_character()` reads strings. This can be useful to specify explicitly when you have a column that is a numeric identifier, i.e., long series of digits that identifies an object but doesn't make sense to apply mathematical operations to. Examples include phone numbers, social security numbers, credit card numbers, etc.
- `col_factor()`, `col_date()`, and `col_datetime()` create factors, dates, and date-times respectively; you'll learn more about those when we get to those data types in @sec-factors and @sec-dates-and-times.
- `col_number()` is a permissive numeric parser that will ignore non-numeric components, and is particularly useful for currencies. You'll learn more about it in @sec-numbers.
- `col_skip()` skips a column so it's not included in the result, which can be useful for speeding up reading the data if you have a large CSV file and you only want to use some of the columns.
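
Putting a few of these together, a sketch of a `col_types` specification might look like this (the file and column names are made up):

```{r}
#| eval: false
read_csv(
  "data/survey.csv",
  col_types = list(
    id = col_character(),
    sex = col_factor(levels = c("female", "male")),
    earn = col_number()
  )
)
```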
It's also possible to override the default column type by switching from `list()` to `cols()` and specifying `.default`:
```{r}
another_csv <- "
x,y,z
1,2,3"
read_csv(
another_csv,
col_types = cols(.default = col_character())
)
```
Another useful helper is `cols_only()` which will read in only the columns you specify:
```{r}
read_csv(
another_csv,
col_types = cols_only(x = col_character())
)
```
## Reading data from multiple files {#sec-readr-directory}
Sometimes your data is split across multiple files instead of being contained in a single file.
For example, you might have sales data for multiple months, with each month's data in a separate file: `01-sales.csv` for January, `02-sales.csv` for February, and `03-sales.csv` for March.
With `read_csv()` you can read these data in at once and stack them on top of each other in a single data frame.
```{r}
#| message: false
sales_files <- c("data/01-sales.csv", "data/02-sales.csv", "data/03-sales.csv")
read_csv(sales_files, id = "file")
```
Once again, the code above will work if you have the CSV files in a `data` folder in your project.
You can download these files from <https://pos.it/r4ds-01-sales>, <https://pos.it/r4ds-02-sales>, and <https://pos.it/r4ds-03-sales> or you can read them directly with:
```{r}
#| eval: false
sales_files <- c(
"https://pos.it/r4ds-01-sales",
"https://pos.it/r4ds-02-sales",
"https://pos.it/r4ds-03-sales"
)
read_csv(sales_files, id = "file")
```
The `id` argument adds a new column called `file` to the resulting data frame that identifies the file the data come from.
This is especially helpful in circumstances where the files you're reading in do not have an identifying column that can help you trace the observations back to their original sources.
If you have many files you want to read in, it can get cumbersome to write out their names as a list.
Instead, you can use the base `list.files()` function to find the files for you by matching a pattern in the file names.
You'll learn more about these patterns in @sec-regular-expressions.
```{r}
sales_files <- list.files("data", pattern = "sales\\.csv$", full.names = TRUE)
sales_files
```
## Writing to a file {#sec-writing-to-a-file}
readr also comes with two useful functions for writing data back to disk: `write_csv()` and `write_tsv()`.
The most important arguments to these functions are `x` (the data frame to save) and `file` (the location to save it).
You can also specify how missing values are written with `na`, and if you want to `append` to an existing file.
```{r}
#| eval: false
write_csv(students, "students.csv")
```
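
If you need the `na` or `append` arguments mentioned above, a sketch might look like this (not run):

```{r}
#| eval: false
# Write missing values as empty strings
write_csv(students, "students.csv", na = "")

# Add more rows to an existing file instead of overwriting it
write_csv(students, "students.csv", append = TRUE)
```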
Now let's read that CSV file back in.
Note that the variable type information that you just set up is lost when you save to CSV because you're starting over with reading from a plain text file again:
```{r}
#| warning: false
#| message: false
students
write_csv(students, "students-2.csv")
read_csv("students-2.csv")
```
This makes CSVs a little unreliable for caching interim results---you need to recreate the column specification every time you load the data in.
There are two main alternatives:
1. `write_rds()` and `read_rds()` are uniform wrappers around the base functions `readRDS()` and `saveRDS()`.
These store data in R's custom binary format called RDS.
This means that when you reload the object, you are loading the *exact same* R object that you stored.
```{r}
write_rds(students, "students.rds")
read_rds("students.rds")
```
2. The arrow package allows you to read and write parquet files, a fast binary file format that can be shared across programming languages.
We'll return to arrow in more depth in @sec-arrow.
```{r}
#| eval: false
library(arrow)
write_parquet(students, "students.parquet")
read_parquet("students.parquet")
#> # A tibble: 6 × 5
#> student_id full_name favourite_food meal_plan age
#> <dbl> <chr> <chr> <fct> <dbl>
#> 1 1 Sunil Huffmann Strawberry yoghurt Lunch only 4
#> 2 2 Barclay Lynn French fries Lunch only 5
#> 3 3 Jayendra Lyne NA Breakfast and lunch 7
#> 4 4 Leon Rossini Anchovies Lunch only NA
#> 5 5 Chidiegwu Dunkel Pizza Breakfast and lunch 5
#> 6 6 Güvenç Attila Ice cream Lunch only 6
```
Parquet tends to be much faster than RDS and is usable outside of R, but does require the arrow package.
```{r}
#| include: false
file.remove("students-2.csv")
file.remove("students.rds")
```
## Data entry
Sometimes you'll need to assemble a tibble "by hand" doing a little data entry in your R script.
There are two useful functions to help you do this, which differ in whether you lay out the tibble by columns or by rows.
`tibble()` works by column:
```{r}
tibble(
x = c(1, 2, 5),
y = c("h", "m", "g"),
z = c(0.08, 0.83, 0.60)
)
```
Laying out the data by column can make it hard to see how the rows are related, so an alternative is `tribble()`, short for **tr**ansposed t**ibble**, which lets you lay out your data row by row.
`tribble()` is customized for data entry in code: column headings start with `~` and entries are separated by commas.
This makes it possible to lay out small amounts of data in an easy to read form:
```{r}
tribble(
~x, ~y, ~z,
1, "h", 0.08,
2, "m", 0.83,
5, "g", 0.60
)
```
## Summary
In this chapter, you've learned how to load CSV files with `read_csv()` and to do your own data entry with `tibble()` and `tribble()`.
You've learned how CSV files work, some of the problems you might encounter, and how to overcome them.
We'll come back to data import a few times in this book: @sec-import-spreadsheets will show you how to load data from Excel and Google Sheets, @sec-import-databases from databases, @sec-arrow from parquet files, @sec-rectangling from JSON, and @sec-scraping from websites.
We're just about at the end of this section of the book, but there's one important last topic to cover: how to get help.
So in the next chapter, you'll learn some good places to look for help, how to create a reprex to maximize your chances of getting good help, and some general advice on keeping up with the world of R.