Fix typos
parent c5b7811645
commit 9ca4cb9a95
@ -515,7 +515,7 @@ files <- map(paths, readxl::read_excel)
 ```
 
 Now that you have these data frames in a list, how do you get one out?
 
-You can use `files[[i]]` to extract the i-th element:
+You can use `files[[i]]` to extract the i<sup>th</sup> element:
 
 ```{r}
 files[[3]]
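As an aside, the `[[` extraction the hunk above refers to can be sketched with a small self-contained list (the `files` object here is a stand-in built by hand, not the book's actual spreadsheet data):

```r
# Stand-in for the list of data frames produced by map(paths, readxl::read_excel)
files <- list(
  data.frame(x = 1),
  data.frame(x = 2),
  data.frame(x = 3)
)

# `[[` returns the i-th element itself (a data frame);
# `[` would instead return a one-element list wrapping it
files[[3]]
#>   x
#> 1 3
```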
@ -663,7 +663,7 @@ For large and richer datasets, using parquet might be a better choice than `.csv
 unlink("gapminder.csv")
 ```
 
-If you're working in a project, we'd suggest calling the file that does this sort of data prep work something like `0-cleanup.R`.
+If you're working in a project, we suggest calling the file that does this sort of data prep work something like `0-cleanup.R`.
 The `0` in the file name suggests that this should be run before anything else.
 
 If your input data files change over time, you might consider learning a tool like [targets](https://docs.ropensci.org/targets/) to set up your data cleaning code to automatically re-run whenever one of the input files is modified.
@ -768,7 +768,7 @@ Unfortunately, we're now going to leave you to figure that out on your own, but
 ### Handling failures
 
 Sometimes the structure of your data might be sufficiently wild that you can't even read all the files with a single command.
-And then you'll encounter one of the downsides of map: it succeeds or fails as a whole.
+And then you'll encounter one of the downsides of `map()`: it succeeds or fails as a whole.
 `map()` will either successfully read all of the files in a directory or fail with an error, reading zero files.
 This is annoying: why does one failure prevent you from accessing all the other successes?
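The standard purrr escape hatch for the all-or-nothing behavior described in the hunk above is `possibly()`, which wraps a failing function so it returns a default value instead of erroring. A minimal sketch (the `read_one()` reader and the paths are invented stand-ins for the book's spreadsheet-reading code):

```r
library(purrr)

# A reader that sometimes fails (stand-in for readxl::read_excel)
read_one <- function(path) {
  if (path == "bad.xlsx") stop("corrupt file")
  data.frame(path = path)
}

paths <- c("a.xlsx", "bad.xlsx", "c.xlsx")

# possibly() returns NULL instead of erroring, so map() keeps going
files <- map(paths, possibly(read_one, otherwise = NULL))

# compact() drops the NULLs, keeping only the successful reads
files <- compact(files)
length(files)  # 2
```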
@ -94,7 +94,7 @@ For example:
 
 -   `ipynb` for Jupyter Notebooks (`.ipynb`).
 
-Remember, when generating a document to share with decision-makers, you can turn off the default display of code by setting global options in document YAML:
+Remember, when generating a document to share with decision-makers, you can turn off the default display of code by setting global options in the document YAML:
 
 ``` yaml
 execute:
```
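The YAML block in the hunk above is cut off after `execute:`; a typical completion that turns off code display (the `echo: false` line is our assumption, not recovered from the diff) would be:

``` yaml
execute:
  echo: false
```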
@ -187,8 +187,6 @@ textInput("name", "What is your name?")
 numericInput("age", "How old are you?", NA, min = 0, max = 150)
 ```
 
-And you also need a code chunk with chunk option `context: server` which contains the code that needs to run in a Shiny server.
-
 ```{r}
 #| echo: false
 #| out-width: null
@ -199,6 +197,8 @@ And you also need a code chunk with chunk option `context: server` which contain
 knitr::include_graphics("quarto/quarto-shiny.png")
 ```
 
+And you also need a code chunk with chunk option `context: server` which contains the code that needs to run in a Shiny server.
+
 You can then refer to the values with `input$name` and `input$age`, and the code that uses them will be automatically re-run whenever they change.
 
 We can't show you a live shiny app here because shiny interactions occur on the **server-side**.
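A `context: server` chunk of the kind the hunks above move around can be sketched like this (the `output$greeting` name and the `renderText()` body are illustrative assumptions, not the book's actual example):

```r
#| context: server
# This code runs in the Shiny server process, and re-runs
# automatically whenever input$name or input$age changes
output$greeting <- renderText({
  paste0("Hello ", input$name, "! You are ", input$age, " years old.")
})
```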
quarto.qmd
@ -78,7 +78,7 @@ RStudio executes the code and displays the results inline with the code.
 knitr::include_graphics("quarto/diamond-sizes-notebook.png")
 ```
 
-If you don't like seeing your plots and output in your document and would rather make use of RStudio's console and plot panes, you can click on the gear icon next to "Render" and switch to "Chunk Output in Console", as shown in @fig-diamond-sizes-console-output.
+If you don't like seeing your plots and output in your document and would rather make use of RStudio's Console and Plot panes, you can click on the gear icon next to "Render" and switch to "Chunk Output in Console", as shown in @fig-diamond-sizes-console-output.
 
 ```{r}
 #| label: fig-diamond-sizes-console-output
@ -589,33 +589,37 @@ On subsequent runs, knitr will check to see if the code has changed, and if it h
 The caching system must be used with care, because by default it is based on the code only, not its dependencies.
 For example, here the `processed_data` chunk depends on the `raw-data` chunk:
 
-`r chunk`{r}
-#| label: raw-data
-```
+`r chunk`{r}
+#| label: raw-data
 
-rawdata <- readr::read_csv("a_very_large_file.csv")
-`r chunk`
+rawdata <- readr::read_csv("a_very_large_file.csv")
+`r chunk`
 
-`r chunk`{r}
-#| label: processed_data
-#| cache: true
+`r chunk`{r}
+#| label: processed_data
+#| cache: true
 
-processed_data <- rawdata |>
+processed_data <- rawdata |>
   filter(!is.na(import_var)) |>
   mutate(new_variable = complicated_transformation(x, y, z))
-`r chunk`
+`r chunk`
+```
 
 Caching the `processed_data` chunk means that it will get re-run if the dplyr pipeline is changed, but it won't get rerun if the `read_csv()` call changes.
 You can avoid that problem with the `dependson` chunk option:
 
-`r chunk`{r}
-#| label: processed-data
-#| cache: true
-#| dependson: "raw-data"
-```
+`r chunk`{r}
+#| label: processed-data
+#| cache: true
+#| dependson: "raw-data"
 
-processed_data <- rawdata |>
+processed_data <- rawdata |>
   filter(!is.na(import_var)) |>
   mutate(new_variable = complicated_transformation(x, y, z))
-`r chunk`
+`r chunk`
+```
 
 `dependson` should contain a character vector of *every* chunk that the cached chunk depends on.
 Knitr will update the results for the cached chunk whenever it detects that one of its dependencies have changed.
@ -626,12 +630,14 @@ This is an arbitrary R expression that will invalidate the cache whenever it cha
 A good function to use is `file.info()`: it returns a bunch of information about the file including when it was last modified.
 Then you can write:
 
-`r chunk`{r}
-#| label: raw-data
-#| cache.extra: file.info("a_very_large_file.csv")
-```
+`r chunk`{r}
+#| label: raw-data
+#| cache.extra: file.info("a_very_large_file.csv")
 
-rawdata <- readr::read_csv("a_very_large_file.csv")
-`r chunk`
+rawdata <- readr::read_csv("a_very_large_file.csv")
+`r chunk`
+```
 
 We've followed the advice of [David Robinson](https://twitter.com/drob/status/738786604731490304) to name these chunks: each chunk is named after the primary object that it creates.
 This makes it easier to understand the `dependson` specification.
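For reference, base R's `file.info()` returns a data frame whose `mtime` column (last-modified time) is what makes the `cache.extra` trick in the hunk above work; a minimal, self-contained illustration:

```r
# Create a throwaway file, then inspect its metadata; mtime changes
# whenever the file is rewritten, which is what invalidates the cache
tmp <- tempfile(fileext = ".csv")
writeLines("x\n1", tmp)
info <- file.info(tmp)
info$mtime   # last-modified timestamp
info$size    # size in bytes
unlink(tmp)
```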
@ -651,7 +657,7 @@ One common error in documents with code chunks is duplicated chunk labels, which
 To address this issue, all you need to do is to change one of your duplicated labels.
 
 If the errors are due to the R code in the document, the first thing you should always try is to recreate the problem in an interactive session.
-Restart R, then "Run all chunks" (either from Code menu, under Run region), or with the keyboard shortcut Ctrl + Alt + R.
+Restart R, then "Run all chunks", either from the Code menu, under Run region or with the keyboard shortcut Ctrl + Alt + R.
 If you're lucky, that will recreate the problem, and you can figure out what's going on interactively.
 
 If that doesn't help, there must be something different between your interactive environment and the Quarto environment.