
I’m pleased to announce the release of the dbplyr package, which now contains all dplyr code related to connecting to databases. This shouldn’t affect you much as a user, but it makes dplyr simpler, and makes it easier to release improvements just for database-related code.

You can install the latest version of dbplyr with:

install.packages("dbplyr")

DBI and dplyr alignment

The biggest change in this release is that dplyr/dbplyr works much more directly with DBI database connections. This makes it much easier to switch between low-level queries written in SQL, and high-level data manipulation functions written with dplyr verbs.

To connect to a database, first use DBI::dbConnect() to create a database connection. For example, the following code connects to a temporary, in-memory, SQLite database, then uses DBI to copy over some data.

con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "iris", iris)
#> [1] TRUE
DBI::dbWriteTable(con, "mtcars", mtcars)
#> [1] TRUE

With this connection in hand, you can execute hand-written SQL queries:

DBI::dbGetQuery(con, "SELECT count() FROM iris")
#>   count()
#> 1     150

Or you can let dplyr generate the SQL for you:

iris2 <- tbl(con, "iris")

species_mean <- iris2 %>%
  group_by(Species) %>%
  summarise_all(mean)

species_mean %>% show_query()
#> <SQL>
#> SELECT `Species`, AVG(`Sepal.Length`) AS `Sepal.Length`, AVG(`Sepal.Width`) AS `Sepal.Width`, AVG(`Petal.Length`) AS `Petal.Length`, AVG(`Petal.Width`) AS `Petal.Width`
#> FROM `iris`
#> GROUP BY `Species`
species_mean
#> # Source:   lazy query [?? x 5]
#> # Database: sqlite 3.11.1 [:memory:]
#>      Species Sepal.Length Sepal.Width Petal.Length Petal.Width
#>        <chr>        <dbl>       <dbl>        <dbl>       <dbl>
#> 1     setosa        5.006       3.428        1.462       0.246
#> 2 versicolor        5.936       2.770        4.260       1.326
#> 3  virginica        6.588       2.974        5.552       2.026

This alignment is made possible thanks to the hard work of Kirill Müller, who has been working to make DBI backends more consistent, comprehensive, and easier to use. This work has been funded by the R Consortium and will continue this year with improvements to backends for the two major open source databases, MySQL/MariaDB and PostgreSQL.

(You can continue to use the old-style src_mysql(), src_postgres(), and src_sqlite() functions, which still live in dplyr, but I recommend that you switch to the new style for new code.)

SQL translation

We’ve also worked to improve the translation of R code to SQL. Thanks to @hhoeflin, dbplyr now has a basic SQL optimiser that considerably reduces the number of subqueries needed in many expressions. For example, the following code used to generate three subqueries, but now generates idiomatic SQL:

con %>% 
  tbl("mtcars") %>%
  filter(cyl > 2) %>%
  select(mpg:hp) %>%
  head(10) %>%
  show_query()
#> <SQL>
#> SELECT `mpg` AS `mpg`, `cyl` AS `cyl`, `disp` AS `disp`, `hp` AS `hp`
#> FROM `mtcars`
#> WHERE (`cyl` > 2.0)
#> LIMIT 10

At a lower-level, dplyr now:

  • Can translate case_when():
    library(dbplyr)
    translate_sql(case_when(x > 1 ~ "big", y < 2 ~ "small"), con = con)
    #> <SQL> CASE
    #> WHEN (`x` > 1.0) THEN ('big')
    #> WHEN (`y` < 2.0) THEN ('small')
    #> END
  • Has better support for type coercions:
    translate_sql(as.character(cyl), con = con)
    #> <SQL> CAST(`cyl` AS TEXT)
    translate_sql(as.integer(cyl), con = con)
    #> <SQL> CAST(`cyl` AS INTEGER)
    translate_sql(as.double(cyl), con = con)
    #> <SQL> CAST(`cyl` AS NUMERIC)
  • Can more reliably translate %in%:
    translate_sql(x %in% 1:5, con = con)
    #> <SQL> `x` IN (1, 2, 3, 4, 5)
    translate_sql(x %in% 1L, con = con)
    #> <SQL> `x` IN (1)
    translate_sql(x %in% c(1L), con = con)
    #> <SQL> `x` IN (1)

You can now use in_schema() to refer to tables in a schema: in_schema("my_schema_name", "my_table_name"). You can use the result of this function anywhere you could previously use a table name.
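
For example, a minimal sketch (assuming con is a connection to a database that actually has schemas, unlike the in-memory SQLite database above):

my_table <- tbl(con, in_schema("my_schema_name", "my_table_name"))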

We’ve also included better translations for Oracle, MS SQL Server, Hive and Impala. We’re working to add support for more databases over time, but adding support on your own is surprisingly easy. Submit an issue to dplyr and we’ll help you get started.

These are just the highlights: you can see the full set of improvements and bug fixes in the release notes.

Contributors

As with all R packages, this is truly a community effort. A big thanks goes to all those who contributed code or documentation to this release: Austen Head, Edgar Ruiz, Greg Freedman Ellis, Hannes Mühleisen, Ian Cook, Karl Dunkle Werner, Michael Sumner, Mine Cetinkaya-Rundel, @shabbybanks and Sergio Oller.

Vision

Since you’ve read this far, I also wanted to touch on RStudio’s vision for databases. Many analysts have most of their data in databases, and making it as easy as possible to get data out of the database and into R makes a huge difference. Thanks to the community, R already has strong tools for talking to the popular open source databases. But support for connecting to enterprise databases and solving enterprise challenges has lagged somewhat. At RStudio we are actively working to solve these problems.

As well as dbplyr and DBI, we are working on many other pain points in the database ecosystem. You’ll hear much more about these packages in the future, but I wanted to touch on the highlights so you can see where we are heading. These pieces are not yet as integrated as they should be, but they are valuable by themselves, and we will continue to work to make a seamless database experience, that is as good as (or better than!) any other environment.

  • The odbc package provides a DBI-compliant backend for any database with an ODBC driver. Compared to the existing RODBC package, odbc is faster (~3x for reading, ~2x for writing), translates date/time data types, and is under active development. RStudio is also planning on providing best-of-breed ODBC drivers for the most important enterprise databases to our Pro customers. If you’ve felt the pain of connecting to your enterprise database and would like to learn more, please schedule a meeting with our sales team.
  • You should never record database credentials in your R scripts, so we are working on safer ways to store them that don’t add a lot of extra hassle. One piece of the puzzle is the keyring package, which allows you to securely store information in your system keychain, and only decrypt it when needed. Another piece of the puzzle is the config package, which makes it easy to parameterise your database connection credentials so that you can connect to your testing database when experimenting locally, and your production database when you deploy your code (see the sketch after this list).
  • Connecting to databases from Shiny can be challenging because you don’t want a fresh connection for every user action (because that’s slow), and you don’t want one connection per app (because that’s unreliable). The pool package allows you to manage a shared pool of connections for your app, giving you both speed and reliability.
  • We’re also working to make sure all of these pieces are easily used from the IDE and inside R Markdown. One neat feature that you might not have heard about is support for SQL chunks in R Markdown.
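To give a flavour of how the credential pieces might fit together, here’s a minimal sketch (not a worked example from these packages’ documentation: the service name, config entries, and config.yml file are all hypothetical):

# Store the password once; it lives encrypted in the system keychain
keyring::key_set("my_database")

# config.yml maps each environment to driver/server/database/uid values
db <- config::get("dataset")

con <- DBI::dbConnect(odbc::odbc(),
  driver   = db$driver,
  server   = db$server,
  database = db$database,
  uid      = db$uid,
  pwd      = keyring::key_get("my_database")  # decrypted only when needed
)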

If any of these pieces sound interesting, please stay tuned to the blog for more upcoming announcements. Please also check out our new database website: https://db.rstudio.com. Over time, this website will expand to document all database best practices, so you can find everything you need in one place.

I’m pleased to announce that bigrquery 0.4.0 is now on CRAN. bigrquery makes it possible to talk to Google’s BigQuery cloud database. It provides both DBI and dplyr backends so you can interact with BigQuery using either low-level SQL or high-level dplyr verbs.

Install the latest version of bigrquery with:

install.packages("bigrquery")

Basic usage

Connect to a bigquery database using DBI:

library(dplyr)

con <- DBI::dbConnect(dbi_driver(),
  project = "publicdata",
  dataset = "samples",
  billing = "887175176791"
)
DBI::dbListTables(con)
#> [1] "github_nested"   "github_timeline" "gsod"            "natality"       
#> [5] "shakespeare"     "trigrams"        "wikipedia"

(You’ll be prompted to authenticate interactively, or you can use a service token with set_service_token().)

Then you can either submit your own SQL queries or use dplyr to write them for you:

shakespeare <- con %>% tbl("shakespeare")
shakespeare %>%
  group_by(word) %>%
  summarise(n = sum(word_count))
#> 0 bytes processed
#> # Source:   lazy query [?? x 2]
#> # Database: BigQueryConnection
#>            word     n
#>           <chr> <int>
#>  1   profession    20
#>  2       augury     2
#>  3 undertakings     3
#>  4      surmise     8
#>  5     religion    14
#>  6     advanced    16
#>  7     Wormwood     1
#>  8    parchment     8
#>  9      villany    49
#> 10         digs     3
#> # ... with more rows

New features

  • dplyr support has been updated to require dplyr 0.7.0 and use dbplyr. This means that you can now more naturally work directly with DBI connections. dplyr now translates to modern BigQuery SQL, which supports a broader set of translations. Along the way I also fixed a variety of SQL generation bugs.
  • New functions insert_extract_job() and insert_table() make it possible to extract data and save it in Google Storage, and to insert empty tables into a dataset, respectively.
  • All POST requests (inserts, updates, copies and query_exec) now take .... This allows you to add arbitrary additional data to the request body, making it possible to use parts of the BigQuery API that are otherwise not exposed. snake_case argument names are automatically converted to camelCase, so you can stick consistently to snake case in your R code (see the sketch after this list).
  • Full support for DATE, TIME, and DATETIME types (#128).
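As a sketch of the ... pass-through (the extra field is illustrative, and see ?insert_table for the exact signature; consult the BigQuery API reference for valid body fields):

# friendly_name is converted to friendlyName in the request body
insert_table("my-project", "my_dataset", "my_table",
  friendly_name = "My new table"
)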

There were a variety of bug fixes and other minor improvements: see the release notes for full details.

Contributors

bigrquery is a community effort: a big thanks goes to Christofer Bäcklin, Jarod G.R. Meng and Akhmed Umyarov for their pull requests. Thank you all for your contributions!

I’m pleased to announce that dplyr 0.7.0 is now on CRAN! (This was dplyr 0.6.0 previously; more on that below.) dplyr provides a “grammar” of data transformation, making it easy and elegant to solve the most common data manipulation challenges. dplyr supports multiple backends: as well as in-memory data frames, you can also use it with remote SQL databases. If you haven’t heard of dplyr before, the best place to start is the Data transformation chapter in R for Data Science.

You can install the latest version of dplyr with:

install.packages("dplyr")

Features

dplyr 0.7.0 is a major release including over 100 improvements and bug fixes, as described in the release notes. In this blog post, I want to discuss one big change and a handful of smaller updates. This version of dplyr also saw a major revamp of database connections. That’s a big topic, so it’ll get its own blog post next week.

Tidy evaluation

The biggest change is a new system for programming with dplyr, called tidy evaluation, or tidy eval for short. Tidy eval is a system for capturing expressions and later evaluating them in the correct context. It is important because it allows you to interpolate values in contexts where dplyr usually works with expressions:

my_var <- quo(homeworld)

starwars %>%
  group_by(!!my_var) %>%
  summarise_at(vars(height:mass), mean, na.rm = TRUE)
#> # A tibble: 49 x 3
#>         homeworld   height  mass
#>             <chr>    <dbl> <dbl>
#>  1       Alderaan 176.3333  64.0
#>  2    Aleen Minor  79.0000  15.0
#>  3         Bespin 175.0000  79.0
#>  4     Bestine IV 180.0000 110.0
#>  5 Cato Neimoidia 191.0000  90.0
#>  6          Cerea 198.0000  82.0
#>  7       Champala 196.0000   NaN
#>  8      Chandrila 150.0000   NaN
#>  9   Concord Dawn 183.0000  79.0
#> 10       Corellia 175.0000  78.5
#> # ... with 39 more rows

This makes it possible to write your own functions that work like dplyr functions, reducing the amount of copy-and-paste in your code:

starwars_mean <- function(my_var) {
  my_var <- enquo(my_var)
  
  starwars %>%
    group_by(!!my_var) %>%
    summarise_at(vars(height:mass), mean, na.rm = TRUE)
}
starwars_mean(homeworld)

You can also use the new .data pronoun to refer to variables with strings:

my_var <- "homeworld"

starwars %>%
  group_by(.data[[my_var]]) %>%
  summarise_at(vars(height:mass), mean, na.rm = TRUE)

This is useful when you’re writing packages that use dplyr code because it avoids an annoying note from R CMD check.

To learn more about how tidy eval helps solve data analysis challenges, please read the new programming with dplyr vignette. Tidy evaluation is implemented in the rlang package, which also provides a vignette on the theoretical underpinnings. Tidy eval is a rich system and takes a while to get your head around, but we are confident that learning it will pay off, especially as it rolls out to other packages in the tidyverse (tidyr and ggplot2 are next on the todo list).

The introduction of tidy evaluation means that the standard evaluation (underscored) version of each main verb (filter_(), select_() etc) is no longer needed, and so these functions have been deprecated (but remain around for backward compatibility).

Character encoding

We have done a lot of work to ensure that dplyr works with encodings other than Latin1 on Windows. This is most likely to affect you if you work with data that contains Chinese, Japanese, or Korean (CJK) characters. dplyr should now just work with such data. Please let us know if you have problems!

New datasets

dplyr has some new datasets that will help write more interesting examples:

  • starwars, shown above, contains information about characters from the Star Wars movies, sourced from the Star Wars API. It contains a number of list-columns.
    starwars
    #> # A tibble: 87 x 13
    #>                  name height  mass    hair_color  skin_color eye_color
    #>                 <chr>  <int> <dbl>         <chr>       <chr>     <chr>
    #>  1     Luke Skywalker    172    77         blond        fair      blue
    #>  2              C-3PO    167    75          <NA>        gold    yellow
    #>  3              R2-D2     96    32          <NA> white, blue       red
    #>  4        Darth Vader    202   136          none       white    yellow
    #>  5        Leia Organa    150    49         brown       light     brown
    #>  6          Owen Lars    178   120   brown, grey       light      blue
    #>  7 Beru Whitesun lars    165    75         brown       light      blue
    #>  8              R5-D4     97    32          <NA>  white, red       red
    #>  9  Biggs Darklighter    183    84         black       light     brown
    #> 10     Obi-Wan Kenobi    182    77 auburn, white        fair blue-gray
    #> # ... with 77 more rows, and 7 more variables: birth_year <dbl>,
    #> #   gender <chr>, homeworld <chr>, species <chr>, films <list>,
    #> #   vehicles <list>, starships <list>
  • storms has the trajectories of ~200 tropical storms. It contains a strong grouping structure.
    storms
    #> # A tibble: 10,010 x 13
    #>     name  year month   day  hour   lat  long              status category
    #>    <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl>               <chr>    <ord>
    #>  1   Amy  1975     6    27     0  27.5 -79.0 tropical depression       -1
    #>  2   Amy  1975     6    27     6  28.5 -79.0 tropical depression       -1
    #>  3   Amy  1975     6    27    12  29.5 -79.0 tropical depression       -1
    #>  4   Amy  1975     6    27    18  30.5 -79.0 tropical depression       -1
    #>  5   Amy  1975     6    28     0  31.5 -78.8 tropical depression       -1
    #>  6   Amy  1975     6    28     6  32.4 -78.7 tropical depression       -1
    #>  7   Amy  1975     6    28    12  33.3 -78.0 tropical depression       -1
    #>  8   Amy  1975     6    28    18  34.0 -77.0 tropical depression       -1
    #>  9   Amy  1975     6    29     0  34.4 -75.8      tropical storm        0
    #> 10   Amy  1975     6    29     6  34.0 -74.8      tropical storm        0
    #> # ... with 10,000 more rows, and 4 more variables: wind <int>,
    #> #   pressure <int>, ts_diameter <dbl>, hu_diameter <dbl>
  • band_members, band_instruments and band_instruments2 contain a tiny amount of data about bands. They’re designed to be very simple so you can illustrate how joins work without getting distracted by the details of the data.
    band_members
    #> # A tibble: 3 x 2
    #>    name    band
    #>   <chr>   <chr>
    #> 1  Mick  Stones
    #> 2  John Beatles
    #> 3  Paul Beatles
    band_instruments
    #> # A tibble: 3 x 2
    #>    name  plays
    #>   <chr>  <chr>
    #> 1  John guitar
    #> 2  Paul   bass
    #> 3 Keith guitar

New and improved verbs

  • The pull() generic allows you to extract a single column either by name or position. It’s similar to select() but returns a vector, rather than a smaller tibble.
    mtcars %>% pull(-1) %>% str()
    #>  num [1:32] 4 4 1 1 2 1 4 2 2 4 ...
    mtcars %>% pull(cyl) %>% str()
    #>  num [1:32] 6 6 4 6 8 6 8 4 4 6 ...

    Thanks to Paul Poncet for the idea!

  • arrange() for grouped data frames gains a .by_group argument so you can choose to sort by groups if you want to (defaults to FALSE); see the sketch after this list.
  • All single table verbs now have scoped variants suffixed with _if(), _at() and _all(). Use these if you want to do something to every variable (_all), variables selected by their names (_at), or variables that satisfy some predicate (_if).
    iris %>% summarise_if(is.numeric, mean)
    starwars %>% select_if(Negate(is.list))
    storms %>% group_by_at(vars(month:hour))
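
And here is the promised sketch of arrange() with .by_group: the result is sorted by the grouping variables first, then by the variables you supply.

mtcars %>%
  group_by(cyl) %>%
  arrange(desc(mpg), .by_group = TRUE)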

Other important changes

  • Local join functions can now control how missing values are matched. The default value is na_matches = "na", which treats two missing values as equal. To prevent missing values from matching, use na_matches = "never".

You can change the default behaviour by calling pkgconfig::set_config("dplyr::na_matches", "never").
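
For example, a minimal sketch:

df1 <- tibble(x = c(1, NA))
df2 <- tibble(x = c(1, NA), y = 1:2)

# Default: NA matches NA, so both rows find a partner
df1 %>% inner_join(df2, by = "x")

# "never": NA matches nothing, so only x == 1 joins
df1 %>% inner_join(df2, by = "x", na_matches = "never")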

  • bind_rows() and combine() are more strict when coercing. Logical values are no longer coerced to integer and numeric. Date, POSIXct and other integer or double-based classes are no longer coerced to integer or double to avoid dropping important metadata. We plan to continue improving this interface in the future.

Breaking changes

From time to time I discover that I made a mistake in an older version of dplyr and developed what is now a clearly suboptimal API. If the problem isn’t too big, I try to just leave it – the cost of making small improvements is not worth it when compared to the cost of breaking existing code. However, there are bigger improvements where I believe the short-term pain of breaking code is worth the long-term payoff of a better API.

Regardless, it’s still frustrating when an update to dplyr breaks your code. To minimise this pain, I plan to do two things going forward:

  • Adopt an odd-even release cycle so that API breaking changes only occur in odd numbered releases. Even numbered releases will only contain bug fixes and new features. This is why I’ve skipped dplyr 0.6.0 and gone directly to dplyr 0.7.0.
  • Invest time in developing better tools for isolating packages across projects, so that you can choose when to upgrade a package on a project-by-project basis and, if something goes wrong, easily roll back to a version that worked. Look for news about this later in the year.

Contributors

dplyr is truly a community effort. Apart from the dplyr team (myself, Kirill Müller, and Lionel Henry), this release wouldn’t have been possible without patches from Christophe Dervieux, Dean Attali, Ian Cook, Ian Lyttle, Jake Russ, Jay Hesselberth, Jennifer (Jenny) Bryan, @lindbrook, Mauro Lepore, Nicolas Coutin, Daniel, Tony Fischetti, Hiroaki Yutani and Sergio Oller. Thank you all for your contributions!

I’m planning to submit dplyr 0.6.0 to CRAN on May 11 (in four weeks’ time). In preparation, I’d like to announce that the release candidate, dplyr 0.5.0.9002, is now available. I would really appreciate it if you’d try it out and report any problems. This will ensure that the official release has as few bugs as possible.

Installation

Install the pre-release version with:

# install.packages("devtools")
devtools::install_github("tidyverse/dplyr")

If you discover any problems, please file a minimal reprex on GitHub. You can roll back to the released version with:

install.packages("dplyr")

Features

dplyr 0.6.0 is a major release including over 100 bug fixes and improvements. There are three big changes that I want to touch on here:

  • Databases
  • Improved encoding support (particularly for CJK on windows)
  • Tidyeval, a new framework for programming with dplyr

You can see a complete list of changes in the draft release notes.

Databases

Almost all database related code has been moved out of dplyr and into a new package, dbplyr. This makes dplyr simpler, and will make it easier to release fixes for bugs that only affect databases.

To install the development version of dbplyr so you can try it out, run:

devtools::install_github("hadley/dbplyr")

There’s one major change, as well as a whole heap of bug fixes and minor improvements. It is no longer necessary to create a remote “src”. Instead, you can work directly with the database connection returned by DBI, reflecting the robustness of the DBI ecosystem. Thanks largely to the work of Kirill Müller (funded by the R Consortium), DBI backends are now much more consistent, comprehensive, and easier to use. That means that there’s no longer a need for a layer between you and DBI.

You can continue to use src_mysql(), src_postgres(), and src_sqlite() (which still live in dplyr), but I recommend a new style that makes the connection to DBI more clear:

con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "iris", iris)
#> [1] TRUE

iris2 <- tbl(con, "iris")
iris2
#> Source:     table<iris> [?? x 5]
#> Database:   sqlite 3.11.1 [:memory:]
#> 
#>    Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#>           <dbl>       <dbl>        <dbl>       <dbl>   <chr>
#> 1           5.1         3.5          1.4         0.2  setosa
#> 2           4.9         3.0          1.4         0.2  setosa
#> 3           4.7         3.2          1.3         0.2  setosa
#> 4           4.6         3.1          1.5         0.2  setosa
#> 5           5.0         3.6          1.4         0.2  setosa
#> 6           5.4         3.9          1.7         0.4  setosa
#> 7           4.6         3.4          1.4         0.3  setosa
#> 8           5.0         3.4          1.5         0.2  setosa
#> 9           4.4         2.9          1.4         0.2  setosa
#> 10          4.9         3.1          1.5         0.1  setosa
#> # ... with more rows

This is particularly useful if you want to perform non-SELECT queries as you can do whatever you want with DBI::dbGetQuery() and DBI::dbExecute().
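
For example, a small sketch reusing the con from above:

# dbExecute() runs a non-SELECT statement, returning the number of affected rows
DBI::dbExecute(con, "CREATE INDEX iris_species ON iris (`Species`)")

# dbGetQuery() runs hand-written SQL alongside your dplyr code
DBI::dbGetQuery(con, "SELECT Species, COUNT(*) AS n FROM iris GROUP BY Species")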

If you’ve implemented a database backend for dplyr, please read the backend news to see what’s changed from your perspective (not much). If you want to ensure your package works with both the current and previous version of dplyr, see wrap_dbplyr_obj() for helpers.

Character encoding

We have done a lot of work to ensure that dplyr works with encodings other than Latin1 on Windows. This is most likely to affect you if you work with data that contains Chinese, Japanese, or Korean (CJK) characters. dplyr should now just work with such data.

Tidyeval

dplyr has a new approach to non-standard evaluation (NSE) called tidyeval. Tidyeval is described in detail in a new vignette about programming with dplyr but, in brief, it gives you the ability to interpolate values in contexts where dplyr usually works with expressions:

my_var <- quo(homeworld)

starwars %>%
  group_by(!!my_var) %>%
  summarise_at(vars(height:mass), mean, na.rm = TRUE)
#> # A tibble: 49 × 3
#>         homeworld   height  mass
#>             <chr>    <dbl> <dbl>
#> 1        Alderaan 176.3333  64.0
#> 2     Aleen Minor  79.0000  15.0
#> 3          Bespin 175.0000  79.0
#> 4      Bestine IV 180.0000 110.0
#> 5  Cato Neimoidia 191.0000  90.0
#> 6           Cerea 198.0000  82.0
#> 7        Champala 196.0000   NaN
#> 8       Chandrila 150.0000   NaN
#> 9    Concord Dawn 183.0000  79.0
#> 10       Corellia 175.0000  78.5
#> # ... with 39 more rows

This will make it much easier to eliminate copy-and-pasted dplyr code by extracting repeated code into a function.
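
For example, a sketch of that pattern using the new starwars dataset:

starwars_mean <- function(my_var) {
  my_var <- enquo(my_var)   # capture the expression supplied by the caller
  starwars %>%
    group_by(!!my_var) %>%  # unquote it where dplyr expects an expression
    summarise_at(vars(height:mass), mean, na.rm = TRUE)
}
starwars_mean(homeworld)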

This also means that the underscored version of each main verb (filter_(), select_(), etc.) is no longer needed, and so these functions have been deprecated (but remain around for backward compatibility).

Over the past couple of months there have been a bunch of smaller releases to packages in the tidyverse. This includes:

  • forcats 0.2.0, for working with factors.
  • readr 1.1.0, for reading flat-files from disk.
  • stringr 1.2.0, for manipulating strings.
  • tibble 1.3.0, a modern re-imagining of the data frame.

This blog post summarises the most important new features, and points to the full release notes where you can learn more.

(If you’ve never heard of the tidyverse before, it’s a set of packages that are designed to work together to help you do data science. The best place to learn all about it is R for Data Science.)

forcats 0.2.0

forcats has three new functions:

  • as_factor() is a generic version of as.factor(), which creates factors from character vectors ordered by appearance, rather than alphabetically. This means that as_factor(x) will always return the same result, regardless of the current locale (see the sketch after this list).
  • fct_other() makes it easier to convert selected levels to “other”:
    x <- factor(rep(LETTERS[1:6], times = c(10, 5, 1, 1, 1, 1)))
    
    x %>% 
      fct_other(keep = c("A", "B")) %>% 
      fct_count()
    #> # A tibble: 3 × 2
    #>        f     n
    #>   <fctr> <int>
    #> 1      A    10
    #> 2      B     5
    #> 3  Other     4
    
    x %>% 
      fct_other(drop = c("A", "B")) %>% 
      fct_count()
    #> # A tibble: 5 × 2
    #>        f     n
    #>   <fctr> <int>
    #> 1      C     1
    #> 2      D     1
    #> 3      E     1
    #> 4      F     1
    #> 5  Other    15
  • fct_relabel() allows programmatic relabeling of levels:
    x <- factor(letters[1:3])
    x
    #> [1] a b c
    #> Levels: a b c
    
    x %>% fct_relabel(function(x) paste0("-", x, "-"))
    #> [1] -a- -b- -c-
    #> Levels: -a- -b- -c-
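
Here is the promised sketch contrasting as_factor() with as.factor():

x <- c("banana", "apple", "banana")
levels(as.factor(x))  # alphabetical, locale-dependent
#> [1] "apple"  "banana"
levels(as_factor(x))  # order of first appearance
#> [1] "banana" "apple"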

See the full list of other changes in the release notes.

stringr 1.2.0

This release includes a change to the API: str_match_all() now returns NA if an optional group doesn’t match (previously it returned “”). This is more consistent with str_match() and other match failures.

x <- c("a=1,b=2", "c=3", "d=")

x %>% str_match("(.)=(\\d)?")
#>      [,1]  [,2] [,3]
#> [1,] "a=1" "a"  "1" 
#> [2,] "c=3" "c"  "3" 
#> [3,] "d="  "d"  NA
x %>% str_match_all("(.)=(\\d)?,?")
#> [[1]]
#>      [,1]   [,2] [,3]
#> [1,] "a=1," "a"  "1" 
#> [2,] "b=2"  "b"  "2" 
#> 
#> [[2]]
#>      [,1]  [,2] [,3]
#> [1,] "c=3" "c"  "3" 
#> 
#> [[3]]
#>      [,1] [,2] [,3]
#> [1,] "d=" "d"  NA

There are three new features:

  • In str_replace(), replacement can now be a function. The function is called once for each match and its return value is used as the replacement.
    redact <- function(x) {
      str_dup("-", str_length(x))
    }
    
    x <- c("It cost $500", "We spent $1,200 on stickers")
    x %>% str_replace_all("\\$[0-9,]+", redact)
    #> [1] "It cost ----"                "We spent ------ on stickers"
  • New str_which() mimics grep():
    fruit <- c("apple", "banana", "pear", "pinapple")
    
    # Matching positions    
    str_which(fruit, "p")
    #> [1] 1 3 4
    
    # Matching values
    str_subset(fruit, "p")
    #> [1] "apple"    "pear"     "pinapple"
  • A new vignette (vignette("regular-expressions")) describes the details of the regular expressions supported by stringr. The main vignette (vignette("stringr")) has been updated to give a high-level overview of the package.

See the full list of other changes in the release notes.

readr 1.1.0

readr gains two new features:

  • All write_*() functions now support connections. This means that you can write directly to compressed formats such as .gz, .bz2 or .xz (and readr will automatically do so if you use one of those suffixes).
    write_csv(iris, "iris.csv.bz2")
  • parse_factor(levels = NULL) and col_factor(levels = NULL) will produce a factor column based on the levels in the data, mimicking factor parsing in base R (with the exception that levels are created in the order seen).
    iris2 <- read_csv("iris.csv.bz2", col_types = cols(
      Species = col_factor(levels = NULL)
    ))

See the full list of other changes in the release notes.

tibble 1.3.0

tibble has one handy new function: deframe() is the opposite of enframe(): it turns a two-column data frame into a named vector.

df <- tibble(x = c("a", "b", "c"), y = 1:3)
deframe(df)
#> a b c 
#> 1 2 3

See the full list of other changes in the release notes.

roxygen2 6.0.0 is now available on CRAN. roxygen2 helps you document your packages by turning specially formatted inline comments into R’s standard Rd format. It automates everything that can be automated, and provides helpers for sharing documentation between topics. Learn more at http://r-pkgs.had.co.nz/man.html. Install the latest version with:

install.packages("roxygen2")

There are two headline features in this version of roxygen2:

  • Markdown support.
  • Improved documentation inheritance.

These are described in detail below.

This release also included many minor improvements and bug fixes. For a full list of changes, please see the release notes. A big thanks to all the contributors to this release: @dlebauer, @fmichonneau, @gaborcsardi, @HenrikBengtsson, @jefferis, @jeroenooms, @jimhester, @kevinushey, @krlmlr, @LiNk-NY, @lorenzwalthert, @maxheld83, @nteetor, @shrektan, @yutannihilation.

Markdown

Thanks to the hard work of Gabor Csardi you can now write roxygen2 comments in markdown. While we have tried to make markdown mode as backward compatible as possible, there are a few cases where you will need to make some minor changes. For this reason, you’ll need to explicitly opt-in to markdown support. There are two ways to do so:

  • Add Roxygen: list(markdown = TRUE) to your DESCRIPTION to turn it on everywhere.
  • Add @md to individual roxygen blocks to enable for selected topics.

roxygen2’s markdown dialect supports inline formatting (bold, italics, code), lists (numbered and bulleted), and a number of helpful link shortcuts:

  • [func()]: links to a function in the current package, and is translated to \code{\link[=func]{func()}}.
  • [object]: links to an object in the current package, and is translated to \link{object}.
  • [link text][object]: links to an object with custom text, and is translated to \link[=object]{link text}.

Similarly, you can link to functions and objects in other packages with [pkg::func()], [pkg::object], and [link text][pkg::object]. For a complete list of syntax, and how to handle common problems, please see vignette("markdown").
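
For example, here’s a sketch of a roxygen block in markdown mode (the function is hypothetical):

#' Summarise a numeric vector (hypothetical example)
#'
#' Markdown works inline: **bold**, *italics*, and `code`.
#' Link shortcuts work too: see [mean()] and [stats::median()].
#'
#' @md
#' @param x A numeric vector.
#' @export
my_summary <- function(x) c(mean = mean(x), median = stats::median(x))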

To convert an existing roxygen2 package to use markdown, try https://github.com/r-pkgs/roxygen2md. Happy markdown-ing!

Improved inheritance

Writing documentation is challenging because you want to reduce duplication as much as possible (so you don’t accidentally end up with inconsistent documentation), but you don’t want the user to have to follow a spider’s web of cross-references. This version of roxygen2 provides more support for writing documentation in one place and then reusing it in multiple topics.

The new @inherit tag allows you to inherit parameters, return, references, title, description, details, sections, and seealso from another topic. @inherit my_fun will inherit everything; @inherit my_fun return params will allow you to inherit specified components. @inherit fun sections will inherit all sections; if you’d like to inherit a single section, you can use @inheritSection fun title. You can also inherit from a topic in another package with @inherit pkg::fun.
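
For example, a sketch with hypothetical functions:

#' A thin wrapper around my_fun()
#'
#' @inherit my_fun return params
#' @inheritSection my_fun Warning
my_wrapper <- function(x, y) my_fun(x, y)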

Another new tag is @inheritDotParams, which allows you to automatically generate parameter documentation for ... for the common case where you pass ... on to another function. The documentation generated is similar to the style used in ?plot and will eventually be incorporated in to RStudio’s autocomplete. When you pass along ... you often override some arguments, so the tag has a flexible specification:

  • @inheritDotParams foo takes all parameters from foo().
  • @inheritDotParams foo a b e:h takes parameters a, b, and all parameters between e and h.
  • @inheritDotParams foo -x -y takes all parameters except for x and y.
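
Putting that together, a sketch (foo() is hypothetical):

#' Call foo() with a fixed x
#'
#' @param extra Whether to do something extra.
#' @inheritDotParams foo -x
bar <- function(extra = TRUE, ...) foo(x = 1, ...)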

All the @inherit tags (including the existing @inheritParams) now work recursively, so you can inherit from a function that inherited from elsewhere.

If you want to generate a basic package documentation page (accessible from package?packagename and ?packagename), you can document the special sentinel value "_PACKAGE". It automatically uses the title, description, authors, url and bug reports fields from the DESCRIPTION. The simplest approach is to do this:

#' @keywords internal
"_PACKAGE"

It only includes what’s already in the DESCRIPTION, but it will typically be easier for R users to access.

I’m very pleased to announce ggplot2 2.2.0. It includes four major new features:

  • Subtitles and captions.
  • A large rewrite of the facetting system.
  • Improved theme options.
  • Better stacking.

It also includes numerous bug fixes and minor improvements, as described in the release notes.

The majority of this work was carried out by Thomas Pedersen, who I was lucky to have as my “ggplot2 intern” this summer. Make sure to check out his other visualisation packages: ggraph, ggforce, and tweenr.

Install ggplot2 with:

install.packages("ggplot2")

Subtitles and captions

Thanks to Bob Rudis, you can now add subtitles and captions to your plots:

ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(color = class)) +
  geom_smooth(se = FALSE, method = "loess") +
  labs(
    title = "Fuel efficiency generally decreases with engine size",
    subtitle = "Two seaters (sports cars) are an exception because of their light weight",
    caption = "Data from fueleconomy.gov"
  )


These are controlled by the theme settings plot.subtitle and plot.caption.

The plot title is now aligned to the left by default. To return to the previous centered alignment, use theme(plot.title = element_text(hjust = 0.5)).

Facets

The facet and layout implementation has been moved to ggproto and received a large rewrite and refactoring. This will allow others to create their own facetting systems, as described in vignette("extending-ggplot2"). Along with the rewrite, a number of features and improvements have been added, most notably:

  • You can now use functions in facetting formulas, thanks to Dan Ruderman.
    ggplot(diamonds, aes(carat, price)) + 
      geom_hex(bins = 20) + 
      facet_wrap(~cut_number(depth, 6))


  • Axes are now drawn under the panels in facet_wrap() when the rectangle is not completely filled.
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      facet_wrap(~class)


  • You can set the position of the axes with the position argument.
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      scale_x_continuous(position = "top") + 
      scale_y_continuous(position = "right")


  • You can display a secondary axis that is a one-to-one transformation of the primary axis with sec.axis.
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      scale_y_continuous(
        "mpg (US)", 
        sec.axis = sec_axis(~ . * 1.20, name = "mpg (UK)")
      )


  • Strips can be placed on any side, and the placement with respect to axes can be controlled with the strip.placement theme option.
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      facet_wrap(~ drv, strip.position = "bottom") + 
      theme(
        strip.placement = "outside",
        strip.background = element_blank(),
        strip.text = element_text(face = "bold")
      ) +
      xlab(NULL)


Theming

  • The theme() function now has named arguments so autocomplete and documentation suggestions are vastly improved.
  • Blank elements can now be overridden again so you get the expected behavior when setting e.g. axis.line.x.
  • element_line() gets an arrow argument that lets you put arrows on axes.
    arrow <- arrow(length = unit(0.4, "cm"), type = "closed")
    
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      theme_minimal() + 
      theme(
        axis.line = element_line(arrow = arrow)
      )


  • Control of legend styling has been improved. The whole legend area can be aligned with the plot area and a box can be drawn around all legends:
    ggplot(mpg, aes(displ, hwy, shape = drv, colour = fl)) + 
      geom_point() + 
      theme(
        legend.justification = "top", 
        legend.box = "horizontal",
        legend.box.margin = margin(3, 3, 3, 3, "mm"), 
        legend.margin = margin(),
        legend.box.background = element_rect(colour = "grey50")
      )


  • panel.margin and legend.margin have been renamed to panel.spacing and legend.spacing respectively, as this better indicates their roles. A new legend.margin actually controls the margin around each legend.
  • When computing the height of titles, ggplot2 now includes the height of the descenders (i.e. the bits of g and y that hang underneath). This improves the margins around titles, particularly the y axis label. I have also very slightly increased the inner margins of axis titles, and removed the outer margins.
  • The default themes have been tweaked by Jean-Olivier Irisson, making them better match theme_grey().

Stacking bars

position_stack() and position_fill() now stack values in the reverse order of the grouping, which makes the default stack order match the legend.

avg_price <- diamonds %>% 
  group_by(cut, color) %>% 
  summarise(price = mean(price)) %>% 
  ungroup() %>% 
  mutate(price_rel = price - mean(price))

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price, fill = color))


(Note also the new geom_col() which is short-hand for geom_bar(stat = "identity"), contributed by Bob Rudis.)

If you want to stack in the opposite order, try forcats::fct_rev():

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price, fill = fct_rev(color)))


Additionally, you can now stack negative values:

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price_rel, fill = color))


The overall ordering cannot necessarily be matched in the presence of negative values, but the ordering on either side of the x-axis will match.

Labels can also be stacked, but the default position is suboptimal:

series <- data.frame(
  time = c(rep(1, 4), rep(2, 4), rep(3, 4), rep(4, 4)),
  type = rep(c('a', 'b', 'c', 'd'), 4),
  value = rpois(16, 10)
)

ggplot(series, aes(time, value, group = type)) +
  geom_area(aes(fill = type)) +
  geom_text(aes(label = type), position = "stack")


You can improve the position with the vjust parameter. A vjust of 0.5 will center the labels inside the corresponding area:

ggplot(series, aes(time, value, group = type)) +
  geom_area(aes(fill = type)) +
  geom_text(aes(label = type), position = position_stack(vjust = 0.5))


Today we are pleased to release a new version of svglite. This release fixes many bugs, includes new documentation vignettes, and improves font support.

You can install svglite with:

install.packages("svglite")

Font handling

Fonts are tricky with SVG because they are needed at two stages:

  • When creating the SVG file, the fonts are needed in order to correctly measure the amount of space each character occupies. This is particularly important for plots that use plotmath.
  • When drawing the SVG file on screen, the fonts are needed to draw each character correctly.

For the best display, that means you need to have the same fonts installed on both the computer that generates the SVG file and the computer that draws it. By default, svglite uses fonts that are installed on pretty much every computer. svglite’s font support is now much more flexible thanks to two new arguments: system_fonts and user_fonts.

  1. system_fonts allows you to specify the name of a font installed on your computer. This is useful, for example, if you’d like to use a font with better CJK support:
    svglite("Rplots.svg", system_fonts = list(sans = "Arial Unicode MS"))
    plot.new()
    text(0.5, 0.5, "正規分布")
    dev.off()
  2. user_fonts allows you to specify a font installed in an R package (like fontquiver). This is needed if you want to generate identical plots across different operating systems, and is used by the upcoming vdiffr package, which provides graphical unit tests (see the sketch after this list).
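
For example, a sketch based on the fonts vignette (assumes the fontquiver package is installed):

fonts <- fontquiver::font_families("Liberation")
svglite("Rplots.svg", user_fonts = fonts)
plot(1:10)
dev.off()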

For more details, see vignette("fonts").

Text scaling

This update also fixes many bugs. The most important is that text is now properly scaled within the plot, and we provide a vignette that describes the details: vignette("scaling"). It documents, for instance, how to include an svglite graphic in a web page with the figure text consistently scaled with the surrounding text.

Find a full list of changes in the release notes.

We are excited to announce that submissions for lightning talks at rstudio::conf are now open! Lightning talks are short (5 minute) high energy presentations that give you the chance to talk about an interesting project that you’ve tackled with R. Short talks, or demos of your R code, R packages, and shiny apps are great options. See some of the great lightning talks from the Shiny Developer Conference (scroll down to user talks).

Submit your lightning talk proposal here!

Submissions are due December 1, 2016. We’ll announce the accepted talks on December 15. (You must be a registered attendee of rstudio::conf to present a lightning talk.)

install.packages("haven")

haven 1.0.0 is a major release, and indicates that haven is now largely feature complete and has been tested on many real world datasets. There are four major changes in this version of haven:

  1. Improvements to the underlying ReadStat library
  2. Better handling of “special” missing values
  3. Improved date/time support
  4. Support for other file metadata.

There were also a whole bunch of other minor improvements and bug fixes: you can see the complete list in the release notes.

ReadStat

Haven builds on top of the ReadStat C library by Evan Miller. This version of haven includes many improvements thanks to Evan’s hard work on ReadStat:

  • Can read binary/Ross compressed SAS files.
  • Support for reading and writing Stata 14 data files.
  • New write_sas() allows you to write data frames out to sas7bdat files. This is still somewhat experimental.
  • read_por() now actually works.
  • Many other bug fixes and minor improvements.

Missing values

haven 1.0.0 includes comprehensive support for the “special” types of missing values found in SAS, SPSS, and Stata. All three tools provide a global “system missing value”, displayed as a period (.). This is roughly equivalent to R’s NA, although neither Stata nor SAS propagate missingness in numeric comparisons (SAS treats the missing value as the smallest possible number and Stata treats it as the largest possible number).

Each tool also provides a mechanism for recording multiple types of missingness:

  • Stata has “extended” missing values, .A through .Z.
  • SAS has “special” missing values, .A through .Z plus ._.
  • SPSS has per-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.

Stata and SAS only support tagged missing values for numeric columns. SPSS supports up to three distinct values for character columns. Generally, operations involving a user-missing type return a system missing value.

Haven models these missing values in two different ways:

  • For SAS and Stata, haven provides tagged_na(), which extends R’s regular NA to add a single character label.
  • For SPSS, haven provides labelled_spss(), which also models user-defined values and ranges.

Use zap_missing() if you just want to convert to R’s regular NAs.
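
For example, a quick sketch:

library(haven)

x <- c(1, 2, tagged_na("a"), tagged_na("z"), NA)
is.na(x)        # tagged values look like regular NAs to the rest of R
#> [1] FALSE FALSE  TRUE  TRUE  TRUE
na_tag(x)       # but the tags are recoverable
#> [1] NA  NA  "a" "z" NA
zap_missing(x)  # strip the tags, leaving plain NAs
#> [1]  1  2 NA NA NA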

You can get more details in the semantics vignette.

Date/times

Support for date/times has substantially improved:

  • read_dta() now recognises “%d” and custom date types.
  • read_sav() now correctly recognises EDATE and JDATE formats as dates. Variables with format DATE, ADATE, EDATE, JDATE or SDATE are imported as Date variables instead of POSIXct.
  • write_dta() and write_sav() support writing date/times.
  • Support for hms() has been moved into the hms package. Time variables now have class c("hms", "difftime") and a units attribute with value “secs”.

Other metadata

Haven is slowly adding support for other types of metadata:

  • Variable formats can be read and written. Similarly to variable labels, formats are stored as an attribute on the vector. Use zap_formats() if you want to remove these attributes (see the sketch below).
  • Added support for reading file “label” and “notes”. These are not currently printed, but are stored in the attributes if you need to access them.
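
For example, a sketch (the file path and column name are hypothetical; formats are stored in attributes such as format.sas, format.spss, and format.stata):

df <- read_sas("path/to/file.sas7bdat")
attr(df$income, "format.sas")  # e.g. "DOLLAR8."
df <- zap_formats(df)          # drop the format attributes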