I’m pleased to announce the release of the dbplyr package, which now contains all dplyr code related to connecting to databases. This shouldn’t affect you much as a user, but it makes dplyr simpler, and makes it easier to release improvements just for database-related code.
You can install the latest version of dbplyr with:
install.packages("dbplyr")
DBI and dplyr alignment
The biggest change in this release is that dplyr/dbplyr works much more directly with DBI database connections. This makes it much easier to switch between low-level queries written in SQL, and high-level data manipulation functions written with dplyr verbs.
To connect to a database, first use DBI::dbConnect() to create a database connection. For example, the following code connects to a temporary, in-memory SQLite database, then uses DBI to copy over some data.
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "iris", iris)
#> [1] TRUE
DBI::dbWriteTable(con, "mtcars", mtcars)
#> [1] TRUE
With this connection in hand, you can execute hand-written SQL queries:
DBI::dbGetQuery(con, "SELECT count() FROM iris")
#> count()
#> 1 150
Or you can let dplyr generate the SQL for you:
iris2 <- tbl(con, "iris")
species_mean <- iris2 %>%
group_by(Species) %>%
summarise_all(mean)
species_mean %>% show_query()
#> <SQL>
#> SELECT `Species`, AVG(`Sepal.Length`) AS `Sepal.Length`, AVG(`Sepal.Width`) AS `Sepal.Width`, AVG(`Petal.Length`) AS `Petal.Length`, AVG(`Petal.Width`) AS `Petal.Width`
#> FROM `iris`
#> GROUP BY `Species`
species_mean
#> # Source: lazy query [?? x 5]
#> # Database: sqlite 3.11.1 [:memory:]
#> Species Sepal.Length Sepal.Width Petal.Length Petal.Width
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 setosa 5.006 3.428 1.462 0.246
#> 2 versicolor 5.936 2.770 4.260 1.326
#> 3 virginica 6.588 2.974 5.552 2.026
This alignment is made possible thanks to the hard work of Kirill Müller, who has been working to make DBI backends more consistent, comprehensive, and easier to use. This work has been funded by the R Consortium and will continue this year with improvements to backends for the two major open source databases, MySQL/MariaDB and PostgreSQL.
(You can continue to use the old-style src_mysql(), src_postgres(), and src_sqlite() functions, which still live in dplyr, but I recommend that you switch to the new style for new code.)
SQL translation
We’ve also worked to improve the translation of R code to SQL. Thanks to @hhoeflin, dbplyr now has a basic SQL optimiser that considerably reduces the number of subqueries needed in many expressions. For example, the following code used to generate three subqueries, but now generates idiomatic SQL:
con %>%
tbl("mtcars") %>%
filter(cyl > 2) %>%
select(mpg:hp) %>%
head(10) %>%
show_query()
#> <SQL>
#> SELECT `mpg` AS `mpg`, `cyl` AS `cyl`, `disp` AS `disp`, `hp` AS `hp`
#> FROM `mtcars`
#> WHERE (`cyl` > 2.0)
#> LIMIT 10
At a lower level, dplyr now:
- Can translate case_when():
library(dbplyr)
translate_sql(case_when(x > 1 ~ "big", y < 2 ~ "small"), con = con)
#> <SQL> CASE
#> WHEN (`x` > 1.0) THEN ('big')
#> WHEN (`y` < 2.0) THEN ('small')
#> END
- Has better support for type coercions:
translate_sql(as.character(cyl), con = con)
#> <SQL> CAST(`cyl` AS TEXT)
translate_sql(as.integer(cyl), con = con)
#> <SQL> CAST(`cyl` AS INTEGER)
translate_sql(as.double(cyl), con = con)
#> <SQL> CAST(`cyl` AS NUMERIC)
- Can more reliably translate %in%:
translate_sql(x %in% 1:5, con = con)
#> <SQL> `x` IN (1, 2, 3, 4, 5)
translate_sql(x %in% 1L, con = con)
#> <SQL> `x` IN (1)
translate_sql(x %in% c(1L), con = con)
#> <SQL> `x` IN (1)
You can now use in_schema() to refer to tables in a schema: in_schema("my_schema_name", "my_table_name"). You can use the result of this function anywhere you could previously use a table name.
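For example, assuming con is a live DBI connection to a database that actually has this schema and table (the names below are just placeholders), a minimal sketch looks like:
my_tbl <- tbl(con, in_schema("my_schema_name", "my_table_name"))
my_tbl %>% head(5)  # dplyr generates SQL against the schema-qualified table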
We’ve also included better translations for Oracle, MS SQL Server, Hive and Impala. We’re working to add support for more databases over time, but adding support on your own is surprisingly easy. Submit an issue to dplyr and we’ll help you get started.
These are just the highlights: you can see the full set of improvements and bug fixes in the release notes.
Contributors
As with all R packages, this is truly a community effort. A big thanks goes to all those who contributed code or documentation to this release: Austen Head, Edgar Ruiz, Greg Freedman Ellis, Hannes Mühleisen, Ian Cook, Karl Dunkle Werner, Michael Sumner, Mine Cetinkaya-Rundel, @shabbybanks and Sergio Oller.
Vision
Since you’ve read this far, I also wanted to touch on RStudio’s vision for databases. Many analysts have most of their data in databases, and making it as easy as possible to get data out of the database and into R makes a huge difference. Thanks to the community, R already has strong tools for talking to the popular open source databases. But support for connecting to enterprise databases and solving enterprise challenges has lagged somewhat. At RStudio we are actively working to solve these problems.
As well as dbplyr and DBI, we are working on many other pain points in the database ecosystem. You’ll hear much more about these packages in the future, but I wanted to touch on the highlights so you can see where we are heading. These pieces are not yet as integrated as they should be, but they are valuable by themselves, and we will continue to work to make a seamless database experience, that is as good as (or better than!) any other environment.
- The odbc package provides a DBI-compliant backend for any database with an ODBC driver. Compared to the existing RODBC package, odbc is faster (~3x for reading, ~2x for writing), translates date/time data types, and is under active development. RStudio is also planning on providing best-of-breed ODBC drivers for the most important enterprise databases to our Pro customers. If you’ve felt the pain of connecting to your enterprise database and would like to learn more, please schedule a meeting with our sales team.
- You should never record database credentials in your R scripts, so we are working on safer ways to store them that don’t add a lot of extra hassle. One piece of the puzzle is the keyring package, which allows you to securely store information in your system keychain, and only decrypt it when needed. Another piece of the puzzle is the config package, which makes it easy to parameterise your database connection credentials so that you can connect to your testing database when experimenting locally, and your production database when you deploy your code. (There’s a rough sketch of how these pieces might fit together after this list.)
- Connecting to databases from Shiny can be challenging because you don’t want a fresh connection for every user action (because that’s slow), and you don’t want one connection per app (because that’s unreliable). The pool package allows you to manage a shared pool of connections for your app, giving you both speed and reliability.
- We’re also working to make sure all of these pieces are easily used from the IDE and inside R Markdown. One neat feature that you might not have heard about is support for SQL chunks in R Markdown.
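As a rough sketch of how the credential pieces above might fit together (the service name, DSN, and config fields here are hypothetical, not something these packages define for you):
# store the password once in the system keychain (prompts interactively)
keyring::key_set("my-database")

# later, build a connection from a config.yml entry plus the stored password
con <- DBI::dbConnect(
  odbc::odbc(),
  dsn = config::get("dsn"),              # e.g. a test DSN locally, production when deployed
  uid = config::get("uid"),
  pwd = keyring::key_get("my-database")  # decrypted only when needed
)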
If any of these pieces sound interesting, please stay tuned to the blog for more upcoming announcements. Please also check out our new database website: https://db.rstudio.com. Over time, this website will expand to document all database best practices, so you can find everything you need in one place.
I’m pleased to announce that dplyr 0.7.0 is now on CRAN! (This was dplyr 0.6.0 previously; more on that below.) dplyr provides a “grammar” of data transformation, making it easy and elegant to solve the most common data manipulation challenges. dplyr supports multiple backends: as well as in-memory data frames, you can also use it with remote SQL databases. If you haven’t heard of dplyr before, the best place to start is the Data transformation chapter in R for Data Science.
You can install the latest version of dplyr with:
install.packages("dplyr")
Features
dplyr 0.7.0 is a major release including over 100 improvements and bug fixes, as described in the release notes. In this blog post, I want to discuss one big change and a handful of smaller updates. This version of dplyr also saw a major revamp of database connections. That’s a big topic, so it’ll get its own blog post next week.
Tidy evaluation
The biggest change is a new system for programming with dplyr, called tidy evaluation, or tidy eval for short. Tidy eval is a system for capturing expressions and later evaluating them in the correct context. It is important because it allows you to interpolate values in contexts where dplyr usually works with expressions:
my_var <- quo(homeworld)
starwars %>%
group_by(!!my_var) %>%
summarise_at(vars(height:mass), mean, na.rm = TRUE)
#> # A tibble: 49 x 3
#> homeworld height mass
#> <chr> <dbl> <dbl>
#> 1 Alderaan 176.3333 64.0
#> 2 Aleen Minor 79.0000 15.0
#> 3 Bespin 175.0000 79.0
#> 4 Bestine IV 180.0000 110.0
#> 5 Cato Neimoidia 191.0000 90.0
#> 6 Cerea 198.0000 82.0
#> 7 Champala 196.0000 NaN
#> 8 Chandrila 150.0000 NaN
#> 9 Concord Dawn 183.0000 79.0
#> 10 Corellia 175.0000 78.5
#> # ... with 39 more rows
This makes it possible to write your own functions that work like dplyr functions, reducing the amount of copy-and-paste in your code:
starwars_mean <- function(my_var) {
my_var <- enquo(my_var)
starwars %>%
group_by(!!my_var) %>%
summarise_at(vars(height:mass), mean, na.rm = TRUE)
}
starwars_mean(homeworld)
You can also use the new .data pronoun to refer to variables with strings:
my_var <- "homeworld"
starwars %>%
group_by(.data[[my_var]]) %>%
summarise_at(vars(height:mass), mean, na.rm = TRUE)
This is useful when you’re writing packages that use dplyr code because it avoids an annoying note from R CMD check.
To learn more about how tidy eval helps solve data analysis challenges, please read the new programming with dplyr vignette. Tidy evaluation is implemented in the rlang package, which also provides a vignette on the theoretical underpinnings. Tidy eval is a rich system and takes a while to get your head around, but we are confident that learning it will pay off, especially as it rolls out to other packages in the tidyverse (tidyr and ggplot2 are next on the todo list).
The introduction of tidy evaluation means that the standard evaluation (underscored) version of each main verb (filter_(), select_(), etc.) is no longer needed, and so these functions have been deprecated (but remain around for backward compatibility).
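For example, here’s a rough sketch of how a deprecated underscored call maps onto the tidy eval style (the filter condition is arbitrary):
# old standard-evaluation style (still works, but deprecated)
starwars %>% filter_(~ species == "Droid")

# tidy eval equivalent
cond <- quo(species == "Droid")
starwars %>% filter(!!cond)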
Character encoding
We have done a lot of work to ensure that dplyr works with encodings other than Latin1 on Windows. This is most likely to affect you if you work with data that contains Chinese, Japanese, or Korean (CJK) characters. dplyr should now just work with such data. Please let us know if you have problems!
New datasets
dplyr has some new datasets that will help write more interesting examples:
- starwars, shown above, contains information about characters from the Star Wars movies, sourced from the Star Wars API. It contains a number of list-columns.
starwars
#> # A tibble: 87 x 13
#> name height mass hair_color skin_color eye_color
#>
#> 1 Luke Skywalker 172 77 blond fair blue
#> 2 C-3PO 167 75 gold yellow
#> 3 R2-D2 96 32 white, blue red
#> 4 Darth Vader 202 136 none white yellow
#> 5 Leia Organa 150 49 brown light brown
#> 6 Owen Lars 178 120 brown, grey light blue
#> 7 Beru Whitesun lars 165 75 brown light blue
#> 8 R5-D4 97 32 white, red red
#> 9 Biggs Darklighter 183 84 black light brown
#> 10 Obi-Wan Kenobi 182 77 auburn, white fair blue-gray
#> # ... with 77 more rows, and 7 more variables: birth_year ,
#> # gender , homeworld , species , films ,
#> # vehicles , starships
- storms has the trajectories of ~200 tropical storms. It contains a strong grouping structure.
storms
#> # A tibble: 10,010 x 13
#> name year month day hour lat long status category
#>
#> 1 Amy 1975 6 27 0 27.5 -79.0 tropical depression -1
#> 2 Amy 1975 6 27 6 28.5 -79.0 tropical depression -1
#> 3 Amy 1975 6 27 12 29.5 -79.0 tropical depression -1
#> 4 Amy 1975 6 27 18 30.5 -79.0 tropical depression -1
#> 5 Amy 1975 6 28 0 31.5 -78.8 tropical depression -1
#> 6 Amy 1975 6 28 6 32.4 -78.7 tropical depression -1
#> 7 Amy 1975 6 28 12 33.3 -78.0 tropical depression -1
#> 8 Amy 1975 6 28 18 34.0 -77.0 tropical depression -1
#> 9 Amy 1975 6 29 0 34.4 -75.8 tropical storm 0
#> 10 Amy 1975 6 29 6 34.0 -74.8 tropical storm 0
#> # ... with 10,000 more rows, and 4 more variables: wind ,
#> # pressure , ts_diameter , hu_diameter
- band_members, band_instruments and band_instruments2 contain a tiny amount of data about bands. They’re designed to be very simple so you can illustrate how joins work without getting distracted by the details of the data.
band_members
#> # A tibble: 3 x 2
#> name band
#>
#> 1 Mick Stones
#> 2 John Beatles
#> 3 Paul Beatles
band_instruments
#> # A tibble: 3 x 2
#> name plays
#>
#> 1 John guitar
#> 2 Paul bass
#> 3 Keith guitar
New and improved verbs
- The pull() generic allows you to extract a single column either by name or position. It’s similar to select() but returns a vector, rather than a smaller tibble.
mtcars %>% pull(-1) %>% str()
#> num [1:32] 4 4 1 1 2 1 4 2 2 4 ...
mtcars %>% pull(cyl) %>% str()
#> num [1:32] 6 6 4 6 8 6 8 4 4 6 ...
Thanks to Paul Poncet for the idea!
- arrange() for grouped data frames gains a .by_group argument so you can choose to sort by groups if you want to (defaults to FALSE). There’s a small example after this list.
- All single table verbs now have scoped variants suffixed with _if(), _at() and _all(). Use these if you want to do something to every variable (_all), variables selected by their names (_at), or variables that satisfy some predicate (_if).
iris %>% summarise_if(is.numeric, mean)
starwars %>% select_if(Negate(is.list))
storms %>% group_by_at(vars(month:hour))
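As a small illustration of the new .by_group argument mentioned above (using mtcars as a stand-in dataset):
mtcars %>%
  group_by(cyl) %>%
  arrange(desc(mpg), .by_group = TRUE)  # sorts by cyl first, then by descending mpg within each group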
Other important changes
- Local join functions can now control how missing values are matched. The default value is na_matches = "na", which treats two missing values as equal. To prevent missing values from matching, use na_matches = "never". You can change the default behaviour by calling pkgconfig::set_config("dplyr::na_matches", "never").
- bind_rows() and combine() are more strict when coercing. Logical values are no longer coerced to integer and numeric. Date, POSIXct and other integer- or double-based classes are no longer coerced to integer or double, to avoid dropping important metadata. We plan to continue improving this interface in the future.
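Two small sketches of the join and coercion changes above (the data frames are made up for illustration):
# na_matches: control whether NA matches NA in joins
df1 <- tibble(x = c(1, NA))
df2 <- tibble(x = c(1, NA), y = c("first", "second"))
df1 %>% left_join(df2, by = "x")                        # NA matches NA, so row 2 gets y = "second"
df1 %>% left_join(df2, by = "x", na_matches = "never")  # NA never matches, so row 2 gets y = NA

# stricter coercion: Date columns keep their class in bind_rows()
df3 <- tibble(when = as.Date("2017-01-01"))
df4 <- tibble(when = as.Date("2017-06-01"))
bind_rows(df3, df4)$when  # still a Date vector, not silently coerced to double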
Breaking changes
From time-to-time I discover that I made a mistake in an older version of dplyr and developed what is now a clearly suboptimal API. If the problem isn’t too big, I try to just leave it – the cost of making small improvements is not worth it when compared to the cost of breaking existing code. However, there are bigger improvements where I believe the short-term pain of breaking code is worth the long-term payoff of a better API.
Regardless, it’s still frustrating when an update to dplyr breaks your code. To minimise this pain, I plan to do two things going forward:
- Adopt an odd-even release cycle so that API breaking changes only occur in odd numbered releases. Even numbered releases will only contain bug fixes and new features. This is why I’ve skipped dplyr 0.6.0 and gone directly to dplyr 0.7.0.
- Invest time in developing better tools for isolating packages across projects, so that you can choose when to upgrade a package on a project-by-project basis and, if something goes wrong, easily roll back to a version that worked. Look for news about this later in the year.
Contributors
dplyr is truly a community effort. Apart from the dplyr team (myself, Kirill Müller, and Lionel Henry), this release wouldn’t have been possible without patches from Christophe Dervieux, Dean Attali, Ian Cook, Ian Lyttle, Jake Russ, Jay Hesselberth, Jennifer (Jenny) Bryan, @lindbrook, Mauro Lepore, Nicolas Coutin, Daniel, Tony Fischetti, Hiroaki Yutani and Sergio Oller. Thank you all for your contributions!
I’m pleased to announce that readxl 1.0.0 is available on CRAN. readxl makes it easy to bring tabular data out of Excel and into R, for modern .xlsx files and the legacy .xls format. readxl does not have any tricky external dependencies, such as Java or Perl, and is easy to install and use on Mac, Windows, and Linux.
You can install it with:
install.packages("readxl")
As well as fixing many bugs, this release:
- Allows you to target specific cells for reading, in a variety of ways
- Adds two new column types: "logical" and "list", for data of disparate type
- Is more resilient to the wondrous diversity in spreadsheets, e.g., those written by 3rd party tools
You can see a full list of changes in the release notes. This is the first release maintained by Jenny Bryan.
Specifying the data rectangle
In an ideal world, data would live in a neat rectangle in the upper left corner of a spreadsheet. But spreadsheets often serve multiple purposes for users with different priorities. It is common to encounter several rows of notes above or below the data, for example. The new range argument provides a flexible interface for describing the data rectangle, including Excel-style ranges and row- or column-only ranges.
library(readxl)
read_excel(
readxl_example("deaths.xlsx"),
range = "arts!A5:F15"
)
#> # A tibble: 10 × 6
#> Name Profession Age `Has kids` `Date of birth`
#> <chr> <chr> <dbl> <lgl> <dttm>
#> 1 David Bowie musician 69 TRUE 1947-01-08
#> 2 Carrie Fisher actor 60 TRUE 1956-10-21
#> 3 Chuck Berry musician 90 TRUE 1926-10-18
#> 4 Bill Paxton actor 61 TRUE 1955-05-17
#> # ... with 6 more rows, and 1 more variables: `Date of death`
read_excel(
readxl_example("deaths.xlsx"),
sheet = "other",
range = cell_rows(5:15)
)
#> # A tibble: 10 × 6
#> Name Profession Age `Has kids` `Date of birth`
#> <chr> <chr> <dbl> <lgl> <dttm>
#> 1 Vera Rubin scientist 88 TRUE 1928-07-23
#> 2 Mohamed Ali athlete 74 TRUE 1942-01-17
#> 3 Morley Safer journalist 84 TRUE 1931-11-08
#> 4 Fidel Castro politician 90 TRUE 1926-08-13
#> # ... with 6 more rows, and 1 more variables: `Date of death`
There is also a new argument n_max that limits the number of data rows read from the sheet. It is an example of readxl’s evolution towards a readr-like interface. The Sheet Geometry vignette goes over all the options.
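For example, a quick sketch using one of the bundled example workbooks:
# read at most three data rows from the first sheet
read_excel(readxl_example("datasets.xlsx"), n_max = 3)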
Column typing
The new ability to target cells for reading means that readxl’s automatic column typing will “just work” for most sheets, most of the time. Above, the Has kids column is automatically detected as logical, which is a new column type for readxl.
You can still specify column types explicitly via col_types, which gets a couple of new features. If you provide exactly one type, it is recycled to the necessary length. The new type "guess" can be mixed with explicit types to specify some types, while leaving others to be guessed.
read_excel(
readxl_example("deaths.xlsx"),
range = "arts!A5:C15",
col_types = c("guess", "skip", "numeric")
)
#> # A tibble: 10 × 2
#> Name Age
#> <chr> <dbl>
#> 1 David Bowie 69
#> 2 Carrie Fisher 60
#> 3 Chuck Berry 90
#> 4 Bill Paxton 61
#> # ... with 6 more rows
The new argument guess_max limits the rows used for type guessing. Leading and trailing whitespace is trimmed when the new trim_ws argument is TRUE, which is the default. Finally, thanks to Jonathan Marshall, multiple na values are accepted. The Cell and Column Types vignette has more detail.
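A sketch of how these arguments might be combined (the file name and na codes below are hypothetical):
read_excel(
  "messy-survey.xlsx",        # hypothetical file
  guess_max = 500,            # only use the first 500 rows for type guessing
  trim_ws = TRUE,             # the default: strip leading/trailing whitespace
  na = c("", "NA", "n/a")     # treat all of these strings as missing
)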
"list"
columns
Thanks to Greg Freedman Ellis we now have a "list" column type. This is useful if you want to bring truly disparate data into R without the coercion required by atomic vector types.
(df <- read_excel(
readxl_example("clippy.xlsx"),
col_types = c("text", "list")
))
#> # A tibble: 4 × 2
#> name value
#> <chr> <list>
#> 1 Name <chr [1]>
#> 2 Species <chr [1]>
#> 3 Approx date of death <dttm [1]>
#> 4 Weight in grams <dbl [1]>
tibble::deframe(df)
#> $Name
#> [1] "Clippy"
#>
#> $Species
#> [1] "paperclip"
#>
#> $`Approx date of death`
#> [1] "2007-01-01 UTC"
#>
#> $`Weight in grams`
#> [1] 0.9
Everything else
To learn more, read the vignettes and articles or release notes. Highlights include:
- General rationalization of sheet geometry, including detection and treatment of empty rows and columns.
- Improved behavior and messaging around coercion and mismatched cell and column types.
- Improved handling of datetimes with respect to 3rd party software, rounding, and the Lotus 1-2-3 leap year bug.
- read_xls() and read_xlsx() are now exposed, so that files without an .xls or .xlsx extension can be read. Thanks Jirka Lewandowski!
- readxl Workflows showcases patterns that reduce tedium and increase reproducibility when raw data arrives in a spreadsheet.
rstudio::conf 2017, the conference on all things R and RStudio, is only 90 days away. Now is the time to claim your spot or grab one of the few remaining seats at Training Days – including the new Tidyverse workshop.
Whether you’re already registered or still working on it, we’re delighted today to announce the full conference schedule, so that you can plan your days in Florida.
rstudio::conf 2017 takes place January 12-14 at the Gaylord Resorts in Kissimmee, Florida. There are over 30 talks and tutorials to choose from that are sure to accelerate your productivity in R and RStudio. In addition to the highlights below, topics include the latest news on R notebooks, sparklyr, profiling, the tidyverse, shiny, r markdown, html widgets, data access and the new enterprise-scale publishing capabilities of RStudio Connect.
Schedule Highlights
Keynotes
– Hadley Wickham, Chief Scientist, RStudio: Data Science in the Tidyverse
– Andrew Flowers, Economics Writer, FiveThirtyEight: Finding and Telling Stories with R
– J.J. Allaire, Software Engineer, CEO & Founder: RStudio Past, Present and Future
Tutorials
– Winston Chang, Software Engineer, RStudio: Building Dashboards with Shiny
– Charlotte Wickham, Oregon State University: Happy R Users Purrr
– Yihui Xie, Software Engineer, RStudio: Advanced R Markdown
– Jenny Bryan, University of British Columbia: Happy Git and GitHub for the UseR
Featured Speakers
– Max Kuhn, Senior Director Non-Clinical Statistics, Pfizer
– Dirk Eddelbuettel, Ketchum Trading: Extending R with C++: A Brief Introduction to Rcpp
– Hilary Parker, Stitch Fix: “Opinionated Analysis Development”
– Bryan Lewis, Paradigm4: “Fun with htmlwidgets”
– Ryan Hafen, Hafen Consulting: “Interactive plotting with rbokeh and crosstalk”
– Julia Silge, Datassist: “Text mining, the tidy way”
– Bob Rudis, Rapid7: “Writing readable code with pipes”
Featured Talk
– Joseph Rickert, R Ambassador, RStudio: R’s Role in Data Science
Be sure to visit https://www.rstudio.com/conference/ for the full schedule and latest updates and don’t forget to download the RStudio conference app to help you plan your days in detail.
Special Reminder: When you register, make sure you purchase your ticket for Friday evening at Universal’s Wizarding World of Harry Potter. The park is reserved exclusively for rstudio::conf attendees. It’s an extraordinary experience we’re sure you’ll enjoy!
We appreciate our sponsors and exhibitors!
I’m pleased to announce the release of haven. Haven is designed to facilitate the transfer of data between R and SAS, SPSS, and Stata. It makes it easy to read SAS, SPSS, and Stata file formats into R data frames, and to save your R data frames into SAS, SPSS, and Stata formats if you need to collaborate with others using closed-source statistical software. Install haven by running:
install.packages("haven")
haven 1.0.0 is a major release, and indicates that haven is now largely feature complete and has been tested on many real world datasets. There are four major changes in this version of haven:
- Improvements to the underlying ReadStat library
- Better handling of “special” missing values
- Improved date/time support
- Support for other file metadata.
There were also a whole bunch of other minor improvements and bug fixes: you can see the complete list in the release notes.
ReadStat
Haven builds on top of the ReadStat C library by Evan Miller. This version of haven includes many improvements thanks to Evan’s hard work on ReadStat:
- Can read binary/Ross compressed SAS files.
- Support for reading and writing Stata 14 data files.
- New
write_sas()
allows you to write data frames out tosas7bdat
files. This is still somewhat experimental. read_por()
now actually works.- Many other bug fixes and minor improvements.
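For example, a minimal sketch of the new (and still experimental) write_sas():
library(haven)
write_sas(mtcars, "mtcars.sas7bdat")     # write a data frame to a SAS file
mtcars2 <- read_sas("mtcars.sas7bdat")   # and read it back in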
Missing values
haven 1.0.0 includes comprehensive support for the “special” types of missing values found in SAS, SPSS, and Stata. All three tools provide a global “system missing value”, displayed as .
. This is roughly equivalent to R’s NA
, although neither Stata nor SAS propagate missingness in numeric comparisons (SAS treats the missing value as the smallest possible number and Stata treats it as the largest possible number).
Each tool also provides a mechanism for recording multiple types of missingness:
- Stata has “extended” missing values,
.A
through.Z
. - SAS has “special” missing values,
.A
through.Z
plus._
. - SPSS has per-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
Stata and SAS only support tagged missing values for numeric columns. SPSS supports up to three distinct values for character columns. Generally, operations involving a user-missing type return a system missing value.
Haven models these missing values in two different ways:
- For SAS and Stata, haven provides
tagged_na()
which extend R’s regularNA
to add a single character label. - For SPSS, haven provides
labelled_spss()
that also models user defined values and ranges.
Use zap_missing()
if you just want to convert to R’s regular NA
s.
You can get more details in the semantics vignette.
Date/times
Support for date/times has substantially improved:
read_dta()
now recognises “%d” and custom date types.read_sav()
now correctly recognises EDATE and JDATE formats as dates. Variables with format DATE, ADATE, EDATE, JDATE or SDATE are imported asDate
variables instead ofPOSIXct
.write_dta()
andwrite_sav()
support writing date/times.- Support for
hms()
has been moved into the hms package. Time varibles now have classc("hms", "difftime")
and aunits
attribute with value “secs”.
Other metadata
Haven is slowly adding support for other types of metadata:
- Variable formats can be read and written. Similarly to variable labels, formats are stored as an attribute on the vector. Use zap_formats() if you want to remove these attributes.
- Added support for reading file “label” and “notes”. These are not currently printed, but are stored in the attributes if you need to access them.
I’m planning to release ggplot2 2.2.0 in early November. In preparation, I’d like to announce that a release candidate is now available: version 2.1.0.9001. Please try it out, and file an issue on GitHub if you discover any problems. I hope we can find and fix any major issues before the official release.
Install the pre-release version with:
# install.packages("devtools")
devtools::install_github("hadley/ggplot2")
If you discover a major bug that breaks your plots, please file a minimal reprex, and then roll back to the released version with:
install.packages("ggplot2")
ggplot2 2.2.0 will be a relatively major release including:
- Subtitles and captions.
- A large rewrite of the facetting system.
- Improved theme options.
- Better stacking
- Numerous bug fixes and minor improvements.
The majority of this work was carried out by Thomas Pedersen, who I was lucky to have as my “ggplot2 intern” this summer. Make sure to check out other visualisation packages: ggraph, ggforce, and tweenr.
Subtitles and captions
Thanks to Bob Rudis, you can now add subtitles and captions:
ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(color = class)) +
geom_smooth(se = FALSE, method = "loess") +
labs(
title = "Fuel efficiency generally decreases with engine size",
subtitle = "Two seaters (sports cars) are an exception because of their light weight",
caption = "Data from fueleconomy.gov"
)

These are controlled by the theme settings plot.subtitle and plot.caption.
The plot title is now aligned to the left by default. To return to the previous centering, use theme(plot.title = element_text(hjust = 0.5)).
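For example, here’s a sketch of styling the subtitle and caption (the specific values are arbitrary):
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  labs(
    title = "Fuel efficiency generally decreases with engine size",
    subtitle = "Two seaters (sports cars) are an exception",
    caption = "Data from fueleconomy.gov"
  ) +
  theme(
    plot.subtitle = element_text(face = "italic"),
    plot.caption = element_text(hjust = 0)  # left-align the caption
  )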
Facets
The facet and layout implementation has been moved to ggproto and received a large rewrite and refactoring. This will allow others to create their own facetting systems, as described in the Extending ggplot2 vignette. Along with the rewrite, a number of features and improvements have been added, most notably:
- Functions in facetting formulas, thanks to Dan Ruderman.
ggplot(diamonds, aes(carat, price)) +
  geom_hex(bins = 20) +
  facet_wrap(~cut_number(depth, 6))
- Axes were dropped when the panels in facet_wrap() did not completely fill the rectangle. Now, an axis is drawn underneath the hanging panels:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  facet_wrap(~class)
- It is now possible to set the position of the axes through the position argument in the scale constructor:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_x_continuous(position = "top") +
  scale_y_continuous(position = "right")
- You can display a secondary axis that is a one-to-one transformation of the primary axis with the sec.axis argument:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_y_continuous(
    "mpg (US)",
    sec.axis = sec_axis(~ . * 1.20, name = "mpg (UK)")
  )
- Strips can be placed on any side, and the placement with respect to axes can be controlled with the strip.placement theme option.
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  facet_wrap(~ drv, strip.position = "bottom") +
  theme(
    strip.placement = "outside",
    strip.background = element_blank(),
    strip.text = element_text(face = "bold")
  ) +
  xlab(NULL)
Theming
- Blank elements can now be overridden again so you get the expected behavior when setting e.g. axis.line.x.
- element_line() gets an arrow argument that lets you put arrows on axes.
arrow <- arrow(length = unit(0.4, "cm"), type = "closed")
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  theme_minimal() +
  theme(
    axis.line = element_line(arrow = arrow)
  )
- Control of legend styling has been improved. The whole legend area can be aligned according to the plot area and a box can be drawn around all legends:
ggplot(mpg, aes(displ, hwy, shape = drv, colour = fl)) +
  geom_point() +
  theme(
    legend.justification = "top",
    legend.box.margin = margin(3, 3, 3, 3, "mm"),
    legend.box.background = element_rect(colour = "grey50")
  )
- panel.margin and legend.margin have been renamed to panel.spacing and legend.spacing respectively, as this better indicates their roles. A new legend.margin has been added that actually controls the margin around each legend.
- When computing the height of titles, ggplot2 now includes the height of the descenders (i.e. the bits of g and y that hang underneath). This improves the margins around titles, particularly the y axis label. I have also very slightly increased the inner margins of axis titles, and removed the outer margins.
- The default themes have been tweaked by Jean-Olivier Irisson, making them better match theme_grey().
- Lastly, the theme() function now has named arguments so autocomplete and documentation suggestions are vastly improved.
Stacking bars
position_stack() and position_fill() now stack values in the reverse order of the grouping, which makes the default stack order match the legend.
avg_price <- diamonds %>%
group_by(cut, color) %>%
summarise(price = mean(price)) %>%
ungroup() %>%
mutate(price_rel = price - mean(price))
ggplot(avg_price) +
geom_col(aes(x = cut, y = price, fill = color))
(Note also the new geom_col(), which is short-hand for geom_bar(stat = "identity"), contributed by Bob Rudis.)
Additionally, you can now stack negative values:
ggplot(avg_price) +
geom_col(aes(x = cut, y = price_rel, fill = color))
The overall ordering cannot necessarily be matched in the presence of negative values, but the ordering on either side of the x-axis will match.
If you want to stack in the opposite order, try forcats::fct_rev():
ggplot(avg_price) +
geom_col(aes(x = cut, y = price, fill = fct_rev(color)))

The tidyverse is a set of packages that work in harmony because they share common data representations and API design. The tidyverse package is designed to make it easy to install and load core packages from the tidyverse in a single command.
The best place to learn about all the packages in the tidyverse and how they fit together is R for Data Science. Expect to hear more about the tidyverse in the coming months as I work on improved package websites, making citation easier, and providing a common home for discussions about data analysis with the tidyverse.
Installation
You can install tidyverse with
install.packages("tidyverse")
This will install the core tidyverse packages that you are likely to use in almost every analysis:
- ggplot2, for data visualisation.
- dplyr, for data manipulation.
- tidyr, for data tidying.
- readr, for data import.
- purrr, for functional programming.
- tibble, for tibbles, a modern re-imagining of data frames.
It also installs a selection of other tidyverse packages that you’re likely to use frequently, but probably not in every analysis. This includes packages for data manipulation:
Data import:
- DBI, for databases.
- haven, for SPSS, SAS and Stata files.
- httr, for web APIs.
- jsonlite for JSON.
- readxl, for .xls and .xlsx files.
- rvest, for web scraping.
- xml2, for XML.
And modelling:
These packages will be installed along with tidyverse, but you’ll load them explicitly with library().
Usage
library(tidyverse) will load the core tidyverse packages: ggplot2, tibble, tidyr, readr, purrr, and dplyr. You also get a condensed summary of conflicts with other packages you have loaded:
library(tidyverse)
#> Loading tidyverse: ggplot2
#> Loading tidyverse: tibble
#> Loading tidyverse: tidyr
#> Loading tidyverse: readr
#> Loading tidyverse: purrr
#> Loading tidyverse: dplyr
#> Conflicts with tidy packages ---------------------------------------
#> filter(): dplyr, stats
#> lag(): dplyr, stats
You can see conflicts created later with tidyverse_conflicts():
library(MASS)
#>
#> Attaching package: 'MASS'
#> The following object is masked from 'package:dplyr':
#>
#> select
tidyverse_conflicts()
#> Conflicts with tidy packages --------------------------------------
#> filter(): dplyr, stats
#> lag(): dplyr, stats
#> select(): dplyr, MASS
And you can check that all tidyverse packages are up-to-date with tidyverse_update():
tidyverse_update()
#> The following packages are out of date:
#> * broom (0.4.0 -> 0.4.1)
#> * DBI (0.4.1 -> 0.5)
#> * Rcpp (0.12.6 -> 0.12.7)
#> Update now?
#>
#> 1: Yes
#> 2: No
I am pleased to announce lubridate 1.6.0. Lubridate is designed to make working with dates and times as pleasant as possible, and is maintained by Vitalie Spinu. You can install the latest version with:
install.packages("lubridate")
This release includes a range of bug fixes and minor improvements. Some highlights from this release include:
- period() and duration() constructors now accept character strings and allow a very flexible specification of timespans:
period("3H 2M 1S")
#> [1] "3H 2M 1S"
duration("3 hours, 2 mins, 1 secs")
#> [1] "10921s (~3.03 hours)"
# Missing numerals default to 1.
# Repeated units are summed
period("hour minute minute")
#> [1] "1H 2M 0S"
Period and duration parsing allows for arbitrary abbreviations of time units as long as the specification is unambiguous. For single letter specs, strptime() rules are followed, so m stands for months and M for minutes.
These same rules allow you to compare strings and durations/periods:
"2mins 1 sec" > period("2mins")
#> [1] TRUE
- Date time rounding (with round_date(), floor_date() and ceiling_date()) now supports unit multipliers, like “3 days” or “2 months”:
ceiling_date(ymd_hms("2016-09-12 17:10:00"), unit = "5 minutes")
#> [1] "2016-09-12 17:10:00 UTC"
- The behavior of ceiling_date for Date objects is now more intuitive. In short, dates are now interpreted as time intervals that are physically part of longer unit intervals:
|day1| ... |day31|day1| ... |day28| ...
|    January     |   February     | ...
That means that rounding up 2000-01-01 by a month is done to the boundary between January and February, i.e. 2000-02-01:
ceiling_date(ymd("2000-01-01"), unit = "month")
#> [1] "2000-02-01"
This behavior is controlled by the change_on_boundary argument.
- It is now possible to compare POSIXct and Date objects:
ymd_hms("2000-01-01 00:00:01") > ymd("2000-01-01")
#> [1] TRUE
- C-level parsing now handles English months and AM/PM indicator regardless of your locale. This means that English date-times are now always handled by lubridate C-level parsing and you don’t need to explicitly switch the locale.
- New parsing function yq() allows you to parse a year + quarter:
yq("2016-02")
#> [1] "2016-04-01"
The new q format is available in all lubridate parsing functions.
See the release notes for the full list of changes. A big thanks goes to everyone who contributed: @arneschillert, @cderv, @ijlyttle, @jasonelaw, @jonboiser, and @krlmlr.
I’m excited to announce forcats, a new package for categorical variables, or factors. Factors have a bad rap in R because they often turn up when you don’t want them. That’s because historically, factors were more convenient than character vectors, as discussed in stringsAsFactors: An unauthorized biography by Roger Peng, and stringsAsFactors = <sigh> by Thomas Lumley.
If you use packages from the tidyverse (like tibble and readr) you don’t need to worry about getting factors when you don’t want them. But factors are a useful data structure in their own right, particularly for modelling and visualisation, because they allow you to control the order of the levels. Working with factors in base R can be a little frustrating because of a handful of missing tools. The goal of forcats is to fill in those missing pieces so you can access the power of factors with a minimum of pain.
Install forcats with:
install.packages("forcats")
forcats provides two main types of tools to change either the values or the order of the levels. I’ll call out some of the most important functions below, using the included gss_cat dataset, which contains a selection of categorical variables from the General Social Survey.
library(dplyr)
library(ggplot2)
library(forcats)
gss_cat
#> # A tibble: 21,483 × 9
#> year marital age race rincome partyid
#> <int> <fctr> <int> <fctr> <fctr> <fctr>
#> 1 2000 Never married 26 White $8000 to 9999 Ind,near rep
#> 2 2000 Divorced 48 White $8000 to 9999 Not str republican
#> 3 2000 Widowed 67 White Not applicable Independent
#> 4 2000 Never married 39 White Not applicable Ind,near rep
#> 5 2000 Divorced 25 White Not applicable Not str democrat
#> 6 2000 Married 25 White $20000 - 24999 Strong democrat
#> # ... with 2.148e+04 more rows, and 3 more variables: relig <fctr>,
#> # denom <fctr>, tvhours <int>
Change level values
You can recode specified factor levels with fct_recode():
gss_cat %>% count(partyid)
#> # A tibble: 10 × 2
#> partyid n
#> <fctr> <int>
#> 1 No answer 154
#> 2 Don't know 1
#> 3 Other party 393
#> 4 Strong republican 2314
#> 5 Not str republican 3032
#> 6 Ind,near rep 1791
#> # ... with 4 more rows
gss_cat %>%
mutate(partyid = fct_recode(partyid,
"Republican, strong" = "Strong republican",
"Republican, weak" = "Not str republican",
"Independent, near rep" = "Ind,near rep",
"Independent, near dem" = "Ind,near dem",
"Democrat, weak" = "Not str democrat",
"Democrat, strong" = "Strong democrat"
)) %>%
count(partyid)
#> # A tibble: 10 × 2
#> partyid n
#> <fctr> <int>
#> 1 No answer 154
#> 2 Don't know 1
#> 3 Other party 393
#> 4 Republican, strong 2314
#> 5 Republican, weak 3032
#> 6 Independent, near rep 1791
#> # ... with 4 more rows
Note that unmentioned levels are left as is, and the order of the levels is preserved.
fct_lump() allows you to lump the rarest (or most common) levels into a new “other” level. The default behaviour is to collapse the smallest levels into other, ensuring that it’s still the smallest level. For the religion variable, that tells us that Protestants outnumber all other religions, which is interesting, but we probably want more detail.
gss_cat %>%
mutate(relig = fct_lump(relig)) %>%
count(relig)
#> # A tibble: 2 × 2
#> relig n
#> <fctr> <int>
#> 1 Other 10637
#> 2 Protestant 10846
Alternatively you can supply a number of levels to keep, n, or a minimum proportion for inclusion, prop. If you use negative values, fct_lump() will change direction, and combine the most common values while preserving the rarest.
gss_cat %>%
mutate(relig = fct_lump(relig, n = 5)) %>%
count(relig)
#> # A tibble: 6 × 2
#> relig n
#> <fctr> <int>
#> 1 Other 913
#> 2 Christian 689
#> 3 None 3523
#> 4 Jewish 388
#> 5 Catholic 5124
#> 6 Protestant 10846
gss_cat %>%
mutate(relig = fct_lump(relig, prop = -0.10)) %>%
count(relig)
#> # A tibble: 12 × 2
#> relig n
#> <fctr> <int>
#> 1 No answer 93
#> 2 Don't know 15
#> 3 Inter-nondenominational 109
#> 4 Native american 23
#> 5 Christian 689
#> 6 Orthodox-christian 95
#> # ... with 6 more rows
Change level order
There are four simple helpers for common operations (there’s a quick sketch after this list):
- fct_relevel() is similar to stats::relevel() but allows you to move any number of levels to the front.
- fct_inorder() orders according to the first appearance of each level.
- fct_infreq() orders from most common to rarest.
- fct_rev() reverses the order of levels.
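A quick sketch of all four on a toy factor:
f <- factor(c("b", "b", "a", "c", "c", "c"))

fct_relevel(f, "c")  # move "c" to the front: levels c, a, b
fct_inorder(f)       # order by first appearance: levels b, a, c
fct_infreq(f)        # order by frequency: levels c, b, a
fct_rev(f)           # reverse the existing order: levels c, b, a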
fct_reorder() and fct_reorder2() are useful for visualisations. fct_reorder() reorders the factor levels by another variable. This is useful when you map a categorical variable to position, as shown in the following example, which shows the average number of hours spent watching television across religions.
relig <- gss_cat %>%
group_by(relig) %>%
summarise(
age = mean(age, na.rm = TRUE),
tvhours = mean(tvhours, na.rm = TRUE),
n = n()
)
ggplot(relig, aes(tvhours, relig)) + geom_point()
ggplot(relig, aes(tvhours, fct_reorder(relig, tvhours))) +
geom_point()
fct_reorder2() extends the same idea to plots where a factor is mapped to another aesthetic, like colour. The defaults are designed to make legends easier to read for line plots, as shown in the following example looking at marital status by age.
by_age <- gss_cat %>%
  filter(!is.na(age)) %>%
  group_by(age, marital) %>%
  count() %>%
  mutate(prop = n / sum(n))
ggplot(by_age, aes(age, prop)) +
  geom_line(aes(colour = marital))
ggplot(by_age, aes(age, prop)) +
  geom_line(aes(colour = fct_reorder2(marital, age, prop))) +
  labs(colour = "marital")
Learning more
You can learn more about forcats in R for data science, and on the forcats website.
Please let me know if you have more factor problems that forcats doesn’t help with!
We’re proud to announce version 1.2.0 of the tibble package. Tibbles are a modern reimagining of the data frame, keeping what time has shown to be effective, and throwing out what is not. Grab the latest version with:
install.packages("tibble")
This is mostly a maintenance release, with the following major changes:
- More options for adding individual rows and (new!) columns
- Improved function names
- Minor tweaks to the output
There are many other small improvements and bug fixes: please see the release notes for a complete list.
Thanks to Jenny Bryan for add_row() and add_column() improvements and ideas, to William Dunlap for pointing out a bug with tibble’s implementation of all.equal(), to Kevin Wright for pointing out a rare bug with glimpse(), and to all the other contributors. Use the issue tracker to submit bugs or suggest ideas; your contributions are always welcome.
Adding rows and columns
There are now more options for adding individual rows, and columns can be added in a similar way, illustrated with this small tibble:
df <- tibble(x = 1:3, y = 3:1)
df
#> # A tibble: 3 × 2
#> x y
#> <int> <int>
#> 1 1 3
#> 2 2 2
#> 3 3 1
The add_row() function allows control over where the new rows are added. In the following example, the row (4, 0) is added before the second row:
df %>%
add_row(x = 4, y = 0, .before = 2)
#> # A tibble: 4 × 2
#> x y
#> <dbl> <dbl>
#> 1 1 3
#> 2 4 0
#> 3 2 2
#> 4 3 1
Adding more than one row is now fully supported, although not recommended in general because it can be a bit hard to read.
df %>%
add_row(x = 4:5, y = 0:-1)
#> # A tibble: 5 × 2
#> x y
#> <int> <int>
#> 1 1 3
#> 2 2 2
#> 3 3 1
#> 4 4 0
#> 5 5 -1
Columns can now be added in much the same way with the new add_column() function:
df %>%
add_column(z = -1:1, w = 0)
#> # A tibble: 3 × 4
#> x y z w
#> <int> <int> <int> <dbl>
#> 1 1 3 -1 0
#> 2 2 2 0 0
#> 3 3 1 1 0
It also supports .before and .after arguments:
df %>%
add_column(z = -1:1, .after = 1)
#> # A tibble: 3 × 3
#> x z y
#> <int> <int> <int>
#> 1 1 -1 3
#> 2 2 0 2
#> 3 3 1 1
df %>%
add_column(w = 0:2, .before = "x")
#> # A tibble: 3 × 3
#> w x y
#> <int> <int> <int>
#> 1 0 1 3
#> 2 1 2 2
#> 3 2 3 1
The add_column() function will never alter your existing data: you can’t overwrite existing columns, and you can’t add new observations.
Function names
frame_data() is now tribble(), which stands for “transposed tibble”. The old name still works, but will be deprecated eventually.
tribble(
~x, ~y,
1, "a",
2, "z"
)
#> # A tibble: 2 × 2
#> x y
#> <dbl> <chr>
#> 1 1 a
#> 2 2 z
Output tweaks
We’ve tweaked the output again to use the multiply character × instead of x when printing dimensions (this still renders nicely on Windows). We surround non-syntactic column names with backticks, and dttm is now used instead of time to distinguish POSIXt and hms (or difftime) values.
The example below shows the new rendering:
tibble(`date and time` = Sys.time(), time = hms::hms(minutes = 3))
#> # A tibble: 1 × 2
#> `date and time` time
#> <dttm> <time>
#> 1 2016-08-29 16:48:57 00:03:00
Expect the printed output to continue to evolve in the next release. Stay tuned for a new function that reconstructs tribble() calls from existing data frames.