Sunday 11 February 2024

Interfaces to the outside world

Data are read in using connection interfaces. Connections can be made to files (most common) or to other more exotic things.

  • file, opens a connection to a file
  • gzfile, opens a connection to a file compressed with gzip
  • bzfile, opens a connection to a file compressed with bzip2
  • url, opens a connection to a webpage

In general, connections are powerful tools that let you navigate files or other external objects. A connection can be thought of as a translator that lets you talk to objects that are outside of R. Those outside objects could be anything from a database or a simple text file to a web service API. Connections allow R functions to talk to all these different external objects without you having to write custom code for each one.

1. File Connections

Connections to text files can be created with the file() function.

> str(file)
function (description = "", open = "", blocking = TRUE, encoding = getOption("encoding"), 
    raw = FALSE, method = getOption("url.method", "default"))  

The file() function has a number of arguments that are common to many other connection functions so it’s worth going into a little detail here.

  • description is the name of the file
  • open is a code indicating what mode the file should be opened in

The open argument allows for the following options:

  • “r” open a file in read-only mode
  • “w” open a file for writing (and initializing a new file)
  • “a” open a file for appending
  • “rb”, “wb”, “ab” reading, writing, or appending in binary mode (Windows)
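As a quick illustration of these modes, the following sketch writes a file, appends to it, and then reads it back (the file name notes.txt is just a placeholder):

> ## Open 'notes.txt' for writing (creates or truncates the file)
> con <- file("notes.txt", "w")
> writeLines("first line", con)
> close(con)
> 
> ## Re-open in append mode and add a line at the end
> con <- file("notes.txt", "a")
> writeLines("second line", con)
> close(con)
> 
> ## Re-open in read-only mode and read the contents back
> con <- file("notes.txt", "r")
> readLines(con)
[1] "first line"  "second line"
> close(con)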

In practice, we often don’t need to deal with the connection interface directly as many functions for reading and writing data just deal with it in the background.

For example, if one were to explicitly use connections to read a CSV file into R, it might look like this:

> ## Create a connection to 'foo.txt'
> con <- file("foo.txt")       
> 
> ## Open connection to 'foo.txt' in read-only mode
> open(con, "r")               
> 
> ## Read from the connection
> data <- read.csv(con)        
> 
> ## Close the connection
> close(con)                   

which is the same as

> data <- read.csv("foo.txt")

In the background, read.csv() opens a connection to the file foo.txt, reads from it, and closes the connection when it’s done.

The above example shows the basic approach to using connections. Connections must be opened, then they are read from or written to, and then they are closed.

2. Reading Lines of a Text File

Text files can be read line by line using the readLines() function. This function is useful for reading text files that may be unstructured or contain non-standard data.

> ## Open connection to gz-compressed text file
> con <- gzfile("words.gz")   
> x <- readLines(con, 10) 
> x
 [1] "1080"     "10-point" "10th"     "11-point" "12-point" "16-point"
 [7] "18-point" "1st"      "2"        "20-point"

For more structured text data like CSV files or tab-delimited files, there are other functions like read.csv() or read.table().

The above example used the gzfile() function, which creates a connection to files compressed with the gzip algorithm. This approach is useful because it allows you to read from the file without having to uncompress it first, which would be a waste of space and time.

There is a complementary function writeLines() that takes a character vector and writes each element of the vector one line at a time to a text file.
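For example (a minimal sketch; output.txt is just a placeholder file name):

> ## Write a character vector to a file, one element per line
> writeLines(c("alpha", "beta", "gamma"), "output.txt")
> 
> ## Read it back
> readLines("output.txt")
[1] "alpha" "beta"  "gamma"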

3. Reading From a URL Connection

The readLines() function can be useful for reading in lines of webpages. Since web pages are basically text files that are stored on a remote server, there is conceptually not much difference between a web page and a local text file. However, we need R to negotiate the communication between your computer and the web server. This is what the url() function can do for you, by creating a url connection to a web server.

This code might take time depending on your connection speed.

> ## Open a URL connection for reading
> con <- url("https://www.jhu.edu", "r")  
> 
> ## Read the web page
> x <- readLines(con)                      
> 
> ## Print out the first few lines
> head(x)                                  
[1] "<!doctype html>"                    ""                                  
[3] "<html class=\"no-js\" lang=\"en\">" "  <head>"                          
[5] "    <script>"                       "    dataLayer = [];"               

Reading in a simple web page is sometimes useful, particularly if data are embedded in the web page somewhere. More commonly, however, we can use URL connections to read in specific data files that are stored on web servers.

Using URL connections can be useful for producing a reproducible analysis, because the code essentially documents where the data came from and how they were obtained. This approach is preferable to opening a web browser and downloading a dataset by hand. Of course, the code you write with connections may not be executable at a later date if things on the server side are changed or reorganized.
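For instance, most of the reading functions accept a URL in place of a file name, so a remote data file can be read directly (the URL below is hypothetical):

> ## read.csv() opens and closes the URL connection for you
> dat <- read.csv("https://example.com/data/mydata.csv")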

Using the readr package

The readr package was developed by Hadley Wickham to deal with reading in large flat files quickly. The package provides replacements for functions like read.table() and read.csv(). The analogous functions in readr are read_table() and read_csv(). These functions are often much faster than their base R analogues and provide a few other nice features such as progress meters.

For the most part, you can use read_table() and read_csv() pretty much anywhere you might use read.table() and read.csv(). In addition, if there are non-fatal problems that occur while reading in the data, you will get a warning and the returned data frame will have some information about which rows/observations triggered the warning. This can be very helpful for “debugging” problems with your data before you get neck deep in data analysis.
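readr records these parsing problems alongside the data, and they can be retrieved with the problems() function (a sketch, assuming a hypothetical file messy.csv containing some malformed rows):

> ## Inspect any rows/fields that readr could not parse cleanly
> df <- read_csv("data/messy.csv")
> problems(df)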

The importance of the read_csv function is perhaps better understood from an historical perspective. R’s built in read.csv function similarly reads CSV files, but the read_csv function in readr builds on that by removing some of the quirks and “gotchas” of read.csv as well as dramatically optimizing the speed with which it can read data into R. The read_csv function also adds some nice user-oriented features like a progress meter and a compact method for specifying column types.

A typical call to read_csv will look as follows.

> library(readr)
> teams <- read_csv("data/team_standings.csv")
Rows: 32 Columns: 2
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (1): Team
dbl (1): Standing

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
> teams
# A tibble: 32 × 2
   Standing Team       
      <dbl> <chr>      
 1        1 Spain      
 2        2 Netherlands
 3        3 Germany    
 4        4 Uruguay    
 5        5 Argentina  
 6        6 Brazil     
 7        7 Ghana      
 8        8 Paraguay   
 9        9 Japan      
10       10 Chile      
# … with 22 more rows

By default, read_csv will open a CSV file and read it in line-by-line. It will also (by default) read in the first few rows of the table in order to figure out the type of each column (i.e. integer, character, etc.). From the read_csv help page:

If ‘NULL’, all column types will be imputed from the first 1000 rows on the input. This is convenient (and fast), but not robust. If the imputation fails, you’ll need to supply the correct types yourself.

You can specify the type of each column with the col_types argument.

In general, it’s a good idea to specify the column types explicitly. This rules out any possible guessing errors on the part of read_csv. Also, specifying the column types explicitly provides a useful safety check in case anything about the dataset should change without you knowing about it.

> teams <- read_csv("data/team_standings.csv", col_types = "cc")

Note that the col_types argument accepts a compact representation. Here "cc" indicates that the first column is character and the second column is character (there are only two columns). Using the col_types argument is useful because often it is not easy to automatically figure out the type of a column by looking at a few rows (especially if a column has many missing values).
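The compact codes follow readr’s column type abbreviations: "c" for character, "i" for integer, "d" for double, "l" for logical, "D" for date, and "_" to skip a column. For this dataset, reading the standing as a number instead would look like the following.

> ## "d" = double for Standing, "c" = character for Team
> teams <- read_csv("data/team_standings.csv", col_types = "dc")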

The read_csv function will also read compressed files automatically. There is no need to decompress the file first or use the gzfile connection function. The following call reads a bzip2-compressed CSV file containing download logs from the RStudio CRAN mirror.

> logs <- read_csv("data/2016-07-19.csv.bz2", n_max = 10)
Rows: 10 Columns: 10
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr  (6): r_version, r_arch, r_os, package, version, country
dbl  (2): size, ip_id
date (1): date
time (1): time

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

Note that the messages indicate that read_csv may have had some difficulty identifying the type of each column. This can be solved by using the col_types argument.

> logs <- read_csv("data/2016-07-19.csv.bz2", col_types = "ccicccccci", n_max = 10)
> logs
# A tibble: 10 × 10
   date       time     size r_version r_arch r_os  package version country ip_id
   <chr>      <chr>   <int> <chr>     <chr>  <chr> <chr>   <chr>   <chr>   <int>
 1 2016-07-19 22:00… 1.89e6 3.3.0     x86_64 ming… data.t… 1.9.6   US          1
 2 2016-07-19 22:00… 4.54e4 3.3.1     x86_64 ming… assert… 0.1     US          2
 3 2016-07-19 22:00… 1.43e7 3.3.1     x86_64 ming… stringi 1.1.1   DE          3
 4 2016-07-19 22:00… 1.89e6 3.3.1     x86_64 ming… data.t… 1.9.6   US          4
 5 2016-07-19 22:00… 3.90e5 3.3.1     x86_64 ming… foreach 1.4.3   US          4
 6 2016-07-19 22:00… 4.88e4 3.3.1     x86_64 linu… tree    1.0-37  CO          5
 7 2016-07-19 22:00… 5.25e2 3.3.1     x86_64 darw… surviv… 2.39-5  US          6
 8 2016-07-19 22:00… 3.23e6 3.3.1     x86_64 ming… Rcpp    0.12.5  US          2
 9 2016-07-19 22:00… 5.56e5 3.3.1     x86_64 ming… tibble  1.1     US          2
10 2016-07-19 22:00… 1.52e5 3.3.1     x86_64 ming… magrit… 1.5     US          2

You can specify the column type in a more detailed fashion by using the various col_* functions. For example, in the log data above, the first column is actually a date, so it might make more sense to read it in as a Date variable. If we wanted to just read in that first column, we could do

> logdates <- read_csv("data/2016-07-19.csv.bz2", 
+                      col_types = cols_only(date = col_date()),
+                      n_max = 10)
> logdates
# A tibble: 10 × 1
   date      
   <date>    
 1 2016-07-19
 2 2016-07-19
 3 2016-07-19
 4 2016-07-19
 5 2016-07-19
 6 2016-07-19
 7 2016-07-19
 8 2016-07-19
 9 2016-07-19
10 2016-07-19

Now the date column is stored as a Date object which can be used for relevant date-related computations (for example, see the lubridate package).
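For example, once the column is stored as a Date, date arithmetic and helper functions work directly (output assumes an English locale):

> ## Dates support arithmetic and helpers out of the box
> weekdays(logdates$date[1])
[1] "Tuesday"
> logdates$date[1] + 7
[1] "2016-07-26"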

Note: The read_csv function has a progress option that defaults to TRUE. This option provides a nice progress meter while the CSV file is being read. However, if you are using read_csv in a function, or perhaps embedding it in a loop, it’s probably best to set progress = FALSE.
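A quiet call inside a function might look like the following sketch (show_col_types = FALSE, available in recent readr versions, additionally silences the column specification message):

> ## Read quietly: no progress meter, no column specification message
> logs <- read_csv("data/2016-07-19.csv.bz2", col_types = "ccicccccci",
+                  progress = FALSE, show_col_types = FALSE)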

Getting Data in and out of R

1. Reading and Writing Data

There are a few principal functions for reading data into R.

  • read.table, read.csv, for reading tabular data
  • readLines, for reading lines of a text file
  • source, for reading in R code files (inverse of dump)
  • dget, for reading in R code files (inverse of dput)
  • load, for reading in saved workspaces
  • unserialize, for reading single R objects in binary form

There are of course, many R packages that have been developed to read in all kinds of other datasets, and you may need to resort to one of these packages if you are working in a specific area.

There are analogous functions for writing data to files:

  • write.table, for writing tabular data to text files (i.e. CSV) or connections
  • writeLines, for writing character data line-by-line to a file or connection
  • dump, for dumping a textual representation of multiple R objects
  • dput, for outputting a textual representation of an R object
  • save, for saving an arbitrary number of R objects in binary format (possibly compressed) to a file
  • serialize, for converting an R object into a binary format for outputting to a connection (or file)
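As a quick sketch of how one of these read/write pairs fits together, here is a round trip through dput() and dget():

> ## Write a textual representation of a data frame to 'y.R'
> y <- data.frame(a = 1, b = "a")
> dput(y, file = "y.R")
> 
> ## Read it back; the reconstructed object matches the original
> new.y <- dget("y.R")
> new.y
  a b
1 1 a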

2. Reading Data Files with read.table()

The read.table() function is one of the most commonly used functions for reading data. The help file for read.table() is worth reading in its entirety if only because the function gets used a lot (run ?read.table in R). I know, I know, everyone always says to read the help file, but this one is actually worth reading.

The read.table() function has a few important arguments:

  • file, the name of a file, or a connection
  • header, logical indicating if the file has a header line
  • sep, a string indicating how the columns are separated
  • colClasses, a character vector indicating the class of each column in the dataset
  • nrows, the number of rows in the dataset. By default read.table() reads an entire file.
  • comment.char, a character string indicating the comment character. This defaults to "#". If there are no commented lines in your file, it’s worth setting this to be the empty string "".
  • skip, the number of lines to skip from the beginning
  • stringsAsFactors, should character variables be coded as factors? This defaulted to TRUE for many years (the default changed to FALSE in R 4.0.0) because back in the old days, if you had data that were stored as strings, it was because those strings represented levels of a categorical variable. Now we have lots of data that is text data and it doesn’t always represent categorical variables, so you may want to set this explicitly to FALSE in those cases. In versions of R before 4.0.0, you could set a global option via options(stringsAsFactors = FALSE). I’ve never seen so much heat generated on discussion forums about an R function argument as about the stringsAsFactors argument. Seriously.

For small to moderately sized datasets, you can usually call read.table() without specifying any other arguments:

> data <- read.table("foo.txt")

In this case, R will automatically

  • skip lines that begin with a #
  • figure out how many rows there are (and how much memory needs to be allocated)
  • figure out what type of variable is in each column of the table.

Telling R all these things directly makes R run faster and more efficiently. The read.csv() function is identical to read.table() except that some of the defaults are set differently (for example, sep defaults to "," and header to TRUE).
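For example, a fully specified call might look like the following (a sketch; the file name, separator, and column classes are assumptions):

> ## Supply the hints up front so read.table() doesn't have to guess
> data <- read.table("foo.txt", header = TRUE, sep = "\t",
+                    colClasses = c("character", "numeric", "numeric"),
+                    nrows = 100, comment.char = "")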

3. Reading in Larger Datasets with read.table

With much larger datasets, there are a few things that you can do that will make your life easier and will prevent R from choking.

  • Read the help page for read.table, which contains many hints

  • Make a rough calculation of the memory required to store your dataset (see the next section for an example of how to do this). If the dataset is larger than the amount of RAM on your computer, you can probably stop right here.

  • Set comment.char = "" if there are no commented lines in your file.

  • Use the colClasses argument. Specifying this option instead of using the default can make read.table run MUCH faster, often twice as fast. In order to use this option, you have to know the class of each column in your data frame. If all of the columns are “numeric”, for example, then you can just set colClasses = "numeric". A quick and dirty way to figure out the classes of each column is the following:

> ## Read just the first 100 rows and infer each column's class
> initial <- read.table("datatable.txt", nrows = 100)
> classes <- sapply(initial, class)
> ## Re-read the full file with the classes specified up front
> tabAll <- read.table("datatable.txt", colClasses = classes)

  • Set nrows. This doesn’t make R run faster but it helps with memory usage. A mild overestimate is okay. You can use the Unix tool wc to calculate the number of lines in a file.

In general, when using R with larger datasets, it’s also useful to know a few things about your system.

  • How much memory is available on your system?
  • What other applications are in use? Can you close any of them?
  • Are there other users logged into the same system?
  • What operating system are you using? Some operating systems can limit the amount of memory a single process can access.

4. Calculating Memory Requirements for R Objects

Because R stores all of its objects in physical memory, it is important to be cognizant of how much memory is being used up by all of the data objects residing in your workspace. One situation where it’s particularly important to understand memory requirements is when you are reading a new dataset into R. Fortunately, it’s easy to make a back-of-the-envelope calculation of how much memory will be required by a new dataset.

For example, suppose I have a data frame with 1,500,000 rows and 120 columns, all of which are numeric data. Roughly, how much memory is required to store this data frame? Well, on most modern computers double precision floating point numbers are stored using 64 bits of memory, or 8 bytes. Given that information, you can do the following calculation

1,500,000 × 120 × 8 bytes/numeric = 1,440,000,000 bytes
                                  = 1,440,000,000 / 2^20 bytes/MB
                                  ≈ 1,373.29 MB
                                  ≈ 1.34 GB

So the dataset would require about 1.34 GB of RAM. Most computers these days have at least that much RAM. However, you need to be aware of

  • what other programs might be running on your computer, using up RAM
  • what other R objects might already be taking up RAM in your workspace

Reading in a large dataset for which you do not have enough RAM is one easy way to freeze up your computer (or at least your R session). This is usually an unpleasant experience that requires you to kill the R process in the best case scenario, or reboot your computer in the worst case. So make sure to do a rough calculation of memory requirements before reading in a large dataset.
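The same arithmetic is easy to script; here is a minimal sketch in R:

> ## Back-of-the-envelope estimate for a 1,500,000 x 120 numeric data frame
> rows <- 1500000
> cols <- 120
> bytes <- rows * cols * 8   # 8 bytes per double-precision number
> bytes / 2^20               # megabytes
[1] 1373.291
> bytes / 2^30               # gigabytes
[1] 1.341105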
