One nice feature: reading data uses connections, which can be local files, remote files accessed via HTTP, pipes from other programs, and more.
As a simple example, consider fetching N=10 random integers between min=100 and max=200 from random.org (which supplies true random numbers based on atmospheric noise rather than a pseudo-random number generator):
R> site <- "http://random.org/integers/" # base URL
R> query <- "num=10&min=100&max=200&col=2&base=10&format=plain&rnd=new"
R> txt <- paste(site, query, sep="?") # concat url and query string
R> nums <- read.table(file=txt) # and read the data
R> nums # and show it
V1 V2
1 165 143
2 107 118
3 103 132
4 191 100
5 138 185
R>
As an aside, the random package provides several convenience functions for accessing random.org.
Upon Dirk's advice, I am posting single examples. I hope they are not too "cute" [clever, but I don't care] or trivial for this audience.
Linear models are the bread and butter of R. When the number of independent variables is high, one has two choices. The first is to use lm.fit(), which takes the design matrix x and the response y as arguments, similarly to Matlab. The drawback of this approach is that the return value is a list of objects (fitted coefficients, residuals, etc.), not an object of class "lm", which can be nicely summarized, used for prediction, stepwise selection, and so on. The second approach is to create a formula:
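A minimal sketch of that second approach, assuming the response is y and the predictors are all the other columns of a data frame df (the names here are hypothetical):

df <- data.frame(y = rnorm(20), x1 = rnorm(20), x2 = rnorm(20), x3 = rnorm(20))
xnames <- setdiff(names(df), "y")                                  # predictor columns
fmla <- as.formula(paste("y ~", paste(xnames, collapse = " + ")))  # build the formula as text
fit <- lm(fmla, data = df)                                         # a genuine "lm" object
summary(fit)                                                       # so summary(), predict(), step() all work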
Another trick. Some packages, like glmnet, only take the design matrix and the response variable as inputs. If one wants to fit a model with all interactions between features, one can't use the formula "y ~ .^2". Using expand.grid() lets us take advantage of R's powerful array indexing and vector operations.
This type of situation occurs more often than not, and use of eval() and parse() can help address it. Of course, I welcome any feedback on alternative ways of coding this up.
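Since the exact code isn't reproduced above, here is one minimal sketch of the expand.grid() idea: build every pairwise interaction column by hand for a matrix-only fitter such as glmnet (the data frame X is made up):

X <- data.frame(a = rnorm(10), b = rnorm(10), c = rnorm(10))   # hypothetical features
pairs <- expand.grid(i = seq_along(X), j = seq_along(X))
pairs <- pairs[pairs$i < pairs$j, ]                            # keep each pair once
inter <- mapply(function(i, j) X[[i]] * X[[j]], pairs$i, pairs$j)
colnames(inter) <- paste(names(X)[pairs$i], names(X)[pairs$j], sep = ":")
design <- cbind(as.matrix(X), inter)                           # main effects plus interactions

(model.matrix(~ .^2, data = X) builds a comparable matrix directly, if you prefer.)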
head() and tail() get the first and last parts of a data frame, vector, matrix, function, etc. Especially with large data frames, this is a quick way to check that the data have loaded OK.
I've posted this once before, but I use it so much I thought I'd post it again. It's just a little function to return the names and position numbers of a data.frame. It's nothing special to be sure, but I almost never make it through a session without using it multiple times.
##creates an object from a data.frame listing the column names and location
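The original function isn't shown here, but a minimal sketch of such a helper (the name is my own) might be:

names.and.pos <- function(df) data.frame(pos = seq_along(df), name = names(df))
names.and.pos(iris)   # e.g. shows that Sepal.Length is column 1, and so on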
Definitely system().
Being able to access all the Unix tools (at least under Linux/Mac OS X) from inside the R environment has rapidly become invaluable in my daily workflow.
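A couple of illustrative calls (Unix-only, of course):

system("wc -l *.csv")                          # run a shell command and print its output
files <- system("ls *.csv", intern = TRUE)     # or capture the output as a character vector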
[Edit]
Dirk asks why one would give invalid names. I don't know! But I certainly encounter this problem in practice fairly often. For example, using Hadley's reshape package:
> library(reshape)
> df$z <- c(1,1,2,2,2)
> recast(df,z~.,id.var="z")
Aggregation requires fun.aggregate: length used as default
z (all)
1 1 4
2 2 6
> recast(df,z~.,id.var="z")$(all)
Error: unexpected '(' in "recast(df,z~.,id.var="z")$("
> recast(df,z~.,id.var="z")$`(all)`
Aggregation requires fun.aggregate: length used as default
[1] 4 6
CrossTable() from the gmodels package provides easy access to SAS- and SPSS-style crosstabs, along with the usual tests (Chisq, McNemar, etc.). Basically, it's xtabs() with fancy output and some additional tests - but it does make sharing output with the heathens easier.
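For instance, a quick sketch with a built-in data set (argument names as I recall them):

library(gmodels)
CrossTable(mtcars$gear, mtcars$cyl, chisq = TRUE)   # SAS/SPSS-style crosstab plus a chi-squared test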
I have found Google spreadsheets to be a fantastic way for all collaborators to be on the same page. Furthermore, Google Forms allows one to capture data from respondents and effortlessly write it to a Google spreadsheet. Since the data change frequently and are almost never final, it is far preferable for R to read a Google spreadsheet directly than to futz with downloading CSV files and reading them in.
# Get data from a Google spreadsheet
library(RGoogleDocs)
ps <- readline(prompt = "get the password in ")
auth <- getGoogleAuth("me@gmail.com", ps, service = "wise")
sheets.con <- getGoogleDocsConnection(auth)
ts2 <- getWorksheets("Data Collection Repos", sheets.con)
names(ts2)
init.consent <- sheetAsMatrix(ts2$Sheet1, header = TRUE, as.data.frame = TRUE, trim = TRUE)
I cannot remember which, but one or two of the commands above take several seconds.
It seems I cannot comment (maybe it has to do with this "reputation" business).
Anyway, further to the RGoogleDocs tip above:
ps <- readline(prompt = "get the password in ")
This won't work from within Emacs, which I like to use for R, with ESS of course.
On Linux, you can use zenity to prompt the user for the password and hide the input, so as an additional benefit your password is not shown in plain text on your screen:
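Something along these lines (the zenity call is sketched from memory):

ps <- system("zenity --entry --hide-text --text='Google password:'", intern = TRUE)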
I'm really surprised no one has posted about apply, tapply, lapply, and sapply. A general rule I use when doing stuff in R is that if I have a for loop that is doing data processing or simulations, I try to factor it out and replace it with an *apply. Some people shy away from the *apply functions because they think only single-parameter functions can be passed in. Nothing could be further from the truth! Just as you pass around functions with parameters as first-class objects in JavaScript, you do this in R with anonymous functions. For example:
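(The example below is my own illustration, not the original poster's:)

sapply(1:5, function(i) rnorm(1, mean = i, sd = i / 2))           # fix the extra arguments inside an anonymous function
lapply(split(iris, iris$Species), function(d) colMeans(d[, 1:4])) # per-group column means without a loop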
(For those that follow #rstats, I also posted this there).
Remember: use apply, sapply, lapply, tapply, and do.call! Take advantage of R's vectorization. You should never walk up to a bunch of R code and see:
N = 10000
l = numeric()
for (i in seq(1:N)) {
sim <- rnorm(1, 0, 1)
l <- rbind(l, sim)
}
Not only is this not vectorized, but the array structure in R is not grown the way it is in Python (doubling in size when space runs out, IIRC). So each rbind step must first grow l enough to accept the result of rbind(), then copy over all of the previous l's contents. For fun, try the above in R. Notice how long it takes (you won't even need Rprof or any timing function). Then try
N=10000
l <- rnorm(N, 0, 1)
The following is better than the first version too:
N <- 10000
l <- numeric(N)
for (i in seq_len(N)) {
  sim <- rnorm(1, 0, 1)
  l[i] <- sim
}
Sometimes you need to rbind multiple data frames. do.call() will let you do that (someone had to explain this to me when I asked this question, as it doesn't appear to be an obvious use).
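A minimal sketch (the list of data frames is made up):

dfs <- list(data.frame(x = 1:2), data.frame(x = 3:4), data.frame(x = 5:6))
big <- do.call(rbind, dfs)   # equivalent to rbind(dfs[[1]], dfs[[2]], dfs[[3]])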
The best part is that if you are doing something that actually requires a significant amount of time, you can switch from %do% to %dopar% (with the appropriate backend library) to instantly parallelize, even across a cluster. Very slick.
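A sketch of what that switch looks like with the foreach and doParallel packages (the backend and core count are arbitrary choices here):

library(foreach)
library(doParallel)
registerDoParallel(cores = 2)                 # register a parallel backend
res <- foreach(i = 1:4, .combine = c) %dopar% {
  mean(rnorm(1e5, mean = i))                  # each iteration can run on its own worker
}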
The sqldf package provides an SQL interface to R data frames.
## recast the previous subset() expression in SQL
sqldf('SELECT product, revenue FROM sales
       WHERE country = "USA"
       AND product IN (1,2)')

  product  revenue
1       1 108.4597
2       2 100.3475
Perform an aggregation or GROUP BY:

sqldf('SELECT country, SUM(revenue) revenue
       FROM sales
       GROUP BY country')

  country  revenue
1      FR 307.1157
2      UK 280.6382
3     USA 304.6860
For more sophisticated map-reduce-like functionality on data frames, check out the plyr package. And if you find yourself wanting to pull your hair out, I recommend checking out Data Manipulation with R.
Although this question has been up for a while, I recently discovered a great trick on the SAS and R blog for using the cut() command. The command is used to divide data into categories; I will use the iris dataset as an example and divide it into 10 categories:
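For instance (my own sketch, binning sepal length):

iris$Sepal.cat <- cut(iris$Sepal.Length, breaks = 10)   # 10 equal-width categories
table(iris$Sepal.cat)                                    # counts per category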
The traceback() function is a must when you have an error somewhere and do not readily understand it. It will print a trace of the call stack, which is very helpful, as R is not very verbose by default.
Then, setting options(error=recover) will allow you to "enter" the function raising the error and try to understand what is happening exactly, as if you had full control over it and could put a browser() in it.
These three functions can really help you debug your code.
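A small illustration of the workflow (the functions are toy examples):

f <- function(x) g(x)
g <- function(x) log(x)      # fails when x is not numeric
options(error = recover)     # on error, offer to browse any frame on the call stack
f("a")                       # pick a frame, inspect x, then quit the browser
traceback()                  # afterwards, print the call stack of the last error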
In R programming (not interactive sessions), I use if (bad.condition) stop("message") a lot. Every function starts with a few of these, and as I work through computations, I pepper these in, too. I guess I got into the habit from using assert() in C. The benefits are two-fold. First, it's a lot faster to get working code with these checks in place. Second, and probably more important, it is a lot easier to work with existing code when you see these checks on every screen in your editor. You won't have to wonder whether x>0, or trust a comment stating that it is ... you'll know, from a glance, that it is.
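A toy example of the style I mean:

safe.log <- function(x) {
  if (!is.numeric(x)) stop("x must be numeric")
  if (any(x <= 0)) stop("x must be strictly positive")
  log(x)
}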
I find I am using with() and within() more and more. No more $ littering my code, and no need to start attaching objects to the search path. More seriously, I find with() etc. make the intention of my data analysis scripts much clearer.
> df <- data.frame(A = runif(10), B = rnorm(10))
> A <- 1:10 ## something else hanging around...
> with(df, A + B) ## I know this will use A in df!
[1] 0.04334784 -0.40444686 1.99368816 0.13871605 -1.17734837
[6] 0.42473812 2.33014226 1.61690799 1.41901860 0.8699079
with() sets up an environment within which the R expression is evaluated. within() does the same thing but allows you to modify the data object used to create the environment.
> df <- within(df, C <- rpois(10, lambda = 2))
> head(df)
A B C
1 0.62635571 -0.5830079 1
2 0.04810539 -0.4525522 1
3 0.39706979 1.5966184 3
4 0.95802501 -0.8193090 2
5 0.76772541 -1.9450738 2
6 0.21335006 0.2113881 4
Something I didn't realise when I first used within() is that you have to do an assignment as part of the expression evaluated and assign the returned object (as above) to get the desired effect.
for (f in files) {
  if (!(f == 'mysource.r')) {       # skip the file that does the sourcing
    print(paste('Sourcing', f))
    source(paste(d, f, sep = ''))   # d holds the directory path (see the sketch below)
  }
}
I use the above code to source all the files in a directory at start-up, with various utility programs I use in my interactive sessions with R. I am sure there are better ways, but I find it useful for my work. A sketch of the setup the loop assumes follows.
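For completeness, the setup sketched below is my own assumption; the path is hypothetical:

d <- "~/Rutils/"          # directory holding my utility scripts
files <- list.files(d)    # everything in that directory, including mysource.r itself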
I mention this one because there is a distinct lack of examples using it on SO.
The new(ish) aggregate.formula syntax makes aggregate() much more flexible and useful than the old default method. It keeps the name of the aggregated variable, is more compact than the list syntax, and lets you aggregate multiple variables at the same time, with dot notation allowed on either side of the formula.
use
newdf <- aggregate( cbind(rt, acc) ~ x + y + subj, olddf, mean )
instead of...
newdf <- with( olddf, aggregate( rt, list(x = x, y = y, subj = subj), mean ))
names(newdf)[4] <- 'rt'
newdf$acc <- with( olddf,
aggregate( acc, list(x = x, y = y, subj = subj), mean ))[,4]
Perhaps as a bit of a side note, check out the aggregate.data.frame examples in ?aggregate as well. The function does a lot of things people don't know about.