The standard error (SE) is just the standard deviation of the sampling distribution. The variance of the sampling distribution is the variance of the data divided by N, and the SE is the square root of that. From that it follows that it is more efficient to compute the SE from the variance directly: sd already takes one square root internally (sd is written in R, and you can see its source by typing "sd" at the prompt), so sd(x)/sqrt(length(x)) would take two square roots where one suffices. Therefore, the following is most efficient:
se <- function(x) sqrt(var(x)/length(x))
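As a quick sanity check (a minimal sketch with simulated data; the sample size and replication count are arbitrary choices), se() should agree with the empirical standard deviation of simulated sample means:
set.seed(1)
x <- rnorm(100)                                # one sample of size 100
se(x)                                          # analytic SE of the mean
means <- replicate(10000, mean(rnorm(100)))    # many simulated sample means
sd(means)                                      # empirical SE; should be close to se(x)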
To make the function only slightly more complex while handling all of the options you could pass to var, you can make this modification:
se <- function(x, ...) sqrt(var(x, ...)/length(x))
With this syntax you can take advantage of things like var's handling of missing values: any named argument that var accepts can be passed straight through this se call.
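For example (a small sketch with made-up data), na.rm = TRUE is passed straight through to var:
x <- c(1, 2, NA, 4, 5)
se(x, na.rm = TRUE)   # var() drops the NA; note that length(x) still counts
                      # it, so the divisor here is 5 rather than 4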
Since I keep coming back to this question, and since it is an old one, I'm posting a benchmark of the most-voted answers.
Note that for @Ian's and @John's answers I created a second version. Instead of length(x), it uses sum(!is.na(x)), so that NAs do not inflate the denominator.
I used a vector of length 10^6, with 1,000 repetitions.
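A sketch of how such a benchmark can be run (the se variants below stand in for the answers being compared, and timings will of course vary by machine), assuming the microbenchmark package is installed:
library(microbenchmark)
x <- rnorm(10^6)
se_sd   <- function(x) sd(x) / sqrt(length(x))        # sd-based version
se_var  <- function(x) sqrt(var(x) / length(x))       # var-based version
se_var2 <- function(x) sqrt(var(x) / sum(!is.na(x)))  # NA-robust denominator
microbenchmark(se_sd(x), se_var(x), se_var2(x), times = 1000)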
Remembering that the mean can also be obtained from a linear model, by regressing the variable on a single intercept, you can also use lm(x ~ 1) for this (see the sketch after the list below).
Advantages are:
You obtain immediately confidence intervals with confint()
You can test various hypotheses about the mean, using for example car::linearHypothesis()
You can use more sophisticated estimates of the standard error, in case of heteroskedasticity, clustered data, spatial data, etc.; see the sandwich package
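A minimal sketch of the lm(x ~ 1) approach (made-up data; the last two calls assume the car and sandwich packages are installed):
set.seed(42)
x <- rnorm(50, mean = 10)
fit <- lm(x ~ 1)
coef(summary(fit))     # the "Std. Error" column is the SE of the mean
confint(fit)           # 95% confidence interval for the mean
car::linearHypothesis(fit, "(Intercept) = 10")   # test H0: mean = 10
sqrt(sandwich::vcovHC(fit))                      # heteroskedasticity-robust SE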