wget has a command-line parameter --spider exactly for this. In this mode, wget does not download the file; its exit status is zero if the resource was found and non-zero if it was not. Try this (in your favorite shell):
wget -q --spider address
echo $?
Or, if you want full output, leave the -q off, so just wget --spider address. -nv shows some output, but not as much as the default.
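For example, here is a minimal sketch of using the exit status in a script. The URL is a placeholder, and note that a non-zero status can also mean a network or DNS error rather than a missing resource:

# check existence without downloading; URL is a placeholder
url="https://www.example.com/file.iso"
if wget -q --spider "$url"; then
    echo "exists: $url"
else
    echo "not found or unreachable: $url"
fi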
This works even on URLs with just a path, but it has the disadvantage that something is actually downloaded, so it is not recommended when checking big files for existence.
If you are in a directory where only root has write access, you can simply run wget www.example.com/wget-test as a standard user. It will hit the URL, but because you have no write permission, the file won't be saved.
This method works fine for me; I use it in a cronjob.
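As a sketch, such a crontab entry might look like this; the directory, URL, and schedule are all placeholders (any directory your user cannot write to will do):

# hypothetical crontab line: hit the URL every 5 minutes without saving a file
*/5 * * * * cd /usr/sbin && wget www.example.com/wget-test >/dev/null 2>&1

The cd matters because cron jobs otherwise start in your home directory, which is usually writable.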