Easiest way to read from a URL into a string in .NET

Given a URL in a string:

http://www.example.com/test.xml

What's the easiest/most succinct way to download the contents of the file from the server (pointed to by the url) into a string in C#?

The way I'm doing it at the moment is:

WebRequest request = WebRequest.Create("http://www.example.com/test.xml");
WebResponse response = request.GetResponse();
Stream dataStream = response.GetResponseStream();
StreamReader reader = new StreamReader(dataStream);
string responseFromServer = reader.ReadToEnd();

That's a lot of code that could essentially be one line:

string responseFromServer = ????.GetStringFromUrl("http://www.example.com/test.xml");

Note: I'm not worried about asynchronous calls - this is not production code.

Important: this was correct when written, but these days, please see the HttpClient answer below


using(WebClient client = new WebClient()) {
string s = client.DownloadString(url);
}
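For reference, here is a minimal sketch of that snippet as a complete program (assuming the System.Net namespace and the example URL from the question); note that the result variable is declared outside the using block so it outlives the WebClient:

using System;
using System.Net;

class Program
{
    static void Main()
    {
        string responseFromServer;

        // WebClient is IDisposable, so wrap it in a using block.
        using (WebClient client = new WebClient())
        {
            responseFromServer = client.DownloadString("http://www.example.com/test.xml");
        }

        Console.WriteLine(responseFromServer);
    }
}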

The method in the above answer is now deprecated; the current recommendation is to use HttpClient:

using (HttpClient client = new HttpClient())
{
    string s = await client.GetStringAsync(url);
}
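Because await only compiles inside an async method, here is a minimal sketch of that call as a complete console program (assuming C# 7.1+ for async Main and the example URL from the question):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // For a one-off call this is fine; in long-lived code the HttpClient
        // should be created once and reused, as the next answer notes.
        using (HttpClient client = new HttpClient())
        {
            string responseFromServer = await client.GetStringAsync("http://www.example.com/test.xml");
            Console.WriteLine(responseFromServer);
        }
    }
}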

Given that, at the time of this writing, HttpClient is the only remaining non-deprecated .NET mechanism for this task, and that asynchronous calls appear to be unavoidable with it, the following helper should get you what you're after if you're "not worried about asynchronous calls":

public static class Http
{
    /// <remarks>NOTE: The <i>HttpClient</i> class is <b>intended</b> to only ever be instantiated once in any application.</remarks>
    private static readonly HttpClient _client = new();

    /// <summary>Used to retrieve webserver data via simple <b>GET</b> requests.</summary>
    /// <param name="url">A string containing the complete web <b>URL</b> to submit.</param>
    /// <returns>Whatever <i>HttpClient</i> returns after attempting the supplied query (as a <i>Task&lt;string&gt;</i> value).</returns>
    /// <exception cref="InvalidOperationException">Thrown if the supplied <i>url</i> string is null, empty or whitespace.</exception>
    private static async Task<string> HttpClientKludge( string url )
    {
        if ( string.IsNullOrWhiteSpace( url ) )
            throw new InvalidOperationException( "You must supply a url to interrogate for this function to work." );

        Uri uri;
        try { uri = new Uri( url ); }
        catch ( UriFormatException e ) { return $"{e.Message}\r\n{url}"; }

        return await _client.GetStringAsync( uri );
    }

    /// <summary>Attempts to interrogate a website via the supplied URL and stores the result in a <i>string</i>.</summary>
    /// <param name="url">A string containing a fully-formed, proper URL to retrieve.</param>
    /// <param name="captureExceptions">If <b>TRUE</b>, any exceptions generated by the operation are suppressed and their Message is returned as the result string; otherwise they're thrown normally.</param>
    /// <returns>The result generated by submitting the request, as a <i>string</i>.</returns>
    public static string Get( string url, bool captureExceptions = true )
    {
        string result;
        try { result = HttpClientKludge( url ).Result; }
        catch ( AggregateException e )
        {
            if ( !captureExceptions ) throw;
            result = e.InnerException is null ? e.Message : e.InnerException.Message;
        }
        return result;
    }
}

With that in place, any time you want to interrogate a website with a simple GET request, you can simply do:

string query = "/search?q=Easiest+way+to+read+from+a+URL+into+a+string+in+.NET",
       siteResponse = Http.Get( $"https://www.google.com{query}" );
// Now use 'siteResponse' in any way you want...