Excel implements the CSV format as ASCII, not Unicode, which is what mangles the diacritics. We experienced the same issue, which is how I tracked down that Excel's CSV handling is defined as ASCII-based.
A correctly formatted UTF-8 file can have a Byte Order Mark as its first three octets: the hex values 0xEF, 0xBB, 0xBF. These octets serve to mark the file as UTF-8 (since they are not relevant as "byte order" information). If this BOM does not exist, the consumer/reader is left to infer the encoding of the text. Readers that are not UTF-8 capable will read the bytes as some other encoding such as Windows-1252 and display the characters ï»¿ at the start of the file.
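For instance, a quick Python shell check shows exactly how those three octets look to a Windows-1252 reader:

>>> b'\xef\xbb\xbf'.decode('cp1252')
'ï»¿'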
There is a known bug where Excel, upon opening UTF-8 CSV files via file association, assumes that they are in a single-byte encoding, disregarding the presence of the UTF-8 BOM. This cannot be fixed by any system default codepage or language setting. The BOM will not clue in Excel - it just won't work. (A minority report claims that the BOM sometimes triggers the "Import Text" wizard.) This bug appears to exist in Excel 2003 and earlier. Most reports (amidst the answers here) say that it is fixed in Excel 2007 and newer.
Note that you can always* correctly open UTF8 CSV files in Excel using the "Import Text" wizard, which allows you to specify the encoding of the file you're opening. Of course this is much less convenient.
Readers of this answer are most likely in a situation where they don't particularly support Excel < 2007, but are sending raw UTF-8 text to Excel, which misinterprets it and sprinkles your text with Ã© and other similar Windows-1252 renderings. Adding the UTF-8 BOM is probably your best and quickest fix.
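If you generate the file in Python, a minimal sketch of that fix (the file name and row contents are just examples) is to use the 'utf-8-sig' codec, which prepends the 0xEF 0xBB 0xBF bytes for you:

import csv

# 'utf-8-sig' writes the UTF-8 BOM before the first character;
# newline='' is the csv module's recommended mode for output files.
with open('export.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['Old MacDonald had a farm', 'ÈÌÉÍØ'])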
If you are stuck with users on older Excels, and Excel is the only consumer of your CSVs, you can work around this by exporting UTF16 instead of UTF8. Excel 2000 and 2003 will double-click-open these correctly. (Some other text editors can have issues with UTF16, so you may have to weigh your options carefully.)
* Except when you can't: (at least) Excel 2011 for Mac's Import Wizard does not actually work with all encodings, regardless of what you tell it. </anecdotal-evidence> :)
Prepending a BOM (\uFEFF) worked for me (Excel 2007), in that Excel recognised the file as UTF-8. Otherwise, saving it and using the import wizard works, but is less ideal.
I've also noticed that the question was "answered" some time ago, but I don't understand the stories that say you can't successfully open a UTF-8-encoded CSV file in Excel without using the text wizard.
My reproducible experience:
Type Old MacDonald had a farm,ÈÌÉÍØ into Notepad, hit Enter, then Save As (using the UTF-8 option).
Using Python to show what's actually in there:
>>> open('oldmac.csv', 'rb').read()
'\xef\xbb\xbfOld MacDonald had a farm,\xc3\x88\xc3\x8c\xc3\x89\xc3\x8d\xc3\x98\r\n'
>>> ^Z
Good. Notepad has put a BOM at the front.
Now go into Windows Explorer, double click on the file name, or right click and use "Open with ...", and up pops Excel (2003) with display as expected.
Another solution I found was to encode the result as Windows Code Page 1252 (Windows-1252 or CP1252). This can be done, for example, by setting the Content-Type header to something like text/csv; charset=Windows-1252 and setting the character encoding of the response stream to match.
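A minimal sketch of that idea as a Python WSGI app (standard library only; the CSV payload and filename are illustrative placeholders):

def app(environ, start_response):
    csv_text = 'Old MacDonald had a farm,ÈÌÉÍØ\r\n'
    # Re-encode to CP1252; characters outside the code page become '?'.
    body = csv_text.encode('cp1252', errors='replace')
    start_response('200 OK', [
        ('Content-Type', 'text/csv; charset=Windows-1252'),
        ('Content-Disposition', 'attachment; filename="export.csv"'),
        ('Content-Length', str(len(body))),
    ])
    return [body]

The obvious trade-off is that anything not representable in CP1252 is lost, so this only suits data you know fits in that code page.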
I've found a way to solve the problem. It's a nasty hack, but it works: open the document with OpenOffice, then save it into any Excel format; the resulting .xls or .xlsx will display the accented characters.
If you have legacy code in VB.NET like I have, the following code worked for me:
Response.Clear()
Response.ClearHeaders()
Response.ContentType = "text/csv"
Response.Expires = 0
Response.AddHeader("Content-Disposition", "attachment; filename=export.csv;")
' Encoding.Unicode is UTF-16LE; StreamWriter emits its BOM (0xFF 0xFE)
' at the start of the stream, which Excel honours on double-click.
Using sw As StreamWriter = New StreamWriter(Context.Response.OutputStream, System.Text.Encoding.Unicode)
    sw.Write(csv)
    sw.Close()
End Using
Response.End()
UTF-8 doesn't work for me in Office 2007 without any service pack, with or without a BOM (U+FEFF, i.e. the bytes 0xEF 0xBB 0xBF; neither works). Installing SP3 makes UTF-8 work when the 0xEF 0xBB 0xBF BOM is prepended.
UTF-16 works when encoding in Python with "utf-16-le", a 0xFF 0xFE BOM prepended, and tab as the separator. I had to write the BOM out manually and then use "utf-16-le" rather than "utf-16"; otherwise each encode() call prepended the BOM to every row written out, which appeared as garbage in the first column from the second line onward.
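A sketch of that approach (the file name and rows are illustrative):

rows = [['id', 'name'], ['1', 'ÈÌÉÍØ']]

with open('export.csv', 'wb') as f:
    f.write(b'\xff\xfe')  # UTF-16LE byte order mark, written once by hand
    for row in rows:
        # 'utf-16-le' never emits a BOM, so each row encodes cleanly;
        # 'utf-16' would prepend a BOM on every encode() call.
        f.write(('\t'.join(row) + '\r\n').encode('utf-16-le'))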
I can't tell whether UTF-16 would work without any service pack installed, since I can't go back now. Sigh. This is on Windows; I don't know about Office for Mac.
For both working cases, the import works when launching a download directly from the browser, and the text import wizard doesn't intervene; it works as you would expect.
Note that including the UTF-8 BOM is not necessarily a good idea - Mac versions of Excel ignore it and will actually display the BOM as text: three nasty characters (ï»¿) at the start of the first field in your spreadsheet.
Open the CSV file with Notepad++.
Click on Encoding and select "Convert to UTF-8" (not "Convert to UTF-8 (without BOM)").
Save.
Open by double-click with Excel.
Hope that helps.
Christophe GRISON
The answer for all combinations of Excel versions (2003 + 2007) and file types
Most other answers here concern their Excel version only and will not necessarily help you, because their advice may simply not be true for your version of Excel.
For example, adding the BOM character introduces problems with automatic column-separator recognition, but not in every Excel version.
There are three variables that determine whether it works in most Excel versions:
Encoding
BOM character presence
Cell separator
Somebody stoic at SAP tried every combination and reported the outcome. End result? Use UTF-16LE with a BOM and the tab character as separator to have it work in most Excel versions.
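In Python, a minimal sketch of that winning combination (file and column names are illustrative):

import csv

# The 'utf-16' codec writes the BOM itself and uses the platform's byte
# order, which is little-endian (UTF-16LE) on typical Windows/x86 machines.
with open('export.csv', 'w', newline='', encoding='utf-16') as f:
    writer = csv.writer(f, delimiter='\t')
    writer.writerow(['id', 'name'])
    writer.writerow(['1', 'ÈÌÉÍØ'])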