In my original answer, I also suggested unicodedata.normalize. However, I decided to test it out, and it turns out it doesn't work with Unicode quotation marks. It does a good job translating accented Unicode characters, so I'm guessing unicodedata.normalize is implemented using the unicodedata.decomposition function, which would mean it can only handle Unicode characters that are combinations of a letter and a diacritical mark. I'm not an expert on the Unicode specification, though, so I could just be full of hot air...
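A quick test in the interpreter shows the difference (just a sketch; NFKD decomposition followed by an ASCII encode is the usual trick for stripping diacritics):

>>> import unicodedata
>>> # An accented character decomposes into a letter plus a combining mark,
>>> # and the ASCII encode then drops only the combining mark...
>>> unicodedata.normalize('NFKD', u'caf\xe9').encode('ascii', 'ignore')
'cafe'
>>> # ...but a curly quote has no decomposition, so it's silently dropped
>>> unicodedata.normalize('NFKD', u'\u201Chello\u201D').encode('ascii', 'ignore')
'hello'
>>> unicodedata.decomposition(u'\u201C')
''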
In any event, you can use unicode.translate to deal with punctuation characters instead. The translate method takes a dictionary mapping Unicode ordinals to Unicode ordinals, so you can create a mapping that translates Unicode-only punctuation to ASCII-compatible punctuation:
This maps left and right single and double quotation marks to ASCII single and double quotation marks:
>>> punctuation = { 0x2018:0x27, 0x2019:0x27, 0x201C:0x22, 0x201D:0x22 }
>>> teststring = u'\u201Chello, world!\u201D'
>>> teststring.translate(punctuation).encode('ascii', 'ignore')
'"hello, world!"'
You can add more punctuation mappings if needed, but I don't think you need to worry about handling every single Unicode punctuation character. If you do need to handle accents and other diacritical marks, you can still use unicodedata.normalize to deal with those characters.
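For instance, here's a sketch of a slightly larger table (the extra entries are just my own picks) chained with normalize, so that both the punctuation and the accents come out as plain ASCII:

>>> import unicodedata
>>> punctuation = {
...     0x2018: 0x27, 0x2019: 0x27,  # single quotes
...     0x201C: 0x22, 0x201D: 0x22,  # double quotes
...     0x2013: 0x2D, 0x2014: 0x2D,  # en and em dashes to hyphen
...     0x2026: u'...',              # ellipsis (values can also be strings)
... }
>>> text = u'\u201CCaf\xe9 \u2013 tr\xe8s bien\u2026\u201D'
>>> unicodedata.normalize('NFKD', text.translate(punctuation)).encode('ascii', 'ignore')
'"Cafe - tres bien..."'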
Unidecode looks like a more complete solution: it converts fancy quotes to ASCII quotes, accented Latin characters to unaccented ones, and even attempts transliteration to deal with characters that don't have ASCII equivalents. That way your users don't have to see a bunch of ? when their text has to pass through a legacy 7-bit ASCII system.
>>> from unidecode import unidecode
>>> print unidecode(u"\u5317\u4EB0")
Bei Jing
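It handles the quotes-plus-accents case from above just as well (again, only an illustration; the exact output is unidecode's choice of transliteration):

>>> print unidecode(u'\u201Cd\xe9j\xe0 vu\u201D')
"deja vu"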