That's a difficult problem to solve, since visually similar PDFs may have wildly different internal structure depending on how they were produced. In the worst case, the library would essentially have to act as an OCR engine. On the other hand, the PDF may contain enough structure and metadata to make removing tables and figures easy, and the library can be tailored to take advantage of that.
I'm pretty sure there are no open source tools that solve your problem for a wide variety of PDFs, but I remember hearing of commercial software that claims to do exactly what you're asking for. I'm sure you'll run into it while googling.
You can also take a look at PDFMiner (the original project targets Python 2; for Python 3 there is the maintained fork, pdfminer.six).
A particularly interesting feature of PDFMiner is that you can control how it regroups text parts when extracting them, by specifying the allowed spacing between lines, words, characters, and so on. By tweaking these parameters you may be able to achieve what you want (how well depends on the variability of your documents). PDFMiner can also give you the location of each piece of text on the page, extract data by object ID, and more. So dig into PDFMiner and be creative! A rough sketch follows.
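For example, here's a minimal sketch using the pdfminer.six high-level API; `"input.pdf"` and the margin values are placeholders you'd tune for your own documents:

```python
from pdfminer.high_level import extract_pages
from pdfminer.layout import LAParams, LTFigure, LTTextContainer

# Layout-analysis knobs: smaller margins keep fragments separate,
# larger ones merge nearby fragments into one line or box.
laparams = LAParams(
    char_margin=2.0,  # max horizontal gap (relative to char width) within a line
    line_margin=0.5,  # max vertical gap between lines grouped into one box
    word_margin=0.1,  # gap above which a space is inserted between words
)

for page in extract_pages("input.pdf", laparams=laparams):
    for element in page:
        if isinstance(element, LTFigure):
            continue  # skip embedded figures/images
        if isinstance(element, LTTextContainer):
            x0, y0, x1, y1 = element.bbox  # position of the box on the page
            print(f"({x0:.0f}, {y0:.0f})-({x1:.0f}, {y1:.0f}): "
                  f"{element.get_text().strip()!r}")
```

Once you have the bounding boxes, you could, for instance, drop boxes whose position or shape suggests a table cell or a figure caption.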
But your problem is really not an easy one to solve, because in a PDF the text is not continuous: it is made of many small groups of characters positioned absolutely on the page. The focus of PDF is to keep the layout intact; it is presentation oriented, not content oriented. You can see this for yourself with the snippet below.
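Here is a small sketch (again assuming pdfminer.six and a placeholder file name) that prints every character of the first page with its own absolute bounding box, which makes the fragmentation visible:

```python
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTChar, LTTextContainer, LTTextLine

# Dump each glyph with its absolute coordinates: even characters that
# read as one continuous sentence are stored as independently
# positioned fragments.
for page in extract_pages("input.pdf", maxpages=1):
    for element in page:
        if not isinstance(element, LTTextContainer):
            continue
        for line in element:
            if not isinstance(line, LTTextLine):
                continue
            for obj in line:
                if isinstance(obj, LTChar):  # LTAnno (virtual spaces) has no bbox
                    print(obj.get_text(), tuple(round(v, 1) for v in obj.bbox))
```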