I think you're reading those stats incorrectly. They show that Python is up to about 400 times slower than C++, and, with the exception of a single case, it's also more of a memory hog. When it comes to source size, though, Python wins flat out.
My experience with Python shows the same trend: Python is on the order of 10 to 100 times slower than C++ when doing any serious number crunching. There are many reasons for this, the major ones being: a) Python is interpreted, while C++ is compiled to native code; b) Python has no primitives, so everything, including the built-in types (int, float, etc.), is a full object; c) a Python list can hold objects of different types, so each entry is really a pointer to an object that carries its own type information rather than a packed raw value. All of this hurts both runtime and memory consumption.
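To make points b) and c) concrete, here is a minimal sketch of the per-object overhead (numbers are from CPython on a 64-bit machine and will vary by version and platform, so treat them as illustrative only):

    import sys

    # A C++ int is typically 4 bytes; a Python int is a full object with a
    # reference count and a type pointer, so it is several times larger.
    print(sys.getsizeof(7))        # e.g. 28 bytes on 64-bit CPython 3.x

    # A list of a million ints stores a million pointers *plus* the int
    # objects they point to, not a packed array of 4-byte values.
    xs = list(range(1_000_000))
    print(sys.getsizeof(xs))       # size of the pointer array alone, ~8 MB

The second number doesn't even include the int objects themselves, which is why "a list of a million numbers" costs far more in Python than the equivalent C++ array.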
This is no reason to ignore Python though. A lot of software doesn't require much time or memory even with a 100x slowdown factor. Development cost is where Python wins, thanks to its simple and concise style, and that saving in development cost often outweighs the cost of the extra CPU and memory resources. When it doesn't, C++ wins.
My experience matches the benchmarks: Python can be slow and uses more memory. But I write much, much less code, and it tends to work the first time, with far less debugging. And since Python manages memory for me, I save hours of chasing down memory leaks.
I think those stats show that Python is much slower and uses more memory for those benchmarks - are you sure you're reading them the right way up?
In my experience, which is mostly with writing network- and file-system-bound programs in Python, Python isn't significantly slower in any way that matters. For that kind of work, its benefits outweigh its costs.
It's still a bad comparison, since no one would do the number-crunching work that benchmarks tend to focus on in pure Python anyway. A better comparison would be the performance of realistic applications, or C++ versus NumPy, to get an idea of whether your program will be noticeably slower. Something like the sketch below makes the difference obvious.
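As a rough illustration of the NumPy point (a quick sketch, not a rigorous benchmark; the absolute numbers depend heavily on your hardware and library versions):

    import time
    import numpy as np

    n = 10_000_000
    xs = list(range(n))
    arr = np.arange(n, dtype=np.float64)

    # Pure-Python loop: every multiply and add goes through the interpreter
    # and boxes/unboxes integer objects along the way.
    t0 = time.perf_counter()
    total = 0.0
    for x in xs:
        total += x * x
    t1 = time.perf_counter()

    # NumPy pushes the same work into a compiled loop over a packed C array.
    t2 = time.perf_counter()
    total_np = float(np.dot(arr, arr))
    t3 = time.perf_counter()

    print(f"pure Python: {t1 - t0:.2f}s   NumPy: {t3 - t2:.2f}s")

The gap you see here is usually much closer to the gap you'd see against handwritten C++ than the pure-Python shootout numbers suggest.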
All the slowest (>100x) uses of Python on the shootout are scientific operations that need high GFLOP/s throughput. You should NOT use Python for those anyway. The correct way to use Python is to import a module that does those calculations for you, and then go have a relaxing afternoon with your family. That is the pythonic way :)
It's the same problem as always with managed, easy-to-use programming languages: they are slow (and sometimes memory-hungry).
These are languages for control rather than processing. If I had to write an application to transform images and had to use Python too, all the heavy processing could be written in C++ and hooked up to Python via bindings, while the interface and process control would definitely stay in Python.
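A minimal sketch of that split, assuming a hypothetical C++ core compiled to libimgproc.so that exports a C-ABI function blur(buffer, width, height) (the library name and signature are made up for illustration; in practice you might also reach for pybind11, Cython, or an existing binding):

    import ctypes

    # Hypothetical compiled C++ core, built separately and exposed with a C ABI.
    lib = ctypes.CDLL("./libimgproc.so")
    lib.blur.argtypes = [ctypes.POINTER(ctypes.c_ubyte), ctypes.c_int, ctypes.c_int]
    lib.blur.restype = None

    def blur_image(pixels: bytearray, width: int, height: int) -> None:
        # Python handles I/O, argument checking, and overall control flow;
        # the C++ routine does the per-pixel number crunching in place.
        buf = (ctypes.c_ubyte * len(pixels)).from_buffer(pixels)
        lib.blur(buf, width, height)

That way the Python side stays small and readable, and the part that actually needs to be fast never runs in the interpreter.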