Threads share a single process, and that process effectively runs your Python code on one core at a time; but you can use Python's multiprocessing module to call your functions in separate processes and use other cores, or the subprocess module, which can run your code and non-Python code too.
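For example, here is a minimal sketch of both approaches; the square function and the echo command are just placeholders for your own work:

    import multiprocessing
    import subprocess

    def square(n):
        # CPU-bound work runs in a separate process, so it can use another core
        return n * n

    if __name__ == "__main__":
        # multiprocessing: call a Python function across several worker processes
        with multiprocessing.Pool(processes=4) as pool:
            print(pool.map(square, range(10)))

        # subprocess: run any external command, Python or not
        result = subprocess.run(["echo", "hello from another process"],
                                capture_output=True, text=True)
        print(result.stdout.strip())

The __main__ guard matters on platforms that spawn worker processes by re-importing the module (Windows, recent macOS).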
CPython (the classic and prevalent implementation of Python) can't have more than one thread executing Python bytecode at the same time. This means compute-bound programs will only use one core. I/O operations and computation happening inside C extensions (such as NumPy) can still run in parallel.
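A quick way to see this for yourself: time the same compute-bound function run twice sequentially and then in two threads. On CPython the threaded version is no faster. The countdown loop below is just a stand-in for real work:

    import threading
    import time

    def countdown(n):
        # Pure-Python, CPU-bound work: holds the GIL the whole time
        while n > 0:
            n -= 1

    N = 10_000_000

    start = time.perf_counter()
    countdown(N)
    countdown(N)
    print("sequential:", time.perf_counter() - start)

    start = time.perf_counter()
    threads = [threading.Thread(target=countdown, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads:", time.perf_counter() - start)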
Other implementations of Python (such as Jython or PyPy) may behave differently; I'm less clear on their details.
The usual recommendation is to use many processes rather than many threads.
But CPython cannot use multiple cores when you are using regular threads for concurrency.
You can either use something like multiprocessing, celery or mpi4py to split the parallel work out into separate processes (a sketch follows below);
Or you can switch to something like Jython or IronPython, alternative interpreters that don't have a GIL.
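As a concrete sketch of the first option, the standard-library concurrent.futures module gives a compact way to farm work out to a pool of processes; the fib function and the input range here are only placeholders:

    from concurrent.futures import ProcessPoolExecutor

    def fib(n):
        # Deliberately slow, CPU-bound placeholder work
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    if __name__ == "__main__":
        # Each task runs in its own worker process, so each gets its own GIL
        with ProcessPoolExecutor(max_workers=4) as pool:
            for n, result in zip(range(25, 30), pool.map(fib, range(25, 30))):
                print(n, result)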
A softer solution is to use libraries that don't run afoul of the GIL for heavy CPU tasks; for instance, NumPy releases the GIL while it does its heavy lifting, so other Python threads can proceed. You can also use the ctypes library in this way.
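As a rough sketch (assuming NumPy is installed, and with matrix sizes picked arbitrarily): large matrix multiplications release the GIL while the underlying C/BLAS code runs, so issuing them from several threads can genuinely use more than one core. How much speedup you see depends on your BLAS build, which may already be multithreaded internally.

    import threading
    import numpy as np

    def multiply(a, b, results, i):
        # np.dot / @ on large arrays releases the GIL while the C/BLAS code runs,
        # so several of these threads can execute in parallel
        results[i] = a @ b

    rng = np.random.default_rng(0)
    mats = [rng.random((1500, 1500)) for _ in range(4)]
    results = [None] * 2

    threads = [
        threading.Thread(target=multiply, args=(mats[0], mats[1], results, 0)),
        threading.Thread(target=multiply, args=(mats[2], mats[3], results, 1)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print([r.shape for r in results])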
If you are not doing CPU-bound work, you can (mostly) ignore the GIL issue entirely, since Python releases the GIL while it's waiting for I/O.
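For example, a small sketch of I/O-bound threading (the URLs are placeholders): while one thread blocks on the network, the GIL is released and the others keep running.

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Placeholder URLs -- substitute whatever you actually need to fetch
    URLS = [
        "https://www.python.org",
        "https://docs.python.org/3/",
        "https://pypi.org",
    ]

    def fetch(url):
        # The GIL is released while this thread waits on the network
        with urlopen(url, timeout=10) as response:
            return url, len(response.read())

    with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
        for url, size in pool.map(fetch, URLS):
            print(url, size, "bytes")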
Python threads cannot take advantage of many cores. This is due to an internal implementation detail called the GIL (Global Interpreter Lock) in the C implementation of Python (CPython), which is almost certainly the one you use.