• Mynameisspam1@fediverser.communick.dev · 1 year ago

    It’s because, in Python, you don’t actually get a parallel speed-up from threads for CPU-heavy tasks, even for embarrassingly parallel problems. CPython protects its internal state and built-in objects with a single global lock (the Global Interpreter Lock), which only one thread can hold at a time, so only one thread executes Python bytecode at any given moment. You can see the effect with a quick timing test like the sketch below.
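
    A minimal sketch of that timing test (the countdown function and loop size are arbitrary choices for illustration): running a CPU-bound function twice in two threads takes about as long as running it twice in a row, because the threads just take turns holding the GIL.

    ```python
    import threading
    import time

    def count_down(n):
        # Pure-Python busy loop: CPU-bound, never waits on I/O,
        # so the GIL is the bottleneck when run in threads.
        while n > 0:
            n -= 1

    N = 20_000_000

    # Run the work twice in a row on one thread.
    start = time.perf_counter()
    count_down(N)
    count_down(N)
    print(f"serial:  {time.perf_counter() - start:.2f}s")

    # Run the same two units of work in two threads.
    start = time.perf_counter()
    threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"threads: {time.perf_counter() - start:.2f}s")  # about the same, often a bit slower
    ```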

    From the documentation of CPython’s built-in threading module:

    CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
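
    For completeness, a minimal sketch of the workaround the docs suggest, using the same arbitrary countdown function as above with concurrent.futures.ProcessPoolExecutor. Each worker is a separate process with its own interpreter and its own GIL, so the two calls genuinely run in parallel on a multi-core machine.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import time

    def count_down(n):
        while n > 0:
            n -= 1

    if __name__ == "__main__":  # guard is required where workers are spawned (Windows, macOS)
        N = 20_000_000

        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=2) as pool:
            # Each call runs in its own process; list() forces the map to complete.
            list(pool.map(count_down, [N, N]))
        print(f"processes: {time.perf_counter() - start:.2f}s")  # roughly half the serial time on 2+ cores
    ```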

    Until 3.13 we won’t have any built-in way to use multiple cores to speed up CPU-bound tasks in pure Python code, short of creating new processes. Sub-interpreters in 3.12 can each have their own GIL, but they won’t have a Python-level interface until 3.13 is released.