
Feature/issue 3311 test thread tbb exp#3314

Open
drezap wants to merge 23 commits into stan-dev:develop from drezap:feature/issue-3311-test-thread-tbb-exp

Conversation

@drezap
Member

@drezap drezap commented Apr 29, 2026

Summary

I wrote a class containing an operator for exp, which lets us use TBB to parallelize a for loop. At a low number of observations the gain from parallelization is marginal, but at a higher number of observations (around N = 32,000 using tbb::parallel_for) there is a speedup at 4 threads that is sustained as the size of the container increases.

Tests

I tested for numerical accuracy, which checks out. Moreover, I did the following performance tests:

  1. Low number of observations with threading, without threading, and scaling the number of threads (results seem to vary based on the number of processes running on my computer, but the speed-up is marginal).
  2. N = 10 million, scaling the number of threads. It does not crash, but after a certain number of threads the speedup plateaus and there is no gain from adding more.
  3. Scaling N at a fixed number of threads. After a certain number of observations (2^15) there is a definite speedup even at 4 threads. At 2 threads we don't see an advantage until N = 2^30, but with more threads the benefit kicks in at a lower number of observations.

Side Effects

  1. Yes. If threads kick in too early, there's actually a slowdown in computing exp on a vector with a low number of observations. Maybe it would be good to have a default minimum, so threads only kick in once the dataset reaches a certain size. Moreover, this is just one function, so the result may be different for a composite function (e.g. a Gaussian); that might be advantageous even at lower numbers of observations, but I have not evaluated it.

  2. What I've done is add a directive that runs the multithreaded code only for vectors, and calls the original code (copy-pasted into the STAN_THREADS section) when the function is not threaded for exp. I'd be open to a quick refactor if we wanted to set it up like OpenCL, with a threads directory under stan/math/prim.

Release notes

?

Checklist

  • Copyright holder: (Andre Zapico, Likely LLC, 2026)

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to the license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass, (make test-headers)
    • dependencies checks pass, (make test-math-dependencies)
    • docs build, (make doxygen)
    • code passes the built in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

drezap and others added 15 commits April 25, 2026 14:56
parallel_for, blocked range compiles for stan::math::exp

compiling blocked_range works fine

some progress, now a type deduction issue?

ok something closer...

implement struct version for parallel_for... uncompiled

begin new class to use parallel for

almost compiles...

getting close, have template deduction failed which we can figure out

almost compiles

hold on

compiles

remove dead code

compiled parallel_for, blocked_range for stan::math::exp

compiled parallel_for, blocked_range for stan::math::exp
@drezap
Member Author

drezap commented Apr 29, 2026

Hold on, sorry, I should rebase. I have some questions; does anyone have comments, or is this all on me? Namely: the refactor, and using threads at a lower number of observations.

@SteveBronder
Collaborator

Do you have a graph that shows the speedup? Overall I'd be kind of cautious about introducing lower-level threading like this. Like you saw, whether you get a speedup or a slowdown depends a lot on the number of observations. So for every vector operation we would have to check that the size exceeded some threshold. That threshold is going to vary a lot per computer, and I think if we are not careful this could make the codebase kind of funky.

The other piece here is that this works for prim functions of double type, but parallelism is much harder for reverse mode, which is the main piece of the math library we worry about. The main issue is handling how the global AD tape should sync when we have jobs across N threads. @andrjohns thought for a long while about how to do a nice parallel map(...) style function for reverse mode autodiff. I'm not sure he came up with something he found satisfying; I have not either, honestly. Essentially you need to shard the operation over N shards, which will have N autodiff stacks, then once the parallel computation is done you have to pass those autodiff stacks back and put them onto the main thread's stack. So there you would get performance benefits from setting up the forward pass in parallel, but the reverse pass would still be serial and you pay the cost of the sharding and thread startup. I'm fairly certain there is a way to do the forward and reverse passes in parallel, but nothing has ever come to me for this problem.

@drezap
Member Author

drezap commented Apr 30, 2026 via email

@drezap
Member Author

drezap commented May 1, 2026

I'm running the continuous integration tests; it looks like they're mostly passing now.
Remaining:

  • I haven't thought about rev autodiff yet.
  • I want to see what's going to happen with posteriordb tests
  • I'll do a run with some Stan models on this branch locally so nothing breaks
  • Refactor so that the threaded code is in its own directory, like OpenCL. What I did was add a directive and copy-paste the unthreaded prim code, then thread just the vectorized part; this is kinda sloppy.
  • I also just wrapped the complex tests and code in #if 0 / #endif in the threaded version. I could template it if it's really desired, but to speed things up I just didn't compile complex-number support. If it's used a lot, I can fix it.

I also need to consider threading the rev autodiff stack. It would be cool if different threads could build different expression trees; I think that's what Steve was saying.

But if this adds an incremental speed increase, why not?

Regarding Steve's comment, I can think about it, but here I'm not parallelizing anything on the stack, just the evaluation of exp, so that's a bit of a different topic.

@stan-buildbot
Contributor


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
| --- | --- | --- | --- | --- |
| gp_regr/gp_regr.stan | 0.1 | 0.09 | 1.12 | 10.42% faster |
| gp_regr/gen_gp_data.stan | 0.03 | 0.02 | 1.12 | 10.64% faster |
| arK/arK.stan | 2.01 | 1.73 | 1.16 | 13.67% faster |
| eight_schools/eight_schools.stan | 0.06 | 0.05 | 1.13 | 11.44% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 9.35 | 8.41 | 1.11 | 10.11% faster |
| pkpd/one_comp_mm_elim_abs.stan | 20.42 | 18.56 | 1.1 | 9.14% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.27 | 0.24 | 1.11 | 9.86% faster |
| sir/sir.stan | 75.1 | 72.27 | 1.04 | 3.76% faster |
| gp_pois_regr/gp_pois_regr.stan | 3.05 | 2.94 | 1.04 | 3.61% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.83 | 2.79 | 1.02 | 1.56% faster |
| irt_2pl/irt_2pl.stan | 4.54 | 4.41 | 1.03 | 2.86% faster |
| arma/arma.stan | 0.32 | 0.31 | 1.01 | 1.22% faster |
| garch/garch.stan | 0.48 | 0.46 | 1.04 | 3.4% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.05 | 4.86% faster |
| performance.compilation | 221.46 | 228.05 | 0.97 | -2.98% slower |

Mean result: 1.069128278114437

Jenkins Console Log
Blue Ocean
Commit hash: e0729e1cdec40e8ec3da60b40b20a2cfc223fc94


Machine information No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.3 LTS Release: 20.04 Codename: focal

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 2400.000
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Vmscape: Mitigation; IBPB before exit to userspace
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d arch_capabilities

G++:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Clang:
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

@drezap
Member Author

drezap commented May 2, 2026

Not sure why Jenkins emailed me SUCCESS when there are so many errors? I'm not seeing them locally.

I also named the branch wrong, but I'll just leave it until it's closed...

@stan-buildbot
Contributor


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
| --- | --- | --- | --- | --- |
| gp_regr/gp_regr.stan | 0.1 | 0.1 | 0.99 | -1.25% slower |
| gp_regr/gen_gp_data.stan | 0.03 | 0.02 | 1.02 | 1.91% faster |
| arK/arK.stan | 1.89 | 1.89 | 1.0 | -0.06% slower |
| eight_schools/eight_schools.stan | 0.06 | 0.06 | 1.02 | 1.5% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 9.1 | 9.2 | 0.99 | -1.08% slower |
| pkpd/one_comp_mm_elim_abs.stan | 20.18 | 20.28 | 0.99 | -0.51% slower |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.26 | 0.26 | 0.99 | -0.98% slower |
| sir/sir.stan | 74.16 | 74.3 | 1.0 | -0.19% slower |
| gp_pois_regr/gp_pois_regr.stan | 2.89 | 2.91 | 1.0 | -0.48% slower |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.82 | 2.79 | 1.01 | 1.0% faster |
| irt_2pl/irt_2pl.stan | 4.52 | 4.47 | 1.01 | 1.22% faster |
| arma/arma.stan | 0.31 | 0.31 | 1.01 | 0.53% faster |
| garch/garch.stan | 0.46 | 0.45 | 1.01 | 1.4% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.13 | 11.68% faster |
| performance.compilation | 234.12 | 225.05 | 1.04 | 3.87% faster |

Mean result: 1.0136082132108795

Jenkins Console Log
Blue Ocean
Commit hash: e0729e1cdec40e8ec3da60b40b20a2cfc223fc94


Machine information: identical to the previous buildbot run above.
