Single Instruction Multiple Data
Before we specifically parallelize code, let us talk about a built-in mechanism called Single Instruction Multiple Data, or SIMD for short. The main idea is that central processing units (CPUs), or basically any arithmetic logic unit (ALU), can perform the same operation on multiple inputs in a single clock cycle. This has long been exploited by BLAS and LAPACK, for example through so-called loop unrolling.
Let us consider the following example
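(a minimal sketch of such an element-wise vector addition; the names a, b, and c are illustrative)

a = rand(8)
b = rand(8)
c = similar(a)              # preallocated result vector

for i in eachindex(a, b, c)
    c[i] = a[i] + b[i]      # one scalar addition per iteration
end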
which should look pretty familiar from the basic vector addition. As mentioned, modern processors have vector units that can process several of these operations at once, basically:
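(a conceptual sketch; the width of 4 elements is an assumption for illustration and depends on the actual hardware)

# With the a, b, c from above and some index i, a 4-wide vector unit can
# conceptually perform four of the scalar additions in a single instruction:
i = 1
c[i:i+3] .= a[i:i+3] .+ b[i:i+3]    # one "vector add" instead of four scalar adds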
or visualized as:
Even if you do not see it right away, we can modify our sum over a vector to learn how Julia incorporates the SIMD concept, and why it is usually better to call library functions than to program them on your own. As we already know how to benchmark, let us figure out whether our sum function is doing a good job.
using BenchmarkTools

# A naive sequential sum over all entries of V.
function my_sum(V)
    result = zero(eltype(V))
    for i in eachindex(V)
        @inbounds result += V[i]
    end
    return result
end

a = rand(100_000)

println("Simple sum:")
@btime my_sum(a)
println()
println("Built-in sum:")
@btime sum(a)
Simple sum:
92.553 μs (1 allocation: 16 bytes)
Built-in sum:
11.101 μs (1 allocation: 16 bytes)
50092.66051872485
As we can see, our hand-written version is slower. Exactly how much slower depends on your CPU architecture, but a factor between 2 and 16 is typical.
In order to enable SIMD in our own code (if it is not already done by library calls), we can use the @simd macro. It works whether we loop over the indices or directly over the elements; Julia is quite flexible there.
using BenchmarkTools

# Naive sequential sum (as before).
function my_sum(V)
    result = zero(eltype(V))
    for i in eachindex(V)
        @inbounds result += V[i]
    end
    return result
end

# Same loop over the indices, but with the @simd annotation.
function my_sum_simd(V)
    result = zero(eltype(V))
    @simd for i in eachindex(V)
        @inbounds result += V[i]
    end
    return result
end

# Loop directly over the elements instead of the indices.
function my_sum_elem_access(V)
    s = zero(eltype(V))
    for v in V
        s += v
    end
    return s
end

# Direct element access combined with @simd.
function my_sum_simd_elem_access(V)
    s = zero(eltype(V))
    @simd for v in V
        s += v
    end
    return s
end
a = rand(100_000)
println("Simple sum")
@show my_sum(a)
@btime my_sum($a)
println()
println("Built-in sum")
@show sum(a)
@btime sum($a)
println()
println("Simple sum with SIMD")
@show my_sum_simd(a)
@btime my_sum_simd($a)
println()
println("Simple my_sum with direct element access")
@show my_sum_elem_access(a)
@btime my_sum_elem_access($a)
println()
println("Simple sum with SIMD and direct element access")
@show my_sum_simd_elem_access(a)
@btime my_sum_simd_elem_access($a)
Simple sum
my_sum(a) = 50006.27334164475
92.553 μs (0 allocations: 0 bytes)
Built-in sum
sum(a) = 50006.27334164441
10.470 μs (0 allocations: 0 bytes)
Simple sum with SIMD
my_sum_simd(a) = 50006.273341644395
9.558 μs (0 allocations: 0 bytes)
Simple my_sum with direct element access
my_sum_elem_access(a) = 50006.27334164475
92.543 μs (0 allocations: 0 bytes)
Simple sum with SIMD and direct element access
my_sum_simd_elem_access(a) = 50006.273341644395
9.387 μs (0 allocations: 0 bytes)
50006.273341644395
We can see a massive speed-up (its size depends on the CPU architecture you are running your code on). What is also interesting is that the results of the different implementations are not all the same.
This is because the numerics involved are a bit tricky. In short, when adding floating-point numbers, accuracy is lost whenever a small number is added to a much larger one. This is exactly what happens in our first version, as we add all the numbers in one long sequence and the accumulated result keeps growing.
The built-in sum function as well as the @simd macro allow Julia to change the order of the operations. In this specific case, it boils down to accumulating several partial sums in parallel (one per SIMD lane) and combining them at the end, which also happens to be a bit more accurate.
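To see how the summation order can change a floating-point result, consider the following small illustration (the split into two halves is only a stand-in for the lane-wise partial sums, and the values are chosen to make the effect obvious):

function naive_sum(xs)
    s = zero(eltype(xs))
    for x in xs
        s += x
    end
    return s
end

xs = [1.0f0; fill(1.0f-8, 1_000_000)]    # one large value followed by many tiny ones

s_naive = naive_sum(xs)                                       # tiny values vanish against 1.0f0
s_split = naive_sum(xs[1:2:end]) + naive_sum(xs[2:2:end])     # two partial sums, combined at the end

println((s_naive, s_split))              # the two results differ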
If you are not sure whether something is vectorized, you can inspect the LLVM code for the two versions and compare them (hint: look for a block called vector.ph).
using InteractiveUtils
@code_llvm my_sum(a)
printstyled("\n------Separator-------\n\n"; color = :red)
@code_llvm my_sum_simd(a)
; @ none:1 within `my_sum`
define double @julia_my_sum_2004({}* noundef nonnull align 16 dereferenceable(40) %0) #0 {
top:
; @ none:4 within `my_sum`
; ┌ @ abstractarray.jl:318 within `eachindex`
; │┌ @ abstractarray.jl:134 within `axes1`
; ││┌ @ abstractarray.jl:98 within `axes`
; │││┌ @ array.jl:191 within `size`
%1 = bitcast {}* %0 to { i8*, i64, i16, i16, i32 }*
%arraylen_ptr = getelementptr inbounds { i8*, i64, i16, i16, i32 }, { i8*, i64, i16, i16, i32 }* %1, i64 0, i32 1
%arraylen = load i64, i64* %arraylen_ptr, align 8
; └└└└
; ┌ @ range.jl:897 within `iterate`
; │┌ @ range.jl:672 within `isempty`
; ││┌ @ operators.jl:378 within `>`
; │││┌ @ int.jl:83 within `<`
%.not.not = icmp eq i64 %arraylen, 0
; └└└└
br i1 %.not.not, label %L29, label %L13.preheader
L13.preheader: ; preds = %top
%2 = bitcast {}* %0 to double**
%arrayptr12 = load double*, double** %2, align 8
; @ none:6 within `my_sum`
%3 = add nsw i64 %arraylen, -1
%xtraiter = and i64 %arraylen, 7
%4 = icmp ult i64 %3, 7
br i1 %4, label %L29.loopexit.unr-lcssa, label %L13.preheader.new
L13.preheader.new: ; preds = %L13.preheader
%unroll_iter = and i64 %arraylen, 9223372036854775800
br label %L13
L13: ; preds = %L13, %L13.preheader.new
%value_phi3 = phi i64 [ 1, %L13.preheader.new ], [ %28, %L13 ]
%value_phi5 = phi double [ 0.000000e+00, %L13.preheader.new ], [ %27, %L13 ]
%niter = phi i64 [ 0, %L13.preheader.new ], [ %niter.next.7, %L13 ]
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%5 = add nsw i64 %value_phi3, -1
%6 = getelementptr inbounds double, double* %arrayptr12, i64 %5
%arrayref = load double, double* %6, align 8
; └
; ┌ @ float.jl:409 within `+`
%7 = fadd double %value_phi5, %arrayref
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%8 = add nuw nsw i64 %value_phi3, 1
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%9 = getelementptr inbounds double, double* %arrayptr12, i64 %value_phi3
%arrayref.1 = load double, double* %9, align 8
; └
; ┌ @ float.jl:409 within `+`
%10 = fadd double %7, %arrayref.1
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%11 = add nuw nsw i64 %value_phi3, 2
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%12 = getelementptr inbounds double, double* %arrayptr12, i64 %8
%arrayref.2 = load double, double* %12, align 8
; └
; ┌ @ float.jl:409 within `+`
%13 = fadd double %10, %arrayref.2
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%14 = add nuw nsw i64 %value_phi3, 3
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%15 = getelementptr inbounds double, double* %arrayptr12, i64 %11
%arrayref.3 = load double, double* %15, align 8
; └
; ┌ @ float.jl:409 within `+`
%16 = fadd double %13, %arrayref.3
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%17 = add nuw nsw i64 %value_phi3, 4
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%18 = getelementptr inbounds double, double* %arrayptr12, i64 %14
%arrayref.4 = load double, double* %18, align 8
; └
; ┌ @ float.jl:409 within `+`
%19 = fadd double %16, %arrayref.4
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%20 = add nuw nsw i64 %value_phi3, 5
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%21 = getelementptr inbounds double, double* %arrayptr12, i64 %17
%arrayref.5 = load double, double* %21, align 8
; └
; ┌ @ float.jl:409 within `+`
%22 = fadd double %19, %arrayref.5
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%23 = add nuw nsw i64 %value_phi3, 6
; └
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%24 = getelementptr inbounds double, double* %arrayptr12, i64 %20
%arrayref.6 = load double, double* %24, align 8
; └
; ┌ @ float.jl:409 within `+`
%25 = fadd double %22, %arrayref.6
; └
; ┌ @ essentials.jl:13 within `getindex`
%26 = getelementptr inbounds double, double* %arrayptr12, i64 %23
%arrayref.7 = load double, double* %26, align 8
; └
; ┌ @ float.jl:409 within `+`
%27 = fadd double %25, %arrayref.7
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%28 = add nuw nsw i64 %value_phi3, 8
; └
%niter.next.7 = add i64 %niter, 8
%niter.ncmp.7 = icmp eq i64 %niter.next.7, %unroll_iter
br i1 %niter.ncmp.7, label %L29.loopexit.unr-lcssa, label %L13
L29.loopexit.unr-lcssa: ; preds = %L13, %L13.preheader
%.lcssa.ph = phi double [ undef, %L13.preheader ], [ %27, %L13 ]
%value_phi3.unr = phi i64 [ 1, %L13.preheader ], [ %28, %L13 ]
%value_phi5.unr = phi double [ 0.000000e+00, %L13.preheader ], [ %27, %L13 ]
%lcmp.mod.not = icmp eq i64 %xtraiter, 0
br i1 %lcmp.mod.not, label %L29, label %L13.epil
L13.epil: ; preds = %L13.epil, %L29.loopexit.unr-lcssa
%value_phi3.epil = phi i64 [ %32, %L13.epil ], [ %value_phi3.unr, %L29.loopexit.unr-lcssa ]
%value_phi5.epil = phi double [ %31, %L13.epil ], [ %value_phi5.unr, %L29.loopexit.unr-lcssa ]
%epil.iter = phi i64 [ %epil.iter.next, %L13.epil ], [ 0, %L29.loopexit.unr-lcssa ]
; @ none:5 within `my_sum`
; ┌ @ essentials.jl:13 within `getindex`
%29 = add nsw i64 %value_phi3.epil, -1
%30 = getelementptr inbounds double, double* %arrayptr12, i64 %29
%arrayref.epil = load double, double* %30, align 8
; └
; ┌ @ float.jl:409 within `+`
%31 = fadd double %value_phi5.epil, %arrayref.epil
; └
; @ none:6 within `my_sum`
; ┌ @ range.jl:901 within `iterate`
%32 = add nuw nsw i64 %value_phi3.epil, 1
; └
%epil.iter.next = add i64 %epil.iter, 1
%epil.iter.cmp.not = icmp eq i64 %epil.iter.next, %xtraiter
br i1 %epil.iter.cmp.not, label %L29, label %L13.epil
L29: ; preds = %L13.epil, %L29.loopexit.unr-lcssa, %top
%value_phi9 = phi double [ 0.000000e+00, %top ], [ %.lcssa.ph, %L29.loopexit.unr-lcssa ], [ %31, %L13.epil ]
; @ none:8 within `my_sum`
ret double %value_phi9
}
------Separator-------
; @ none:1 within `my_sum_simd`
define double @julia_my_sum_simd_2006({}* noundef nonnull align 16 dereferenceable(40) %0) #0 {
top:
; @ none:4 within `my_sum_simd`
; ┌ @ simdloop.jl:69 within `macro expansion`
; │┌ @ abstractarray.jl:318 within `eachindex`
; ││┌ @ abstractarray.jl:134 within `axes1`
; │││┌ @ abstractarray.jl:98 within `axes`
; ││││┌ @ array.jl:191 within `size`
%1 = bitcast {}* %0 to { i8*, i64, i16, i16, i32 }*
%arraylen_ptr = getelementptr inbounds { i8*, i64, i16, i16, i32 }, { i8*, i64, i16, i16, i32 }* %1, i64 0, i32 1
%arraylen = load i64, i64* %arraylen_ptr, align 8
; │└└└└
; │ @ simdloop.jl:72 within `macro expansion`
; │┌ @ int.jl:83 within `<`
%.not = icmp eq i64 %arraylen, 0
; │└
br i1 %.not, label %L30, label %L13.lr.ph
L13.lr.ph: ; preds = %top
%2 = bitcast {}* %0 to double**
%arrayptr6 = load double*, double** %2, align 8
; │ @ simdloop.jl:75 within `macro expansion`
%min.iters.check = icmp ult i64 %arraylen, 16
br i1 %min.iters.check, label %scalar.ph, label %vector.ph
vector.ph: ; preds = %L13.lr.ph
%n.vec = and i64 %arraylen, 9223372036854775792
%3 = add nsw i64 %n.vec, -16
%4 = lshr exact i64 %3, 4
%5 = add nuw nsw i64 %4, 1
%xtraiter = and i64 %5, 7
%6 = icmp ult i64 %3, 112
br i1 %6, label %middle.block.unr-lcssa, label %vector.ph.new
vector.ph.new: ; preds = %vector.ph
%unroll_iter = and i64 %5, 2305843009213693944
br label %vector.body
vector.body: ; preds = %vector.body, %vector.ph.new
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index = phi i64 [ 0, %vector.ph.new ], [ %index.next.7, %vector.body ]
%vec.phi = phi <4 x double> [ <double 0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph.new ], [ %99, %vector.body ]
%vec.phi10 = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph.new ], [ %100, %vector.body ]
%vec.phi11 = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph.new ], [ %101, %vector.body ]
%vec.phi12 = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph.new ], [ %102, %vector.body ]
%niter = phi i64 [ 0, %vector.ph.new ], [ %niter.next.7, %vector.body ]
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%7 = getelementptr inbounds double, double* %arrayptr6, i64 %index
%8 = bitcast double* %7 to <4 x double>*
%wide.load = load <4 x double>, <4 x double>* %8, align 8
%9 = getelementptr inbounds double, double* %7, i64 4
%10 = bitcast double* %9 to <4 x double>*
%wide.load13 = load <4 x double>, <4 x double>* %10, align 8
%11 = getelementptr inbounds double, double* %7, i64 8
%12 = bitcast double* %11 to <4 x double>*
%wide.load14 = load <4 x double>, <4 x double>* %12, align 8
%13 = getelementptr inbounds double, double* %7, i64 12
%14 = bitcast double* %13 to <4 x double>*
%wide.load15 = load <4 x double>, <4 x double>* %14, align 8
; │└
; │┌ @ float.jl:409 within `+`
%15 = fadd reassoc contract <4 x double> %vec.phi, %wide.load
%16 = fadd reassoc contract <4 x double> %vec.phi10, %wide.load13
%17 = fadd reassoc contract <4 x double> %vec.phi11, %wide.load14
%18 = fadd reassoc contract <4 x double> %vec.phi12, %wide.load15
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next = or i64 %index, 16
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%19 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next
%20 = bitcast double* %19 to <4 x double>*
%wide.load.1 = load <4 x double>, <4 x double>* %20, align 8
%21 = getelementptr inbounds double, double* %19, i64 4
%22 = bitcast double* %21 to <4 x double>*
%wide.load13.1 = load <4 x double>, <4 x double>* %22, align 8
%23 = getelementptr inbounds double, double* %19, i64 8
%24 = bitcast double* %23 to <4 x double>*
%wide.load14.1 = load <4 x double>, <4 x double>* %24, align 8
%25 = getelementptr inbounds double, double* %19, i64 12
%26 = bitcast double* %25 to <4 x double>*
%wide.load15.1 = load <4 x double>, <4 x double>* %26, align 8
; │└
; │┌ @ float.jl:409 within `+`
%27 = fadd reassoc contract <4 x double> %15, %wide.load.1
%28 = fadd reassoc contract <4 x double> %16, %wide.load13.1
%29 = fadd reassoc contract <4 x double> %17, %wide.load14.1
%30 = fadd reassoc contract <4 x double> %18, %wide.load15.1
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.1 = or i64 %index, 32
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%31 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.1
%32 = bitcast double* %31 to <4 x double>*
%wide.load.2 = load <4 x double>, <4 x double>* %32, align 8
%33 = getelementptr inbounds double, double* %31, i64 4
%34 = bitcast double* %33 to <4 x double>*
%wide.load13.2 = load <4 x double>, <4 x double>* %34, align 8
%35 = getelementptr inbounds double, double* %31, i64 8
%36 = bitcast double* %35 to <4 x double>*
%wide.load14.2 = load <4 x double>, <4 x double>* %36, align 8
%37 = getelementptr inbounds double, double* %31, i64 12
%38 = bitcast double* %37 to <4 x double>*
%wide.load15.2 = load <4 x double>, <4 x double>* %38, align 8
; │└
; │┌ @ float.jl:409 within `+`
%39 = fadd reassoc contract <4 x double> %27, %wide.load.2
%40 = fadd reassoc contract <4 x double> %28, %wide.load13.2
%41 = fadd reassoc contract <4 x double> %29, %wide.load14.2
%42 = fadd reassoc contract <4 x double> %30, %wide.load15.2
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.2 = or i64 %index, 48
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%43 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.2
%44 = bitcast double* %43 to <4 x double>*
%wide.load.3 = load <4 x double>, <4 x double>* %44, align 8
%45 = getelementptr inbounds double, double* %43, i64 4
%46 = bitcast double* %45 to <4 x double>*
%wide.load13.3 = load <4 x double>, <4 x double>* %46, align 8
%47 = getelementptr inbounds double, double* %43, i64 8
%48 = bitcast double* %47 to <4 x double>*
%wide.load14.3 = load <4 x double>, <4 x double>* %48, align 8
%49 = getelementptr inbounds double, double* %43, i64 12
%50 = bitcast double* %49 to <4 x double>*
%wide.load15.3 = load <4 x double>, <4 x double>* %50, align 8
; │└
; │┌ @ float.jl:409 within `+`
%51 = fadd reassoc contract <4 x double> %39, %wide.load.3
%52 = fadd reassoc contract <4 x double> %40, %wide.load13.3
%53 = fadd reassoc contract <4 x double> %41, %wide.load14.3
%54 = fadd reassoc contract <4 x double> %42, %wide.load15.3
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.3 = or i64 %index, 64
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%55 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.3
%56 = bitcast double* %55 to <4 x double>*
%wide.load.4 = load <4 x double>, <4 x double>* %56, align 8
%57 = getelementptr inbounds double, double* %55, i64 4
%58 = bitcast double* %57 to <4 x double>*
%wide.load13.4 = load <4 x double>, <4 x double>* %58, align 8
%59 = getelementptr inbounds double, double* %55, i64 8
%60 = bitcast double* %59 to <4 x double>*
%wide.load14.4 = load <4 x double>, <4 x double>* %60, align 8
%61 = getelementptr inbounds double, double* %55, i64 12
%62 = bitcast double* %61 to <4 x double>*
%wide.load15.4 = load <4 x double>, <4 x double>* %62, align 8
; │└
; │┌ @ float.jl:409 within `+`
%63 = fadd reassoc contract <4 x double> %51, %wide.load.4
%64 = fadd reassoc contract <4 x double> %52, %wide.load13.4
%65 = fadd reassoc contract <4 x double> %53, %wide.load14.4
%66 = fadd reassoc contract <4 x double> %54, %wide.load15.4
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.4 = or i64 %index, 80
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%67 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.4
%68 = bitcast double* %67 to <4 x double>*
%wide.load.5 = load <4 x double>, <4 x double>* %68, align 8
%69 = getelementptr inbounds double, double* %67, i64 4
%70 = bitcast double* %69 to <4 x double>*
%wide.load13.5 = load <4 x double>, <4 x double>* %70, align 8
%71 = getelementptr inbounds double, double* %67, i64 8
%72 = bitcast double* %71 to <4 x double>*
%wide.load14.5 = load <4 x double>, <4 x double>* %72, align 8
%73 = getelementptr inbounds double, double* %67, i64 12
%74 = bitcast double* %73 to <4 x double>*
%wide.load15.5 = load <4 x double>, <4 x double>* %74, align 8
; │└
; │┌ @ float.jl:409 within `+`
%75 = fadd reassoc contract <4 x double> %63, %wide.load.5
%76 = fadd reassoc contract <4 x double> %64, %wide.load13.5
%77 = fadd reassoc contract <4 x double> %65, %wide.load14.5
%78 = fadd reassoc contract <4 x double> %66, %wide.load15.5
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.5 = or i64 %index, 96
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%79 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.5
%80 = bitcast double* %79 to <4 x double>*
%wide.load.6 = load <4 x double>, <4 x double>* %80, align 8
%81 = getelementptr inbounds double, double* %79, i64 4
%82 = bitcast double* %81 to <4 x double>*
%wide.load13.6 = load <4 x double>, <4 x double>* %82, align 8
%83 = getelementptr inbounds double, double* %79, i64 8
%84 = bitcast double* %83 to <4 x double>*
%wide.load14.6 = load <4 x double>, <4 x double>* %84, align 8
%85 = getelementptr inbounds double, double* %79, i64 12
%86 = bitcast double* %85 to <4 x double>*
%wide.load15.6 = load <4 x double>, <4 x double>* %86, align 8
; │└
; │┌ @ float.jl:409 within `+`
%87 = fadd reassoc contract <4 x double> %75, %wide.load.6
%88 = fadd reassoc contract <4 x double> %76, %wide.load13.6
%89 = fadd reassoc contract <4 x double> %77, %wide.load14.6
%90 = fadd reassoc contract <4 x double> %78, %wide.load15.6
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.6 = or i64 %index, 112
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%91 = getelementptr inbounds double, double* %arrayptr6, i64 %index.next.6
%92 = bitcast double* %91 to <4 x double>*
%wide.load.7 = load <4 x double>, <4 x double>* %92, align 8
%93 = getelementptr inbounds double, double* %91, i64 4
%94 = bitcast double* %93 to <4 x double>*
%wide.load13.7 = load <4 x double>, <4 x double>* %94, align 8
%95 = getelementptr inbounds double, double* %91, i64 8
%96 = bitcast double* %95 to <4 x double>*
%wide.load14.7 = load <4 x double>, <4 x double>* %96, align 8
%97 = getelementptr inbounds double, double* %91, i64 12
%98 = bitcast double* %97 to <4 x double>*
%wide.load15.7 = load <4 x double>, <4 x double>* %98, align 8
; │└
; │┌ @ float.jl:409 within `+`
%99 = fadd reassoc contract <4 x double> %87, %wide.load.7
%100 = fadd reassoc contract <4 x double> %88, %wide.load13.7
%101 = fadd reassoc contract <4 x double> %89, %wide.load14.7
%102 = fadd reassoc contract <4 x double> %90, %wide.load15.7
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.7 = add nuw i64 %index, 128
%niter.next.7 = add i64 %niter, 8
%niter.ncmp.7 = icmp eq i64 %niter.next.7, %unroll_iter
br i1 %niter.ncmp.7, label %middle.block.unr-lcssa, label %vector.body
middle.block.unr-lcssa: ; preds = %vector.body, %vector.ph
%.lcssa21.ph = phi <4 x double> [ undef, %vector.ph ], [ %99, %vector.body ]
%.lcssa20.ph = phi <4 x double> [ undef, %vector.ph ], [ %100, %vector.body ]
%.lcssa19.ph = phi <4 x double> [ undef, %vector.ph ], [ %101, %vector.body ]
%.lcssa18.ph = phi <4 x double> [ undef, %vector.ph ], [ %102, %vector.body ]
%index.unr = phi i64 [ 0, %vector.ph ], [ %index.next.7, %vector.body ]
%vec.phi.unr = phi <4 x double> [ <double 0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph ], [ %99, %vector.body ]
%vec.phi10.unr = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph ], [ %100, %vector.body ]
%vec.phi11.unr = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph ], [ %101, %vector.body ]
%vec.phi12.unr = phi <4 x double> [ <double -0.000000e+00, double -0.000000e+00, double -0.000000e+00, double -0.000000e+00>, %vector.ph ], [ %102, %vector.body ]
%lcmp.mod.not = icmp eq i64 %xtraiter, 0
br i1 %lcmp.mod.not, label %middle.block, label %vector.body.epil
vector.body.epil: ; preds = %vector.body.epil, %middle.block.unr-lcssa
%index.epil = phi i64 [ %index.next.epil, %vector.body.epil ], [ %index.unr, %middle.block.unr-lcssa ]
%vec.phi.epil = phi <4 x double> [ %111, %vector.body.epil ], [ %vec.phi.unr, %middle.block.unr-lcssa ]
%vec.phi10.epil = phi <4 x double> [ %112, %vector.body.epil ], [ %vec.phi10.unr, %middle.block.unr-lcssa ]
%vec.phi11.epil = phi <4 x double> [ %113, %vector.body.epil ], [ %vec.phi11.unr, %middle.block.unr-lcssa ]
%vec.phi12.epil = phi <4 x double> [ %114, %vector.body.epil ], [ %vec.phi12.unr, %middle.block.unr-lcssa ]
%epil.iter = phi i64 [ %epil.iter.next, %vector.body.epil ], [ 0, %middle.block.unr-lcssa ]
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%103 = getelementptr inbounds double, double* %arrayptr6, i64 %index.epil
%104 = bitcast double* %103 to <4 x double>*
%wide.load.epil = load <4 x double>, <4 x double>* %104, align 8
%105 = getelementptr inbounds double, double* %103, i64 4
%106 = bitcast double* %105 to <4 x double>*
%wide.load13.epil = load <4 x double>, <4 x double>* %106, align 8
%107 = getelementptr inbounds double, double* %103, i64 8
%108 = bitcast double* %107 to <4 x double>*
%wide.load14.epil = load <4 x double>, <4 x double>* %108, align 8
%109 = getelementptr inbounds double, double* %103, i64 12
%110 = bitcast double* %109 to <4 x double>*
%wide.load15.epil = load <4 x double>, <4 x double>* %110, align 8
; │└
; │┌ @ float.jl:409 within `+`
%111 = fadd reassoc contract <4 x double> %vec.phi.epil, %wide.load.epil
%112 = fadd reassoc contract <4 x double> %vec.phi10.epil, %wide.load13.epil
%113 = fadd reassoc contract <4 x double> %vec.phi11.epil, %wide.load14.epil
%114 = fadd reassoc contract <4 x double> %vec.phi12.epil, %wide.load15.epil
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%index.next.epil = add nuw i64 %index.epil, 16
%epil.iter.next = add i64 %epil.iter, 1
%epil.iter.cmp.not = icmp eq i64 %epil.iter.next, %xtraiter
br i1 %epil.iter.cmp.not, label %middle.block, label %vector.body.epil
middle.block: ; preds = %vector.body.epil, %middle.block.unr-lcssa
; │└
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ float.jl:409 within `+`
%.lcssa21 = phi <4 x double> [ %.lcssa21.ph, %middle.block.unr-lcssa ], [ %111, %vector.body.epil ]
%.lcssa20 = phi <4 x double> [ %.lcssa20.ph, %middle.block.unr-lcssa ], [ %112, %vector.body.epil ]
%.lcssa19 = phi <4 x double> [ %.lcssa19.ph, %middle.block.unr-lcssa ], [ %113, %vector.body.epil ]
%.lcssa18 = phi <4 x double> [ %.lcssa18.ph, %middle.block.unr-lcssa ], [ %114, %vector.body.epil ]
; │└
; │ @ simdloop.jl:75 within `macro expansion`
%bin.rdx = fadd reassoc contract <4 x double> %.lcssa20, %.lcssa21
%bin.rdx16 = fadd reassoc contract <4 x double> %.lcssa19, %bin.rdx
%bin.rdx17 = fadd reassoc contract <4 x double> %.lcssa18, %bin.rdx16
%115 = call reassoc contract double @llvm.vector.reduce.fadd.v4f64(double -0.000000e+00, <4 x double> %bin.rdx17)
%cmp.n = icmp eq i64 %arraylen, %n.vec
br i1 %cmp.n, label %L30, label %scalar.ph
scalar.ph: ; preds = %middle.block, %L13.lr.ph
%bc.resume.val = phi i64 [ %n.vec, %middle.block ], [ 0, %L13.lr.ph ]
%bc.merge.rdx = phi double [ %115, %middle.block ], [ 0.000000e+00, %L13.lr.ph ]
br label %L13
L13: ; preds = %L13, %scalar.ph
%value_phi19 = phi i64 [ %bc.resume.val, %scalar.ph ], [ %118, %L13 ]
%value_phi8 = phi double [ %bc.merge.rdx, %scalar.ph ], [ %117, %L13 ]
; │ @ simdloop.jl:77 within `macro expansion` @ none:5
; │┌ @ essentials.jl:13 within `getindex`
%116 = getelementptr inbounds double, double* %arrayptr6, i64 %value_phi19
%arrayref = load double, double* %116, align 8
; │└
; │┌ @ float.jl:409 within `+`
%117 = fadd reassoc contract double %value_phi8, %arrayref
; │└
; │ @ simdloop.jl:78 within `macro expansion`
; │┌ @ int.jl:87 within `+`
%118 = add nuw nsw i64 %value_phi19, 1
; │└
; │ @ simdloop.jl:75 within `macro expansion`
; │┌ @ int.jl:83 within `<`
%exitcond.not = icmp eq i64 %118, %arraylen
; │└
br i1 %exitcond.not, label %L30, label %L13
L30: ; preds = %L13, %middle.block, %top
%value_phi2 = phi double [ 0.000000e+00, %top ], [ %115, %middle.block ], [ %117, %L13 ]
; └
; @ none:8 within `my_sum_simd`
ret double %value_phi2
}
The LLVM project provides the compiler toolchain that Julia uses for its just-in-time (JIT) compilation. Basically, Julia code is translated into an intermediate representation close to assembly (but quite readable once you get used to it), which is then compiled to machine code when needed. We could already see the JIT doing its magic at the beginning of the benchmarking section, as the function my_sum was compiled on its first run. Note: in general, packages are precompiled before they are used to improve performance.
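A quick way to observe the JIT at work is to time the first and the second call of a freshly defined function; the first call includes compilation, the second does not (a minimal sketch, where f is just an arbitrary example function):

f(x) = sum(abs2, x)      # some small function that has not been compiled yet

v = rand(1_000)
@time f(v)               # first call: includes compilation for Vector{Float64}
@time f(v)               # second call: already compiled, much faster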
Multiple dispatch
While we are on the subject of performance and JIT compilation, it is a good time to recall Julia's multiple dispatch capabilities.
As usual, this concept is best explained with an example. In our, by now familiar, sum example we never specified the type of the argument: as long as we could loop over it and add up the entries, everything was fine. That does not mean Julia does not care. In fact, we can take a look at what Julia does for different input types.
For this we use another macro from the InteractiveUtils package, namely @code_typed. Again, we get some intermediate code that Julia produces for us. This time it is a bit more compact but, most importantly, it has all the type information inferred from the input argument attached to it.
For an array of Int64 we get:
@code_typed optimize=false my_sum([1, 2, 3])
CodeInfo(
1 ─ %1 = eltype(V)::Core.Const(Int64)
│ (result = zero(%1))::Core.Const(0)
│ %3 = eachindex(V)::Base.OneTo{Int64}
│ (@_3 = Base.iterate(%3))::Union{Nothing, Tuple{Int64, Int64}}
│ %5 = (@_3 === nothing)::Bool
│ %6 = Base.not_int(%5)::Bool
└── goto #4 if not %6
2 ┄ %8 = @_3::Tuple{Int64, Int64}
│ (i = Core.getfield(%8, 1))::Int64
│ %10 = Core.getfield(%8, 2)::Int64
│ nothing::Core.Const(nothing)
│ %12 = result::Int64
│ %13 = Base.getindex(V, i)::Int64
│ %14 = (%12 + %13)::Int64
│ (result = %14)::Int64
│ (val = %14)::Int64
│ nothing::Core.Const(nothing)
│ val::Int64
│ (@_3 = Base.iterate(%3, %10))::Union{Nothing, Tuple{Int64, Int64}}
│ %20 = (@_3 === nothing)::Bool
│ %21 = Base.not_int(%20)::Bool
└── goto #4 if not %21
3 ─ goto #2
4 ┄ return result
) => Int64
and for Float64:
@code_typed optimize=false my_sum([1.0, 2.0, 3.0])
CodeInfo(
1 ─ %1 = eltype(V)::Core.Const(Float64)
│ (result = zero(%1))::Core.Const(0.0)
│ %3 = eachindex(V)::Base.OneTo{Int64}
│ (@_3 = Base.iterate(%3))::Union{Nothing, Tuple{Int64, Int64}}
│ %5 = (@_3 === nothing)::Bool
│ %6 = Base.not_int(%5)::Bool
└── goto #4 if not %6
2 ┄ %8 = @_3::Tuple{Int64, Int64}
│ (i = Core.getfield(%8, 1))::Int64
│ %10 = Core.getfield(%8, 2)::Int64
│ nothing::Core.Const(nothing)
│ %12 = result::Float64
│ %13 = Base.getindex(V, i)::Float64
│ %14 = (%12 + %13)::Float64
│ (result = %14)::Float64
│ (val = %14)::Float64
│ nothing::Core.Const(nothing)
│ val::Float64
│ (@_3 = Base.iterate(%3, %10))::Union{Nothing, Tuple{Int64, Int64}}
│ %20 = (@_3 === nothing)::Bool
│ %21 = Base.not_int(%20)::Bool
└── goto #4 if not %21
3 ─ goto #2
4 ┄ return result
) => Float64
We can see that in the first output everything is of type Int64, including the result. The second output has the same instructions but with Float64 as the type.
As we may have already seen throughout this workshop, we can define the same function name for different input arguments. This is most obvious for the basic math operators, but it holds for every function. Let us have a look at the + operator:
methods(+)
# 208 methods for generic function "+" from Base:
[1] +(level::Base.CoreLogging.LogLevel, inc::Integer)
@ Base.CoreLogging logging.jl:131
[2] +(x::Bool, z::Complex{Bool})
@ complex.jl:305
[3] +(x::Bool, y::Bool)
@ bool.jl:166
[4] +(x::Bool)
@ bool.jl:163
[5] +(x::Bool, z::Complex)
@ complex.jl:312
[6] +(x::Real, z::Complex{Bool})
@ complex.jl:319
[7] +(x::Bool, y::T) where T<:AbstractFloat
@ bool.jl:173
[8] +(z::Complex{Bool}, x::Bool)
@ complex.jl:306
[9] +(z::Complex{Bool}, x::Real)
@ complex.jl:320
[10] +(z::Complex, x::Bool)
@ complex.jl:313
[11] +(t::Dates.Time, dt::Dates.Date)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:22
[12] +(x::Dates.Time, y::Dates.TimePeriod)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:85
[13] +(a::Pkg.Resolve.FieldValue, b::Pkg.Resolve.FieldValue)
@ Pkg.Resolve /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Pkg/src/Resolve/fieldvalues.jl:43
[14] +(x::Rational{BigInt}, y::Rational{BigInt})
@ Base.GMP.MPQ gmp.jl:1061
[15] +(x::Dates.DateTime, y::Dates.Quarter)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:77
[16] +(dt::Dates.DateTime, z::Dates.Month)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:49
[17] +(dt::Dates.DateTime, y::Dates.Year)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:25
[18] +(x::Dates.DateTime, y::Dates.Period)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:83
[19] +(x::BigFloat, c::BigInt)
@ Base.MPFR mpfr.jl:463
[20] +(a::BigFloat, b::BigFloat, c::BigFloat, d::BigFloat, e::BigFloat)
@ Base.MPFR mpfr.jl:619
[21] +(x::BigFloat, y::BigFloat)
@ Base.MPFR mpfr.jl:432
[22] +(a::BigFloat, b::BigFloat, c::BigFloat)
@ Base.MPFR mpfr.jl:606
[23] +(a::BigFloat, b::BigFloat, c::BigFloat, d::BigFloat)
@ Base.MPFR mpfr.jl:612
[24] +(x::BigFloat, c::Union{UInt16, UInt32, UInt64, UInt8})
@ Base.MPFR mpfr.jl:439
[25] +(x::BigFloat, c::Union{Int16, Int32, Int64, Int8})
@ Base.MPFR mpfr.jl:447
[26] +(x::BigFloat, c::Union{Float16, Float32, Float64})
@ Base.MPFR mpfr.jl:455
[27] +(x::Dates.CompoundPeriod, y::Dates.CompoundPeriod)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:334
[28] +(x::Dates.CompoundPeriod, y::Dates.TimeType)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:362
[29] +(x::Dates.CompoundPeriod, y::Dates.Period)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:332
[30] +(x::Dates.Date, y::Dates.Day)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:81
[31] +(x::Dates.Date, y::Dates.Week)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:79
[32] +(x::Dates.Date, y::Dates.Quarter)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:75
[33] +(dt::Dates.Date, z::Dates.Month)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:56
[34] +(dt::Dates.Date, y::Dates.Year)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:29
[35] +(dt::Dates.Date, t::Dates.Time)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:21
[36] +(a::OpenSSL.BigNum, b::OpenSSL.BigNum)
@ OpenSSL ~/.julia/packages/OpenSSL/8wxMC/src/OpenSSL.jl:747
[37] +(a::Pkg.Resolve.VersionWeight, b::Pkg.Resolve.VersionWeight)
@ Pkg.Resolve /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Pkg/src/Resolve/versionweights.jl:22
[38] +(B::BitMatrix, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:151
[39] +(x::BigInt, y::BigInt)
@ Base.GMP gmp.jl:501
[40] +(a::BigInt, b::BigInt, c::BigInt)
@ Base.GMP gmp.jl:541
[41] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt)
@ Base.GMP gmp.jl:542
[42] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt, e::BigInt)
@ Base.GMP gmp.jl:543
[43] +(x::BigInt, y::BigInt, rest::BigInt...)
@ Base.GMP gmp.jl:683
[44] +(c::BigInt, x::BigFloat)
@ Base.MPFR mpfr.jl:468
[45] +(x::BigInt, c::Union{UInt16, UInt32, UInt64, UInt8})
@ Base.GMP gmp.jl:549
[46] +(x::BigInt, c::Union{Int16, Int32, Int64, Int8})
@ Base.GMP gmp.jl:555
[47] +(::Missing, ::Missing)
@ missing.jl:122
[48] +(::Missing)
@ missing.jl:101
[49] +(x::Missing, y::Dates.AbstractTime)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:91
[50] +(::Missing, ::Number)
@ missing.jl:123
[51] +(index1::CartesianIndex{N}, index2::CartesianIndex{N}) where N
@ Base.IteratorsMD multidimensional.jl:119
[52] +(A::Array, B::SparseArrays.AbstractSparseMatrixCSC)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsematrix.jl:2246
[53] +(A::Array, Bs::Array...)
@ arraymath.jl:12
[54] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Tridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:202
[55] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/bidiag.jl:390
[56] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:242
[57] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:217
[58] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:99
[59] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Diagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:151
[60] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.LowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:99
[61] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UnitUpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:99
[62] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UnitLowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:99
[63] +(x::LinearAlgebra.Bidiagonal, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[64] +(A::SparseArrays.AbstractSparseMatrixCSC, B::SparseArrays.AbstractSparseMatrixCSC)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsematrix.jl:2242
[65] +(A::SparseArrays.AbstractSparseMatrixCSC, B::Array)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsematrix.jl:2245
[66] +(A::SparseArrays.AbstractSparseMatrixCSC{Tv, Ti}, J::LinearAlgebra.UniformScaling{T}) where {T<:Number, Tv, Ti}
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsematrix.jl:4275
[67] +(H::LinearAlgebra.Hermitian, D::LinearAlgebra.Diagonal{var"#s988", V} where {var"#s988"<:Real, V<:AbstractVector{var"#s988"}})
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/diagonal.jl:238
[68] +(A::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix}, B::SparseArrays.AbstractSparseMatrix)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:16
[69] +(A::LinearAlgebra.Hermitian, B::SparseArrays.AbstractSparseMatrix)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:19
[70] +(A::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix}, B::LinearAlgebra.Symmetric{<:Real, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:27
[71] +(A::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix}, B::LinearAlgebra.Symmetric{<:Any, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:25
[72] +(A::LinearAlgebra.Hermitian, B::LinearAlgebra.Symmetric{var"#s988", S} where {var"#s988"<:Real, S<:(AbstractMatrix{<:var"#s988"})})
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:518
[73] +(A::LinearAlgebra.Hermitian, B::LinearAlgebra.SymTridiagonal{var"#s126", V} where {var"#s126"<:Real, V<:AbstractVector{var"#s126"}})
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:523
[74] +(A::LinearAlgebra.Hermitian, B::LinearAlgebra.Hermitian)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:504
[75] +(A::LinearAlgebra.Hermitian, J::LinearAlgebra.UniformScaling{<:Complex})
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:195
[76] +(F::LinearAlgebra.Hessenberg, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:560
[77] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:91
[78] +(x::LinearAlgebra.UpperTriangular, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[79] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.UnitUpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:652
[80] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.UpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:650
[81] +(x::LinearAlgebra.Diagonal, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[82] +(B::LinearAlgebra.Diagonal, A::LinearAlgebra.Bidiagonal)
@ LinearAlgebra none:0
[83] +(D::LinearAlgebra.Diagonal, S::LinearAlgebra.Symmetric)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/diagonal.jl:229
[84] +(Da::LinearAlgebra.Diagonal, Db::LinearAlgebra.Diagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/diagonal.jl:225
[85] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:166
[86] +(D::LinearAlgebra.Diagonal{var"#s988", V} where {var"#s988"<:Real, V<:AbstractVector{var"#s988"}}, H::LinearAlgebra.Hermitian)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/diagonal.jl:235
[87] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:247
[88] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.Tridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:187
[89] +(x::Number, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:145
[90] +(::Number, ::Missing)
@ missing.jl:124
[91] +(x::Number, y::Base.TwicePrecision)
@ twiceprecision.jl:292
[92] +(z::Complex)
@ complex.jl:292
[93] +(x::Rational)
@ rational.jl:300
[94] +(x::Number)
@ operators.jl:524
[95] +(z::Complex, w::Complex)
@ complex.jl:294
[96] +(x::AbstractIrrational, y::AbstractIrrational)
@ irrationals.jl:161
[97] +(c::Union{Int16, Int32, Int64, Int8}, x::BigInt)
@ Base.GMP gmp.jl:556
[98] +(c::Union{UInt16, UInt32, UInt64, UInt8}, x::BigInt)
@ Base.GMP gmp.jl:550
[99] +(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}
@ int.jl:87
[100] +(a::Integer, b::Integer)
@ int.jl:1064
[101] +(x::Rational, y::Rational)
@ rational.jl:314
[102] +(x::T, y::T) where T<:Union{Float16, Float32, Float64}
@ float.jl:409
[103] +(x::T, y::T) where T<:Number
@ promotion.jl:507
[104] +(z::Complex, x::Real)
@ complex.jl:332
[105] +(x::Real, z::Complex)
@ complex.jl:331
[106] +(y::Integer, x::Rational)
@ rational.jl:350
[107] +(y::AbstractFloat, x::Bool)
@ bool.jl:176
[108] +(x::Rational, y::Integer)
@ rational.jl:343
[109] +(c::Union{Float16, Float32, Float64}, x::BigFloat)
@ Base.MPFR mpfr.jl:460
[110] +(c::Union{Int16, Int32, Int64, Int8}, x::BigFloat)
@ Base.MPFR mpfr.jl:452
[111] +(c::Union{UInt16, UInt32, UInt64, UInt8}, x::BigFloat)
@ Base.MPFR mpfr.jl:444
[112] +(x::Number, y::Number)
@ promotion.jl:422
[113] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:91
[114] +(x::LinearAlgebra.UnitUpperTriangular, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[115] +(UL::LinearAlgebra.UnitUpperTriangular, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:181
[116] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.UnitUpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:656
[117] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.UpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:654
[118] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:91
[119] +(UL::LinearAlgebra.UnitLowerTriangular, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:181
[120] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.UnitLowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:657
[121] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.LowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:655
[122] +(A::BitArray, B::BitArray)
@ bitarray.jl:1184
[123] +(x::Ptr, y::Integer)
@ pointer.jl:282
[124] +(x::T, y::Integer) where T<:AbstractChar
@ char.jl:237
[125] +(r1::LinRange{T}, r2::LinRange{T}) where T
@ range.jl:1461
[126] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:232
[127] +(B::LinearAlgebra.Tridiagonal, A::LinearAlgebra.Bidiagonal)
@ LinearAlgebra none:0
[128] +(B::LinearAlgebra.Tridiagonal, A::LinearAlgebra.Diagonal)
@ LinearAlgebra none:0
[129] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:183
[130] +(x::LinearAlgebra.Tridiagonal, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[131] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.Tridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/tridiag.jl:739
[132] +(J::LinearAlgebra.UniformScaling, B::BitMatrix)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:152
[133] +(B::LinearAlgebra.UniformScaling, A::LinearAlgebra.Tridiagonal)
@ LinearAlgebra none:0
[134] +(B::LinearAlgebra.UniformScaling, A::LinearAlgebra.Bidiagonal)
@ LinearAlgebra none:0
[135] +(x::LinearAlgebra.UniformScaling, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[136] +(J::LinearAlgebra.UniformScaling{T}, A::SparseArrays.AbstractSparseMatrixCSC{Tv, Ti}) where {T<:Number, Tv, Ti}
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsematrix.jl:4277
[137] +(J::LinearAlgebra.UniformScaling, F::LinearAlgebra.Hessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:561
[138] +(B::LinearAlgebra.UniformScaling, A::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra none:0
[139] +(B::LinearAlgebra.UniformScaling, A::LinearAlgebra.Diagonal)
@ LinearAlgebra none:0
[140] +(J::LinearAlgebra.UniformScaling, x::Number)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:144
[141] +(J::LinearAlgebra.UniformScaling, A::AbstractMatrix)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:153
[142] +(J1::LinearAlgebra.UniformScaling, J2::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:150
[143] +(J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:149
[144] +(x::Integer, y::AbstractChar)
@ char.jl:247
[145] +(x::Integer, y::Ptr)
@ pointer.jl:284
[146] +(x::Dates.Instant)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:4
[147] +(x::Dates.AbstractTime, y::Missing)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:90
[148] +(x::Base.TwicePrecision{T}, y::Base.TwicePrecision{T}) where T
@ twiceprecision.jl:294
[149] +(x::Base.TwicePrecision, y::Base.TwicePrecision)
@ twiceprecision.jl:299
[150] +(x::Base.TwicePrecision, y::Number)
@ twiceprecision.jl:288
[151] +(B::LinearAlgebra.SymTridiagonal, A::LinearAlgebra.Diagonal)
@ LinearAlgebra none:0
[152] +(B::LinearAlgebra.SymTridiagonal, A::LinearAlgebra.Bidiagonal)
@ LinearAlgebra none:0
[153] +(x::LinearAlgebra.SymTridiagonal, H::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:120
[154] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.Symmetric)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:520
[155] +(A::LinearAlgebra.SymTridiagonal{var"#s127", V} where {var"#s127"<:Real, V<:AbstractVector{var"#s127"}}, B::LinearAlgebra.Hermitian)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:522
[156] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/tridiag.jl:210
[157] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:237
[158] +(B::LinearAlgebra.SymTridiagonal, A::LinearAlgebra.Tridiagonal)
@ LinearAlgebra none:0
[159] +(r1::OrdinalRange, r2::OrdinalRange)
@ range.jl:1454
[160] +(y::Dates.TimeType, x::StridedArray{<:Union{Dates.CompoundPeriod, Dates.Period}})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:18
[161] +(x::Dates.TimeType)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:8
[162] +(x::Dates.TimeType, y::Dates.CompoundPeriod)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:356
[163] +(a::Dates.TimeType, b::Dates.Period, c::Dates.Period)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:353
[164] +(a::Dates.TimeType, b::Dates.Period, c::Dates.Period, d::Dates.Period...)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:354
[165] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/special.jl:91
[166] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.UnitLowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:653
[167] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.LowerTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:651
[168] +(r::AbstractRange{<:Dates.TimeType}, x::Dates.Period)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/ranges.jl:65
[169] +(x::Dates.Period, r::AbstractRange{<:Dates.TimeType})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/ranges.jl:64
[170] +(y::Dates.Period, x::Dates.TimeType)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/arithmetic.jl:87
[171] +(y::Dates.Period, x::Dates.CompoundPeriod)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:333
[172] +(x::P, y::P) where P<:Dates.Period
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:75
[173] +(x::Dates.Period, y::Dates.Period)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:331
[174] +(A::LinearAlgebra.AbstractTriangular, B::LinearAlgebra.AbstractTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/triangular.jl:658
[175] +(S::LinearAlgebra.Symmetric, D::LinearAlgebra.Diagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/diagonal.jl:232
[176] +(A::LinearAlgebra.Symmetric{<:Any, <:SparseArrays.AbstractSparseMatrix}, B::SparseArrays.AbstractSparseMatrix)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:16
[177] +(A::LinearAlgebra.Symmetric, B::SparseArrays.AbstractSparseMatrix)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:19
[178] +(A::LinearAlgebra.Symmetric{<:Real, <:SparseArrays.AbstractSparseMatrix}, B::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:26
[179] +(A::LinearAlgebra.Symmetric{<:Any, <:SparseArrays.AbstractSparseMatrix}, B::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:24
[180] +(A::LinearAlgebra.Symmetric{var"#s128", S} where {var"#s128"<:Real, S<:(AbstractMatrix{<:var"#s128"})}, B::LinearAlgebra.Hermitian)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:519
[181] +(A::LinearAlgebra.Symmetric, B::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:521
[182] +(A::LinearAlgebra.Symmetric, B::LinearAlgebra.Symmetric)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/symmetric.jl:504
[183] +(x::SparseArrays.AbstractSparseVector, y::SparseArrays.AbstractSparseVector)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/sparsevector.jl:1562
[184] +(A::SparseArrays.AbstractSparseMatrix, B::LinearAlgebra.Hermitian{<:Any, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:15
[185] +(A::SparseArrays.AbstractSparseMatrix, B::LinearAlgebra.Hermitian)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:18
[186] +(A::SparseArrays.AbstractSparseMatrix, B::LinearAlgebra.Symmetric{<:Any, <:SparseArrays.AbstractSparseMatrix})
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:15
[187] +(A::SparseArrays.AbstractSparseMatrix, B::LinearAlgebra.Symmetric)
@ SparseArrays /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/SparseArrays/src/linalg.jl:18
[188] +(x::AbstractArray{<:Dates.TimeType}, y::Union{Dates.CompoundPeriod, Dates.Period})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:6
[189] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[190] +(A::AbstractMatrix, J::LinearAlgebra.UniformScaling)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/uniformscaling.jl:214
[191] +(r1::StepRangeLen{T, R}, r2::StepRangeLen{T, R}) where {R<:Base.TwicePrecision, T}
@ twiceprecision.jl:626
[192] +(r1::StepRangeLen{T, S}, r2::StepRangeLen{T, S}) where {T, S}
@ range.jl:1477
[193] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.Tridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[194] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.Bidiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[195] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.Diagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[196] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.SymTridiagonal)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[197] +(A::LinearAlgebra.UpperHessenberg, B::LinearAlgebra.UpperHessenberg)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:112
[198] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.UnitUpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[199] +(H::LinearAlgebra.UpperHessenberg, x::LinearAlgebra.UpperTriangular)
@ LinearAlgebra /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/LinearAlgebra/src/hessenberg.jl:119
[200] +(X::StridedArray{<:Union{Dates.CompoundPeriod, Dates.Period}}, Y::StridedArray{<:Union{Dates.CompoundPeriod, Dates.Period}})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:62
[201] +(r1::Union{LinRange, OrdinalRange, StepRangeLen}, r2::Union{LinRange, OrdinalRange, StepRangeLen})
@ range.jl:1470
[202] +(A::AbstractArray, B::AbstractArray)
@ arraymath.jl:6
[203] +(x::StridedArray{<:Union{Dates.CompoundPeriod, Dates.Period}})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:55
[204] +(x::AbstractArray{<:Number})
@ abstractarraymath.jl:220
[205] +(y::Union{Dates.CompoundPeriod, Dates.Period}, x::AbstractArray{<:Dates.TimeType})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:14
[206] +(x::StridedArray{<:Union{Dates.CompoundPeriod, Dates.Period}}, y::Dates.TimeType)
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/deprecated.jl:10
[207] +(x::Union{Dates.CompoundPeriod, Dates.Period})
@ Dates /opt/hostedtoolcache/julia/1.10.0/x64/share/julia/stdlib/v1.10/Dates/src/periods.jl:342
[208] +(a, b, c, xs...)
@ operators.jl:587
This shows us two things. First, we did not think anybody would print this workshop on paper, or we would have gone for shorter outputs. Second, and more importantly, every time we call a function, Julia looks at the type of each argument and searches for the method that fits best. As a result, we can write optimized code for different types, and this is one of the cornerstones of Julia's excellent performance.
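We can use the same mechanism in our own code by defining several methods for one function name and letting Julia pick the right one based on the argument types (a minimal sketch; the function describe and its methods are made up for illustration):

describe(x::Integer)        = "an integer: $x"
describe(x::AbstractFloat)  = "a floating-point number: $x"
describe(x::AbstractString) = "a string of length $(length(x))"
describe(x)                 = "something of type $(typeof(x))"   # generic fallback

println(describe(3))         # dispatches to the Integer method
println(describe(3.0))       # dispatches to the AbstractFloat method
println(describe("hi"))      # dispatches to the AbstractString method
println(describe([1, 2]))    # falls back to the generic method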
Before we continue to the other parallel computing concepts, we introduce an example that will accompany us along the way, just as the sum did at the beginning of this section.