\chapter{Evaluation}
\label{cha:evaluation}
This thesis aims to determine whether one of the two GPU evaluators is faster than the current CPU evaluator. This chapter describes the performance evaluation process. First, the environment in which the performance benchmarks are conducted is explained. Next, the results for the GPU interpreter and the transpiler are presented individually, together with the performance tuning steps taken to achieve these results. Finally, the results of the GPU evaluators are compared to those of the CPU evaluator to answer the research questions of this thesis.
\section{Benchmark Environment}
In this section, the benchmark environment used to evaluate the performance is outlined. To ensure the validity and reliability of the results, it is necessary to specify the details of the environment. This includes a description of the hardware and software configuration as well as the performance evaluation process. With this, the variance between the results is minimised, which allows for better reproducibility and comparability between the implementations.
\subsection{Hardware Configuration}
The hardware configuration is the most important aspect of the benchmark environment. The capabilities of both the CPU and GPU can have a significant impact on the resulting performance. The following sections outline the importance of the individual components as well as the actual hardware used for the benchmarks.
\subsubsection{GPU}
The GPU plays a crucial role, as different microarchitectures typically require different optimisations. Although the evaluators can generally operate on any Nvidia GPU with a compute capability of at least 6.1, they are tuned for the Ampere microarchitecture, which has a compute capability of 8.6. Despite the evaluators being tuned for this microarchitecture, more recent ones can be used as well. However, additional tuning is required to ensure that the evaluators can utilise the hardware to its fullest potential.
Tuning must also be done on a per-problem basis. In particular, the number of variable sets can impact how well the hardware is utilised. Therefore, it is crucial to determine which configuration yields the best performance. Section \ref{sec:results} outlines a strategy for tuning the configuration to a new problem.
\subsubsection{CPU}
Although the GPU plays a crucial role, work is also carried out on the CPU. The interpreter primarily utilises the CPU for data transfer and the pre-processing step, making it more GPU-bound as most of the work is performed on the GPU. However, the transpiler additionally relies on the CPU to perform the transpilation step. This step involves generating a kernel for each expression and sending these kernels to the driver for compilation, a process also handled by the CPU. By contrast, the interpreter only requires a single kernel to be converted into PTX and compiled by the driver once. Consequently, the transpiler is significantly more CPU-bound, and variations in the CPU used have a much greater impact. Therefore, using a more powerful CPU benefits the transpiler more than the interpreter.
\subsubsection{System Memory}
In addition to the hardware configuration of the GPU and CPU, system memory (RAM) also plays a crucial role. Although RAM does not directly contribute to the overall performance, it can have a noticeable indirect impact due to its role in caching and general data storage. Insufficient RAM forces the operating system to use the page file, which is stored on a considerably slower SSD. This leads to slower data access, thereby reducing the overall performance of the application.
As seen in the list below, only 16 GB of RAM were available during the benchmarking process. This amount is insufficient to utilise caching to the extent outlined in Chapter \ref{cha:implementation}. Additional RAM was not available, meaning caching had to be disabled for all benchmarks as further explained in Section \ref{sec:results}.
\subsubsection{Hardware}
With the requirements explained above in mind, the following hardware is used to perform the benchmarks for the CPU-based evaluator, which was used as the baseline, as well as for the GPU-based evaluators:
\begin{itemize}
\item Intel i5 12500
\item Nvidia RTX 3060 Ti
\item 16 GB 4400 MT/s DDR5 RAM
\end{itemize}
\subsection{Software Configuration}
Apart from the hardware, the performance of the evaluators can also be significantly affected by the software. Primarily, the following three software components or libraries influence the performance of the evaluators:
\begin{itemize}
\item GPU Driver
\item Julia
\item CUDA.jl
\end{itemize}
Typically, newer versions of these components include, among other things, performance improvements, which is why it is important to specify the versions used for benchmarking. The GPU driver has version \emph{561.17}, Julia has version \emph{1.11.5}, and CUDA.jl has version \emph{5.8.1}. As with the hardware configuration, this ensures that the results are reproducible and comparable to each other.
\subsection{Performance Evaluation Process}
With the hardware and software configuration established, the process of benchmarking the implementations can be described. This process is designed to simulate the load and scenario in which these evaluators will be used. The Nikuradse dataset \parencite{nikuradse_laws_1950} has been chosen as the data source. The dataset models the laws of flow in rough pipes and provides $362$ variable sets, each containing two variables. This dataset was first used by \textcite{guimera_bayesian_2020} to benchmark a symbolic regression algorithm.
Since only the evaluators are benchmarked, the expressions to be evaluated must already exist. These expressions are generated for the Nikuradse dataset using the exhaustive symbolic regression algorithm proposed by \textcite{bartlett_exhaustive_2024}. This ensures that the expressions are representative of what needs to be evaluated in a real-world application. In total, three benchmarks will be conducted, each with a different goal, as explained in the following paragraphs.
The first benchmark involves a very large set of roughly $250\,000$ expressions with $362$ variable sets. This means that, when using GP, all $250\,000$ expressions would be evaluated in a single generation. In a typical generation, significantly fewer expressions would be evaluated. However, this benchmark is designed to show how the evaluators handle very large volumes of data. Because of memory constraints, it was not possible to conduct an additional benchmark with a higher number of variable sets.
Both the second and third benchmarks are conducted to demonstrate how the evaluators will perform in more realistic scenarios. For the second benchmark, the number of expressions has been reduced to roughly $10\,000$, and the number of variable sets is again $362$. The number of expressions is much more representative of a typical scenario, while the number of variable sets is very low. This benchmark is nonetheless conducted to determine whether the GPU evaluators are also a feasible alternative under such conditions.
Finally, the third benchmark evaluates the same $10\,000$ expressions as the second benchmark, but with 30 times more variable sets, which equates to $10\,860$. This benchmark mimics the scenario in which the evaluators will most likely be used. While the other benchmarks simulate different conditions to determine if and where the GPU evaluators can be used efficiently, this benchmark focuses on determining whether the GPU evaluators are suitable for the specific scenario in which they would be used.
All three benchmarks also simulate a parameter optimisation step, as this is the scenario in which these evaluators will be used. For parameter optimisation, $100$ steps are used, meaning that all expressions will be evaluated $100$ times. During the benchmark, this process is simulated by re-transmitting the parameters instead of generating new ones. Generating new parameters is not part of the evaluators and is therefore not implemented. However, because the parameters are re-transmitted every time, the overhead of sending the data is taken into account. This overhead is part of the evaluators and an additional burden that the CPU implementation does not have, making it important to measure.
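The following minimal sketch illustrates how this simulated parameter optimisation loop can look. The names \texttt{evaluate}, \texttt{exprs}, \texttt{X} and \texttt{p} are hypothetical and only serve to show that the same parameters are re-transmitted in every step, so that the transfer overhead is part of every measurement.
\begin{verbatim}
# Hypothetical sketch of the simulated parameter optimisation loop.
# The same parameters are passed to the evaluator in every step, so the
# cost of transferring them to the GPU is included in the measurement.
function simulate_parameter_optimisation(evaluate, exprs, X, p; steps=100)
    results = nothing
    for _ in 1:steps
        results = evaluate(exprs, X, p)  # parameters are re-transmitted here
    end
    return results
end
\end{verbatim}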
\subsubsection{Measuring Performance}
The performance measurements are taken using the BenchmarkTools.jl\footnote{\url{https://juliaci.github.io/BenchmarkTools.jl/stable/}} package. It is the de facto standard for benchmarking Julia applications, which makes it an obvious choice for measuring the performance of the evaluators.
It offers extensive support for measuring and comparing the results of different implementations as well as different versions of the same implementation. Benchmark groups allow the different implementations to be categorised, measured and compared. When taking performance measurements, it also supports setting a timeout and, most importantly, the number of samples to be taken. The latter is especially important, as it combats run-to-run variance and therefore produces stable results. For this thesis, a sample size of $50$ has been used, meaning that each of the previously mentioned benchmarks is executed $50$ times.
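Below is a minimal sketch of how such a benchmark setup can look. The evaluator entry points \texttt{interpret\_gpu} and \texttt{evaluate\_gpu} as well as the variables \texttt{exprs}, \texttt{X} and \texttt{p} are placeholders and do not necessarily correspond to the actual implementation; the sketch only illustrates the use of benchmark groups and the sample count.
\begin{verbatim}
using BenchmarkTools

# exprs, X and p are assumed to be prepared beforehand.
suite = BenchmarkGroup()
suite["GPU interpreter"] = @benchmarkable interpret_gpu($exprs, $X, $p)
suite["GPU transpiler"]  = @benchmarkable evaluate_gpu($exprs, $X, $p)

# 50 samples per benchmark and a generous time budget instead of the
# default timeout of a few seconds.
results = run(suite; samples=50, evals=1, seconds=24*60*60, verbose=true)
median(results["GPU interpreter"])
\end{verbatim}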
\section{Results}
\label{sec:results}
This section presents the results of the benchmarks described above. First, the results for the GPU-based interpreter will be presented alongside the performance tuning process. This is followed by the results of the transpiler as well as its performance tuning process. Finally, both GPU-based evaluators will be compared with each other to determine which of them performs best. Additionally, these evaluators will be compared against the CPU-based interpreter to answer the research questions of this thesis.
%BECAUSE OF RAM CONSTRAINTS, CACHING IS NOT USED TO THE FULL EXTEND AS IN CONTRAST TO HOW IT IS EXPLAINED IN THE IMPLEMENTATION CHAPTER. I hope I can cache the frontend. If only the finished kernels can not be cached, move this explanation to the transpiler section below and update the reference in subsubsection "System Memory"
\subsection{Interpreter}
% Results only for Interpreter (also contains final kernel configuration and probably quick overview/recap of the implementation used and described in Implementation section)
In this section, the results for the GPU-based interpreter are presented in detail. Following the benchmark results, the process of tuning the interpreter is described, as well as how the tuning can be adapted to the different benchmarks. This part covers not only the tuning of the GPU but also the performance improvements made on the CPU side.
\subsubsection{Benchmark 1}
The first benchmark consisted of $250\,000$ expressions and $362$ variable sets with $100$ parameter optimisation steps. Because each expression needs to be evaluated with each variable set for each parameter optimisation step, a total of $9.05\,\textit{billion}$ evaluations have been performed per sample. In Figure \ref{fig:gpu_i_benchmark_1} the results over all $50$ samples are presented. The median value across all samples is $466.3$ seconds with a standard deviation of $14.2$ seconds.
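The total number of evaluations per sample stated above follows directly from the three benchmark parameters:
\[
N_{\mathrm{evaluations}} = N_{\mathrm{expressions}} \cdot N_{\mathrm{variable\ sets}} \cdot N_{\mathrm{steps}} = 250\,000 \cdot 362 \cdot 100 = 9.05 \cdot 10^{9}
\]
The same calculation applies to the other benchmarks.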
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{results/gpu-interpreter-final-performance-benchmark1.png}
\caption{The results of the GPU-based interpreter for benchmark 1}
\label{fig:gpu_i_benchmark_1}
\end{figure}
For the kernel configuration, a block size of $128$ threads has been used. As will be explained below, this configuration has been found to yield the best performance. During the benchmark, the utilisation of both the CPU and the GPU was roughly $100\%$.
\subsubsection{Benchmark 2}
With $10\,000$ expressions, $362$ variable sets and $100$ parameter optimisation steps, the total number of evaluations per sample was $362\,\textit{million}$. The median across all samples is $21.3$ seconds with a standard deviation of $0.75$ seconds, as shown in Figure \ref{fig:gpu_i_benchmark_2}. Compared to benchmark 1, $25$ times fewer evaluations were performed, which also resulted in a roughly proportional reduction of the median and the standard deviation. Since the number of variable sets did not change, the block size for this benchmark remained at $128$ threads. Again, the utilisation of the CPU and the GPU during the benchmark was roughly $100\%$.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{results/gpu-interpreter-final-performance-benchmark2.png}
\caption{The results of the GPU-based interpreter for benchmark 2}
\label{fig:gpu_i_benchmark_2}
\end{figure}
\subsubsection{Benchmark 3}
The third benchmark used the same $10\,000$ expressions and $100$ parameter optimisation steps. However, there are now 30 times more variable sets that need to be used for evaluation. As a result, the total number of evaluations per sample is now $10.86\,\textit{billion}$, an additional $1.81\,\textit{billion}$ evaluations compared to benchmark 1. However, as seen in Figure \ref{fig:gpu_i_benchmark_3}, the execution time was significantly faster than in benchmark 1. With a median of $30.3$ seconds and a standard deviation of $0.45$ seconds, this benchmark was only marginally slower than benchmark 2. This also indicates that the GPU evaluators are much better suited for scenarios with a high number of variable sets.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{results/gpu-interpreter-final-performance-benchmark3.png}
\caption{The results of the GPU-based interpreter for benchmark 3}
\label{fig:gpu_i_benchmark_3}
\end{figure}
Although the number of variable sets has been increased by a factor of 30, the block size remained at 128 threads. Unlike in the previous benchmarks, the hardware utilisation was different: only the GPU was utilised at $100\%$, while the CPU utilisation started at $100\%$ and slowly dropped to $80\%$. The GPU needs to perform 30 times more evaluations per kernel, meaning it takes longer for one kernel dispatch to finish. At the same time, the CPU tries to dispatch the kernels at the same rate as before. Because only a certain number of kernels can be queued at once, the CPU needs to wait for the GPU to finish a kernel before another one can be dispatched. Therefore, in this scenario, the evaluator runs into a GPU bottleneck, and using a more powerful GPU would consequently improve the runtime. In the previous benchmarks, both the CPU and the GPU would need to be upgraded to achieve better performance.
Block size experiments for this benchmark confirmed the chosen configuration: a block size of $128$ threads results in $84.84$ blocks per expression and was fast, presumably because fewer threads are wasted, whereas a block size of $192$ threads, corresponding to $56.56$ blocks per expression, was considerably slower.
\subsubsection{Performance Tuning} % either subsubSection or change the title to "Performance Tuning Interpreter"
This section documents the process of tuning the performance of the GPU-based interpreter. Most of the tuning concerns the GPU, but improvements were also made on the CPU side, in particular the rearranging of the data transfer and the decision not to use a cache.
In the initial configuration, no cache was used and the block size was $256$ threads. The expressions were pre-processed and sent to the GPU on every call, the variables were also sent on every call, and both the frontend and the kernel dispatch were multithreaded.
The first optimisation moved the frontend step as well as the transmission of the expressions and variables in front of the parameter optimisation loop, so that these steps are performed only once instead of in every optimisation step, which improved the runtime.
The second optimisation tuned the block size so that as few threads as possible are wasted. A block size of $121$ threads results in three blocks, or $363$ threads, per expression, while only $362$ threads are needed; a block size of $128$ threads leads to the same result. In general, the block size should be a multiple of the warp size of $32$ and should divide the number of variable sets into whole blocks as closely as possible without leaving too many idle threads.
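The following minimal sketch illustrates this consideration for the $362$ variable sets of the first two benchmarks. The helper function is purely illustrative and not part of the implementation.
\begin{verbatim}
# Sketch: threads wasted per expression for a given block size.
# The block size should be a multiple of the warp size (32) and should
# leave as few idle threads as possible.
wasted_threads(nvarsets, blocksize) = cld(nvarsets, blocksize) * blocksize - nvarsets

for blocksize in (96, 121, 128, 192, 256)
    blocks = cld(362, blocksize)  # blocks needed per expression
    println("block size $blocksize: $blocks blocks, ",
            wasted_threads(362, blocksize), " wasted threads")
end
\end{verbatim}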
Finally, some minor optimisations were made: the stack size was reduced, memory allocations on the CPU were reduced, and the pressure on the garbage collector was lowered.
With these changes, both the CPU and the GPU are utilised at close to $100\%$ almost all of the time (the GPU occasionally drops to $70\%$), meaning the workload is well balanced between them. The uncached but multithreaded frontend only makes up a small percentage of the total runtime, so further optimisations there are not required, which is fortunate, as enabling caching consumed too much RAM. Most of the time is spent in the parameter optimisation step.
\subsection{Transpiler}
In this section, the results for the transpiler are presented in detail. As for the interpreter, the benchmark results are followed by the performance tuning process, including the final kernel configuration.
\subsubsection{Benchmark 1}
\subsubsection{Benchmark 2}
For this benchmark, the kernels are compiled at the same time as they are generated, which should drastically improve the performance. The standard deviation across all samples is $1.16$ seconds, as shown in Figure \ref{fig:gpu_t_benchmark_2}.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{results/gpu-transpiler-final-performance-benchmark2.png}
\caption{The results of the transpiler for benchmark 2}
\label{fig:gpu_t_benchmark_2}
\end{figure}
During the benchmark, the CPU was utilised at $100\%$. The GPU, in contrast, showed very short bursts to $100\%$ followed by drops to $0\%$ at a very high frequency, so that it was effectively utilised at roughly $50\%$ over the course of a sample.
\subsubsection{Benchmark 3}
Even larger variable sets would be ideal for this benchmark, as roughly $10\,000$ variable sets are still rather small and the GPU barely has any work to do. The standard deviation across all samples is $2.64$ seconds, as shown in Figure \ref{fig:gpu_t_benchmark_3}.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{results/gpu-transpiler-final-performance-benchmark3.png}
\caption{The results of the transpiler for benchmark 3}
\label{fig:gpu_t_benchmark_3}
\end{figure}
The CPU was utilised at $100\%$ during the frontend, transpilation and compilation steps and then hovered at around $80\%$, most likely for the same reason as in benchmark 3 of the interpreter. The GPU was utilised at around $20\%$ during compilation and between $50\%$ and $100\%$ during evaluation, with fewer spikes to $100\%$ than in benchmark 2. The kernels are probably very small, which leads to a lot of scheduling overhead on the GPU and therefore lower utilisation, while the large number of dispatches slows down the CPU. An additional performance tuning session could show whether a different block size improves this behaviour.
\subsubsection{Performance Tuning}
This section documents the process of tuning the performance of the transpiler.
In the initial configuration, no cache was used and the block size was $256$ threads. The expressions were pre-processed and transpiled on every call, the variables were sent on every call, and the frontend, transpilation and kernel dispatch were multithreaded.
The first optimisation moved the frontend step as well as the transmission of the expressions and variables in front of the parameter optimisation loop, which improved the runtime.
The second optimisation transpiles all expressions once before execution, whereas previously they were transpiled for every execution, even in parameter optimisation scenarios. Compilation is still performed on every call, because too little RAM was available to cache the compiled kernels; since compilation takes the most time, this change only results in a minor improvement. A block size of $121$ threads was also tried, but as the kernels themselves are already very fast, this did not make a difference, which is further evidence that the CPU is the bottleneck.
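A rough sketch of this restructured workflow could look as follows. The helpers \texttt{transpile}, \texttt{compile\_and\_load} and \texttt{dispatch} are entirely hypothetical stand-ins for the actual implementation; the sketch only shows that transpilation happens once, while driver compilation remains inside the optimisation loop.
\begin{verbatim}
# Hypothetical sketch: transpile once, compile and dispatch per step.
function evaluate_all(exprs, X, p; steps=100)
    # Transpile every expression to PTX only once.
    ptx_kernels = Vector{String}(undef, length(exprs))
    Threads.@threads for i in eachindex(exprs)
        ptx_kernels[i] = transpile(exprs[i])          # assumed transpiler entry point
    end

    # Compile and dispatch the kernels in every optimisation step, because
    # caching the compiled kernels exceeded the available RAM.
    for _ in 1:steps
        Threads.@threads for i in eachindex(ptx_kernels)
            kernel = compile_and_load(ptx_kernels[i]) # assumed driver compilation wrapper
            dispatch(kernel, X, p)                    # assumed kernel launch helper
        end
    end
end
\end{verbatim}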
During the benchmarks, the CPU was utilised at $100\%$ while the GPU was only utilised at around $30\%$, meaning the transpiler is heavily CPU-bound. This is mainly because the PTX compilation takes by far the longest, while the kernels themselves finish more or less instantly.
\subsection{Comparison}
In this section, the GPU-based interpreter and transpiler are compared with each other as well as with the CPU-based interpreter. In general, a higher number of variable sets favours the GPU evaluators, whereas a higher number of expressions favours the CPU-based evaluator.
\subsubsection{Benchmark 1}
\subsubsection{Benchmark 2}
\subsubsection{Benchmark 3}