Related work: continuation of GPGPU
This commit is contained in: parent 34d98f9997, commit 203e157f11
@@ -12,20 +12,25 @@ The expressions generated by an equation learning algorithm can look like this $
\section[GPGPU]{General Purpose Computation on Graphics Processing Units}
Graphics cards (GPUs) are commonly used to increase the performance of many different applications. Originally, they were designed to improve performance and visual quality in games. \textcite{dokken_gpu_2005} first described the usage of GPUs for general purpose programming and showed how the graphics pipeline can be used for GPGPU programming. Because this approach requires the programmer to understand graphics terminology, it was not a great solution. Therefore, Nvidia released CUDA\footnote{\url{https://developer.nvidia.com/cuda-toolkit}} in 2007 with the goal of allowing developers to program GPUs independently of the graphics pipeline and its terminology. A study of the programmability of GPUs with CUDA and the resulting performance has been conducted by \textcite{huang_gpu_2008}. They found that GPGPU programming has potential, even for non-embarrassingly parallel problems. Research has also been done on making low-level CUDA development simpler. \textcite{han_hicuda_2011} describe a directive-based language that makes development simpler and less error-prone while retaining the performance of handwritten code. To drastically simplify CUDA development, \textcite{besard_effective_2019} showed that it is possible to develop with CUDA in the high-level programming language Julia\footnote{\url{https://julialang.org/}} while performing similarly to CUDA written in C. In a subsequent study, \textcite{lin_comparing_2021} found that high performance computing (HPC) on the CPU and GPU in Julia performs similarly to HPC development in C. This means that Julia can be a viable alternative to Fortran, C and C++ in the HPC field, with the additional benefit of developer comfort, since it is a high-level language with modern features such as garbage collection. \textcite{besard_rapid_2019} have also shown how the combination of Julia and CUDA helps in rapidly developing HPC software. While this thesis in general revolves around CUDA, there also exist alternatives by AMD called ROCm\footnote{\url{https://www.amd.com/de/products/software/rocm.html}} and a vendor-independent alternative called OpenCL\footnote{\url{https://www.khronos.org/opencl/}}.
While in the early days of GPGPU programming a lot of research was done to assess whether this approach is feasible, it now seems obvious to use GPUs to accelerate algorithms. Weather simulations began using GPUs very early for their models. In 2008, \textcite{michalakes_gpu_2008} proposed a method for simulating weather with the WRF model on a GPU. With their approach, they reached a speed-up of 5 to 20 for the most compute-intensive task, with very little GPU optimisation effort. They also found that GPU utilisation was very low, meaning there are resources and potential for more detailed simulations. Generally, simulations are great candidates for using GPUs, as they can benefit heavily from a high degree of parallelism and data throughput. \textcite{koster_high-performance_2020} have developed a way of using adaptive time steps to improve the performance of time-step simulations while retaining their precision and constraint correctness. Black hole simulations are crucial for science and education, as they lead to a better understanding of our world. \textcite{verbraeck_interactive_2021} have shown that simulating complex Kerr (rotating) black holes can be done on consumer hardware in a few seconds. Schwarzschild black hole simulations can be performed in real time with GPUs, as described by \textcite{hissbach_overview_2022}, which is especially helpful for educational scenarios. While both approaches do not have the same accuracy as detailed simulations on supercomputers, they show how single GPUs can yield similar accuracy at a fraction of the cost. Networking can also heavily benefit from GPU acceleration, as shown by \textcite{han_packetshader_2010}, where they achieved significantly higher throughput than with a CPU-only implementation. Finite element structural analysis is an essential tool for many branches of engineering and can also heavily benefit from the usage of GPUs, as demonstrated by \textcite{georgescu_gpu_2013}.
\subsection{Programming GPUs}
% This part now starts talking about the architecture and how to program GPUs
Talk about the fields where GPGPU really helped achieve performance improvements (weather simulations etc.). Then describe how it differs from classical programming. Talk about the architecture (SIMD/SIMT; a lot of ``slow'' cores).
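As a first, minimal sketch of the SIMT model (using CUDA.jl; the kernel name, array sizes and launch configuration below are illustrative assumptions, not code from this thesis), each GPU thread computes its own global index and processes exactly one array element:

\begin{verbatim}
using CUDA

# Each thread handles one element: compute a global index from the
# block and thread coordinates and guard against out-of-bounds access.
function vadd_kernel!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CUDA.fill(1.0f0, 1_000)
b = CUDA.fill(2.0f0, 1_000)
c = CUDA.zeros(Float32, 1_000)

# Launch enough blocks of 256 threads to cover all elements.
@cuda threads=256 blocks=cld(length(c), 256) vadd_kernel!(c, a, b)
\end{verbatim}

Compared to a sequential CPU loop, the loop over the elements disappears: the launch configuration spawns one lightweight thread per element, and the threads of a warp execute the same instruction stream in lockstep, which is exactly the SIMT behaviour mentioned above.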
Starting from here, I can hopefully incorporate more images to break up these walls of text.
\subsection[PTX]{Parallel Thread Execution}
Describe what PTX is to establish common ground for the implementation chapter. Probably a short section.
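To give a first impression of how PTX can be inspected (a minimal sketch assuming CUDA.jl, as in the rest of this thesis; the kernel is only a placeholder and the exact PTX output depends on the CUDA.jl and driver versions), the \verb|@device_code_ptx| macro prints the PTX that is generated for a kernel launch:

\begin{verbatim}
using CUDA

# Placeholder kernel whose generated PTX we want to look at.
function axpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CUDA.rand(Float32, 256)
y = CUDA.rand(Float32, 256)

# Prints the PTX emitted for this kernel, e.g. the ld.global/st.global
# memory accesses and the fused multiply-add for a * x[i] + y[i].
CUDA.@device_code_ptx @cuda threads=256 axpy_kernel!(y, 2.0f0, x)
\end{verbatim}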
% Maybe make this instead of what is there below:
% \section{Compilers}
% \subsection{Transpilers}
% \subsection{Interpreters}
\section{GPU Interpretation}
Different sources on how to do interpretation on the GPU (and maybe interpretation in general, too?).
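As a minimal sketch of the core idea (the token encoding, function name and operator set are assumptions made for illustration, not the representation used later), a stack-based interpreter walks an expression stored in postfix order and evaluates it for one data point; on the GPU, each thread would run this loop for one data row with a small fixed-size stack, since kernels cannot allocate dynamically:

\begin{verbatim}
# Tokens are assumed to be stored in postfix order: Float64 constants,
# Int variable indices, and operator symbols (only + and * shown here).
function interpret_postfix(tokens, x::AbstractVector{Float64})
    stack = Float64[]
    for t in tokens
        if t isa Float64
            push!(stack, t)                        # constant
        elseif t isa Int
            push!(stack, x[t])                     # variable reference
        else                                       # binary operator
            b = pop!(stack); a = pop!(stack)
            t === :+ && push!(stack, a + b)
            t === :* && push!(stack, a * b)
        end
    end
    return only(stack)
end

# x1 * x2 + 3.0 in postfix form, evaluated at x = (2.0, 4.0) -> 11.0
interpret_postfix(Any[1, 2, :*, 3.0, :+], [2.0, 4.0])
\end{verbatim}

In a GPU kernel, the dynamically sized \verb|Float64[]| stack would be replaced by a small fixed-size buffer, because device code cannot allocate memory dynamically.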
\section{Compilers}
Brief overview of compilers (just setting the stage for the subsections, basically). Talk about register management and related topics.
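To give a flavour of the register management mentioned here, the following is a minimal sketch (the IR layout, names and register count are assumptions, and spilling is deliberately omitted) that assigns physical registers to the virtual registers of a linear three-address code, reusing a register as soon as the value it holds is dead:

\begin{verbatim}
# instructions: vector of (dest, op, lhs, rhs) over virtual registers (Symbols).
# last_use[v]: index of the last instruction that reads virtual register v.
function assign_registers(instructions, last_use; num_physical = 4)
    free = collect(1:num_physical)     # currently unused physical registers
    phys = Dict{Symbol,Int}()          # virtual -> physical mapping
    for (i, (dest, op, lhs, rhs)) in enumerate(instructions)
        # Operands whose live range ends here release their register;
        # freed registers go to the front so they are reused immediately.
        for v in unique([lhs, rhs])
            if haskey(phys, v) && get(last_use, v, 0) == i
                pushfirst!(free, phys[v])
            end
        end
        isempty(free) && error("out of registers; spilling not implemented")
        phys[dest] = popfirst!(free)   # the result gets the next free register
    end
    return phys
end

# t1 = x1 * x2; t2 = t1 + t1  -> t1's register is reused for t2:
# returns Dict(:t1 => 1, :t2 => 1)
assign_registers([(:t1, :*, :x1, :x2), (:t2, :+, :t1, :t1)], Dict(:t1 => 2))
\end{verbatim}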
\section{Transpiler}
Talk about what transpilers are and how to implement them. If possible, also GPU-specific transpilation. Also talk about compilation and register management, and probably find a better title.
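As a first illustration of the idea (the tuple-based expression encoding and the function name are assumptions made for this sketch, not the representation used later in the thesis), a transpiler in its simplest form walks an expression tree and emits source code in the target language as a string:

\begin{verbatim}
# Expressions are assumed to be nested tuples such as (:+, (:*, :x1, :x2), 3.0),
# i.e. an operator followed by its operands.
transpile(x::Symbol) = string(x)    # variable
transpile(x::Number) = string(x)    # constant
transpile(e::Tuple) = "(" * join(map(transpile, e[2:end]), " $(e[1]) ") * ")"

transpile((:+, (:*, :x1, :x2), 3.0))   # "((x1 * x2) + 3.0)"
\end{verbatim}

The same recursive pattern applies when the target language is not Julia but, for example, PTX; the emitter then has to manage registers explicitly instead of nesting parenthesised subexpressions, which is where the register management mentioned above comes in.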
\subsection{Interpreters}
What interpreters are and how they work; should mostly contain/reference GPU interpreters.
\subsection{Transpilers}
Talk about what transpilers are and how to implement them. If possible, also GPU-specific transpilation.
BIN thesis/main.pdf
Binary file not shown.
@@ -456,7 +456,7 @@ Publisher: Multidisciplinary Digital Publishing Institute},
	urldate = {2025-03-01},
	date = {2008-12},
	note = {{ISSN}: 2379-5352},
	keywords = {Computer architecture, Application software, Central Processing Unit, Computer graphics, Distributed computing, Grid computing, Multicore processing, Pipelines, Programming profession, Rendering (computer graphics)},
	file = {IEEE Xplore Abstract Record:C\:\\Users\\danwi\\Zotero\\storage\\2FJP9K25\\references.html:text/html},
}
@@ -475,3 +475,34 @@ Publisher: Multidisciplinary Digital Publishing Institute},
	note = {Conference Name: {IEEE} Transactions on Parallel and Distributed Systems},
	file = {IEEE Xplore Abstract Record:C\:\\Users\\danwi\\Zotero\\storage\\5K63T7RB\\5445082.html:text/html},
}

@article{verbraeck_interactive_2021,
	title = {Interactive Black-Hole Visualization},
	volume = {27},
	issn = {1941-0506},
	url = {https://ieeexplore.ieee.org/abstract/document/9226126},
	doi = {10.1109/TVCG.2020.3030452},
	abstract = {We present an efficient algorithm for visualizing the effect of black holes on its distant surroundings as seen from an observer nearby in orbit. Our solution is {GPU}-based and builds upon a two-step approach, where we first derive an adaptive grid to map the 360-view around the observer to the distorted celestial sky, which can be directly reused for different camera orientations. Using a grid, we can rapidly trace rays back to the observer through the distorted spacetime, avoiding the heavy workload of standard tracing solutions at real-time rates. By using a novel interpolation technique we can also simulate an observer path by smoothly transitioning between multiple grids. Our approach accepts real star catalogues and environment maps of the celestial sky and generates the resulting black-hole deformations in real time.},
	pages = {796--805},
	number = {2},
	journaltitle = {{IEEE} Transactions on Visualization and Computer Graphics},
	author = {Verbraeck, Annemieke and Eisemann, Elmar},
	urldate = {2025-03-02},
	date = {2021-02},
	note = {Conference Name: {IEEE} Transactions on Visualization and Computer Graphics},
	keywords = {Algorithms, Cameras, Computer Graphics Techniques, Distortion, Engineering, Mathematics, Observers, Physical \& Environmental Sciences, Ray tracing, Real-time systems, Rendering (computer graphics), Visualization},
	file = {PDF:C\:\\Users\\danwi\\Zotero\\storage\\HDASRGYN\\Verbraeck und Eisemann - 2021 - Interactive Black-Hole Visualization.pdf:application/pdf},
}

@book{hissbach_overview_2022,
	title = {An Overview of Techniques for Egocentric Black Hole Visualization and Their Suitability for Planetarium Applications},
	isbn = {978-3-03868-189-2},
	url = {https://doi.org/10.2312/vmv.20221207},
	abstract = {The visualization of black holes is used in science communication to educate people about our universe and concepts of general relativity. Recent visualizations aim to depict black holes in realtime, overcoming the challenge of efficient general relativistic ray tracing. In this state-of-the-art report, we provide the first overview of existing works about egocentric black hole visualization that generate images targeted at general audiences. We focus on Schwarzschild and Kerr black holes and discuss current methods to depict the distortion of background panoramas, point-shaped stars, nearby objects, and accretion disks. Approaches to realtime visualizations are highlighted. Furthermore, we present the implementation of a state-of-the-art black hole visualization in the planetarium software Uniview.},
	publisher = {The Eurographics Association},
	author = {Hissbach, Anny-Marleen and Dick, Christian and Lawonn, Kai},
	urldate = {2025-03-02},
	date = {2022},
	langid = {english},
	file = {Full Text PDF:C\:\\Users\\danwi\\Zotero\\storage\\TBBLEZ5N\\Hissbach et al. - 2022 - An Overview of Techniques for Egocentric Black Hole Visualization and Their Suitability for Planetar.pdf:application/pdf},
}