While Moore’s Law continues, increases in clock speed have stagnated due to physical limits. Hardware and software have therefore turned to parallelism to keep up with the ever-growing demand for compute power. Arguably the highest degree of parallelism within a single chip is found on the modern graphics processing unit (GPU). However, traditional algorithms do not map well to massively parallel devices like the GPU, especially with respect to control over the execution itself. We investigate novel algorithms for scheduling dynamic workloads on the GPU, including algorithms for efficient task scheduling, memory management, and dynamic workload prioritization. Our research enables algorithms that previously seemed unfit for GPU execution to harness the full power of the GPU. Applying our insights to computer graphics, we are interested in highly efficient, on-the-fly generation of procedural content, prioritized image synthesis for deadline-driven foveated rendering, and alternative rendering pipeline designs with new views on rasterization.