Student Theses - Generative Modeling with Discrete Diffusion and Iterative Refinement Models
Technische Universität München
Universities and colleges
München
- Employment type: Full-time
- On-site
About this job
09.12.2025, student assistants, internships, student theses
The group of Stefan Bauer [1] offers multiple Master's thesis topics in modern generative modeling, focusing on discrete diffusion processes, discrete latent variable models, and iterative refinement methods. Most topics build on a recently released codebase for discrete diffusion [2], which provides a simplified starting point.
Discrete diffusion models define forward and reverse Markov processes on categorical state spaces, making them well suited to modalities such as text tokens, quantized image tokens, or symbolic representations [3]. In parallel, emerging looping models form a flexible class of iterative generative architectures that repeatedly update and refine a discrete state. Unlike autoregressive models (single-pass, left-to-right) or classical diffusion (fixed noise schedules), looping models use learned iterative transitions that can converge toward high-quality samples through repeated refinement. This perspective unifies and generalizes several iterative generation paradigms and offers promising synergies with discrete diffusion.
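To make the mechanics concrete, below is a minimal PyTorch sketch of one common instantiation of such categorical forward/reverse chains: a mask-based (absorbing-state) process. The linear corruption schedule, the `MASK_ID` convention, and the `denoiser` interface are illustrative assumptions for this sketch, not the API of the UNI-D2 codebase [2].

```python
import torch

# Illustrative sizes and schedule; real vocabularies and schedules differ.
VOCAB_SIZE, MASK_ID, SEQ_LEN, T = 1000, 999, 16, 100

def forward_corrupt(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Forward Markov kernel q(x_t | x_0) of an absorbing-state process:
    each token is independently replaced by [MASK] with probability t / T."""
    absorb = torch.rand(x0.shape) < t / T
    return torch.where(absorb, torch.full_like(x0, MASK_ID), x0)

@torch.no_grad()
def reverse_sample(denoiser, steps: int = T) -> torch.Tensor:
    """Reverse process: start from the all-[MASK] state and let the model
    progressively fill in tokens over `steps` refinement iterations."""
    x = torch.full((1, SEQ_LEN), MASK_ID, dtype=torch.long)
    for t in range(steps, 0, -1):
        logits = denoiser(x, t)          # assumed shape: (1, SEQ_LEN, VOCAB_SIZE)
        pred = logits.argmax(dim=-1)
        # Reveal each still-masked position with probability 1/t,
        # so every position is unmasked by the final step (t = 1).
        reveal = (x == MASK_ID) & (torch.rand(x.shape) < 1.0 / t)
        x = torch.where(reveal, pred, x)
    return x
```

Plugging in a dummy model, e.g. `reverse_sample(lambda x, t: torch.randn(1, SEQ_LEN, VOCAB_SIZE))`, runs the full loop end to end.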
Together, these developments open an exciting research space combining discrete generative processes, iterative state refinement, and transformer-based architectural innovations.
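The looping-model idea mentioned above can likewise be sketched in a few lines: a single weight-shared transition block is applied repeatedly until the discrete state stops changing. The block architecture and the fixed-point stopping rule below are simplifying assumptions for illustration, not a specific published design.

```python
import torch
import torch.nn as nn

class TransitionBlock(nn.Module):
    """One learned refinement step: re-predicts every token of the
    current discrete state (weights are shared across all iterations)."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mix = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) token ids -> (batch, seq_len, vocab) logits
        return self.head(self.mix(self.embed(x)))

@torch.no_grad()
def refine(block: TransitionBlock, x: torch.Tensor, max_iters: int = 32) -> torch.Tensor:
    """Apply the learned transition repeatedly; stop at a fixed point,
    i.e. when another pass no longer changes any token."""
    for _ in range(max_iters):
        x_new = block(x).argmax(dim=-1)
        if torch.equal(x_new, x):
            return x
        x = x_new
    return x
```

For example, `refine(TransitionBlock(1000), torch.randint(0, 1000, (1, 16)))` runs the loop with untrained weights; in a trained model, each pass would move the state closer to a high-quality sample.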
Available Master’s Thesis Topics
1. Hybrid Discrete Diffusion in Learned Discrete Latent Spaces
2. Scaling Discrete Diffusion Models to High-Dimensional Quantized Vision Tokens
3. Architecture-Level Advancements: Depth-Growth Strategies in Discrete Diffusion Models
4. Conditional Generative Modeling with Discrete Diffusion
5. Looping Models as Learned Refinement Operators for Discrete Diffusion [4]
We welcome creative extensions or alternative ideas building on discrete diffusion, discrete latent modeling, and/or architecture-level innovations.
We expect a strong background in machine learning, probabilistic modeling, and neural networks as well as solid programming skills in Python and PyTorch.
Please submit a short interest statement indicating your preferred thesis topic (or proposing an alternative), along with a CV and academic transcript, to st.bauer@tum.de.
References:
[1] https://scholar.google.com/citations?user=O-oICE8AAAAJ&hl=de
[2] K. Nadimpalli* & V. Pauline*, "UNI-D², a unified codebase for discrete diffusion language models." https://github.com/nkalyanv99/UNI-D2
[3] V. Pauline et al., "Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction." https://arxiv.org/abs/2512.05092
[4] F. Kapl*, E. Angelis*, T. Hoeppe*, et al., "Do Depth-Grown Models Overcome The Curse Of Depth? An In-Depth Analysis."
Contact: st.bauer@tum.de