Thesis

Interested in writing your MSc or BSc thesis with us? We offer several BSc and MSc thesis topics at MaiNLP.

Currently, the following research vectors characterize the broad topics in which we offer MSc and BSc thesis projects. We provide a (non-exhaustive) list of research projects within each vector. We are also interested in supervising projects related to our ongoing research, and you are welcome to send us your own project proposal. We recommend checking out the suggested/selected publications to get inspired.

Unless otherwise specified, all projects can be either at the MSc or BSc level. The exact scope of the project will be determined in a meeting before the start of the project.

Important note: We currently do not supervise industrial MSc/BSc thesis projects (Industrieabschlussarbeiten).

Check back here regularly for updates on thesis project suggestions.

News:

  • 2026, Jan 15: MSc/BSc project application deadlines posted. New topics are currently under development, stay tuned!

Legend:

  • :hourglass_flowing_sand: reserved
  • strikethrough: topic no longer available

How to apply for a BSc and MSc thesis project

Important information for LMU students: You need to apply for an MSc/BSc thesis project no later than three weeks before the thesis project registration date.

Deadlines for the summer semester 2025-2026:

  • MSc students apply before Monday February 9, 2026
  • BSc students apply before Monday February 16, 2026

To apply, please send your application material with the subject “BSc (or MSc) thesis project at MaiNLP - inquiry [Name and which semester]” to: thesisplank@cis.lmu.de

Your application should consist of a single PDF with the following information:

  • CV, your study program, full grade transcript
  • Level: BSc or MSc thesis project
  • Which theme or project interests you (optional: we are open to project proposals related to the research vectors or ongoing research projects). If you are interested in multiple, list up to four preferences (ranked: first priority to fourth priority)
  • Languages you speak
  • Your favorite project so far, and why
  • Your knowledge and interest in data annotation, data analysis and machine learning/deep learning (including which toolkits you are familiar with)
  • Whether you have access to GPU resources (and which)
  • A term project report, or your BSc thesis if you apply for an MSc thesis (optional)

If you have questions, reach out using the email address above.

MSc/BSc thesis research vectors:

V1: NLP for Dialects, Low-resource Languages, and Multilinguality

This research vector covers methods and resources for processing dialectal, low-resource, and multilingual language data. It focuses on improving robustness, fairness, and coverage of NLP systems across languages and varieties, including cross-lingual transfer and data-efficient learning.

Thesis projects

  • Computational Dialectology. Language usage often differs based on sociodemographic background; linguistic differences based on the geographical origin of the speaker are typically studied in the field of dialectology. While qualitative studies of dialectal differences have yielded valuable insights into language variation, such studies often rely on labor-intensive data collection, annotation, and analysis. Computational approaches to dialect differences have therefore emerged as a method for studying dialects at scale. For students interested in this project, multiple directions are possible, including (but not limited to): (a) interpretability of which features dialect models rely on for differentiation, (b) creation of (parallel) resources for dialect continua, (c) development of new methods to quantify dialectal or sociolinguistic variation, (d) adapting existing models to better accommodate dialect variation. References: Bartelds & Wieling 2022, Bafna et al. 2025, Shim et al. 2026. Level: BSc or MSc.

  • Methods for mining low-resource parallel corpora. Parallel corpora are critical for developing and evaluating dedicated machine translation systems, as well as general-purpose large language models capable of translation. One strategy for obtaining such corpora is to mine unstructured text corpora (typically web crawls) for parallel sentences. However, standard methods typically score candidate sentence pairs via the cosine similarity of their sentence embeddings (see the sketch below), which requires strong sentence encoders. Such encoders are typically weaker for very low-resource languages, including language varieties such as dialects. Alternative strategies include: bootstrapping, building classifiers, devising simple heuristics such as word-edit distance, or relying on metadata like HTML tags. Depending on the student’s interest and academic level, this project can focus more or less on specific directions, such as evaluating the impact of different methods, methods for scoring candidate sentences, or strategies for obtaining candidate sentences. References: Improving Parallel Sentence Mining for Low-Resource and Endangered Languages (ACL Anthology), Obtaining Parallel Sentences in Low-Resource Language Pairs with Minimal Supervision (PMC). Level: BSc or MSc.
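
    A minimal sketch of the embedding-based scoring step, assuming the sentence-transformers library; the encoder choice (LaBSE), the threshold, and the candidate sentences are illustrative:

    ```python
    from sentence_transformers import SentenceTransformer, util

    # Multilingual encoder; for very low-resource varieties this is exactly
    # the component that tends to be weak.
    model = SentenceTransformer("sentence-transformers/LaBSE")

    # Hypothetical candidate pools, e.g. sentences extracted from web crawls.
    src = ["Wie geht es dir?", "Das Wetter ist schön."]
    tgt = ["How are you?", "It is raining.", "The weather is nice."]

    src_emb = model.encode(src, convert_to_tensor=True, normalize_embeddings=True)
    tgt_emb = model.encode(tgt, convert_to_tensor=True, normalize_embeddings=True)

    # Cosine similarity between all source/target pairs.
    scores = util.cos_sim(src_emb, tgt_emb)

    # Keep the best target per source sentence if it clears a threshold.
    for i, sent in enumerate(src):
        j = int(scores[i].argmax())
        sim = float(scores[i][j])
        if sim >= 0.8:  # illustrative threshold
            print(f"{sent}  <->  {tgt[j]}  ({sim:.2f})")
    ```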

  • Synthetic language variation for robust NLP. Robust NLP entails models that can process human language variation, such as dialects and other language varieties. These varieties typically exhibit high variation in orthography, lexicon, and syntax, each of which presents challenges to NLP. Furthermore, these varieties are typically low-resourced, so we widely rely on transfer from standard language data when building NLP models for them. One strategy for improving robustness to linguistic variation is to introduce synthetic variation. This can range from naive perturbation of characters to induce more varied tokenization of standard training data (see the sketch below), to targeted de-standardization of training data. References: Improving Zero-Shot Cross-lingual Transfer Between Closely Related Languages by Injecting Character-Level Noise (ACL Anthology), Neural Text Normalization for Luxembourgish Using Real-Life Variation Data (ACL Anthology). Level: BSc or MSc (scope adjusted by languages covered and complexity of the approaches).
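
    A minimal sketch of the naive end of this spectrum, i.e., random character perturbation of standard training data; the operations and perturbation rate are illustrative:

    ```python
    import random

    def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
        """Randomly delete, duplicate, or swap characters to induce more
        varied tokenization; targeted de-standardization would replace
        this with linguistically informed rules."""
        rng = random.Random(seed)
        chars, out, i = list(text), [], 0
        while i < len(chars):
            c = chars[i]
            if c.isalpha() and rng.random() < rate:
                op = rng.choice(["delete", "duplicate", "swap"])
                if op == "duplicate":
                    out += [c, c]
                elif op == "swap" and i + 1 < len(chars):
                    out += [chars[i + 1], c]
                    i += 1  # the neighbor was consumed as well
                elif op == "delete":
                    pass  # drop the character
                else:
                    out.append(c)  # swap at end of string: keep as-is
            else:
                out.append(c)
            i += 1
        return "".join(out)

    print(perturb("The weather is nice today."))
    ```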

  • Universal Dependencies for Underrepresented Language Varieties. Universal Dependencies (UD) provides a cross-linguistically consistent framework for syntactic annotation, yet many language varieties and genres remain underrepresented. This thesis explores how existing UD annotations for a related standard language can be adapted to support annotation of a lower-resource variety or underrepresented genre (e.g., a regional variety, informal writing, or domain-specific text). The project will involve selecting a small corpus, analyzing where standard-language annotation guidelines break down, and producing a limited but linguistically principled UD annotation or conversion strategy. Additionally, one would analyze annotation challenges and experiment with downstream UD parsing. Level: BSc or MSc (scope adjusted by dataset size and depth of linguistic analysis and experiments).

  • Cross-lingual Semantic Disagreement in Human Annotations. Human judgments about meaning often diverge across languages. This thesis investigates such cross-lingual semantic disagreement by comparing annotations produced independently in two languages spoken by the student (e.g., German–English, Mandarin–English, Russian–German). One could choose to work on NLP tasks such as natural language inference, sentiment or stance classification, discourse relation labeling, etc. The project will involve selecting a small parallel or comparable dataset (or conducting the translation yourself), designing a controlled annotation setup, and analyzing where and why label distributions diverge across languages. The goal is not to resolve disagreements, but to characterize systematic patterns of variation and relate them to linguistic differences such as tense, aspect, definiteness, evidentiality, or pragmatic conventions. Level: BSc or MSc (scope adjusted by dataset size and depth of linguistic analysis and experiments).

  • Dialect Variation Dictionaries and Evaluation. A large amount of culture-specific knowledge is captured in dialect corpora. However, dialects pose a challenge due to their limited resources and high variability, with words frequently spelled differently, reflecting regional pronunciations. This thesis extends our previous research on the Bavarian dialect (Litschko et al., 2025) by exploring another dialect. The project consists of two main components: 1) Annotating pairs of German and dialect words to determine whether they are equivalent translations. A dataset will be provided for this purpose. 2a) Evaluating the ability of LLMs to "understand" the dialect through translation and word-pair classification tasks, or 2b) building and evaluating a rule-based lexical normalization model (Millour and Fort, 2019; Weissweiler and Fraser, 2018). To successfully complete this thesis, it is essential that the student has a strong understanding of one of the following dialects: Alemannic (Alemannisch), Palatinate (Pfälzisch), North Frisian (Nordfriesisch), Saterland Frisian (Saterfriesisch), Low German (Niederdeutsch), Colognian (Kölsch). Dialects of other languages can potentially also be considered, please reach out to us. Level: BSc or MSc (adjusted scope includes dialect-specific model adaptation).

V2: Data-centric NLP, NLP Applications, High-Quality Information Extraction and Retrieval

Selected Research Projects

  • NLP for Job Market Analysis. Job postings are a rich resource for understanding the dynamics of the labor market, including which skills are in demand, which is also important from an educational viewpoint. Recently, the emerging line of work on computational job market analysis, or NLP for human resources, has started to provide data resources and models for automatic job market analysis, such as the identification and extraction of skills in job postings, or the prediction of career paths. For students interested in real-world applications, this theme provides multiple thesis projects, including but not limited to: cross-lingual or cross-domain skill and knowledge extraction from data sources like job postings, patents, or scientific articles. Alternatively, the project could focus on career path prediction, the task of predicting a person’s next occupation based on their resume. See references of the MultiSkill project. See also Bhola et al., 2020, Gnehm et al. 2021, our own ESCOXLM-R model and the Karrierewege dataset. Level: BSc or MSc.
  • Climate Change Insights through NLP. Climate change is a pressing international issue that is receiving more and more attention every day. It influences regulations and decision-making in various parts of society, such as politics, agriculture, and business, and it is discussed extensively on social media. For students interested in real-world societal applications, this project aims to contribute insights on the discussion surrounding climate change. Example project: a) Analyzing social media data. The data will have to be collected (potentially from existing sources), cleaned, and analyzed using NLP techniques to examine various aspects or features of interest, such as stance, sentiment, the extraction of key players, etc. References: Luo et al., 2020, Stede & Patz, 2021, Vaid et al., 2022. Level: BSc or MSc.
  • Better Benchmarks / Mining for Errors in Annotated Datasets. Benchmark datasets are essential in empirical research, but even widely used datasets contain errors, as annotators inevitably make mistakes (e.g., annotation inconsistencies). There are several lines of work in this direction. On the one hand, annotation error detection methods provide a suite of methods to detect errors in existing datasets (cf. Klie et al. 2023, Weber & Plank 2023, Weber-Genzel et al. 2024), including tools such as data maps (Swayamdipta et al. 2020). On the other hand, there is work on inspecting existing datasets in revision efforts, as has been done for English NER in recent years (cf. Reiss et al. 2020, Rücker & Akbik 2023). The goal of projects on this theme can be:
    1. (MSc level) to investigate error detection methods in novel scenarios (new benchmarks, new applications, and/or creating a new error detection dataset)
    2. (MSc or BSc level) to extend revision efforts on NER to other languages. For a BSc thesis, your task includes improving a benchmark dataset with iterations of sanity checks and revisions, and comparing NLP models on the original versus revised versions. For an MSc thesis, you could extend this either by incorporating annotation error detection methods (see the previous item) or by conducting additional evaluations on multiple downstream NLP tasks.
    3. (BSc or MSc level) to check the annotation consistency of non-standardized language data. Automatic methods for finding potential inconsistencies in annotations typically rely on consistent orthographies (e.g., detecting sequences that occur multiple times in a corpus but have received different annotations; Dickinson & Meurers 2003). When text is written in a language variety without a standardized orthography, such methods may no longer work well because of spelling differences between the repeated sequences. Your task is to extend such approaches to detect errors in existing datasets to be more robust to orthographic variation, and/or to investigate how well annotation error detection methods that do not directly depend on orthographic matches work (cf. Klie et al. 2023); a minimal sketch of the basic consistency check appears after this list. The target dataset would ideally be one of the dialectal datasets developed at the lab (this requires familiarity with German dialects and, depending on the dataset, an interest in syntax).
    4. (MSc level) to explore the nature and consequences of annotation errors in classification tasks. First, you would seek to demonstrate that annotation errors behave more like random noise than like human label variation, thereby disentangling their influence on model training and evaluation. Building on this insight, you are encouraged to develop automated methods for detecting and removing such errors. Preliminary experimental results on the VariErr dataset (Weber et al., 2024) reveal two critical questions: (1) self-correction is challenging for both humans and LLMs (Goh et al., 2024), suggesting broader implications for error interpretation and generalization; and (2) transitioning to datasets with extensive annotations and labels could mitigate the adverse effects of removing noisy labels, resulting in a smoother and more reliable label distribution.
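
    For item 3, a minimal sketch of the variation-based consistency check in the spirit of Dickinson & Meurers (2003), extended with a crude orthography-robust key; the data and the normalization are invented for illustration:

    ```python
    from collections import defaultdict

    # Hypothetical (token trigram, label of the middle token) pairs from an
    # annotated dialect corpus.
    annotated = [
        (("do", "sag", "i"), "VERB"),
        (("do", "sog", "i"), "VERB"),
        (("do", "sog", "i"), "NOUN"),
    ]

    def key(token: str) -> str:
        # Crude orthography-robust key: lowercase and drop vowels, so that
        # spelling variants like "sag"/"sog" collapse together. A real
        # project would use variety-specific rules or learned similarity.
        return "".join(c for c in token.lower() if c not in "aeiouäöü")

    labels_by_context = defaultdict(set)
    for trigram, label in annotated:
        labels_by_context[tuple(key(t) for t in trigram)].add(label)

    for context, labels in labels_by_context.items():
        if len(labels) > 1:
            print(f"Potential inconsistency in context {context}: {sorted(labels)}")
    ```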
  • Error Analysis of a BERT-based Search Engine. Multi-stage ranking has become a popular paradigm in information retrieval: a fast first-stage ranker generates a candidate set of documents, followed by a much slower re-ranker that refines the ranking (Nogueira et al. 2019). Prior work has shown that better candidate sets (higher recall) do not necessarily translate to a better final ranking (Gao et al. 2021). The goal of this thesis is two-fold: first, we would like to perform an error analysis of linguistic triggers that cause this behavior. In the second part, the goal is to apply and interpret automatically generated explanations from tools such as DeepSHAP (Fernando et al. 2019) and LIME (Ribeiro et al. 2016). Basic knowledge of information retrieval is helpful, but not required. Level: BSc.
  • Injecting Lexical Similarity Signals into Neural Information Retrieval (IR) Models. Early IR models determine the relevance of a document to a query through lexical signals (keyword matches) such as BM25, TF-IDF, and the Query Likelihood Model (QLM). For a long time, lexical retrieval models remained very competitive, outperforming neural retrieval in domain-specific settings (Thakur et al., 2021). Reranking with cross-encoders (CE) is arguably among the most widely used IR paradigms; it frames relevance prediction as a sequence-pair classification task where inputs are constructed by concatenating query-document pairs: “[CLS] Query [SEP] Doc [SEP]”. In recent work, Askari et al. (2023) showed that CEs benefit from additionally including lexical input signals: “[CLS] Query [SEP] BM25 [SEP] Doc [SEP]” (see the sketch below). The goal of this thesis is to conduct a systematic evaluation of additional lexical signals (and combinations thereof), including, e.g., TF-IDF and QLM (reduced scope, BSc), and semantic signals obtained from semantic similarity models or LLMs (full scope, MSc). This thesis is suitable for students who do not have access to large GPUs. Level: BSc or MSc.
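
    A minimal sketch of the score-injection idea, assuming the rank_bm25 and sentence-transformers packages; the model name, documents, and score verbalization are illustrative (the tokenizer adds the [CLS]/[SEP] framing itself):

    ```python
    from rank_bm25 import BM25Okapi
    from sentence_transformers import CrossEncoder

    docs = [
        "The quick brown fox jumps over the lazy dog.",
        "Foxes are small omnivorous mammals that eat fruit and insects.",
    ]
    query = "what do foxes eat"

    # Lexical signal: BM25 score of each document for the query.
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    lex = bm25.get_scores(query.lower().split())

    ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    # Verbalize the lexical score between query and document, yielding
    # inputs of the form "[CLS] Query [SEP] BM25 [SEP] Doc [SEP]".
    pairs = [(query, f"{score:.2f} [SEP] {doc}") for doc, score in zip(docs, lex)]

    for doc, s in sorted(zip(docs, ce.predict(pairs)), key=lambda x: -x[1]):
        print(f"{s:.3f}  {doc}")
    ```

    Note that an off-the-shelf cross-encoder has not seen score-augmented inputs during training; in Askari et al.'s setup, the CE is fine-tuned on such pairs before evaluation.
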
V3: Natural Language Understanding, Semantics & Pragmatics, Computational Social Science

Selected Research Projects

  • Understanding Indirectness. Indirectness involves, for example, indirect answers to requests that do not explicitly contain answer cues like yes, yeah, or no. Example: Q: Do you wanna crash on the couch? A: I gotta go home sometime. Indirect answers are natural in human dialogue, but very difficult for conversational AI systems. This thesis project revolves around creating technology that can better deal with indirectness, for example by improving indirectness classifiers, creating novel datasets, or integrating context, for example by modeling longer dialogue context. The exact project scope will be determined with the student. References: Louis et al., 2020, Damgaard et al., 2021, Sanagavarapu et al., 2022, Yusupujiang & Ginzburg, 2023, Müller and Plank, 2024. Level: MSc (preferred), BSc (possibly).

  • Improving Automatic Text Simplification through Elaboration and Personalization. Automatic Text Simplification (ATS) aims to make complex texts more understandable for different audiences, often by simplifying vocabulary and sentence structures. However, research has shown that effective simplification often requires elaboration, i.e., adding explanations or extra context to clarify difficult terms or concepts (Srikanth et al., 2021), and LLMs have been shown to be limited at the task (Asthana, 2024). This thesis would explore ways to improve ATS by incorporating elaboration strategies. Possible directions include: a) External knowledge integration: How can external knowledge sources be used to add elaboration and context during simplification? b) Personalization: How can simplification techniques be personalized to specific user groups (e.g., children, non-native speakers, or individuals with disabilities)? Other topics in the area of ATS are also possible. Level: MSc (projects with a smaller scope in the domain of ATS are also possible).

V4: Human-centric Natural Language Understanding: Uncertainty, Perception, Cognition, Vision, Socially and Culturally Aware NLP, Interpretability

Some general references for this section: Plank, 2016, Plank, 2022 EMNLP, Yang et al., 2024

Selected Research Projects

  • In-context learning from human preference disagreement. Aggregating annotations via majority vote can lead to ignoring the opinions of minority groups. Learning from individual annotators yields better results on classification tasks such as hate speech detection, emotion detection, and natural language inference than learning from the majority vote (Davani et al., 2022). In this project, we want to investigate the potential of learning from individual annotators in an in-context learning setting. Furthermore, we want to go beyond classification datasets towards human preference datasets. How can the model learn in context from the disagreement in preference data? References: Davani et al., 2022, Chen et al., 2024, Zhang et al., 2024. Level: MSc.
  • Enhancing NLI Label Modeling with LLM-Generated Explanations: Annotation, Multilinguality, and Linguistic Perspectives. Recent studies (Chen et al., 2024a; Chen et al., 2024b) suggest that using human- or LLM-generated explanations can help LLMs better model human label distributions in Natural Language Inference (NLI). Building on this, we propose three research directions. Level: BSc or MSc.
    1. Annotation-focused: Develop a VariErr-like dataset (Weber et al., 2024) using LLM-generated explanations instead of human ones, with human annotators only validating whether the LLM explanations align with their reasoning. Annotators can flag cases where all LLM-generated explanations diverge from their intuition, significantly improving annotation efficiency.
    2. Multilingual-focused: Analyze how LLM-generated label distributions vary across languages, or incorporate multilingual explanation generation as a joint task.
    3. Linguistic-focused: Explore existing datasets like LiveNLI (Jiang et al., 2023), e-SNLI (Camburu et al., 2018), and VariErr NLI (Weber et al., 2024), where different explanations exist for the same label, to classify these explanations linguistically and observe the impact on LLM-generated label distributions.
  • Understanding disagreement in human-generated scores. Large collections of human evaluations on a scale (e.g., how concrete word X is on a scale from 1 to 5) are crucial for modeling various linguistic phenomena (e.g., figurative language use). These collections typically report an average score derived from numerous participants. However, this method of averaging can be highly deceptive, especially when individual scores vary significantly and the average score converges to a middle-range score. This project aims to explore the characteristics of words that achieve mid-scale average scores, examining their lexical attributes (like frequency), distributional properties (such as associations with other words), and behavioral and cognitive aspects (for instance, emotional connotations). This study will build on the existing work by Knupleš et al. (2023). Expected skills: regression analysis, cluster analysis, data visualization. Level: MSc.
  • Human-understandable descriptions of topic clusters. Topic modelling is a core tool in fields like digital humanities and computational social science, and recent developments like BERTopic and SCA have made it possible to leverage word embeddings to find topics. These topic clusters are, however, difficult to analyse, and additional tools like c-TF-IDF are necessary to make them human-understandable (see the sketch below). In this thesis, we will implement and evaluate new techniques for describing topic clusters in a more robust and human-understandable way. References: BERTopic, SCA, c-TF-IDF. Level: BSc.
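
    A minimal sketch of c-TF-IDF as used in BERTopic, with invented toy clusters: a term's frequency within a cluster is weighted by log(1 + A / f_t), where A is the average number of words per cluster and f_t the term's total frequency across clusters:

    ```python
    import math
    from collections import Counter

    # Toy topic clusters (documents already grouped, e.g. by embedding clustering).
    clusters = {
        0: ["the cat sat on the mat", "cats purr and cats sleep"],
        1: ["stocks fell sharply today", "markets react to interest rates"],
    }

    tf = {c: Counter(" ".join(docs).split()) for c, docs in clusters.items()}
    total = Counter()
    for counts in tf.values():
        total.update(counts)

    # Average number of words per cluster.
    avg_words = sum(sum(c.values()) for c in tf.values()) / len(tf)

    for c, counts in tf.items():
        n_words = sum(counts.values())
        scores = {
            term: (freq / n_words) * math.log(1 + avg_words / total[term])
            for term, freq in counts.items()
        }
        top = sorted(scores, key=scores.get, reverse=True)[:3]
        print(f"Cluster {c}: {top}")  # most descriptive terms per cluster
    ```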
  • Controlling models through state space analysis and manipulation. Recent developments in model interpretability, often labeled mechanistic interpretability, have made it possible to get a better understanding of the internal behavior of LLMs. In this project, we want to go one step further and leverage these insights to better control the output of LLMs. This project could focus on a specific use case we have, or be a more open-ended exploration. References: Anthropic, Saphra & Wiegreffe, Arditi et al., Singh et al. Level: MSc.
  • Grokking. “Grokking” [1] is a phenomenon observed when training deep neural networks. It is tied to the so-called double-descent curve and emergent abilities [2]. The term describes a training dynamic where the model rapidly transitions from “memorizing” training samples to generalizing abstractions, evidenced by a sharp rise in validation-set performance. It has been shown that for small toy models, grokking can be reliably triggered through a combination of weight decay and a critical dataset size [2]. However, the gap between these toy tasks and actual LLMs is still large. In this project, we want to take further steps towards exploring more complex settings and evaluate whether grokking helps to better understand the learning processes of modern deep language models. References: Power et al., Huang et al., Hupkes et al. Level: MSc.
  • Understanding prompt instability. It has been shown that LLMs are very unstable with regard to their prompts: even a small change like an inserted space or a misspelled word can drastically change the generated output. In this project, we want to identify such instability situations and, especially, explore the reasons behind them. References: Mizrahi et al., Shu et al., Zhao et al. Level: BSc or MSc.
  • Unveiling the Mechanisms of Soft Prompt Tuning in Cross-Lingual Transfer: An Interpretability-Driven Study. This research aims to investigate why soft prompt tuning (SPT) outperforms full-parameter fine-tuning (FT) in cross-lingual transfer tasks, where models trained in high-resource languages (e.g., English) are tested on low-resource languages. While SPT’s superiority (Tu et al., 2022; Ma et al., 2023; Park et al., 2023; Philippy et al., 2024) may stem from mitigating issues like monolingual overfitting, catastrophic forgetting, and multilingual tokenization, the underlying mechanisms remain unclear. Using advanced interpretability techniques (Luo & Specia, 2024) such as the logit lens, probing tasks, and sparse autoencoders, this study will analyze how SPT preserves language-agnostic features, avoids common pitfalls of FT, and enhances cross-lingual generalization, evaluating on natural language understanding tasks like text classification and sequence labelling across diverse languages. A minimal SPT setup is sketched below. Level: MSc.
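
    A minimal sketch of an SPT setup, assuming the Hugging Face transformers and peft libraries; the base model and the number of virtual tokens are illustrative:

    ```python
    from transformers import AutoModelForSequenceClassification
    from peft import PromptTuningConfig, TaskType, get_peft_model

    base = AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=3
    )

    # Only the continuous prompt embeddings (plus the classification head)
    # are trained; the multilingual backbone stays frozen, which is one
    # hypothesized reason SPT forgets less than full fine-tuning.
    config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # prints the tiny trainable fraction
    ```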
  • Exploring the effectiveness and challenges of using LLM as evaluators. Large Language Models (LLMs) have recently been explored as general-purpose evaluators for tasks such as text generation and classification. However, questions remain on their validity (Bavanesco et al., 2024), stability, and potential biases (Panickssery et al., 2024; Zheng at al., 2024). Specific topics in this area might include: a) analyzing LLM-as-evaluators sensitivity: how do slight changes in the input prompt, examples provided, or reference text affect the evaluation scores given by LLMs? b) domain and style variability: do LLMs perform consistently across different domains (e.g., scientific vs. literary texts) and linguistic styles (formal vs. informal)? How accurately do LLMs evaluate texts written by humans compared to those written by other machines? c) LLM biases: what biases might LLMs introduce in their evaluations, such as preference for verbosity, certain positions, or even their own training data? Related topics proposed from the student are also possible. A recent survey on leveraging LLMs as evaluators (for NLG tasks) was recently published by Li et al. Level: MSc.
  • Exploring Linear Directions in LLMs for Identifying Desirable Text Properties. Recent studies have explored how specific properties can be captured by linear directions in LLM embeddings. For example, Marks and Tegmark (2024) found that the representations of true/false statements show a linear structure. Sheng et al. (2024) explored a similar idea in evaluation, leveraging a linear direction to estimate text quality. Building on this line of work, this project explores whether these findings also transfer to other polar properties and to which extent they can be exploited for text evaluation (see the sketch below). Level: MSc.
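
    A minimal sketch of the difference-of-means construction behind such linear directions; here random vectors stand in for activations extracted from a chosen LLM layer:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 64  # hidden size of the hypothetical model layer

    # Stand-in activations for statements with / without the target property
    # (e.g., true vs. false statements); in practice these come from an LLM
    # run over a labeled dataset.
    pos = rng.normal(0.5, 1.0, size=(200, dim))
    neg = rng.normal(-0.5, 1.0, size=(200, dim))

    # Difference of class means, normalized to unit length.
    direction = pos.mean(axis=0) - neg.mean(axis=0)
    direction /= np.linalg.norm(direction)

    # Score unseen representations by projecting onto the direction;
    # larger projections indicate more of the target property.
    test = rng.normal(0.5, 1.0, size=(5, dim))
    print(test @ direction)
    ```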
  • Gaze data for NLP. The way in which our eyes move when reading a text can tell us a lot about the cognitive processes required for language understanding. For example, longer reading times indicate higher processing difficulty. In the past 10 years, a line of research has emerged that attempts to use gaze data obtained by eye tracking to improve NLP models for various tasks (e.g., Barrett et al., 2016, Hollenstein & Zhang, 2019, Deng et al., 2023, Alaçam et al., 2024). A student project could involve investigating under which circumstances and for which NLP tasks gaze data can be beneficial, or whether we can achieve the same effect with artificially synthesized gaze data. References: Hollenstein et al. (2019), Sood et al. (2020), Khurana et al. (2023), Bolliger et al. (2023). Level: MSc.
  • Investigating Information Asymmetry on Wikipedia. Information asymmetry (IA) refers to the fact that the volume and type of information on the web varies between languages. For example, prior work by Kolbitsch and Maurer (2006) and Callahan et al. (2011) finds that articles about locally famous people (“local heroes”) are written in a more favorable way, whereas articles about the same people written in other languages also discuss controversial aspects. As a result, monolingual search engines can produce culturally biased search results. The goal of this thesis is to investigate IA on Wikipedia. In the first part, the student is expected to devise an annotation schema that allows IA to be analyzed systematically, and then use it to compare articles about “local heroes” in 2-3 different languages. The second part consists of delving deeper into measuring IA in an automated way. The student is expected to investigate different ways of applying unsupervised semantic similarity models (e.g., BERTScore, proposed in Zhang et al. (2019); see the sketch below) or LLMs to identify IA. Level: BSc (under a reduced scope) or MSc.
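
    A minimal sketch of the similarity-based measurement step, assuming the bert-score package; the sentences are invented, and in practice one article version would first be translated into the other's language (or a multilingual model used):

    ```python
    from bert_score import score

    english = ["The politician was involved in a funding controversy."]
    other_version = ["The politician is widely admired for his charity work."]

    # Low similarity between aligned article segments can hint at
    # information asymmetry between the two language versions.
    P, R, F1 = score(other_version, english, lang="en")
    print(f"BERTScore F1: {F1.item():.3f}")
    ```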