Projects

At MaiNLP we aim to make NLP models more robust, so that they can deal better with underlying shifts in data due to language variation.

Ongoing research projects

The following lists selected ongoing research projects at MaiNLP, along with selected publications:

ERC Consolidator grant DIALECT: Natural Language Understanding for non-standard languages and dialects

Dialects are ubiquitous, and for many speakers they are part of everyday life. They carry important social and communicative functions. Yet dialects, and non-standard languages in general, are a blind spot in research on Natural Language Understanding (NLU). Despite recent breakthroughs, NLU still fails to take linguistic diversity into account. This failure to model language variation results in biased language models with high error rates on dialect data. It excludes millions of speakers today and prevents the development of future technology that can adapt to such users.

To account for linguistic diversity, a paradigm shift is needed: away from data-hungry algorithms that learn passively from large datasets with single ground-truth labels, which are known to be biased. To move past current learning practices, the key is to tackle variation at both ends: in the input data and in label bias. With DIALECT, I propose such an integrated approach, devising algorithms that aid transfer from rich variability in inputs, and interactive learning that integrates human uncertainty in labels. This will reduce the need for data and enable better adaptation and generalization.

Advances in salient areas of deep learning research now make it possible to tackle this challenge. DIALECT’s objectives are to devise a) new algorithms and insights to address extremely scarce data setups and biased labels; b) novel representations that integrate auxiliary sources of information, such as complementing text data with speech; and c) new datasets with conversational data in its most natural form.

By integrating dialectal variation into models able to learn from scarce data and biased labels, the foundations will be established for fairer and more accurate NLU that breaks down language and literacy barriers.

Selected publications:

DFF Sapere Aude Project MultiVaLUe: Multilingual Variety-aware Language Understanding Technology

Intelligent machines that understand natural language texts are the Holy Grail of Artificial Intelligence. If achieved, they could automatically extract useful information from big, messy text collections. Many challenges must be overcome first. To alleviate the scarcity of resources and broaden the scope to Danish and other small languages, we will unify two strands of research, transfer learning and weak supervision, with the aim of designing cross-domain and cross-lingual algorithms that extract information more robustly under minimal guidance. In this project we work on two concrete applications: cross-lingual syntactic parsing (and representation learning on the linguistic manifold) and cross-domain information extraction.

Selected publications:

DFF Project thematic AI, MultiSkill: Multilingual Information Extraction for Job Posting Analysis

Job markets are about to undergo profound changes in the years to come. The skills required to perform most jobs will shift significantly, due to a series of interrelated developments in technology, migration and digitization. As skills change, we face an increasing need for quicker and better hiring to better match people to jobs. Big multilingual job vacancy data are emerging on a multitude of platforms. Such big data can provide insights into labor market skill demands. This project is centered around computational job market analysis: reliably performing high-precision information extraction on targeted domain data.

Selected publications:

KLIMA-MEMES: The Impact of Humorous Communication on Political Decision-Making in the Climate Change Context

Climate change is a pressing problem facing humanity and a major polarizing topic in public discourse. Discussions on climate change pervade political agendas worldwide. IPCC experts agree that currently implemented measures against climate change are inadequate. Information like this often rapidly breaks into social media spheres (e.g., Instagram and TikTok) in increasingly visual and often humor-driven attention cycles. The KLIMA-MEMES project will analyze how such communication, in the form of memes or other visual media with an intent to be humorous, can affect political decision-making. This is a collaborative project with multiple partners at LMU, funded by the Bavarian Research Institute for Digital Transformation.

More information:

Applications for PhD, Postdoc and student assistant positions

Interested in PhD, Postdoc or student assistant jobs? →Open positions




BSc/MSc thesis projects

Interested in doing your MSc or BSc thesis with us? We offer several BSc and MSc thesis topics at MaiNLP.

Currently, the following research vectors characterize the broad topics in which we offer MSc and BSc thesis projects. We provide a (non-exhaustive) list of research projects within each vector. We are also happy to supervise projects related to the research projects listed above, and you are welcome to send us your own project proposal. We recommend checking out the suggested/selected publications to get inspired.

Unless otherwise specified, all projects can be either at the MSc or BSc level. The exact scope of the project will be determined in a meeting before the start of the project.

Important note: We currently do not supervise industrial MSc/BSc thesis projects (Industrieabschlussarbeiten).

Check back here regularly for updates on thesis project suggestions.

News:

  • 2024, Feb 26: Thank you all for your interest! We are closed for further thesis applications for the upcoming summer semester.
  • 2024, Jan 15: MSc/BSc project applications are open! See the list of projects below and how to apply here.
  • 2024, Jan 8: Information on how to apply added. Stay tuned for updates on project proposals.

Legend:

  • ⏳ topic currently reserved
  • strikethrough topic no longer available

MSc/BSc thesis research vectors:

V1: NLP for Dialects, Multilinguality

Selected research projects

  • NLP for dialectal/non-standard language data. You are welcome to propose thesis projects related to processing dialectal or non-standard language data. Selected example projects are given below. Please include the following information in your application: (1) the dialect(s)/language(s) you are interested in working with and your familiarity with them, (2) one or more NLP tasks you would be interested in working on, (3) links to relevant datasets (some starting points for finding datasets: Germanic low-resource languages/dialects – Blaschke et al. 2023 (web overview), regional languages of Italy – Ramponi 2024, Arabic dialects – Guellil et al. 2021) and/or details on what kind of data you would like to collect/annotate, and (4) what focus you are most interested in (modelling/engineering, linguistically motivated error analysis, etc. – you are very welcome to pitch specific research questions here). General reference: Zampieri et al. 2020. Level: BSc or MSc.

    • ⏳ Analyzing dialect identification systems. How well can we automatically discern between closely related language varieties, and do the features that are relevant to the success of an automatic classifier correlate with those that linguists would describe? You can decide on the language varieties to be included in the thesis, provided that high-quality, accessible corpora are available. (See above for starting points for finding datasets. If you prefer to work with a language not covered in these overviews, please contact us early on.) You can also decide which thematic focus/foci the thesis should have: (i) linguistic analysis of the data and classifier output; (ii) applying interpretability methods to the classifier output (in conjunction with focus i); (iii) comparing different kinds of input representations and ML architectures (see the baseline sketch after this list). The specific focus and level of detail will depend on your background and skills as well as the degree (BSc vs. MSc). Additional references: Gaman et al. 2020 and other VarDial shared task papers for literature on automatic dialect identification; Nerbonne et al. 2021 and Wieling & Nerbonne 2015 for introductions to dialectometry; Madsen et al. 2022 and Barredo Arrieta et al. 2020 for overviews of interpretability methods. Level: BSc or MSc.

    • ⏳ Slot and Intent Detection for Low-Resource Dialects. Digital assistants are becoming widespread, yet current technology covers only a limited set of languages. How can we best do zero-shot transfer to low-resource language variants without a standard orthography? References: van der Goot et al., 2021 and VarDial 2023 SID4LR. Create a new evaluation dataset for a low-resource language variant you speak, and investigate how to best transfer to it. Topics: Transfer Learning, Cross-linguality, Dataset annotation. (Particularly suited for students interested in covering their own language or dialect not yet covered by existing systems, including local dialects, e.g. Austrian, Low Saxon, Sardinian dialects or others.) Level: MSc or BSc.

    • ⏳ Creating a dialectal dependency treebank or POS-tagged corpus. Create a small Universal Dependencies (UD; de Marneffe et al. 2021) treebank in a dialect, regional language or other low-resource language that you are familiar with. For a less time-intensive project, it is also possible to only annotate part-of-speech (POS) tags (otherwise: POS tags + dependencies) and complement the project with something else. This project requires a strong interest in linguistics and syntax. You will need to read up on UD’s annotation guidelines and independently seek out relevant linguistic literature on your chosen language. You will also evaluate parsers/POS taggers on your new dataset in a cross-lingual transfer set-up and, time permitting, you might also train your own parsers. Ideally, the project leads to contributing a new treebank to the UD project. Examples of similar corpora: Siewert et al. 2021, Hollenstein & Aepli 2014, Cassidy et al. 2022, Lusito & Maillard 2021. Tutorial for UD newcomers. Level: BSc or MSc.

  • ⏳ Transfer or translate: how to better work with dialectal data. Demands for generalizing NLP pipelines to dialectal data are on the rise. Given current LLMs trained on hundreds of languages, there are two common approaches. The first is to translate (or normalize) dialectal data into its mainstream counterpart and apply existing pipelines to the translated text. This approach can benefit from the larger amount of unannotated and annotated data in the mainstream variant, but suffers from error propagation in the pipeline. The second, transfer-based approach is to annotate a small amount of dialectal data and few-shot transfer (finetune) models on the dialect. This involves more dialectal annotation as well as collecting unannotated dialectal data. Reference: Zampieri et al. 2020. For a BSc thesis, you would choose an NLP task (e.g., syntactic or semantic parsing, sentiment or stance detection, QA or summarization, etc.) and a specific dialect, compare the performance of few-shot versus translation approaches quantitatively, and conduct a qualitative error analysis on the difficult cases. For an MSc thesis, the research needs to scale up either to multiple dialects (in the same or across different language families) or to multiple NLP tasks. Level: BSc or MSc.
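
For the “Analyzing dialect identification systems” project above, here is a minimal baseline sketch for focus (iii), assuming scikit-learn; the four-sentence toy corpus and its labels are invented stand-ins for real dialect data, and character n-gram features with a linear classifier are just a common VarDial-style starting point, not the prescribed method.

    # Minimal dialect identification baseline: character n-gram TF-IDF
    # features with a linear classifier. The toy corpus below stands in
    # for a real collection of closely related varieties.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples: sentences labeled with their variety.
    texts = [
        "i mog di narrisch gern",    # Bavarian-like
        "ich mag dich sehr gern",    # Standard German
        "i ha di gärn",              # Swiss German-like
        "ich habe dich sehr gerne",  # Standard German
    ]
    labels = ["bar", "deu", "gsw", "deu"]

    # Character n-grams within word boundaries are a strong baseline
    # for closely related varieties (cf. VarDial shared tasks).
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    print(clf.predict(["i mog des a"]))

    # For the interpretability focus, inspect the highest-weighted
    # n-grams per class via clf[-1].coef_ together with
    # clf[0].get_feature_names_out().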

V2: High-Quality Information Extraction and Retrieval, Data-centric NLP

Selected research projects

  • ⏳ Computational Job Market Analysis. Job postings are a rich resource for understanding the dynamics of the labor market, including which skills are in demand, which is also important from an educational viewpoint. Recently, the emerging line of work on computational job market analysis (or NLP for human resources) has started to provide data resources and models for automatic job posting analysis, such as the identification and extraction of skills. For students interested in real-world applications, this theme provides multiple thesis projects, including but not limited to: an in-depth analysis of existing datasets and models, researching implicit skill extraction, or cross-domain transfer learning to adapt skill and knowledge extraction to data sources other than job postings, such as patents or scientific articles. See the references of the MultiSkill project, as well as Bhola et al., 2020, Gnehm et al. 2021 and our own ESCOXLM-R model. Level: BSc or MSc.

  • ⏳ Climate Change Insights through NLP. Climate change is a pressing international issue that is receiving more and more attention every day. It influences regulations and decision-making in various parts of society, such as politics, agriculture and business, and it is discussed extensively on social media. For students interested in real-world societal applications, this project aims to contribute insights on the discussion surrounding climate change on social media by examining discourse from a social media platform. The data will have to be collected (potentially from existing sources), cleaned, and analyzed using NLP techniques to examine various aspects or features of interest, such as stance, sentiment, the extraction of key players, etc. References: Luo et al., 2020, Stede & Patz, 2021, Vaid et al., 2022. Level: BSc or MSc.

  • ⏳ Better Benchmarks / Mining for Errors in Annotated Datasets. Benchmark datasets are essential in empirical research, but even widely-used annotated datasets contain mistakes, since annotators inevitably introduce errors (e.g. annotation inconsistencies). There are several lines of work in this direction. On the one side, annotation error detection methods provide a suite of methods to detect errors in existing datasets (cf. Klie et al. 2023, Weber & Plank 2023), including tools such as data maps (Swayamdipta et al. 2020). On the other side, there is work on inspecting existing datasets, such as recent revision efforts for English NER (cf. Reiss et al. 2020, Rücker & Akbik 2023). The goal of projects on this theme can be:
    a) (MSc level) to investigate error detection methods in novel scenarios (new benchmarks, new applications, and/or creating a new error detection dataset),
    b) (MSc or BSc level) to extend revision efforts on NER to other languages. For a BSc thesis, your task includes improving a benchmark dataset with iterations of sanity checks and revisions and comparing NLP models on the original versus revised versions. For an MSc thesis, you could extend this either by incorporating annotation error detection methods (see previous part) or by conducting additional evaluations on multiple downstream NLP tasks.
    c) (MSc level) to check the annotation consistency of non-standardized language data. Automatic methods for finding potential inconsistencies in annotations typically rely on consistent orthographies (e.g., detecting sequences that occur multiple times in a corpus but have received different annotations; Dickinson & Meurers 2003; see the sketch below). When text is written in a language variety without a standardized orthography, such methods may no longer work well because of spelling differences between the repeated sequences. Your task is to extend such error detection approaches to be more robust to orthographic variation and/or to investigate how well annotation error detection methods that do not directly depend on orthographic matches work (cf. Klie et al. 2023). The target dataset would ideally be a dialectal dataset currently under development at the lab (this requires familiarity with German dialects and an interest in syntax).
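
To illustrate the consistency checks in c), here is a sketch of the variation-n-gram idea from Dickinson & Meurers (2003); the two-sentence POS-tagged corpus is invented, and a thesis would replace the exact string matching below with orthography-robust matching.

    # Variation-n-gram-style error detection: flag word n-grams that
    # recur in a corpus with different label sequences.
    from collections import defaultdict

    # Each sentence is a list of (token, label) pairs, e.g. POS tags.
    corpus = [
        [("der", "DET"), ("mann", "NOUN"), ("lacht", "VERB")],
        [("der", "DET"), ("mann", "VERB"), ("lacht", "VERB")],  # suspicious
    ]

    def variation_ngrams(sentences, n=2):
        """Map each word n-gram to the set of label n-grams it received."""
        seen = defaultdict(set)
        for sent in sentences:
            for i in range(len(sent) - n + 1):
                window = sent[i:i + n]
                words = tuple(w for w, _ in window)
                tags = tuple(t for _, t in window)
                seen[words].add(tags)
        # n-grams with more than one labeling are candidate errors
        return {w: t for w, t in seen.items() if len(t) > 1}

    print(variation_ngrams(corpus))
    # For non-standardized orthography, one extension is to match
    # n-grams under a normalization or edit-distance function instead
    # of exact equality.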

  • ⏳ Error Analysis of a BERT-based Search Engine. Multi-stage ranking has become a popular paradigm in information retrieval: a fast first-stage ranker generates a candidate set of documents, followed by a much slower re-ranker that refines the ranking (Nogueira et al. 2019; see the sketch below). Prior work has shown that better candidate sets (higher recall) do not necessarily translate into a better final ranking (Gao et al. 2021). The goal of this thesis is two-fold: first, we would like to perform an error analysis of the linguistic triggers that cause this behavior; second, the goal is to apply and interpret automatically generated explanations from tools such as DeepSHAP (Fernando et al. 2019) and LIME (Ribeiro et al. 2016). Basic knowledge of information retrieval is helpful, but not required. Level: BSc.
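
A minimal sketch of the two-stage setup described above, assuming the rank_bm25 and sentence-transformers packages and a public MS MARCO cross-encoder checkpoint; the three-document corpus is a toy stand-in, not data from the project.

    # Two-stage ranking: a fast lexical first stage (BM25) followed by
    # a slower neural re-ranker that rescores the candidates.
    from rank_bm25 import BM25Okapi
    from sentence_transformers import CrossEncoder

    docs = [
        "BERT is a transformer-based language model.",
        "BM25 is a classic lexical ranking function.",
        "Re-rankers refine an initial candidate list.",
    ]
    query = "how does a neural re-ranker work"

    # Stage 1: BM25 retrieves a candidate set (here: top 2).
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    scores = bm25.get_scores(query.lower().split())
    candidates = sorted(range(len(docs)), key=lambda i: -scores[i])[:2]

    # Stage 2: a cross-encoder scores each (query, document) pair jointly.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    pair_scores = reranker.predict([(query, docs[i]) for i in candidates])
    reranked = [docs[i] for _, i in
                sorted(zip(pair_scores, candidates), reverse=True)]
    print(reranked)
    # An error analysis would compare first-stage and final rankings on
    # queries where higher first-stage recall does not help.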

  • Adapting Information Retrieval Models for Rare Terms. Neural ranking models have shown impressive results on general retrieval benchmarks; however, domain-specific retrieval and representing rare terms are still an open challenge (Thakur et al., 2021). In this thesis, the goal is to explore strategies for rewriting queries and documents with the help of text simplification models or external resources such as WordNet or Wikipedia (see the sketch below) in order to improve performance under domain transfer. Level: MSc.
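
One possible shape of the query-rewriting idea, sketched with WordNet synonyms via NLTK (one of the external resources named above); the expansion strategy and the example query are illustrative assumptions, not the required approach, and the WordNet data must be downloaded first (nltk.download("wordnet")).

    # Naive lexical query expansion for rare terms using WordNet.
    from nltk.corpus import wordnet as wn

    def expand_query(query, max_synonyms=3):
        """Append WordNet synonyms of each query term to the query."""
        expanded = list(query.split())
        for term in query.split():
            synonyms = {
                lemma.name().replace("_", " ")
                for synset in wn.synsets(term)
                for lemma in synset.lemmas()
            } - {term}
            expanded.extend(sorted(synonyms)[:max_synonyms])
        return " ".join(expanded)

    print(expand_query("myocardial infarction"))
    # A real system would filter expansions by domain and weight them
    # lower than the original terms to avoid query drift.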

  • Regularizing Monolingual Overfitting in IR. Prior work shows that training cross-encoder ranking models (i.e. sequence-pair classification models) on monolingual queries and documents (e.g. EN–EN) biases models towards learning to detect exact keyword matches (Litschko et al. 2023). This phenomenon, also known as monolingual overfitting, causes performance drops when transferring rankers to the cross-lingual setting (e.g. EN–DE), where query and document tokens are drawn from different vocabularies. To regularize monolingual overfitting, Litschko et al. (2023) train on code-switched data instead (see the sketch below). The goal of this thesis is to investigate to what extent these findings generalize to (a) other models and architectures, (b) additional languages, and (c) different types of retrieval tasks, such as question answering or sentence-level retrieval. Level: MSc.
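
A toy sketch of producing code-switched training queries in the spirit described above; the tiny EN-DE dictionary and the 50% switching rate are invented for illustration (Litschko et al. (2023) describe the actual training setup), and real work would use an induced bilingual dictionary or word-aligned parallel data.

    # Generate code-switched variants of a query by randomly replacing
    # tokens with their translation from a bilingual dictionary.
    import random

    en_de = {"capital": "Hauptstadt", "germany": "Deutschland",
             "river": "Fluss"}

    def code_switch(query, dictionary, rate=0.5, seed=0):
        """Randomly replace known tokens with their translation."""
        rng = random.Random(seed)
        return " ".join(
            dictionary[tok] if tok in dictionary and rng.random() < rate
            else tok
            for tok in query.lower().split()
        )

    print(code_switch("capital of germany", en_de))
    # e.g. "Hauptstadt of germany": the ranker can no longer rely on
    # exact lexical overlap between query and document.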

  • Learning Task Representations. We are often interested in transferring NLP/IR models to datasets for which we have little or no label annotation available (Kim et al. 2023; van der Goot et al. 2023). In such a zero-shot setting, it is possible to transfer a model from a single related task or a set of related tasks. Representing tasks as embeddings and measuring task similarity is an open challenge and an active research field (Sileo et al. 2022; Hendel et al. 2023). The goal of this thesis is to explore approaches for deriving task representations and to evaluate their effectiveness in a multi-task setting (see the sketch below). Level: MSc.
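
One simple way to represent tasks as embeddings, sketched under the assumption that sentence-transformers is available: average the embeddings of a few examples per task and compare tasks by cosine similarity. The three tasks and their examples are invented; this is just one baseline among the approaches the thesis would explore.

    # Task embeddings as the mean of example sentence embeddings.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    tasks = {
        "sentiment": ["I loved this movie.", "Worst purchase ever."],
        "topic":     ["The match ended 2-1.", "Stocks fell on Friday."],
        "reviews":   ["Great hotel, friendly staff.", "The food was awful."],
    }

    # One embedding per task: mean over its example embeddings.
    task_emb = {name: model.encode(examples).mean(axis=0)
                for name, examples in tasks.items()}

    # Task similarity can then guide source-task selection for transfer.
    print(util.cos_sim(task_emb["sentiment"], task_emb["reviews"]))
    print(util.cos_sim(task_emb["sentiment"], task_emb["topic"]))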

V3: Natural Language Understanding, Semantics & Pragmatics, Computational Social Science

Selected research projects

  • ⏳ Understanding Indirect Answers. Indirect answers are replies to polar questions without explicit use of Yes/No clues. Example: Q: Do you wanna crash on the couch? A: I gotta go home sometime. Indirect answers are natural in human dialogue, but very difficult for a conversational AI system. Build an NLP model that improves automatic understanding of indirect answers in English, for example by modeling longer dialogue context. The project can be extended to multilingual/transfer learning. References: Louis et al., 2020, Damgaard et al., 2021, Sanagavarapu et al., 2022, Yusupujiang & Ginzburg, 2023. Level: BSc or MSc.

  • Prominent entities in text summarization. One important factor in summarizing a document is identifying the prominent entities within it. Here is an example from the CNN summarization dataset. Two selected paragraphs:
    [Alonso] has been training hard for [his] planned comeback at the Malaysian Grand Prix in nine days and used the {McLaren} simulator to hone [his] mental preparations. The CNN-sponsored team announced the news on Twitter, showing {McLaren} sporting director Eric Boullier and [Alonso] at the team’s headquarters in Woking, England.
    Reference summary:
    [Fernando Alonso] steps up comeback preparations in {McLaren} simulator.
    What linguistic phenomena contribute to the prominence of entities in a document? For example, would coreference chains and discourse relations help? Are summarization strategies the same across different genres and languages? Build an NLP model that predicts the prominent entities in a document and evaluate it against reference summaries (see the sketch below). Start with the CNN/DM dataset. The project can be extended to other genres and languages. Level: BSc or MSc.
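
A rough sketch of deriving silver prominence labels from reference summaries, assuming spaCy with its en_core_web_sm model installed; the document/summary pair paraphrases the example above, and matching entities by exact string is a simplification a thesis would improve on (e.g., with coreference resolution).

    # Entities that occur in both document and reference summary are
    # treated as "prominent"; the rest as background.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    document = ("Fernando Alonso has been training hard for his comeback "
                "at the Malaysian Grand Prix and used the McLaren "
                "simulator in Woking.")
    summary = ("Fernando Alonso steps up comeback preparations in "
               "McLaren simulator.")

    doc_ents = {ent.text for ent in nlp(document).ents}
    sum_ents = {ent.text for ent in nlp(summary).ents}

    # Silver-standard labels for training/evaluating a prominence model.
    for ent in sorted(doc_ents):
        print(ent, "->", "prominent" if ent in sum_ents else "background")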

  • ⏳ Understanding Political Party Manifestos. Recent advancements in natural language processing have already changed adjacent fields like political science and communication research. A question that has always been relevant in these and other social sciences is how to turn textual data into ecologically valid numerical representations. In the field of party communication, the question of how to turn party manifestos into numerical vector representations has been studied by the Manifesto Project for decades. The project manually codes, with huge human work input, dozens of political issue categories for thousands of party manifestos. This project aims to use recent advances in natural language inference (NLI) and zero-shot classification to reproduce the human codings produced by the Manifesto Project (see the sketch below). Level: MSc (could be adapted to BSc). References: introduction to the political theory behind the Manifesto Project (Chapters 1-3): Lemmer 2023; paper on Natural Language Inference: Laurer et al. 2024; Manifesto Project website.
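
A hedged sketch of the NLI-based zero-shot setup, using the Hugging Face transformers pipeline with a public NLI checkpoint; the example sentence and the candidate labels are simplified stand-ins for actual Manifesto Project categories.

    # Zero-shot classification of a manifesto sentence: each candidate
    # label is internally turned into an NLI hypothesis such as
    # "This example is about environmental protection."
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    sentence = "We will invest heavily in wind and solar energy."
    categories = ["environmental protection", "education expansion",
                  "military", "welfare state expansion"]

    result = classifier(sentence, candidate_labels=categories)
    print(result["labels"][0], result["scores"][0])
    # Predicted categories can then be compared against the human
    # codings sentence by sentence.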

  • Characteristics of language between amateur and expert poets. Writing is an art: a beautiful and moving poem has various characteristics which readers relate to and which draw their minds into an imaginative tale. This project aims to better understand and characterize the writing styles of amateur and expert poets. The first step would be constructing a corpus of poems or prose by experienced and amateur writers from online sources, checking carefully for copyright. The data would have to be cleaned and preprocessed. Afterwards, various NLP techniques such as sentiment analysis or the analysis of metaphors will be used to better understand and characterize the various writing styles. If time allows, the corpus could be expanded across genres and time periods for a more comprehensive analysis of writing style. References: Kao & Jurafsky 2015, Kao & Jurafsky 2012, Gopidi & Alam 2019. Level: BSc or MSc.

V4: Human-centric Natural Language Understanding: Uncertainty, Perception, Cognition, Vision, Interpretability

Some general references for this section: Plank, 2016; Jensen and Plank, 2022; Plank, 2022 (EMNLP).

Selected research projects

  • ⏳ In-context learning from human disagreement on subjective tasks. Aggregating annotations via majority vote can lead to ignoring the opinions of minority groups, especially on subjective tasks. Learning from individual annotators yields better results on subjective classification tasks such as hate speech detection and emotion detection than learning from the majority vote (Davani et al., 2022). In this project, we want to investigate the potential of learning from individual annotators in an in-context learning setting (see the prompt sketch below). How can the model learn from the disagreement between annotators via instruction tuning and prompting? How do we design such instructions? References: Davani et al., 2022, Schick et al., 2021 and Mishra et al., 2022. Level: MSc.
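
A sketch of what annotator-conditioned prompting could look like; the annotator ID, the demonstrations, and the label set are invented placeholders, and the call to an instruction-tuned LLM is deliberately left abstract.

    # Build a prompt whose in-context demonstrations all come from one
    # annotator, so the model can pick up that annotator's behavior.
    examples_by_annotator = {
        "A3": [
            ("That crowd should go back where they came from.", "hateful"),
            ("What a pointless movie.", "not hateful"),
        ],
    }

    def build_prompt(annotator_id, new_text):
        lines = [f"Label each text as annotator {annotator_id} would "
                 "(hateful / not hateful).\n"]
        for text, label in examples_by_annotator[annotator_id]:
            lines.append(f"Text: {text}\nLabel: {label}\n")
        lines.append(f"Text: {new_text}\nLabel:")
        return "\n".join(lines)

    print(build_prompt("A3", "Those people ruin everything."))
    # The prompt would then be sent to an LLM; comparing per-annotator
    # predictions with majority-vote prompting is one possible design.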

  • ⏳ Beyond the Multiple-Choice Survey Question: Evaluating LLMs’ opinions in realistic settings. Multiple-choice questions play a crucial role in LLM evaluation studies: given an instruction and example QA pairs, the LLM is asked to pick the preferred answer, as in the MMLU benchmark (Hendrycks et al. 2021). Beyond objective tasks such as math questions, survey questions are also used to evaluate an LLM’s opinions (Santurkar et al. 2023). However, it is still unclear how well the survey question responses of an LLM represent its behavior in real-world scenarios. In this work, we want to investigate how representative the survey question evaluation results are and develop an open-ended generation task for evaluating LLMs in realistic settings. Level: BSc or MSc.

  • ⏳ Active Learning for Visual Question Answering with Large Pre-trained Models. There have been several attempts at active learning for VQA (Jedoui et al. 2019, Karamcheti et al. 2021). However, these models are small in size and were trained from scratch. Large pre-trained models have achieved great success in unimodal (language or vision) and multimodal (vision-language) settings (Li et al. 2022). This project aims to deploy SOTA foundation models in the active learning framework for VQA tasks (see the sketch of the generic loop below). A starting point could be re-implementing some VQA active learning works such as Jedoui et al. (2019). Level: MSc.
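
For orientation, a sketch of the generic pool-based active learning loop with uncertainty sampling into which a VQA model would be plugged; the placeholder predict_proba returns random class probabilities instead of real model outputs.

    # Pool-based active learning: repeatedly query the most uncertain
    # pool items (highest predictive entropy) for annotation.
    import numpy as np

    rng = np.random.default_rng(0)
    pool = [f"question_{i}" for i in range(100)]  # unlabeled (image, question) pairs
    labeled = []

    def predict_proba(items):
        # Placeholder: a real setup would run the VQA model here.
        p = rng.random((len(items), 4))
        return p / p.sum(axis=1, keepdims=True)

    for _ in range(3):  # three acquisition rounds
        probs = predict_proba(pool)
        entropy = -(probs * np.log(probs)).sum(axis=1)
        picks = np.argsort(-entropy)[:5]  # the 5 most uncertain items
        for i in sorted(picks, reverse=True):
            labeled.append(pool.pop(int(i)))  # send to the annotator
        # ... retrain or fine-tune the model on `labeled` here ...

    print(len(labeled), "items selected for annotation")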

  • ⏳ Systematic Generalization for Abstract Reasoning Tasks. Abstract reasoning is a fundamental aspect of human intelligence. It allows us to solve new, unfamiliar problems without the need for prior knowledge (Gilead et al. 2014). Developing machines that can reason this way, an ability closely related to the concept of fluid intelligence (Cattell 1963), has been a long-standing goal in the field of artificial intelligence (Barrett et al. 2018). However, most modern AI systems (including large language models) still lack general abstract reasoning capabilities (Marcus 2018, Gendron et al. 2023). In 2019, François Chollet introduced the Abstraction and Reasoning Corpus (ARC), a collection of visual puzzles designed to evaluate human-like general fluid intelligence in AI systems (Chollet 2019; see the data-format sketch below). While humans can robustly solve around 80% of ARC tasks, current algorithms markedly underperform, with a maximum success rate of 31% (Johnson et al. 2021). This disparity underscores the necessity for continued development in the field of automated abstract reasoning. Various challenges have been created to motivate developers to solve ARC. This project intends to tackle the ARC benchmark by leveraging insights from recent findings on systematic generalization. For this, we will exploit the compositional nature of ARC tasks. The aim is to develop an algorithm capable of systematic generalization, extrapolating from specific examples to novel test scenarios. This approach aims not only to enhance the performance of AI systems on the ARC benchmark, but also to contribute to the broader understanding of abstract reasoning in artificial intelligence. Level: MSc.
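
For orientation, a sketch of the public ARC task format (JSON objects with "train" demonstration pairs and "test" pairs, where grids are lists of rows of integer color codes 0-9, cf. github.com/fchollet/ARC); the miniature mirror-the-grid task below is invented and far simpler than real ARC puzzles.

    # Load a (toy) ARC-style task and check one candidate program.
    import json

    task = json.loads("""
    {
      "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0]],      "output": [[0, 0, 3]]}
      ],
      "test": [
        {"input": [[0, 4], [0, 5]], "output": [[4, 0], [5, 0]]}
      ]
    }
    """)

    def mirror(grid):
        """Candidate program: flip each row left-to-right."""
        return [row[::-1] for row in grid]

    # A solver must infer the transformation from "train" and apply it
    # to the "test" inputs; here we verify a hand-written candidate.
    assert all(mirror(p["input"]) == p["output"] for p in task["train"])
    print(mirror(task["test"][0]["input"]))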

  • Reasoning Back to Consistency: Evaluating Causality in Large Language Models. While formal logic is monotonic, i.e. once a valid conclusion is drawn, no additional valid premise can refute it, our everyday reasoning is often non-monotonic (Johnson-Laird & Byrne 1991). For instance, conclusions are often withdrawn based on new evidence or assumptions that no longer hold. According to Johnson-Laird (2011), when an inconsistency with an expected outcome is encountered, reasoning back to consistency appears to involve three main steps: detection of the inconsistency, revision of beliefs, and generation of a diagnostic explanation that resolves the inconsistency. A study by Johnson-Laird et al. (2004) investigates the preferences of individuals regarding diagnostic explanations for inconsistencies in reasoning. Interestingly, cause-and-effect relations play a pivotal role in the perceived plausibility of explanations, and thus are a driving aspect of our understanding of the world. In this project, we aim to replicate the study by Johnson-Laird et al. (2004) with a focus on evaluating which diagnostic explanations are considered most probable by Large Language Models (LLMs). Through this endeavor, we aim to assess whether LLMs align with the human-centric concept of causality in their reasoning processes. Level: MSc.

  • Solving Murder Mysteries: An Evaluation of Causal Reasoning in Large Language Models. Large Language Models (LLMs) have shown remarkable capabilities in various domains, yet their proficiency in complex causal reasoning remains largely unexplored (Kıcıman et al. 2023, Zečević et al. 2023, Zhang et al. 2023). This project seeks to employ a dataset of murder mystery riddles as a means to assess the causal reasoning abilities of large language models. Such riddles provide a structured yet creative way to assess how well LLMs can formulate cause-and-effect hypotheses and derive logical conclusions from observations (Lagnado & Gerstenberg 2017, Sun et al. 2023, Pearl 2013). The objectives of this study are multifaceted. First, based on Del & Fishel (2023), it aims to establish a new benchmark consisting of a diverse collection of short murder mystery riddles, each characterized by a clearly defined cause-and-effect framework. Second, the study is designed to assess the causal reasoning and deductive problem-solving skills of LLMs. Third, it seeks to identify the specific strengths and limitations of current open-source LLMs in tasks that require causal reasoning. Anticipated outcomes of this research encompass a quantitative evaluation of LLM performance in causal reasoning tasks, qualitative insights into the reasoning process, and the identification of key areas for improvement. Level: MSc.

  • Understanding disagreement in human-generated scores. Large collections of human evaluations on a scale (e.g., how concrete word X is on a scale from 1 to 5) are crucial for modeling various linguistic phenomena (e.g., figurative language use). These collections typically report an average score derived from numerous participants. However, this method of averaging can be highly deceptive, especially when individual scores vary significantly and the average score converges to a middle-range score (see the worked example below). This project aims to explore the characteristics of words that achieve mid-scale average scores, examining their lexical attributes (like frequency), distributional properties (such as associations with other words), and behavioral and cognitive aspects (for instance, emotional connotations). This study will build on the existing work by Knupleš et al. (2023). Expected skills: regression analysis, cluster analysis, data visualization. Level: MSc.
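
A tiny worked example of why averaged scores can mislead, using invented ratings: the two words below share the same mean, but one reflects genuine mid-scale consensus and the other bimodal disagreement.

    # Same mean, very different rater agreement.
    import numpy as np

    ratings = {
        "table":   [3.0, 3.0, 3.0, 3.0, 3.0],  # mid-scale consensus
        "freedom": [1.0, 5.0, 1.0, 5.0, 3.0],  # bimodal disagreement
    }

    for word, scores in ratings.items():
        s = np.array(scores)
        print(f"{word:8s} mean={s.mean():.2f} sd={s.std():.2f}")
    # Both means are 3.00, but the standard deviations differ sharply;
    # the project would relate such disagreement profiles to lexical,
    # distributional, and cognitive properties of the words.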

How to apply for a BSc and MSc thesis project

Important information for LMU students: You need to apply for an MSc/BSc thesis project at the latest three weeks before the thesis project registration date. Deadlines for the summer semester 2024:

  • MSc students apply before February 15, 2024
  • BSc students apply before February 29, 2024

To apply, please send your application material with the subject “BSc (or MSc) thesis project at MaiNLP - inquiry [Name and which semester]” to: thesisplank@cis.lmu.de

Your application should contain a single PDF with the following information:

  • CV, your study program, full grade transcript
  • Level: BSc or MSc thesis project
  • Which theme or concrete project interests you the most (optional: we are open to project proposals related to the research vectors or ongoing research projects above). If you are interested in multiple, list your preferences for up to two (ranked list: first priority, second priority)
  • Languages you speak
  • Your favorite project so far, and why
  • Your knowledge and interest in data annotation, data analysis and machine learning/deep learning (including which toolkits you are familiar with)
  • Whether you have access to GPU resources (and which)
  • A term project report or your BSc thesis if you apply for an MSc thesis (optional)

If you have questions, reach out using the email above.