Dear students,
We are pleased to offer a curricular internship opportunity focused on hallucination detection in Large Language Models (LLMs). The internship involves both implementation and experimental evaluation activities, with the goal of comparing different approaches proposed in the literature.
Title
Hallucination Detection in Large Language Models: A Comparative Experimental Study
Description
Large Language Models (LLMs) often generate plausible-sounding but factually incorrect content, a phenomenon known as hallucination. This internship aims to investigate and experimentally compare different approaches for detecting such hallucinations across models and datasets.
Activities
- Develop wrappers for LLM initialization and inference using Hugging Face Transformers
- Implement basic hallucination detection methods (e.g., model perplexity, output entropy); a minimal sketch of these first two activities follows this list
- Implement pairwise evaluation metrics (e.g., ROUGE, cosine similarity); see the second sketch below
- Run experiments across multiple models and benchmark datasets
- Analyze and report the results
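To give a concrete idea of the first two activities, here is a minimal sketch of a model wrapper and a perplexity-based signal. The model name "gpt2" and the function names are illustrative assumptions, not fixed choices:

    # Illustrative sketch: model wrapper + perplexity signal (assumed names).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def load_model(name: str = "gpt2"):  # "gpt2" is a stand-in model
        """Load a causal LM and its tokenizer from the Hugging Face Hub."""
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        model.eval()
        return model, tokenizer

    @torch.no_grad()
    def perplexity(model, tokenizer, text: str) -> float:
        """Perplexity of `text` under the model: the exponential of the
        mean token-level cross-entropy. A higher value means the model
        assigns lower probability to the text, one weak signal that an
        output may be hallucinated."""
        inputs = tokenizer(text, return_tensors="pt")
        loss = model(**inputs, labels=inputs["input_ids"]).loss
        return torch.exp(loss).item()

    model, tokenizer = load_model()
    print(perplexity(model, tokenizer, "Paris is the capital of France."))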
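The pairwise metrics could start from something like the sketch below, assuming the rouge-score package and scikit-learn; these are illustrative choices, and other libraries (e.g., sentence embeddings for semantic cosine similarity) would work equally well:

    # Illustrative sketch: pairwise metrics between two texts.
    from rouge_score import rouge_scorer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rouge_l(reference: str, candidate: str) -> float:
        """ROUGE-L F1 between a reference answer and a model output."""
        scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
        return scorer.score(reference, candidate)["rougeL"].fmeasure

    def tfidf_cosine(a: str, b: str) -> float:
        """Cosine similarity between the TF-IDF vectors of two texts."""
        vectors = TfidfVectorizer().fit_transform([a, b])
        return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

    ref = "The Eiffel Tower is in Paris."
    cand = "The Eiffel Tower is located in Paris."
    print(rouge_l(ref, cand))
    print(tfidf_cosine(ref, cand))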
Requirements
- Good knowledge of Python
- Basic understanding of machine learning and NLP
- Familiarity with PyTorch is a plus
Expected Outcomes
- Experimental comparison of hallucination detection methods
- Well-documented codebase
- Final report summarizing findings
If you are interested, please get in touch for further details.
Best regards,
Marco Viviani