LLM Evaluation Framework Comparison

Comparative Study of LLM Evaluation Frameworks (Deloitte-Anthropic Alliance)

In collaboration with Deloitte’s Anthropic Alliance, this capstone research for the M.S. in Data Science at the University of Virginia critically examines leading frameworks for evaluating large language models (LLMs). Drawing on multiple datasets and methodologies, the study benchmarks and compares state-of-the-art evaluation frameworks across eight critical metrics: toxicity detection, bias detection, hallucination detection, summarization quality, tone identification, readability assessment, retrieval accuracy, and response accuracy, with an emphasis on ethical and reliable AI assessment.
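As a rough illustration of how such a comparison can be organized (not the capstone's actual code), the sketch below scores a single prompt–response pair with each framework's metric scorers; the framework and metric names here are hypothetical placeholders.

```python
# Hypothetical harness sketch: score one (prompt, response) pair with each
# framework's metric scorers and collect results per framework, per metric.
from typing import Callable, Dict

Scorer = Callable[[str, str], float]  # (prompt, response) -> score in [0, 1]

def compare_frameworks(
    frameworks: Dict[str, Dict[str, Scorer]],  # framework -> metric -> scorer
    prompt: str,
    response: str,
) -> Dict[str, Dict[str, float]]:
    return {
        name: {metric: score(prompt, response) for metric, score in metrics.items()}
        for name, metrics in frameworks.items()
    }

# Placeholder scorers stand in for the real framework calls.
dummy = {"framework_a": {"toxicity": lambda p, r: 0.02, "bias": lambda p, r: 0.10}}
print(compare_frameworks(dummy, "prompt", "response"))
```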

May 2025 · Afnán Alabdulwahab

Understanding DeepEval's Bias Evaluation Methodology

This blog post explores the three-stage bias detection process in DeepEval, an LLM-based evaluation system that quantifies bias in AI-generated text. The methodology leverages structured validation, templated prompts, and a scoring framework to assess bias across multiple categories.
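For readers unfamiliar with the library, a minimal usage sketch of the metric the post dissects is shown below; class and attribute names reflect recent deepeval releases, and the judging step relies on an LLM backend (an OpenAI API key by default), so exact behavior may vary by version.

```python
# Minimal DeepEval BiasMetric usage sketch (recent deepeval releases; the
# metric calls an LLM judge under the hood, so an API key is required).
from deepeval.metrics import BiasMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="Describe a typical software engineer.",
    actual_output="Software engineers are usually young men who love gaming.",
)

metric = BiasMetric(threshold=0.5)  # scores above the threshold fail the check
metric.measure(test_case)
print(metric.score)   # proportion of extracted opinions judged biased
print(metric.reason)  # LLM-generated explanation of the verdicts
```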

February 2025 · Afnán Alabdulwahab

AI-Generated Text Detection

Detecting AI-Generated Text: Targeting Academic Integrity Applications

Completed for DS6051: Decoding Large Language Models at UVA, this project explores transformer-based methods for detecting AI-generated text in academic contexts. Fine-tuning RoBERTa with LoRA and optimizing for accuracy on human-written text reduced the false-positive rate on human-written abstracts from 83.2% to just 0.7%, demonstrating the importance of fairness and robustness in detection systems.
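For context on the method named above, here is a minimal sketch of LoRA fine-tuning a RoBERTa classifier with Hugging Face transformers and peft; the base checkpoint, rank, target modules, and label scheme are illustrative assumptions, not the project's actual configuration.

```python
# Illustrative LoRA setup for binary AI-text detection with RoBERTa
# (hyperparameters are placeholders, not the project's tuned values).
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast
from peft import LoraConfig, get_peft_model, TaskType

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
base_model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # 0 = human-written, 1 = AI-generated
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # keeps the classification head trainable
    r=8,                                # illustrative LoRA rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in RoBERTa
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()      # only a small fraction of weights train

inputs = tokenizer("Example abstract text ...", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))           # probabilities over {human, AI}
```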

May 2025 · Afnán Alabdulwahab