My career has spanned embedded firmware development, desktop applications, and complex system debugging across the runtime and application layers. Currently, I’m focused on AI safety research, particularly mechanistic interpretability and model evaluation. I’m drawn to understanding the internal mechanisms that drive AI behavior and to exploring whether we can build systems that are both capable and interpretable by design. From my engineering background I bring a systematic, investigative mindset, treating AI models as complex systems to be understood rather than black boxes to be merely measured.