CV
This page provides a summary of my academic background, research experience, and publications. A full PDF version of my CV is available for download.
Contact Information
| Name | Hamin Koo |
| Professional Title | Master's Student |
| Email | hamin2065@yonsei.ac.kr |
Experience
- 2024.08 - 2025.02: Research Intern, ML3 Lab, Yonsei University
  - Advisor: Prof. Jaehyung Kim
  - Large Language Models (LLMs)
- 2023.07 - 2024.07: Research Intern, MLAI Lab, KAIST AI Graduate School
  - Advisor: Prof. Sung Ju Hwang
  - Large Language Models (LLMs), Retrieval-Augmented Generation (RAG)
- 2022.07 - 2023.06: Research Intern, Machine Intelligence Lab, Hankuk University of Foreign Studies
  - Advisor: Prof. Byunghwan Jeon
  - Classification of manufacturing products using a multi-ResNet model
Education
- 2025.03 - Present: M.S. in Artificial Intelligence, Yonsei University, South Korea
  - Advised by Prof. Jaehyung Kim at ML3 Lab
- 2023.01 - 2023.05: Visiting Student in Computer Science, University of California, Berkeley, United States
- 2020.03 - 2024.02: B.S. in Computer Engineering, Hankuk University of Foreign Studies, South Korea
  - Graduated with Honors (College of Engineering)
  - Cumulative GPA: 4.39/4.5
Awards
- 2023 & 2024: National Excellence Scholarship, Korea Student Aid Foundation
- 2020 & 2021: Academic Excellence Scholarship, Hankuk University of Foreign Studies, Korea
Publications
- 2026: Align to Misalign: Automatic LLM Jailbreak with Meta-Optimized LLM Judges
  International Conference on Learning Representations (ICLR)
  We introduce AMIS (Align to MISalign), a bi-level optimization framework for jointly evolving jailbreak prompts and scoring templates.
- 2025: Extracting and Emulsifying Cultural Explanation to Improve Multilingual Capability of LLMs
  Preprint
  We propose EMCEE (Extracting synthetic Multilingual Context and merging), a multilingual prompting framework that improves LLM performance on non-English queries by extracting query-relevant knowledge and merging it with the model's reasoning.
- 2024: Optimizing Query Generation for Enhanced Document Retrieval in RAG
  Preprint
  We introduce QOQA (Query Optimization using Query expAnsion), an LLM-based query optimization method that refines RAG queries, improving document retrieval quality and reducing hallucinations.