
Taehyeon Kim

 Research Scientist @ LG AI Research (Superintelligence Lab)
 PhD @ KAIST AI [OSI Lab]
 Prev: Google Research (NYC), Qualcomm AI, Dynamo AI (YC W22)
Education
 PhD - KAIST AI
 BS - KAIST Mathematical Sciences (Minor: Intellectual Property)
I am a Research Scientist at LG AI Research, focusing on thinking and reasoning strategies for LLMs, including scalable test-time inference and model behavior optimization. I received my Ph.D. from the Optimization & Statistical Inference (OSI) Lab @ KAIST in Feb. 2025, advised by Prof. Se-Young Yun. During my Ph.D., I worked as a PhD Intern @ Google Research, Qualcomm AI, and DynamoFL (YC W22).  Contact: potter32 [at] kaist.ac.kr, kimtaehyeon610 [at] gmail.com (permanent)
Please feel free to contact me!  Click CV & LinkedIn (Updated: Dec. 2024)!

 Short Bio

Taehyeon Kim is a Research Scientist at LG AI Research, where he focuses on thinking and reasoning strategies for large language models (LLMs), including scalable test-time inference and model behavior optimization. He received his Ph.D. in AI from the Korea Advanced Institute of Science and Technology (KAIST), advised by Prof. Se-Young Yun in the OSI Lab.
During his Ph.D., he gained broad research and engineering experience through collaborations with Google Research (2023), Dynamo AI (2023), the Korean National Institute of Meteorological Sciences (2022), and Qualcomm AI (2021). His work has been recognized with spotlight and oral presentations at top venues (e.g., ICLR, ICML workshops), as well as several awards and leadership roles in NeurIPS competitions.
Taehyeon’s research has centered on efficient and effective LLM inference, with contributions to speculative decoding, instructive decoding, and collaborative inference frameworks. His broader work spans real-world challenges including distributed optimization over heterogeneous data, semi-supervised federated object detection, automated hyperparameter search, instruction-following alignment, and domain-specific weather forecasting for South Korea.
Looking ahead, he is particularly interested in collaborative decoding among multiple LLMs via prompt optimization, client-centric test-time adaptation incorporating user preferences, and fast, scalable reasoning under compute and latency constraints. His research integrates both theoretical insights in matrix analysis and practical algorithmic design, enabling robust performance across diverse application domains.
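
Speculative decoding, one of the threads above, pairs a cheap draft model with the large target model: the draft proposes a block of tokens, and the target verifies them so several tokens can be committed per expensive step. The sketch below is a minimal greedy illustration of that generic draft-and-verify loop with hypothetical toy models; it is not the specific algorithm from any paper referenced here, and real systems verify the whole drafted block in a single parallel forward pass of the target.

```python
# Minimal greedy draft-and-verify sketch of speculative decoding.
# The "models" are hypothetical toy stand-ins: each maps a token prefix
# to its next greedy token. Real systems score the whole drafted block
# with one parallel forward pass of the target model instead of a loop.
from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]

def speculative_decode(target: Model, draft: Model,
                       prompt: List[Token], k: int, max_new: int) -> List[Token]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        block: List[Token] = []
        for _ in range(k):
            block.append(draft(out + block))
        # 2) Verify: accept the longest prefix on which the target agrees.
        accepted = 0
        for i in range(k):
            if target(out + block[:i]) == block[i]:
                accepted += 1
            else:
                break
        out.extend(block[:accepted])
        # 3) The target contributes one token per round regardless, so the
        #    loop makes progress even when every drafted token is rejected.
        out.append(target(out))
    return out[: len(prompt) + max_new]

if __name__ == "__main__":
    # Toy demo: the target counts up by 1; the draft agrees most of the time.
    target = lambda seq: (seq[-1] + 1) % 100
    draft = lambda seq: (seq[-1] + 1) % 100 if seq[-1] % 7 else (seq[-1] + 2) % 100
    print(speculative_decode(target, draft, prompt=[0], k=4, max_new=12))
```

When the draft agrees with the target often, most rounds commit several tokens for roughly the cost of one target step, which is where the speedup comes from.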
Featured Publications (1st Authored)

 CV (Updated: Dec. 2024)

News

Apr. 2025
 Started working at LG AI Research, in a team led by Moontae Lee
Feb. 2025
 Successfully received my PhD!
Oct. 2024
 1 Accepted @ NeurIPS 2024 Workshop: Speculative Decoding with multiple drafters
 1 Accepted @ TMLR 2024: Federated Learning with Noisy Labels
 Google Conference Scholarship - NeurIPS 2024
Sep. 2024
 Successfully passed my PhD Proposal!
 1 Accepted @ EMNLP 2024 Main: Specialized Speculative Decoding!
 2 Accepted @ NeurIPS 2024 - Speculative Decoding and Block Transformer!
Jun. 2024
 1 Accepted @ ICML 2024 Workshop: Blockwise Parallel Decoding (Speculative Decoding)
May. 2024
 Attending ICLR 2024 @ Vienna, Austria
Jan. 2024
 1 Accepted @ ICLR 2024 (Spotlight): Instruction Following in Large Language Models

Working at/with

Name | When | Research | Advisor/Coworker
Dynamo AI | 2023.01 - 2023.05 | Semi-Supervised Object Detection, Federated Learning | Eric Lin
Qualcomm AI | 2021.06 - 2021.12 | Neural Architecture Search, Knowledge Distillation | Heesoo Myeong

 Publications & Technical Reports


Preprints (Under Review)

 Leadership, Awards & Activities

 Research Projects

 Invited Talks

 Services & Others


 Others