CV


Research Interests

I am interested in secure and efficient architectures for hardware accelerators, such as GPUs and NPUs.

My research objective is to design high-performance accelerators with security guarantees. To achieve this goal, my recent work focuses on 1) hardware security and 2) performance improvement of hardware accelerators.

Hardware Security of Accelerators: As accelerators are widely used in mission-critical tasks, their security becomes increasingly important. Although my previous work extended trusted execution environments (TEEs) to GPUs and NPUs, many security weaknesses remain. Therefore, I aim to raise the security level of accelerators so that they can withstand unintended operations.

Performance Improvement of Accelerators: Since machine learning demands fast processing, improving accelerator performance is crucial. I therefore consider both hardware and software techniques to increase parallelism and eliminate unnecessary operations. In a recent publication, I proposed a fine-grained GPU scheduling algorithm that leverages Multi-Process Service (MPS). Further reducing execution time is a direction for my future research.

Building on these two sub-goals, I aim to combine a trusted system with a high-performance accelerator design, which is expected to protect users from incidents caused by attackers or extreme environments while keeping latency reasonable.

Education

KAIST, Daejeon, Republic of Korea, Mar 2021 - present
  Ph.D. Student, School of Computing
  Advisor: Jaehyuk Huh

KAIST, Daejeon, Republic of Korea, Mar 2019 - Feb 2021
  Master of Science, School of Computing
  Advisor: Jaehyuk Huh
  Thesis: Hardware Security Techniques for Trusted Machine Learning Accelerators

Yonsei University, Seoul, Republic of Korea, Mar 2015 - Feb 2019
  Bachelor of Science, Computer Science

Publications

Patents

Research Experiences

KAIST, Daejeon, Republic of Korea, Mar 2019 - present
Ongoing Research at CASYS (Computer Architecture and SYStem) Lab
Advisor: Jaehyuk Huh
  Accelerator Hardware-based Security
    - Memory protection optimization for GPU: Common counters for duplicate counters (Published in HPCA 2021)
    - Memory protection optimization for multi-GPU system (Published in HPCA 2024)
    - Trusted execution environment for NPU: Tensor-granularity counters (Published in HPCA 2022)
    - Memory protection optimization for NPU: Partial memory protection (Published in ICCD 2022)
    - Side-channel attack protection for NPU
    - Dynamic secure-granularity management for heterogeneous processors
  Accelerator Performance
    - Multi-tenancy support for a multi-GPU system: Temporal and spatial sharing (Published in USENIX ATC 2022)
    - Accurate multi-NPU simulation: Multi-NPU simulator integrated with DRAMsim3 (Published in IISWC 2023)
    - On-chip memory management for training NPU: Access order rearrangement (Published in MICRO 2023)

Yonsei University, Seoul, Republic of Korea, Sep 2017 - Jun 2018
Undergraduate Research Intern at ELC (Embedded Systems Languages and Compilers) Lab
Advisor: Bernd Burgstaller
  Parallelism
    - Accelerating a big-data streaming engine: Multi-threading and shared memory
    - Parallelization of SFA (Simultaneous Deterministic Finite Automata) construction: MPI and Huang’s algorithm

Recognition

KAIST, Daejeon, Republic of Korea
  Outstanding Teaching Assistant Award - CS311 Computer Organization, Spring 2022, Fall 2019
  Outstanding Teaching Assistant Award - CS230 System Programming, Fall 2023

Yonsei University, Seoul, Republic of Korea
  Dean’s List, Spring 2018, Spring 2015
  Undergraduate Capstone Project Award (Third Place) - Project Leader, Spring 2018
    Title: Cloud SFA: Parallel Construction of Simultaneous Deterministic Finite Automata in Distributed System

Samsung Electronics, Hwaseong, Republic of Korea
  Best Paper Award (Third Place), Summer 2022
    Title: TNPU: Supporting Trusted Execution with Tree-less Integrity Protection for Neural Processing Unit

Participation

uArch (in conjunction with ISCA 2022), New York City, United States of America
Student Panel
  Life in Grad School, June 2022

Skills

Programming Languages: C, C++, Python
NPU Simulators: mNPUsim, SCALE-Sim, MAESTRO, Gemmini
GPU Programming: CUDA, MPS
Multi-core CPU Programming: MPI, OpenMP
Machine Learning Frameworks: PyTorch, TensorFlow

Teaching Experiences

KAIST, Daejeon, Republic of Korea
Teaching Assistant
  CS230 System Programming, Fall 2023, Fall 2021
  CS311 Computer Organization, Fall 2022, Spring 2022, Spring 2021, Fall 2019
  CS211 Digital System and Lab, Spring 2019

KAIST Education Center, Daejeon, Republic of Korea
Mentor & Lecturer
  Seocho AI College, Summer 2021, Summer 2019
  Python for Beginners, Summer 2022, Summer 2021