Rapid Poison: Practical Poisoning Attacks Against the Rapid Response Framework
In the International Conference on Machine Learning (ICML), 2026. Also presented at the ICLR 2026 AIWILD Workshop.
Hello, I'm Jaewon, an undergraduate at UC Berkeley studying EECS. I'm fortunate to be involved in research at Berkeley AI Research, where I work with both the Berkeley NLP Group and Professor David Wagner's Security Group. In the NLP Group, advised by Alane Suhr, my work focuses on improving task decomposition and reasoning in language models. In the Security Group, mentored by Chawin Sitawarin and Sizhe Chen, I study safety topics ranging from adversarial learning and jailbreak poisoning attacks to, more recently, prompt injection defenses. Previously, I conducted distributed systems research at Berkeley's Sky Computing Lab under the guidance of Jaewan Hong.
In the Twelfth International Conference on Learning Representations (ICLR), 2024
I enjoy teaching, and I'm always happy to chat about coursework, research, or project ideas.