Department of Quantitative Theory & Methods
The Department of Computer Science
Office: E414 Mathematics and Science Center
Additional Contact Information
400 Dowman Drive
Atlanta, GA 30322
- Ph.D., Joint Degree in Computer Science and Cognitive Science, University of Colorado at Boulder, 2012
- M.S.E., Computer and Information Science, University of Pennsylvania, 2003
- B.A., Dual Degree in Mathematics and Computer Science, Coe College, 2001
Jinho Choi is an assistant professor in the Department of Computer Science, the Department of Quantitative Theory and Methods, and the Program of Linguistics at Emory University. He obtained a B.A. in Computer Science and Mathematics (dual degree) from Coe College in 2002, an M.S.E. in Computer and Information Science from the University of Pennsylvania in 2003 with Mitchell Marcus, and a Ph.D. in Computer Science and Cognitive Science (joint degree) from the University of Colorado Boulder in 2012 with Martha Palmer, and did his postdoctoral work at the University of Massachusetts Amherst in 2014 with Andrew McCallum. He was a full-time lecturer in the Department of Computer Science at the Korea Military Academy from 2004 to 2007 while serving his military duty in South Korea. He was an R&D team lead on the Amelia project, the next-generation machine reading system developed at IPsoft Inc. He is the founder of the Natural Language Processing Research Lab at Emory University.
Jinho Choi has been active in research on natural language processing, especially the optimization of low-level NLP (e.g., dependency parsing, named entity recognition, sentiment analysis) for robustness across diverse data and scalability to large data. He has developed an open source project called NLP4J, which provides NLP components with state-of-the-art accuracy and speed and has been widely used for both academic and industrial research. His current research focuses on the development of NLP components for different domains (e.g., social media, radiology reports, dialogs) and the application of these components to end-user systems such as question answering, character mining, and text generation. He is also interested in interdisciplinary research where NLP can enhance research in other areas.
My research focuses on the optimization of natural language processing for "robustness" across diverse data and "scalability" to large data. The goal is to develop NLP components that are readily available for higher-end research; in other words, we worry about NLP while you do more interesting things with the components we provide. All our NLP components (e.g., dependency parser, semantic role labeler) are developed in ClearNLP, an open source project that has been widely used for academic and industrial research.
Another part of my research focuses on NLP applications such as question answering, information extraction, and dialog management. These applications are often domain specific; our goal is to develop applications that work well enough to be practical for certain domains (e.g., FAQs for a company, entities in social media, topics in news), and to keep expanding these domains as needed. Constructing meaning representations from texts is a big part of this research.