Steven Feng

I'm a third-year Stanford Computer Science PhD student and NSERC PGS-D scholar, working with the Stanford AI Lab and Stanford NLP Group. I am co-advised by Michael C. Frank and Noah Goodman as part of the Language & Cognition (LangCog) and Computation & Cognition (CoCo) Labs. I am grateful for collaboration and support from Google DeepMind, Amazon Science, Microsoft AFMR, and StabilityAI.

My ultimate goal is to blend knowledge from multiple disciplines to advance AI research. My current research centers on aligning foundation models with human learning and capabilities, particularly in reasoning, generalization, and efficiency. I have explored ways to improve the controllability of language and visual generation models and to integrate structured and multimodal information to enhance their reasoning capabilities.

I'm investigating psychologically and cognitively inspired methods for continual learning, self-improvement, and advanced reasoning in foundation models. I'm also exploring methods to bridge the data efficiency gap between human and model learning [1,2,3] while shedding further light on human cognitive models and our efficient language acquisition capabilities.

Previously, I was a master's student at Carnegie Mellon University (CMU), where I worked with Eduard Hovy and Malihe Alikhani on language generation, data augmentation, and commonsense reasoning. Before that, I was an undergraduate student at the University of Waterloo, where I worked with Jesse Hoey on dialogue agents and text generation.

My research contributions have been recognized with several publications at major conferences and a best paper award at INLG 2021. I also received Honorable Mentions for the Jessie W.H. Zou Memorial Award and the CRA Outstanding Undergraduate Researcher Award.

I am a co-instructor for the Stanford CS25 Transformers course, and I mentor and advise several students. I also led the organization of CtrlGen, a controllable generation workshop at NeurIPS 2021, and was involved in the GEM benchmark and workshop for NLG evaluation.

In my free time, I enjoy gaming, playing the piano and guitar, singing, dancing, martial arts, and table tennis. I am also the founder and president of the Stanford Piano Society.

Email  /  CV  /  Google Scholar  /  Twitter  /  LinkedIn  /  GitHub  /  YouTube

Peer-Reviewed Publications and Conference Proceedings

Is Child-Directed Speech Effective Training Data for Language Models?
Steven Y. Feng, Noah D. Goodman, Michael C. Frank
Accepted to Empirical Methods in Natural Language Processing (EMNLP) 2024
Abstract / Bibtex

CHARD: Clinical Health-Aware Reasoning Across Dimensions for Text Generation Models
Steven Y. Feng, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Eduard Hovy
Proceedings of European Chapter of the Association for Computational Linguistics (EACL) 2023
Abstract / Bibtex

PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically
Sedrick Scott Keh, Steven Y. Feng*, Varun Gangal*, Malihe Alikhani, Eduard Hovy
Proceedings of European Chapter of the Association for Computational Linguistics (EACL) 2023
Abstract / Bibtex / GitHub

PINEAPPLE: Personifying INanimate Entities by Acquiring Parallel Personification data for Learning Enhanced generation
Sedrick Scott Keh, Kevin Lu, Varun Gangal*, Steven Y. Feng*, Harsh Jhamtani, Malihe Alikhani, Eduard Hovy
Proceedings of International Conference on Computational Linguistics (COLING) 2022
Abstract at TADA 2021: Conference on New Directions in Analyzing Text as Data
Abstract / Bibtex / GitHub / Talk / Presentation Slides / Poster

Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models
Steven Y. Feng, Kevin Lu, Zhuofu Tao, Malihe Alikhani, Teruko Mitamura, Eduard Hovy, Varun Gangal
Proceedings of AAAI Conference on Artificial Intelligence 2022 (Acceptance rate: 15%)
Accepted to AKBC 2021 Commonsense Reasoning and Knowledge Bases (CSKB) Workshop.
Abstract / Bibtex / GitHub / Presentation Slides / Poster

NAREOR: The Narrative Reordering Problem
Varun Gangal*, Steven Y. Feng*, Malihe Alikhani, Teruko Mitamura, Eduard Hovy
Proceedings of AAAI Conference on Artificial Intelligence 2022 (Acceptance rate: 15%)
Abstract / Bibtex / GitHub / Presentation Slides / Poster

SAPPHIRE: Approaches for Enhanced Concept-to-Text Generation
Steven Y. Feng, Jessica Huynh, Chaitanya Narisetty, Eduard Hovy, Varun Gangal
Proceedings of International Conference on Natural Language Generation (INLG) 2021 [Best Long Paper]
Abstract / Bibtex / GitHub / Poster

A Survey of Data Augmentation Approaches for NLP
Steven Y. Feng*, Varun Gangal*, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy
Proceedings of Association for Computational Linguistics (ACL) 2021 Findings [Long Paper]
Abstract / Bibtex / GitHub / Podcast (with Ed Hovy) / Talk (for Google Research) / Presentation Slides / Poster

GenAug: Data Augmentation for Finetuning Text Generators
Steven Y. Feng*, Varun Gangal*, Dongyeop Kang, Teruko Mitamura, Eduard Hovy
Proceedings of EMNLP 2020 Deep Learning Inside Out (DeeLIO) Workshop [Long Paper]
Abstract / Bibtex / GitHub / Presentation Slides

ALOHA: Artificial Learning of Human Attributes for Dialogue Agents
Aaron W. Li, Veronica Jiang*, Steven Y. Feng*, Julia Sprague, Wei Zhou, Jesse Hoey
Proceedings of AAAI Conference on Artificial Intelligence 2020 (Acceptance rate: 20.6%) [Oral]
Abstract / Bibtex / GitHub

Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange
Steven Y. Feng*, Aaron W. Li*, Jesse Hoey
Proceedings of Empirical Methods in Natural Language Processing (EMNLP) 2019 (Acceptance rate: 23.8%) [Long Paper]
Abstract / Bibtex / GitHub / Poster / News Article

* Equal Contribution

Preprints

The BabyView dataset: High-resolution egocentric videos of infants' and young children's everyday experiences
Bria Long, Violet Xiang, (...) Steven Y. Feng, (...) Daniel L. K. Yamins, Michael C. Frank
arXiv preprint. In preparation for the Thirteenth International Conference on Learning Representations (ICLR) 2025
Abstract / Bibtex

Talks, Interviews, & Lectures

Apr. 2024: The first lecture of our Stanford CS25 Transformers V4 (Spring 2024) course! We gave a brief intro and overview of the history of NLP, Transformers and how they work, and their impact. We also discussed recent trends, breakthroughs, applications, and remaining challenges/weaknesses of Transformers. Lastly, Div talked about AI agents. This is a super useful lecture for those who want a broader overview of Transformers and the field right now! Slides here. We had a full room (approx. 200 folks in the audience) and over 300 on Zoom! All other talks are or will be released on the same YouTube playlist.

Aug. 2021: Varun and I gave a talk (to over 100 attendees) for Google Research about data augmentation for NLP (inspired by our survey paper). We also touched upon NL-Augmenter and our CtrlGen Workshop at NeurIPS 2021.

July 2021: Eduard Hovy and I were on The Data Exchange Podcast with Ben Lorica. We discussed data augmentation for NLP (inspired by our survey paper) as well as challenges and future directions in NLP and machine learning research. Audio and notes here.

Teaching and Instruction

Stanford's CS25: Transformers United - I am a co-instructor for Stanford's CS25 course! We are one of Stanford's hottest seminar courses, with attendance open to the public! The Zoom link and other details are on our course website. Each week, we feature in-depth discussions with exciting speakers about cutting-edge research on Transformers. Speakers so far include Andrej Karpathy, Geoffrey Hinton, Jim Fan, Ashish Vaswani, and Jason Wei. Recordings of talks are here. Some class photos below! Speakers pictured: Andrej Karpathy, Jim Fan, Jason Wei & Hyung Won Chung, and the CS25 instructors.

Mentorship and Advising

  • Shijia Yang [Stanford Master's in Computer Science, Class of 2025]
      ◦ Mentoring a research project on multimodal chain-of-thought reasoning using vision-language models (VLMs).
  • Sedrick Scott Keh [CMU Master's in Machine Learning (MSML), Class of 2022]
      ◦ Mentored several research projects on controllable and creative text generation [e.g. paper1, paper2].
  • Kevin Lu [University of Waterloo Undergrad, Computer Science, Class of 2026]
      ◦ Mentored several research projects on controllable, creative, and visually grounded text generation [e.g. paper1, paper2].
  • Zhuofu (Derek) Tao [UCLA Ph.D. in Electrical Engineering, Class of 2025]
      ◦ Mentored a research project on controllable and visually grounded text generation [paper].
  • Jerry Huang, Hongru Xiang, Xintao (Cynthia) Zhu, Saidi Tang [University of Waterloo Undergrads, Software Engineering, Class of 2022]
      ◦ Advised their software engineering capstone project on text simplification for ESL students.

Last Updated: Nov. 19, 2024