My ultimate goal is to blend knowledge from multiple disciplines to advance AI research. My current research centers on aligning foundation models with human learning and capabilities, particularly in reasoning, generalization, and efficiency. I have explored ways to improve the controllability of language and visual generation models, and to integrate structured and multimodal information to enhance their reasoning capabilities.
I'm investigating psychologically and cognitively inspired methods for continual learning, self-improvement, and advanced reasoning in foundation models. I'm also exploring methods to bridge the data efficiency gap between human and model learning [1,2,3] while shedding further light on human cognitive models and our efficient language acquisition capabilities.
Previously, I was a master's student at Carnegie Mellon University (CMU), where I worked with Eduard Hovy and Malihe Alikhani on language generation, data augmentation, and commonsense reasoning. Before that, I was an undergraduate student at the University of Waterloo, where I worked with Jesse Hoey on dialogue agents and text generation.
I am a co-instructor for the Stanford CS25 Transformers course, and mentor and advise several students. I also led the organization of CtrlGen, a controllable generation workshop at NeurIPS 2021, and was involved in the GEM benchmark and workshop for NLG evaluation.
In my free time, I enjoy gaming, playing the piano and guitar, singing, dancing, martial arts, and table tennis. I am also the founder and president of the Stanford Piano Society.
May 2021: Our data augmentation survey paper, published in ACL 2021 Findings, received significant attention on social media (e.g., this tweet, Sebastian Ruder's NLP Newsletter) and was one of the top 10 hottest machine learning papers of May 2021 (source: labml.ai).
Peer-Reviewed Publications and Conference Proceedings
While high-performing language models are typically trained on hundreds of billions of words, human children become fluent language users with a much smaller amount of data. What are the features of the data they receive, and how do these features support language modeling objectives? To investigate this question, we train GPT-2 models on 29M words of English-language child-directed speech and a new matched, synthetic dataset (TinyDialogues), comparing them to a heterogeneous blend of datasets from the BabyLM challenge. We evaluate both the syntactic and semantic knowledge of these models using developmentally inspired evaluations. Through pretraining experiments, we test whether the global developmental ordering or the local discourse ordering of children's training data supports high performance relative to other datasets. The local properties of the data affect model results but, somewhat surprisingly, global properties do not. Further, child language input is not uniquely valuable for training language models. These findings support the hypothesis that, rather than proceeding from better data, children's learning is substantially more efficient than current language modeling techniques.
@misc{feng2024childdirectedspeecheffectivetraining,
title={Is Child-Directed Speech Effective Training Data for Language Models?},
author={Steven Y. Feng and Noah D. Goodman and Michael C. Frank},
year={2024},
eprint={2408.03617},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.03617},
}
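To make the ordering manipulations above concrete, here is a minimal sketch (my own illustration, not the paper's pipeline) of the two ablations: destroying local discourse order by shuffling utterances within each conversation, versus destroying global developmental order by shuffling whole conversations. The conversations structure, a list of hypothetical (age, utterances) pairs, is an assumption for illustration:

import random

def local_shuffle(conversations, seed=0):
    """Destroy local discourse order: shuffle utterances within each
    conversation, but keep the global (developmental) conversation order."""
    rng = random.Random(seed)
    out = []
    for age, utterances in conversations:
        utts = list(utterances)
        rng.shuffle(utts)
        out.append((age, utts))
    return out

def global_shuffle(conversations, seed=0):
    """Destroy global developmental order: shuffle whole conversations,
    but keep each conversation's internal utterance order."""
    rng = random.Random(seed)
    out = list(conversations)
    rng.shuffle(out)
    return out

# Toy example: conversations sorted by child age in months.
conversations = [
    (6, ["look at the ball", "yes, the ball!"]),
    (18, ["do you want juice?", "juice please", "here you go"]),
]
print(local_shuffle(conversations))
print(global_shuffle(conversations))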
We motivate and introduce CHARD: Clinical Health-Aware Reasoning across Dimensions, to investigate the capability of text generation models to act as implicit clinical knowledge bases and generate free-flow textual explanations about various health-related conditions across several dimensions. We collect and present an associated dataset, CHARDat, consisting of explanations about 52 health conditions across three clinical dimensions. We conduct extensive experiments using BART and T5 along with data augmentation, and perform automatic, human, and qualitative analyses. We show that while our models can perform decently, CHARD is very challenging with strong potential for further exploration.
@inproceedings{feng-etal-2023-chard,
title = "{CHARD}: Clinical Health-Aware Reasoning Across Dimensions for Text Generation Models",
author = "Feng, Steven Y. and Khetan, Vivek and Sacaleanu, Bogdan and Gershman, Anatole and Hovy, Eduard",
editor = "Vlachos, Andreas and Augenstein, Isabelle",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.24",
doi = "10.18653/v1/2023.eacl-main.24",
pages = "313--327"}
Tongue twisters are meaningful sentences that are difficult to pronounce. The process of automatically generating tongue twisters is challenging since the generated utterance must satisfy two conditions at once: phonetic difficulty and semantic meaning. Furthermore, phonetic difficulty is itself hard to characterize and is expressed in natural tongue twisters through a heterogeneous mix of phenomena such as alliteration and homophony. In this paper, we propose PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically. We leverage phoneme representations to capture the notion of phonetic difficulty, and we train language models to generate original tongue twisters on two proposed task settings. To do this, we curate a dataset called PANCETTA, consisting of existing English tongue twisters. Through automatic and human evaluation, as well as qualitative analysis, we show that PANCETTA generates novel, phonetically difficult, fluent, and semantically meaningful tongue twisters.
@inproceedings{keh-etal-2023-pancetta,
title = "{PANCETTA}: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically",
author = "Keh, Sedrick Scott and
Feng, Steven Y. and
Gangal, Varun and
Alikhani, Malihe and
Hovy, Eduard",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.36",
doi = "10.18653/v1/2023.eacl-main.36",
pages = "491--504"}
A personification is a figure of speech that endows inanimate entities with properties and actions typically seen as requiring animacy. In this paper, we explore the task of personification generation. To this end, we propose PINEAPPLE: Personifying INanimate Entities by Acquiring Parallel Personification data for Learning Enhanced generation. We curate a corpus of personifications called PersonifCorp, together with automatically generated de-personified literalizations of these personifications. We demonstrate the usefulness of this parallel corpus by training a seq2seq model to personify a given literal input. Both automatic and human evaluations show that fine-tuning with PersonifCorp leads to significant gains in personification-related qualities such as animacy and interestingness. A detailed qualitative analysis also highlights key strengths and imperfections of PINEAPPLE over baselines, demonstrating a strong ability to generate diverse and creative personifications that enhance the overall appeal of a sentence.
@inproceedings{keh-etal-2022-pineapple,
title = "{PINEAPPLE}: Personifying {IN}animate Entities by Acquiring Parallel Personification Data for Learning Enhanced Generation",
author = "Keh, Sedrick Scott and Lu, Kevin and Gangal, Varun and Feng, Steven Y. and Jhamtani, Harsh and Alikhani, Malihe and Hovy, Eduard",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.547",
pages = "6270--6284"}
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense of Transformer models for text generation. We perform experiments using BART and T5 on concept-to-text generation, specifically the task of generative commonsense reasoning, or CommonGen. We call our approach VisCTG: Visually Grounded Concept-to-Text Generation. VisCTG involves captioning images representing appropriate everyday scenarios, and using these captions to enrich and steer the generation process. Comprehensive evaluation and analysis demonstrate that VisCTG noticeably improves model performance while successfully addressing several issues of the baseline generations, including poor commonsense, fluency, and specificity.
@article{Feng_Lu_Tao_Alikhani_Mitamura_Hovy_Gangal_2022,
title={Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models},
volume={36},
url={https://ojs.aaai.org/index.php/AAAI/article/view/21306},
DOI={10.1609/aaai.v36i10.21306},
number={10},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Feng, Steven Y. and Lu, Kevin and Tao, Zhuofu and Alikhani, Malihe and Mitamura, Teruko and Hovy, Eduard and Gangal, Varun},
year={2022},
month={Jun.},
pages={10618--10626}}
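A minimal sketch of the caption-grounding idea, under my own assumptions: retrieve_caption below is a hypothetical stand-in for VisCTG's image retrieval and captioning steps, and the exact way the caption is concatenated to the concept-set input is illustrative rather than the paper's format:

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def retrieve_caption(concepts):
    # Hypothetical: in VisCTG this comes from retrieving images for the
    # concept set and captioning them; here we fake a fixed lookup.
    return "a dog catches a frisbee in the park"

def generate(concepts):
    caption = retrieve_caption(concepts)
    # Enrich the plain concept-set input with the retrieved caption so the
    # model can ground generation in a plausible everyday scene. A real
    # system would fine-tune on CommonGen with these enriched inputs.
    source = " ".join(concepts) + " </s> " + caption
    inputs = tokenizer(source, return_tensors="pt")
    ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate(["dog", "frisbee", "catch", "park"]))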
Many implicit inferences in text depend on how it is structured, and these can critically impact the text's interpretation and meaning. One such structural aspect of text with chronology is the order of its presentation; for narratives or stories, this is known as the narrative order. Reordering a narrative can impact the temporal, causal, event-based, and other inferences readers draw from it, which in turn can strongly affect both its interpretation and its interestingness. In this paper, we propose and investigate the task of Narrative Reordering (NAREOR), which involves rewriting a given story in a different narrative order while preserving its plot. We present a dataset, NAREORC, with human rewritings of stories within ROCStories in non-linear orders, and conduct a detailed analysis of it. Further, we propose novel task-specific training methods with suitable evaluation metrics. We perform experiments on NAREORC using state-of-the-art models such as BART and T5 and conduct extensive automatic and human evaluations. We demonstrate that although our models can perform decently, NAREOR is a challenging task with potential for further exploration. We also investigate two applications of NAREOR: generating more interesting variations of stories and serving as adversarial sets for temporal/event-related tasks. We further discuss prospective applications, such as pedagogical setups related to language skills like essay writing, and applications to medicine involving clinical narratives.
@article{Gangal_Feng_Alikhani_Mitamura_Hovy_2022,
title={NAREOR: The Narrative Reordering Problem},
volume={36},
url={https://ojs.aaai.org/index.php/AAAI/article/view/21309},
DOI={10.1609/aaai.v36i10.21309},
number={10},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Gangal, Varun and Feng, Steven Y. and Alikhani, Malihe and Mitamura, Teruko and Hovy, Eduard},
year={2022},
month={Jun.},
pages={10645--10653}}
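A toy illustration of the task setup (not the paper's training method): permute the presentation order of a five-sentence ROCStories-style story while recording the permutation; a NAREOR-style model must then rewrite the reordered text (fixing tense, referring expressions, and discourse connectives) so the original plot is preserved:

import random

def reorder_story(sentences, order=None, seed=0):
    """Present a story in a new narrative order. `order` is a permutation
    of sentence indices; the rewriting model's job is to make the
    reordered presentation read naturally without changing the plot."""
    if order is None:
        order = list(range(len(sentences)))
        random.Random(seed).shuffle(order)
    return [sentences[i] for i in order], order

story = [
    "Jenna signed up for a baking contest.",
    "She practiced her pie recipe every night.",
    "On contest day, her oven broke.",
    "A neighbor let her use their kitchen.",
    "Jenna's pie ended up winning first prize.",
]
reordered, order = reorder_story(story)
print(order)
print(" ".join(reordered))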
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination. We demonstrate their effectiveness on generative commonsense reasoning, a.k.a. the CommonGen task, through experiments using both BART and T5 models. Through extensive automatic and human evaluation, we show that SAPPHIRE noticeably improves model performance. An in-depth qualitative analysis illustrates that SAPPHIRE effectively addresses many issues of the baseline model generations, including lack of commonsense, insufficient specificity, and poor fluency.
@inproceedings{feng-etal-2021-sapphire,
title = "{SAPPHIRE}: Approaches for Enhanced Concept-to-Text Generation",
author = "Feng, Steven and
Huynh, Jessica and
Narisetty, Chaitanya Prasad and
Hovy, Eduard and
Gangal, Varun",
booktitle = "Proceedings of the 14th International Conference on Natural Language Generation",
month = aug,
year = "2021",
address = "Aberdeen, Scotland, UK",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.inlg-1.21",
pages = "212--225"}
Data augmentation has recently seen increased interest in NLP due to more work in low-resource domains, new tasks, and the popularity of large-scale neural networks that require large amounts of training data. Despite this recent upsurge, this area is still relatively underexplored, perhaps due to the challenges posed by the discrete nature of language data. In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner. We first introduce and motivate data augmentation for NLP, and then discuss major methodologically representative approaches. Next, we highlight techniques that are used for popular NLP applications and tasks. We conclude by outlining current challenges and directions for future research. Overall, our paper aims to clarify the landscape of existing literature in data augmentation for NLP and motivate additional work in this area. We also present a GitHub repository with a continuously updated paper list, available at this link.
@inproceedings{feng-etal-2021-survey,
title = "A Survey of Data Augmentation Approaches for {NLP}",
author = "Feng, Steven Y. and
Gangal, Varun and
Wei, Jason and
Chandar, Sarath and
Vosoughi, Soroush and
Mitamura, Teruko and
Hovy, Eduard",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.84",
doi = "10.18653/v1/2021.findings-acl.84",
pages = "968--988"}
In this paper, we investigate data augmentation for text generation, which we call GenAug. Text generation and language modeling are important tasks within natural language processing, and are especially challenging for low-data regimes. We propose and evaluate various augmentation methods, including some that incorporate external knowledge, for finetuning GPT-2 on a subset of Yelp Reviews. We also examine the relationship between the amount of augmentation and the quality of the generated text. We utilize several metrics that evaluate important aspects of the generated text including its diversity and fluency. Our experiments demonstrate that insertion of character-level synthetic noise and keyword replacement with hypernyms are effective augmentation methods, and that the quality of generations improves to a peak at approximately three times the amount of original data.
@inproceedings{feng-etal-2020-genaug,
title = "{G}en{A}ug: Data Augmentation for Finetuning Text Generators",
author = "Feng, Steven Y. and
Gangal, Varun and
Kang, Dongyeop and
Mitamura, Teruko and
Hovy, Eduard",
booktitle = "Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.deelio-1.4",
doi = "10.18653/v1/2020.deelio-1.4",
pages = "29--42"}
For conversational AI and virtual assistants to communicate with humans in a realistic way, they must exhibit human characteristics such as expression of emotion and personality. Current attempts toward constructing human-like dialogue agents have presented significant difficulties. We propose Human Level Attributes (HLAs) based on tropes as the basis of a method for learning dialogue agents that can imitate the personalities of fictional characters. Tropes are characteristics of fictional personalities that are observed recurrently and determined by viewers' impressions. By combining detailed HLA data with dialogue data for specific characters, we present a dataset, HLA-Chat, that models character profiles and gives dialogue agents the ability to learn characters' language styles through their HLAs. We then introduce a three-component system, ALOHA (which stands for Artificial Learning of Human Attributes), that combines character space mapping, character community detection, and language style retrieval to build a character (or personality) specific language model. Our preliminary experiments demonstrate that two variations of ALOHA, combined with our proposed dataset, can outperform baseline models at identifying the correct dialogue responses of chosen target characters, and are stable regardless of the character's identity, the genre of the show, and the context of the dialogue.
@article{Li_2020,
title={ALOHA: Artificial Learning of Human Attributes for Dialogue Agents},
volume={34},
ISSN={2159-5399},
url={http://dx.doi.org/10.1609/aaai.v34i05.6328},
DOI={10.1609/aaai.v34i05.6328},
number={05},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
publisher={Association for the Advancement of Artificial Intelligence (AAAI)},
author={Li, Aaron W. and Jiang, Veronica and Feng, Steven Y. and Sprague, Julia and Zhou, Wei and Hoey, Jesse},
year={2020},
month={Apr},
pages={8155--8163}}
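The language style retrieval component can be pictured with a small numpy-only sketch; the toy embedding vectors, candidate pool, and scoring blend below are hypothetical placeholders rather than ALOHA's actual components:

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_response(context_vec, character_centroid, candidates, alpha=0.5):
    """Rank candidate responses by a blend of (i) relevance to the dialogue
    context and (ii) closeness to the target character's style centroid
    (e.g., the mean embedding of that character's known lines)."""
    scored = [
        (alpha * cosine(vec, context_vec)
         + (1 - alpha) * cosine(vec, character_centroid), text)
        for text, vec in candidates
    ]
    return max(scored)[1]

# Toy 3-d "embeddings" standing in for a real sentence encoder.
candidates = [("Indeed, a fascinating hypothesis.", np.array([0.9, 0.1, 0.2])),
              ("lol idk", np.array([0.1, 0.9, 0.0]))]
print(retrieve_response(np.array([0.8, 0.2, 0.1]),
                        np.array([1.0, 0.0, 0.3]), candidates))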
In this paper, we present a novel method for measurably adjusting the semantics of text while preserving its sentiment and fluency, a task we call semantic text exchange. This is useful for text data augmentation and the semantic correction of text generated by chatbots and virtual assistants. We introduce a pipeline called SMERTI that combines entity replacement, similarity masking, and text infilling. We measure our pipeline's success by its Semantic Text Exchange Score (STES): the ability to preserve the original text's sentiment and fluency while adjusting semantic content. We propose using a masking (replacement) rate threshold as an adjustable parameter to control the amount of semantic change in the text. Our experiments demonstrate that SMERTI can outperform baseline models on Yelp reviews, Amazon reviews, and news headlines.
@inproceedings{feng-etal-2019-keep,
title = "Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange",
author = "Feng, Steven Y. and
Li, Aaron W. and
Hoey, Jesse",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1272",
doi = "10.18653/v1/D19-1272",
pages = "2701--2711"}
Human children far exceed modern machine learning algorithms in their sample efficiency, achieving high performance in key domains with much less data than current models. This "data gap" is a key challenge both for building intelligent artificial systems and for understanding human development. Egocentric video capturing children's experience -- their "training data" -- is a key ingredient for comparing humans and models and for developing algorithmic innovations to bridge this gap. Yet few such datasets are available, and extant data are low-resolution, have limited metadata, and, importantly, represent only a small set of children's experiences. Here, we provide the first release of the largest developmental egocentric video dataset to date -- the BabyView dataset -- recorded using a high-resolution camera with a large vertical field-of-view and gyroscope/accelerometer data. This 493-hour dataset includes egocentric videos from children spanning 6 months to 5 years of age in both longitudinal, at-home contexts and a preschool environment. We provide gold-standard annotations for the evaluation of speech transcription, speaker diarization, and human pose estimation, and evaluate models in each of these domains. We train self-supervised language and vision models and evaluate their transfer to out-of-distribution tasks including syntactic structure learning, object recognition, depth estimation, and image segmentation. Although performance in each domain scales with dataset size, overall performance is relatively lower than when models are trained on curated datasets, especially in the visual domain. Our dataset stands as an open challenge for robust, humanlike AI systems: how can such systems achieve human levels of success on the same scale and distribution of training data as humans?
@misc{long2024babyviewdatasethighresolutionegocentric,
title={The BabyView dataset: High-resolution egocentric videos of infants' and young children's everyday experiences},
author={Bria Long and Violet Xiang and Stefan Stojanov and Robert Z. Sparks and Zi Yin and Grace E. Keene and Alvin W. M. Tan and Steven Y. Feng and Chengxu Zhuang and Virginia A. Marchman and Daniel L. K. Yamins and Michael C. Frank},
year={2024},
eprint={2406.10447},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2406.10447},
}
Talks, Interviews, & Lectures
Apr. 2024: The first lecture of our Stanford CS25 Transformers V4 (Spring 2024) course! We gave a brief intro and overview of the history of NLP, Transformers and how they work, and their impact. We also discussed recent trends, breakthroughs, applications, and remaining challenges/weaknesses of Transformers. Lastly, Div talked about AI agents. This is a super useful lecture for those who want a broader overview of Transformers and the field right now! Slides here. We had a full room (approx. 200 folks in the audience) and over 300 on Zoom! All other talks are or will be released on the same YouTube playlist.
Aug. 2021: Varun and I gave a talk (to over 100 attendees) at Google Research about data augmentation for NLP (inspired by our survey paper). We also touched upon NL-Augmenter and our CtrlGen Workshop at NeurIPS 2021.
July 2021: Eduard Hovy and I were on The Data Exchange Podcast with Ben Lorica. We discussed data augmentation for NLP (inspired by our survey paper) and challenges and future directions in NLP and machine learning research. Audio and notes here.
Teaching and Instruction
Stanford's CS25: Transformers United - I am a co-instructor for Stanford's CS25 course! We are one of Stanford's hottest seminar courses, with attendance open to the public! Zoom link and other details are on our course website. We feature in-depth discussions with exciting speakers each week about cutting-edge research on Transformers. Speakers so far include Andrej Karpathy, Geoffrey Hinton, Jim Fan, Ashish Vaswani, and Jason Wei. Recordings of talks are here. Some class photos below! Speakers pictured: Andrej Karpathy, Jim Fan, Jason Wei & Hyung Won Chung, and the CS25 instructors.
Mentorship and Advising
Shijia Yang [Stanford Master's in Computer Science, Class of 2025]
Mentoring a research project on multimodal chain-of-thought reasoning using vision-language models (VLMs).
Sedrick Scott Keh [CMU Master's in Machine Learning (MSML), Class of 2022]
Mentored several research projects on controllable and creative text generation [e.g. paper1, paper2].
Kevin Lu [University of Waterloo Undergrad, Computer Science, Class of 2026]
Mentored several research projects on controllable, creative, and visually grounded text generation [e.g. paper1, paper2].
Zhuofu (Derek) Tao [UCLA Ph.D. in Electrical Engineering, Class of 2025]
Mentored a research project on controllable and visually grounded text generation [paper].