Publications
2025
How Do Vision-Language Models Process Conflicting Information Across Modalities?
Tianze Hua*, Tian Yun*, Ellie Pavlick
Under Review
[paper] [project] [code]

TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration
Yanshu Li, Jianjiang Yang, Tian Yun, Pinyuan Feng, Jinfa Huang, Ruixiang Tang
Under Review

$100K or 100 days: Trade-offs when Pre-Training with Academic Resources
Apoorv Khandelwal, Tian Yun, Nihal V. Nayak, Jack Merullo, Stephen H. Bach, Chen Sun, Ellie Pavlick
COLM 2025
[paper] [code]

What is an “Abstract Reasoner”? Revisiting Experiments and Arguments about Large Language Models
Tian Yun, Ellie Pavlick, Chen Sun
CoNLL 2025
[paper] [project]

Pre-trained Vision-Language Models Learn Discoverable Visual Concepts
Yuan Zang, Tian Yun, Hao Tan, Trung Bui, Chen Sun
TMLR 2025
[paper] [project] [code]
2024
mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?
Tianze Hua*, Tian Yun*, Ellie Pavlick
NAACL 2024
[paper] [project] [code]
2023
Emergence of Grounded Representations in Embodied Sequence Modeling
Tian Yun*, Zilai Zeng*, Kunal Handa, Ashish Thapliyal, Bo Pang, Ellie Pavlick, Chen Sun
EMNLP 2023
[paper] [project] [code]

Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback
Ifrah Idrees, Tian Yun, Naveen Sharma, Nakul Gopalan, Stefanie Tellex, George Konidaris
IROS 2023
[paper]

Do Vision-Language Pretrained Models Learn Composable Primitive Concepts?
Tian Yun, Usha Bhalla, Ellie Pavlick, Chen Sun
TMLR 2023
[paper] [project] [code]
2021
Does Vision-and-Language Pretraining Improve Lexical Grounding?
Tian Yun, Chen Sun, Ellie Pavlick
Findings of EMNLP 2021
[paper] [code]

Mining Biomedical Texts for Pediatric Information
Tian Yun, Deepti Garg, Natalia Khuri
14th International Joint Conference on Biomedical Engineering Systems and Technologies - BIOINFORMATICS, 2021
[paper]
Preprints
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Teven Le Scao et al.
arXiv preprint arXiv:2211.05100 (2022)
[paper]