Closed AI Models Make Bad Baselines
Will GPT-4 become a universally expected baseline in NLP research, like BERT in its time? Basic scientific methodology demands otherwise.
As a program chair of ACL’23, I was the lead author of this blog post on the conference website, which summarized our approach to peer-review matching: https:...
This blog post (on the conference website) summarized our approach to the use of generative AI in ACL conference submissions and reviewing: https://2023.aclw...
Some argue that any publicly available text/art data is fair game for commercial models, since human text/art also has sources. But unlike models, we know w...
Field notes from EMNLP 2021, the first hybrid *ACL conference.