On AI-assisted writing in graduate school
What is the proper role of AI ‘assistance’ in graduate student writing? It depends on what you mean by ‘graduate’.
AI ‘news’ produced by LLM-powered content farms is flooding the web. Such stories are currently very cheap and easy to produce, even beyond English, and currently w...
One of the often-repeated claims about LLMs is that they have ‘emergent properties’. Unfortunately, in most cases the speaker/writer does not clarify what th...
It’s official: I joined the ACL Rolling Review team as an editor-in-chief, and I’d like to share some brief thoughts on this.
This post (on the ACL conference website) summarizes the analysis of the ACL’23 peer review process: https://2023.aclweb.org/blog/review-report/. The full analysis i...
Will GPT-4 become a universally expected baseline in NLP research, like BERT in its time? Basic scientific methodology demands otherwise.
As a program chair of ACL’23, I was the lead author for this blog post on the conference website that summarized our approach to peer-review matching: https:...
This blog post (on the conference website) summarized our approach to the use of generative AI in ACL conference submissions and reviewing: https://2023.aclw...
Some argue that any publicly available text/art data is fair game for commercial models because human text/art also has sources. But unlike models, we know w...
Field notes from EMNLP 2021, the first hybrid *ACL conference.
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2021/busters/. It reports on one of the first studies on t...
Yes, it is possible to record and edit a conference talk at home, with open-source tools in reasonable time.
This is a post I wrote for The Gradient: https://thegradient.pub/when-bert-plays-the-lottery-all-tickets-are-winning/. It reports on our investigation of th...
This is a post I wrote for The Gradient: https://thegradient.pub/how-can-we-improve-peer-review-in-nlp/. It reports on a position paper that discusses the h...
Why fully anonymous peer review is important, and how we can achieve it in the ACL Rolling Review reform.
Resource papers strike back! How the authors and the reviewers can stop talking past each other.
Many reviewers at major NLP conferences tend to reject papers whose models fail to beat the state of the art. This heuristic is simple, convenient, and wrong.
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2020/quail/. It presents an English resource for machine r...
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2020/bert-secrets/. It reports on an influential paper on ...
What I learned from organizing an introductory course on NLP for linguists at ESSLLI 2019.
A post inspired by an Uber ride with a Trump supporter.
Negative results are hard to publish, and even harder to make well-known. Even when the disproved result is something as pervasive as Mikolov’s word analogies.
With the huge Transformer-based models such as BERT, GPT-2, and XLNet, are we losing track of how the state-of-the-art performance is achieved?
Benefits of blogging for academic souls.