I am joining ACL Rolling Review
It’s official: I joined the ACL Rolling Review team as an editor-in-chief, and I’d like to share some brief thoughts on this.
This post (on the ACL conference website) summarizes the analysis of the ACL’23 peer review process: https://2023.aclweb.org/blog/review-report/. The full analysis i...
Will GPT-4 become a universally expected baseline in NLP research, like BERT in its time? Basic scientific methodology demands otherwise.
As a program chair of ACL’23, I was the lead author for this blog post on the conference website that summarized our approach to peer-review matching: https:...
This blog post (on the conference website) summarized our approach to the use of generative AI in ACL conference submissions and reviewing: https://2023.aclw...
This is a post I wrote for The Gradient: https://thegradient.pub/how-can-we-improve-peer-review-in-nlp/. It reports on a position paper that discusses the h...
Why fully anonymous peer review is important, and how we can achieve it in the ACL Rolling Review reform.
Resource papers strike back! How the authors and the reviewers can stop talking past each other.
Many reviewers at major NLP conferences tend to reject models that fail to beat state-of-the-art. It is a heuristic that is simple, convenient, and wrong.
With huge Transformer-based models such as BERT, GPT-2, and XLNet, are we losing track of how state-of-the-art performance is achieved?
Benefits of blogging for the academic souls.
Negative results are hard to publish, and even harder to make well-known. Even when the disproved result is something as pervasive as Mikolov’s word analogies.
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2021/busters/. It reports on one of the first studies on t...
This is a post I wrote for The Gradient: https://thegradient.pub/when-bert-plays-the-lottery-all-tickets-are-winning/. It reports on our investigation of th...
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2020/bert-secrets/. It reports on an influential paper on ...
Field notes from EMNLP 2021, the first hybrid *ACL conference.
Some argue that any publicly available text/art data is fair game for commercial models because human text/art also has sources. But unlike models, we know w...
A post inspired by an Uber ride with a Trump supporter.
What I learned from organizing an introductory course on NLP for linguists at ESSLLI 2019.
This is a post I wrote during my time in Text Machine Lab: https://text-machine-lab.github.io/blog/2020/quail/. It presents an English resource for machine r...
Yes, it is possible to record and edit a conference talk at home, with open-source tools in reasonable time.