What I learned from organizing an introductory course on NLP for linguists at ESSLLI 2019.
A post inspired by an Uber ride with a Trump supporter.
Negative results are hard to publish, and even harder to make well-known — even when the result being disproved is as pervasive as Mikolov’s word analogies.
With the huge Transformer-based models such as BERT, GPT-2, and XLNet, are we losing track of how the state-of-the-art performance is achieved?
The benefits of blogging for the academic soul.