NLP News Cypher | 11.17.19
Gearing Up for NeurIPS 2019…
While we wind down from the recent EMNLP conference, NeurIPS 2019 is just around the corner, running Dec. 8 through Dec. 14! For a quick rundown of the NeurIPS metadata (authors, topics, etc.), check out this post:
GitHub:
This Week:
PyTorch vs. TensorFlow: The Final Frontier
GitHub’s Developer Community is A’Boomin, Same for Data Science!
Self-Supervised Representation
Gary Marcus Disses Everyone’s Demo
Part II: Knowledge Graphs Research from EMNLP
Transformers: THE Table To Know 🤯
When we think of the recent evolution in AI development, it is remarkable how quickly the Connectionist School made us forget about the Symbolic School.
But recently, it seems that Symbolic AI is making a comeback (more on this later):
GitHub:
PyTorch vs. TensorFlow: The Final Frontier
When it comes to AI research, it seems the PyTorch framework is edging out TensorFlow (TensorFlow is still 🔥🔥 though):
More in-depth:
https://chillee.github.io/pytorch-vs-tensorflow/
GitHub’s Developer Community is A’Boomin, Same for Data Science!
Developers are flocking to GitHub, contributing to more NLP projects than ever before. In general:
“10M new developers joined the GitHub community, contributing to 44M+ repositories across every continent on earth.” Octoverse Report — GitHub
Full Report:
Here is Kaggle’s survey on the state of Data Science (Python Owns R 😢):
Self-Supervised Representation
Predicting the next word in text (GPT-2) or the masked word in a sentence (BERT) are self-supervised techniques that help models learn from unlabeled data, priming them for smaller labeled datasets on downstream tasks. Check out how peeps are adopting self-supervised learning in other domains in Lilian Weng’s outstanding blog:
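To make the two objectives concrete, here is a toy sketch (in plain Python, with a made-up six-word sentence) of how unlabeled text gets turned into self-supervised training pairs: a GPT-2-style next-word objective and a BERT-style masked-word objective. Real models do this over billions of subword tokens; this just shows the shape of the labels.

```python
import random

# Toy unlabeled "corpus": one sentence, whitespace-tokenized.
text = "the cat sat on the mat"
tokens = text.split()

# Causal LM objective (GPT-2 style): predict each token from its prefix.
next_word_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
print(next_word_pairs[0])  # (['the'], 'cat')

# Masked LM objective (BERT style): hide a token, predict it from both sides.
random.seed(0)
masked_examples = []
for _ in range(3):
    i = random.randrange(len(tokens))
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
    masked_examples.append((masked, tokens[i]))  # (corrupted input, target)
```

Note that the "labels" here are just the original text itself, which is what makes the setup self-supervised: no human annotation is needed before pretraining.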
Gary Marcus Disses Everyone’s Demo
In search of AI’s holy grail, AGI, Gary Marcus (of the Yann LeCun debate/arm-wrestling fame) recently discovered that GPT-2 can’t do math while demoing Hugging Face’s GPT-2 chatbot used in last year’s NeurIPS ConvAI2 competition:
He then tries out Adam’s GPT-2 demo at talktotransformer.com and doesn’t like it either:
Minutes later, he demos our BERT question-answering model (fine-tuned on SQuAD) and trashes it in several tweets 😂😂. Here’s an example:
ML community:
On a more serious note, what Gary wants (more symbolic reasoning adopted alongside deep learning techniques) is important! In fact, from my personal experience, AI models developed in the private sector are mostly hybrid: deep learning/machine learning + rule-based systems/graphs.
Which makes the next section a perfect segue…
Part II: Knowledge Graphs Research from EMNLP
Below is what Michael discusses in his latest post:
- Question Answering over Knowledge Graphs
- Natural Language Generation from KGs
- Commonsense Reasoning with KGs
- Named Entity Recognition and Relation Linking
Transformers: THE Table To Know 🤯
If you’re near NYC/Columbia Univ. tomorrow, check out Harry Crane’s presentation on Probability and Complexity!
This column is a weekly roundup of NLP news and code drops from researchers worldwide.
Follow us on Twitter for daily updates: @Quantum_Stat