This Month in AI and NLP: Tesla’s AI Shifts, Meta’s 200-Language Translation Model, and Google’s Mislabeled Emotion Dataset

What happened in NLP and AI news in July? Read all about Tesla’s AI changes, Meta’s translation model, Google’s mislabeled emotion dataset, and more.
Anna Redbond
July 29, 2022

Every month, Datasaur rounds up developments in the ever-changing world of AI and NLP. This is a time of major digital transformation, so we’ve captured some of the high-impact news from the month. Let’s take a look at July’s highlights.

Meta AI’s Open-Source Machine Translation Model Can Translate Between 200 Languages

Meta’s new massive translation model is now capable of translating between 200 languages. This is an astounding feat. In their words, they have a goal of “eradicating language barriers on a global scale.” They note that much artificial intelligence research focuses on a small subset of languages, which naturally leaves out those less represented in the datasets. They have now created a conditional compute model that is trained on data obtained with “novel and effective data mining techniques tailored for low-resource languages.”

Datasaur’s Two Cents:

We’re cautiously optimistic about Meta’s efforts, and think that an interesting side effect is that it helps democratize machine learning. How so? So far, English and Mandarin have made by far the largest strides in NLP because of the distribution of researchers. Meta has now collected datasets from a wide range of languages, including several local languages of Indonesia such as Balinese, Acehnese, Javanese, Minangkabau (Padang), and Sundanese, as well as Sanskrit. This is a great way to contribute to a wider community that is beginning to pay more attention to languages typically left out of the machine learning conversation.


The model itself is good, but the dataset being produced is even better, since it allows everyone to participate in building these tools. At Datasaur, we believe technologies like this should be democratized so that every language in every geography can benefit. This is a big step in that direction.
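
For readers who want to try the model themselves, here is a minimal sketch of translating English to Indonesian. It assumes the Hugging Face transformers translation pipeline and the facebook/nllb-200-distilled-600M checkpoint as a convenient entry point; Meta’s announcement itself doesn’t prescribe this API.

```python
# A minimal sketch (not Meta's official example) of using the NLLB-200
# distilled checkpoint via the Hugging Face transformers pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",  # FLORES-200 code for English
    tgt_lang="ind_Latn",  # FLORES-200 code for Indonesian
)

result = translator("Language barriers should not limit access to technology.")
print(result[0]["translation_text"])
```

The FLORES-200 language codes (such as eng_Latn and ind_Latn) are how the model keeps its 200 languages apart, so switching the target language is a one-line change.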

30% of Google’s Emotions Dataset Is Mislabeled

Last year, Google released their “GoEmotions” dataset: a human-labeled dataset of 58K Reddit comments categorized according to 27 emotions. The issue: 30% of the dataset is mislabeled. A couple of examples:

  • “daaaaaamn girl!” is mislabeled as anger
  • “Nobody has the money to. What a joke” is mislabeled as joy

The labels struggled most with idioms, sarcasm, profanity, and politics.

Datasaur’s Two Cents:

Data labeling is difficult. Human language is incredibly complex, and computers have to pick up all of those complexities and subjectivities from labeled data. When building a dataset with labelers, we are often modeling the labelers’ biases and behavior rather than the actual data distribution. To mitigate this, features like Datasaur’s inter-annotator agreement checks are important: they serve as a first line of defense against bias and mistaken labels entering the dataset. Remember: “garbage in, garbage out.” Poor data going into the model produces poor accuracy on the other side.
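
To make the inter-annotator agreement idea concrete, here is a generic sketch (not Datasaur’s implementation) of one common check, Cohen’s kappa, computed over two hypothetical annotators’ emotion labels:

```python
# A generic inter-annotator agreement check: Cohen's kappa between two
# annotators labeling the same comments. The labels below are made up.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["joy", "anger", "neutral", "admiration", "joy", "sadness"]
annotator_b = ["joy", "admiration", "neutral", "admiration", "neutral", "sadness"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~1.0 = strong agreement, ~0 = chance level

# Items where the annotators disagree are good candidates for review
# before they enter a training set ("garbage in, garbage out").
disagreements = [i for i, (a, b) in enumerate(zip(annotator_a, annotator_b)) if a != b]
print("Review items:", disagreements)
```

Low agreement on a slice of the data is usually a signal to revisit the labeling guidelines, or to send those items back for another pass, before they poison the model.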

Tesla’s Head of AI Resigns

The biggest news in AI this month was the surprise resignation of Tesla’s Andrej Karpathy, announced on Twitter. Karpathy oversaw the company’s artificial intelligence group and was central to Tesla’s efforts to deliver on two promises: a fully self-driving car and a humanoid robot.

Datasaur’s Two Cents:

We don’t have much insight into why Karpathy resigned yet, though more information may come out in the coming weeks. One idea floating around the AI world, however, is that if Tesla can’t come through on its promises, it doesn’t bode well for the rest of the AI space. We’ll be keeping a close eye on any developments.

More Applications of AI: A Snapshot 

  • A Human-AI Partnership Improves Breast Cancer Screening

Researchers from University Hospital Essen in Germany recently published a study highlighting the power of AI and radiologists working together. It showed that a decision-referral approach, in which radiologists work with AI models to evaluate breast cancer screens, generates better results than clinicians or algorithms achieve alone, with improved sensitivity and specificity scores. This kind of AI partnership could reduce the strain on radiologists without affecting the quality of their work or diagnoses (a toy sketch of the referral logic follows this snapshot).

  • Robots Can Now Play with Playdough

Scientists from MIT and Stanford University are training robots to shape soft, deformable material based on visual input. The robots have two-finger grippers and can already see, simulate, and shape doughy objects such as Playdough. The goal is for this to help improve home assistants: it is a step towards robots performing household tasks like stuffing dumplings (which the team is working on now), which could in turn help the elderly or people living with disabilities at home.
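
To illustrate the decision-referral idea from the screening study above, here is a toy sketch: our own simplification with made-up thresholds, not the study’s actual pipeline. The model handles exams it is confident about and refers uncertain ones to a radiologist.

```python
# A toy sketch of decision referral (made-up thresholds, not the study's
# pipeline): confident AI predictions are handled automatically, while
# uncertain cases are referred to a radiologist.
def route_exam(ai_probability: float, low: float = 0.1, high: float = 0.9) -> str:
    """Decide who handles an exam based on the model's estimated probability."""
    if ai_probability <= low:
        return "AI: likely normal (no referral)"
    if ai_probability >= high:
        return "AI: suspicious, fast-tracked to radiologist"
    return "Referred to radiologist (model is uncertain)"

for p in (0.02, 0.45, 0.97):
    print(p, "->", route_exam(p))
```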

The Big Picture 

Why is this important? What are we keeping an eye on? AI and NLP advance quickly, and it can be hard to keep up, but it’s fascinating to watch how things are changing and how the global community is shaping them. NLP and language AI are developing all the time, and now in a growing number of languages! As AI and NLP keep progressing, we’re excited to keep the conversation going and to keep learning with you.
