This Month in AI and NLP: “AI Safety”, Predictions for AI in 2023, and Using AI to Catch Dangerous Drivers

What happened in NLP and AI news in November? Read all about “AI safety”, predictions for AI in 2023, and more.
Anna Redbond
December 3, 2022

Each month, we gather the news in the fast-changing world of AI and NLP. This month saw controversy around “AI safety,” the State of AI 2022 report, and the UK leveraging AI to catch drivers using their phones. Let’s look at November’s highlights.

State of AI 2022 Report

The annual State of AI report (by Ian Hogarth and Nathan Benaich) has been published, and you can find it here. It’s packed with information, so here are a few quickfire takeaways:

  • There's growing interest in research on using AI safely and responsibly, but safety research still lags far behind capabilities research.
  • When it comes to large-model AI work, almost none of it is happening in academia. Instead, it is coming from industry and decentralized research collectives, as evidenced by developments like the public release of Stable Diffusion.
  • AI is expanding its reach in a big way—from developing enzymes capable of breaking down plastic, to natural product discovery, and more.
  • Tech giants are not monopolizing AI and NLP (yet). They're being challenged by well-financed, venture-backed startups.

A Few Predictions for AI in 2023

  • A generative audio AI model will debut and attract more than 100,000 developers by September 2023.
  • More than $100M will be invested in dedicated AI alignment organizations in 2023, as more people become aware of the risks of letting AI capabilities develop faster than safety research.
  • A major user-generated content site (such as Reddit) will negotiate a commercial settlement with a start-up producing AI models (such as OpenAI) for training on its corpus of user-generated content.

“AI Safety” Is in the Spotlight

The AI world is currently paying attention to the collapse of Sam Bankman-Fried’s cryptocurrency exchange FTX and his trading firm Alameda Research. 

Why is the AI world interested in this? Bankman-Fried was a major donor to projects focused on developing advanced AI, and on what is being called “AI safety.” The collapse has triggered fears that important research around the potential dangers of AI could now be at risk.

There is growing interest in research around AI safety (as mentioned in the State of AI 2022 report), as people are increasingly concerned about the potential adverse effects of AI. However, it is also apparent that safety research is lagging behind capabilities and has a way to go.

Effective Altruism and AI

Another interesting side-effect here is that the scandal around Bankman-Fried has drawn a lot of negative attention to Effective Altruism (EA) and its impact on AI.

What is EA? It’s a philosophical and social movement that believes in “using evidence and careful reasoning to take actions that help others as much as possible.” One shared belief is that AI is a hugely powerful technology that has the potential to lead to humanity’s extinction if it runs amok. 

EA holds that we have a limited amount of energy and time, and so must “maximize utility” in deciding where to put our time and money. In line with these beliefs, Bankman-Fried was committed to funding AI safety research and developing cutting-edge AI.

However, reports are now claiming that he used customer funds to cover company losses (something Bankman-Fried disputes). Now, some in the AI world are wondering whether Bankman-Fried used EA’s philosophy to rationalize business practices that were unethical (and possibly illegal).

AI-Enabled Cameras Are Catching Dangerous Drivers

In the English counties of Devon and Cornwall, cameras are now being used to catch people driving without seatbelts or while using their phones. According to the BBC, the police have so far caught nearly 590 people driving without seatbelts and 40 people using their mobile phones at the wheel.

British police have used traffic cameras to detect speeding drivers for some time; now, with the help of machine learning, they can also analyze imagery of what drivers are actually doing inside their vehicles.

Funding Is Going Into Computer Vision

Despite recession fears and widespread funding concerns, some areas of AI are still receiving injections of funding. One such area is computer vision. 

For example, V7 just announced that it has raised $33M to automate the creation of training data for computer vision AI models.

Software like this is crucial for companies wanting to leverage AI for digital transformation. Building AI models is often seen as time-consuming due to the sheer volume of quality data needed to train them.

As a result, a wave of startups is emerging to streamline that process. V7’s focus is on data labeling, and specifically on automatically identifying and categorizing data to speed up how AI models are trained.
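To make that concrete, here is a minimal sketch of what model-assisted pre-labeling can look like: a pretrained object detector proposes draft bounding boxes and labels, which human annotators then only need to verify or correct. This is purely illustrative (using an off-the-shelf torchvision detector, with a placeholder image path), not V7’s actual pipeline.

```python
# A minimal sketch of model-assisted pre-labeling, assuming torchvision
# is available. A pretrained detector drafts annotations; humans review them.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Off-the-shelf detector pretrained on COCO (an illustrative stand-in
# for a production labeling model).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def propose_labels(image_path: str, confidence: float = 0.8) -> dict:
    """Return high-confidence boxes and class IDs as draft annotations."""
    image = convert_image_dtype(read_image(image_path), torch.float)
    with torch.no_grad():
        prediction = model([image])[0]
    keep = prediction["scores"] >= confidence
    return {
        "boxes": prediction["boxes"][keep].tolist(),    # [x1, y1, x2, y2]
        "labels": prediction["labels"][keep].tolist(),  # COCO class IDs
    }

# Drafts for one image, queued for human review rather than trusted blindly
# ("street_scene.jpg" is a hypothetical file name):
# propose_labels("street_scene.jpg")
```

The time savings come from flipping the annotator’s job from drawing every box by hand to reviewing machine-generated drafts.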

The Big Picture 

AI news moves fast, and it can be difficult to stay on top of the stories and conversations. Our goal is to create a monthly roundup of (some of) the biggest topics and news items that have come to light.

Contact us to set up a custom demo if you’d like to learn more about how cutting-edge AI could transform your business. 

What Happened Last Month? 

Read about Bruce Willis deepfakes, digital rights, and more here.
