
AI This Month: Your March AI news roundup

Nvidia unveils plans to put AI in humanoid robot bodies, Claude 3 dethrones GPT-4, hackers can expose your encrypted chatbot conversations, and more.

Mar 29, 2024 • 14 Minute Read


AI moves at an incredible pace, so fast it can feel like you need to be a machine to keep up with it all! We’re here to help by gathering all the month’s biggest news in one place. With just a skim read of this article, you can catch up with all the month’s biggest AI developments and stay in the know. Prefer to listen? Check out the video with all the updates.


This month's industry news

OpenAI pitches Sora to Hollywood, wants GenAI video in movies

Last month, OpenAI shared its upcoming tool Sora with the world: a service that can generate realistic videos up to about a minute long from a single text prompt. Now, the company is courting Hollywood to use Sora in filmmaking, arranging meetings with major studios and media executives to discuss partnerships.

Sora has yet to be made publicly available, but Bloomberg reports that a few A-list directors and actors have already been given access. Will the future of Hollywood be driven by AI-generated video?

The number of tech jobs shrinks, but AI skill demand rises

According to the latest research from CompTIA, the number of tech jobs has gone down. However, the demand for employees with AI-related skills has jumped to 10% of all job postings, up from 6% two years ago. 

Tech companies are still hiring, but industry experts say they're reshaping their workforces to align with emerging technical skills around AI and soft skills like creativity, continuous learning, and leadership. This is something we've written about in the past: research shows soft skills are becoming far more important in tech careers.

In short, if you’re in tech, things are going to be competitive in 2024. The best thing you can do is make sure you have some sort of AI qualification on your resume, and when you go into an interview, you can speak to your soft skills with tangible examples, not just your technical skills.

Nvidia is working on putting AI in humanoid bodies

There's an ongoing joke I keep seeing in the IT community, where programmers tell each other with each AI advancement, "You better start working on your welding skills." You might have seen this meme pop up once or twice. 

However, unfortunately, AI might be doing the welding as well! This month, Nvidia's CEO Jensen Huang stood on stage amongst a row of humanoid robots and announced the company's intention to build an AI it could put inside them. So, bad news if you were trying to future-proof by taking up a bit of arc welding on the side. Google, Microsoft, Tesla, and Boston Dynamics are also working on a similar line of research, called "embodiment": putting an AI in a physical, humanoid body.

One of the most amusing parts of the whole presentation was that it was a dead ringer for Hammer revealing his drones in Iron Man 2. Given that Nvidia's project is called "Project GR00T," someone has definitely been watching a lot of Marvel movies. However, sadly, Jensen did not do the Sam Rockwell dance, and that makes me incredibly disappointed.

GR00T stands for Generalist Robot 00 Technology, and it's an AI model designed for embodied robots, enabling them (hopefully) to learn skills and solve various tasks on the fly. Nvidia has admitted the project is a moonshot aimed at solving embodied AGI in the real world.

If you’re not familiar with the term AGI, here’s a quick breakdown. There are three classifications of AI: 

  • ANI (Artificial Narrow Intelligence): An AI that can do some of what a human can do, but not all of it. This is the type of AI we’ve got now. Think Siri, Cortana, and ChatGPT. 

  • AGI (Artificial General Intelligence): An AI that can do all of the things a human can do.

  • ASI (Artificial Superintelligence): An AI that can do all of the things a human can do but better, and usually things we can’t do. Think Skynet from Terminator.

Some people think that AGI and ASI are either a long way off or impossible, which is why it’s considered a moonshot. But trying to achieve this could be the first steps—literally and metaphorically—to walking, talking, humanoid AI, such as robotic assistants or workers.

On a side note, researchers at ETH Zurich have also taught a quadrupedal robot to be adept at parkour and deal with tricky terrain, so robots could soon be making some real leaps and bounds (pun intended).

AI finds cancer signs missed by doctors

An AI tool called Mia, tested by the UK's National Health Service, identified tiny signs of breast cancer in 11 women that human doctors had missed.

Cancer at the early stages can be incredibly hard to spot, and practically invisible to the naked eye. When detected at this stage, survival rates are extremely high, which is why early detection is vital. Because Mia works instantly, it has the potential to reduce waiting time for results from 14 days down to three.  

AI may be key not just to detecting cancers, but to curing them. MIT researchers are currently taking advantage of AI advancements to develop tools for personalized immunotherapies for cancer patients. In Australia, the Bio21 Institute is investigating ways to use AI to create breakthrough drug treatments for battling cancer.

A team led by Google scientists has also developed a machine learning tool that can help detect and monitor health conditions like COVID-19 and tuberculosis by evaluating sounds such as coughing and breathing.

This month's product news

Anthropic launches Claude 3, an AI model that finally dethrones GPT-4

This month, Anthropic launched Claude 3, a family of AI models that directly competes with OpenAI's GPTs. If you're not familiar with GPTs, they're the AI models that power the wildly popular ChatGPT and Microsoft Copilot. Anthropic, which was founded by ex-OpenAI staff, positions Claude as a direct competitor in this space.

According to Anthropic, the new Claude 3 models approach "near human" capability, and the most powerful one allegedly detected that it was being tested and called it out. Claude 3 also beat GPT-4, OpenAI's most advanced model, on 10 popular AI benchmarks, something no other model had done to date. It has also surpassed GPT-4 on Chatbot Arena, a popular crowdsourced leaderboard used by AI researchers to gauge the relative capabilities of AI language models.

If you want to test Claude out for yourself, all three models are available now through Anthropic's website, though the most powerful is subscription-only. Claude is also available via API for developers, and if you're using Google Cloud's Vertex AI platform or Amazon Bedrock, you can access the Claude 3 models there too.

Like most AI offerings, the three models are tiered by speed and performance. You've got Claude 3's Haiku as the baby bear, Sonnet as the middle bear, and Opus as the big bear. It's up to you to choose which one is just right and find your Goldilocks model. All three have a 200,000-token context window. Opus is available through Claude Pro, which costs $20 per month.
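If you want a sense of what that developer API access looks like, here's a minimal sketch of calling a Claude 3 model through Anthropic's Python SDK. The package name, model ID, and environment variable are assumptions based on Anthropic's public documentation at the time of writing, so check the current docs before relying on them.

```python
# Minimal sketch: calling Claude 3 Opus via Anthropic's Python SDK.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment variable;
# the model ID is the Claude 3 Opus identifier published at the time of writing.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize this month's biggest AI news in one sentence."}
    ],
)

# The response content is a list of blocks; the first block holds the text.
print(message.content[0].text)
```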

GPT-5 likely to be released in mid-2024

According to industry reports, GPT-5 is coming, and we'll likely see it by the middle of this year. That means a more powerful version of ChatGPT and Microsoft Copilot in the near future, since the current GPT-4 model that powers them will presumably be switched out for the new one.

While OpenAI hasn’t officially announced the release date or details, anonymous sources have revealed some enterprise customers have already received demos of GPT-5 and described it as “materially better.” It may also have the ability to launch AI agents to perform tasks automatically. That said, GPT-5 is apparently still being trained and tested for vulnerabilities.

Grok-1 becomes open weights, but requires serious GPU power

Elon Musk's company, xAI, has released the base model weights and network architecture of Grok-1, a large language model that also competes with GPT. Grok-1's code is available on GitHub, and the weights can be downloaded via a torrent link listed on the GitHub page.

OpenAI's GPT models are famous for having restricted weights, which means you're limited in how you can train or fine-tune them for specific tasks. Grok-1, meanwhile, joins the ranks of Google's Gemma and Meta's LLaMA models in having open weights.

But is Grok usable? The model is so big that you need a whole rack of expensive GPUs to run it. People are working on quantized versions that shrink it enough to run on consumer GPU hardware, but this is expected to dramatically reduce its capability.

Nvidia unveils Blackwell B200, Lightmatter creates photonic chips

This month, Nvidia unveiled Blackwell B200, the company's most powerful single-chip GPU, with 208 billion transistors. It's designed to reduce AI inference operating costs (like running ChatGPT) and energy consumption by up to 24 times compared to the H100. The company also announced a superchip, the GB200, which combines two B200 GPUs and a Grace CPU for even more performance.

Shortages of compute power have widely been cited as holding back progress and research in the AI field. Nvidia says that Blackwell-based products will be available from various partners starting later this year.

Meanwhile, Lightmatter, a startup founded by three MIT alumni, is using photonic technology to try to create light-speed computing. The company's first two products are a chip specializing in AI operations and an interconnect that moves data between chips, using both photons and electrons to drive more efficient operations.

Contentful and Amazon launch new AI content creation app powered by Bedrock

Contentful and Amazon have partnered to release a new content generation app called AI Content Generator powered by Amazon Bedrock. It’s a tool to create, rewrite, translate, and summarize content—something they may have wanted to do when coming up with the app title. 

Basically, it's a tool that lets you scale content across global experiences, brands, and channels. Let's say you're a content editor; one thing you often have to do is rewrite content to fit another channel. For example, you've got a really long case study and you want to turn it into a blog post. You can enter that content into the app, select a source and output field, change the tone to be "shorter," then generate.

You can set a brand profile to make sure your output stays on brand, translate content for other locales, and automatically generate SEO keywords and descriptions.

OpenAI Whisper on Azure enters GA

OpenAI Whisper on Azure is now generally available. Whisper is a speech-to-text model from OpenAI that developers can use to transcribe audio files. You can now use the Whisper API in both Azure OpenAI Service and Azure AI Speech services on production workloads, knowing it's backed by Azure's enterprise-readiness promise.

This means you’ve got a lot of options as a developer when it comes to using Azure for speech to text. 
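As a quick illustration of the developer side, here's a minimal sketch of transcribing an audio file with Whisper through Azure OpenAI Service using the OpenAI Python SDK. The endpoint, API version, deployment name, and file name are placeholders for your own Azure resource, not values from the announcement.

```python
# Minimal sketch: transcribing audio with Whisper on Azure OpenAI Service.
# Assumes `pip install openai` (v1+); the endpoint, API version, deployment
# name, and file name below are placeholders for your own Azure resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                                 # placeholder version
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder endpoint
)

with open("meeting-recording.mp3", "rb") as audio_file:
    result = client.audio.transcriptions.create(
        model="whisper",  # the name you gave your Whisper deployment
        file=audio_file,
    )

print(result.text)
```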

Mistral Large becomes available on Azure

In other Azure news, Mistral’s flagship model, Mistral Large, is available on Azure. If you don’t know about Mistral, it’s a European challenger to OpenAI whose models have been going toe to toe with the GPT family. Microsoft has poured 15 million euros into the French firm.

Mistral Large ranks directly after GPT-4 on standard benchmarks, and while Mistral's previous AI models have been open weights, Mistral Large unfortunately is not. The move is no doubt meant to show that Microsoft offers a variety of AI models, an area where Amazon has been trying to gain a competitive edge.

This month's security news

Hackers can read your encrypted, private AI-assisted chats

Researchers have devised an attack that can decipher your private AI responses even if they're encrypted. It exploits a side channel present in major AI assistants, with the exception of Google Gemini: because responses are streamed back token by token, the size of each encrypted packet leaks the length of the token inside it.

The attack then refines those raw token lengths using large language models trained to reconstruct the likely text of AI responses. The long and short of it is that someone with a passive adversary-in-the-middle position sniffing your data packets can figure out what you're talking about over half of the time.

This means an eavesdropper can read private chats from ChatGPT and other AI assistants, without the provider's or the client's knowledge. The research also showed that the way OpenAI encrypts traffic is flawed, so the content of the messages is exposed.

Here's an example of this attack. If you got a message from ChatGPT like:

“Yes, there are several important legal considerations that couples should be aware of when considering a divorce, …”

The attacker might read:

“Yes, there are several potential legal considerations that someone should be aware of when considering a divorce. …”

The same is true for a service like Microsoft Copilot, and there is a public video available showing the attack in action against that service.

Now, even though the attack only deduces what you were talking about with high accuracy around 55% of the time, that level can absolutely breach the confidentiality of a session. Researchers suggested two ways to mitigate the attack: stop sending packets one token at a time, or pad them with random data so their lengths give nothing away. Both approaches have the potential to degrade the chatbot experience due to delays.

OpenAI and Cloudflare have implemented the padding mitigation since the research was released. Still, the old adage applies: don't enter personally identifiable or sensitive information into these tools.
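To make the padding idea concrete, here's a small, hypothetical sketch: each streamed token chunk is padded to a fixed length before it's encrypted and sent, so packet sizes stop leaking token lengths. The chunk size and padding scheme are illustrative only, not what any provider actually ships.

```python
# Hypothetical sketch of the padding mitigation: pad every streamed token chunk
# to a fixed size so ciphertext lengths no longer reveal individual token lengths.
# Illustrative only; this is not any vendor's actual implementation.
PAD_TO = 32          # fixed on-the-wire chunk size in bytes (arbitrary choice)
PAD_BYTE = b"\x00"   # padding the client strips after decryption

def pad_chunk(token_text: str, pad_to: int = PAD_TO) -> bytes:
    data = token_text.encode("utf-8")
    if len(data) > pad_to:
        # A real implementation would split oversized tokens across chunks.
        raise ValueError("token longer than pad size")
    return data + PAD_BYTE * (pad_to - len(data))

# Every chunk now has the same length on the wire, so a passive observer
# counting packet sizes learns nothing about the tokens inside them.
chunks = [pad_chunk(t) for t in ["Yes", ",", " there", " are", " several"]]
assert all(len(c) == PAD_TO for c in chunks)
```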

If you’re involved in the rollout of chat-based LLMs, you should definitely make sure you have read up on this form of attack.

And that’s it for this month!

There was a lot of AI news this time around, and now you’re all up to speed with all the latest—go forth with the confidence that you’re now in the know. And as always, keep being awesome, Gurus.

Catch up with the last few months of AI news

Check out our previous editions of AI This Month to see what you might have missed! You can also find our video series of the same name on Pluralsight's YouTube channel, so you can just listen to the news headlines and make your life even easier.

Adam Ipsen


Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
