
January news roundup: What’s new in the world of AI?

OpenAI’s new GPT store and business-ready ChatGPT plans, NCSC predicts rise in AI-powered cybercrime, AIs making the web more accessible, and more.

Jan 31, 2024 • 5 Minute Read

  • AI & Data
  • Business & Leadership
  • News

Hello Gurus! Wondering what’s changed with AI this month? We’ve written an article with everything you need to know to stay in the loop. Here are the key highlights you might have missed.

The January AI headlines

OpenAI’s GPT Store launched

Two months ago, OpenAI released GPTs, a way for users to create custom versions of ChatGPT tailored for specific tasks or roles. This is a particularly cool feature for power users who don’t want to keep teaching ChatGPT to do the same thing over and over, and you can build them without any coding at all.

At the time, OpenAI promised a GPT Store where people would not only be able to share the GPTs they built, but monetize them. They have made good on the first part of this with the launch of the GPT Store, along with a promise that a GPT builder revenue program will go live sometime in Q1 2024.

According to OpenAI, “As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.” Does this mean non-US builders won’t be able to earn revenue on the launch of this program? We’ll have to wait and see.

To learn how to make your own GPT using the GPT Builder, check out our article: “How to create custom GPTs in ChatGPT.”

New ChatGPT Team plan now available

Up until now, there have been only two paid plans for ChatGPT — ChatGPT Plus, which is an individual user purchase, and ChatGPT Enterprise, which, as the name suggests, is a plan for big companies like Zapier, PwC, Canva, and others. But what if you fall between an individual user and an enterprise company?

This month, OpenAI announced ChatGPT Team, a new self-serve plan that offers access to all the premium features of ChatGPT Plus, but also offers a collaborative workspace for your team, admin controls for team management, and the ability to own and control your business data (just like with ChatGPT Enterprise).

The cost for ChatGPT Team is only slightly higher than ChatGPT Plus (USD $20 a month for Plus, $25 per person for Team, with a minimum of two users). Honestly, that’s a fairly reasonable increase for the extra feature set and privacy, making Enterprise-style controls accessible even to the smallest of SMBs.

OpenAI launched new embedding models and API updates, lowered pricing

If you’re deep into playing around with OpenAI models, there was some great news this month: OpenAI released two new embedding models, reduced prices for GPT-3.5 Turbo, and introduced new ways for developers to manage API keys and understand API usage. To learn more about these models and the pricing changes, check out the OpenAI announcement.
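If embeddings are new to you, here’s the gist: the API turns each input string into a vector of numbers, and you compare vectors with cosine similarity to measure how related two pieces of text are. The sketch below uses small mock vectors to keep it self-contained — the real call would go through OpenAI’s Python client (e.g. `client.embeddings.create(...)` with a model such as `text-embedding-3-small`), which returns much longer vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Mock vectors standing in for embedding-API output; real embeddings
# from the new models have hundreds or thousands of dimensions.
doc_vec = [0.8, 0.1, 0.3]
query_vec = [0.7, 0.2, 0.4]

print(cosine_similarity(doc_vec, query_vec))
```

The same comparison works regardless of which embedding model produced the vectors, which is why cosine similarity is the standard way to rank documents against a query in semantic search.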

NCSC: AI will substantially increase cyber threats over the next two years, offset only by defensive use of AI

According to a research report by the UK’s National Cyber Security Center (NCSC), AI will “almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.” According to the research agency, “All types of cyber threat actors – state and non-state, skilled and less skilled – are already using AI, to varying degrees.”

According to the NCSC, threat actors are expected to use AI in the following areas:

  • Social engineering attacks: Creating convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing.

  • Summarizing data at pace: To quickly identify high-value assets for examination and exfiltration, enhancing the value and impact of cyber attacks.

  • Increasing efficiency and effectiveness: Boosting ransomware construction, reconnaissance, phishing, and coding.

  • Assisting with malware development: AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data. According to the NCSC, “There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose.”

According to the report, the only solution will likely be to fight fire with fire: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design. More work is required to understand the extent to which AI developments in cyber security will limit the threat impact.”

UK and Canada signed an AI collaborative compute agreement

Right now, almost every country is trying to be the leader in the AI space. It seems the UK and Canada are pooling their efforts to this end, signing a deal to collaborate on computing power to advance AI research and development.

To develop AI, you need compute power and data. Access to state-of-the-art computing infrastructure helps countries stay globally competitive. Under the agreement, the UK and Canada will look at collaborating for shared research priorities like biomedicine. However, the door isn’t closed — they’re open to working with like-minded countries on sharing this compute power.

Other areas the UK-Canada science partnership covers include quantum computing, semiconductors, and clean energy.

Researchers are developing AI to make the internet more accessible

For people with disabilities, the internet can be woefully inaccessible, to say the least. In order to combat this, researchers at The Ohio State University have been working on an AI agent that can complete complex tasks on any website using simple voice commands.

"For some people, especially those with disabilities, it's not easy for them to browse the internet," said Yu Su, co-author of the study and an assistant professor of computer science and engineering at Ohio State. "We rely more and more on the computing world in our daily life and work, but there are increasingly a lot of barriers to that access, which, to some degree, widens the disparity."

Robot trained to read braille at twice the speed of humans

Researchers at the University of Cambridge have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers.

Using machine learning algorithms, the robot was able to read braille at 315 words per minute with close to 90% accuracy. While the robot braille reader is not designed as an assistive technology, the high sensitivity required to read braille makes it an ideal test case for developing robot hands or prosthetics with sensitivity comparable to human fingertips.

Adam Ipsen

Adam I.

Adam is a Senior Content Strategy Manager at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer and ISC2 Certified in Cybersecurity, Adam has contributed to mission-critical software design, including FAA-compliant airfield lighting controls. He’s passionate about making complex tech topics accessible, drawing on his knowledge in software development, AI, cloud computing, and app design. In his spare time, Adam writes science fiction exploring the future of technology.
