
RSA Conference 2024 Recap: All the news from the show

Couldn't make RSAC this year? Don't miss out: this feed brings you the latest from the ground, including keynote and session recaps, what's on the expo floor, and more.

May 7, 2024 • 3 Minute Read


UPDATE: RSA '24 is now over! Check out the news feed below for our impressions from the event and some of the things you might have missed. You can also watch a selection of keynotes on RSA's YouTube channel. Many of the sessions and keynotes listed below will only be available on demand in the RSA content library after June 9.

RSA Conference 2024 has begun—and we’re on the ground to bring you the latest news and takeaways as they happen.  We’ll be adding to this blog post throughout the week, so be sure to check back for the latest updates!

The latest news from RSAC '24


RSA Conference 2024 ends on an amazing Alicia Keys-note

... The second pun I was considering for this headline was about public RSA key usage, but I'll spare you the additional wordplay.

I didn't expect a cybersecurity conference to be able to pull sixteen-time GRAMMY® Award-winning singer, songwriter, musician, producer, and actress Alicia Keys. However, I'm constantly surprised by how many celebrities RSAC can actually bring to the stage. The lines outside Moscone West for this one were outrageous.

Alicia played a ton of her hit songs --- including "Girl on Fire", which the end of Borderlands 3 has utterly ruined for me --- plus covers of other famous songs, including some by Prince. This one unfortunately wasn't recorded, so if you missed it, you'll just have to imagine it from these photos. It's rare to see a performer with so little quality drop between her recordings and her live performance.

I will say, she deserves sixteen awards just for being able to simultaneously sing and play the piano in those heels, especially given she was tapping her foot on that elevated step. 

Cybersecurity leaders urge the industry to lower barriers to entry, donate time to training

It's a well-known fact that the cybersecurity industry has a shortage of workers. According to the World Economic Forum, 71% of organizations have unfilled cybersecurity positions, and globally there's a shortage of nearly 4 million professionals.

There was a lot of angst over this at the RSA panel called "Art of Possible: Transforming How We Develop the Next-Gen Cyber Workforce." Academics and White House representatives were practically wringing their hands over the issue and discussing steps they'd been taking to try to mitigate it.

Suggestions included seasoned cybersecurity professionals volunteering their time at high schools and colleges to inspire and train young professionals, taking advantage of government grants, lowering the barriers to entry for cybersecurity certifications (for example, the daunting half-decade of experience you need to get the CISSP), hiring people from outside traditional IT roles within your organization, and making cybersecurity more attractive to students choosing a career.

One thing that went unmentioned --- and I feel needed to be discussed --- was the gatekeeping behavior of existing cybersecurity professionals and leaders toward those aspiring to get into the field. What is being done here? There's a toxic attitude of looking down on anyone with too many certifications or a non-traditional IT background, and yet some of the best CISO success stories we've heard at RSA come from exactly those people.

My opinion? There are people outside the door; you've just got to let them in. Also, upskill the people you've already got.

A desire to learn is the most important skill. Technical skills are a constant WIP, so you just need someone willing to start drinking from the firehose.

CISO success stories: From fighter pilot and janitor to the top job, and how you can do it too

Thursday at RSA is College Day, and so a lot of the panel discussions are focused on career growth and education. We attended one of the more inspiring panels, "From Aspiring to Achieving: CISO Journeys Unveiled", which shared how some people got the Chief Information Security Officer (CISO) job after taking unconventional career paths.

Catharina "Dd" Budiharto is the CISO of Cyber Point Advisory, but she started her career as a janitor who couldn't speak English. She migrated from Indonesia to avoid being forced into an early marriage. At first, she had no intention of going into cybersecurity, aspiring instead to be a librarian --- but she couldn't get that job because she didn't speak English. After university, her career took her into accounting, then EDP auditing, and then a security analyst role. From there, she kept moving up to the top job through a combination of integrity, situational awareness, and a strong ability to communicate technical matters in business language.

The second speaker was Larry Trittschuh, former EVP and CSO at HealthEquity and CISO of Barclays America. He started his career as a pilot in the US Air Force, and had no tertiary IT qualifications to speak of. He shifted from the USAF to consulting in the private sector, then fell into cybersecurity much like Dd did. At the time, nobody had heard the term "Advanced Persistent Threat," and he seized the opportunity to protect GE against them by becoming the company's Director of Incident Response. He later became GE's Executive Director, Global Information Security Operations, running cyber intelligence, relations, and security programs.

The majority of attendees were existing leaders looking to break into that coveted CISO role, and there was a great lesson here: there's no one true path to succeeding in a cybersecurity career, just a need for continuous learning and growth.

Top leaders meet to discuss AI governance and ethics (minus the ethics)

Thursday’s keynotes kicked off fairly well with the "AI Governance & Ethics" panel. This one was fairly star-studded, with privacy leaders from OpenAI, Workday, and Uber Technologies, plus Anu Talus, head of the European Data Protection Board --- the body that oversees the famous GDPR (and will likely oversee the future EU AI Act). That arguably makes Anu the most influential person in data protection on the planet.

The panel largely focused on predictions of how the EU AI Act — the world’s first comprehensive AI law — would influence the rest of the world to adopt similar frameworks. Even though panelists acknowledged that the GDPR had influenced many nations to follow suit with their own federal data protection laws — and expected the EU AI Act to do the same for AI policy — there was a LOT of cynicism that the US would follow in the EU’s footsteps, mainly because the US is the only G7 and G20 country that still hasn’t passed its own federal version of the GDPR.

Notably, Anu didn’t raise her thumb for an informal poll on whether the US would adopt similar legislation by 2028. Whether she thought the US would adopt it at all was left unsaid. She was remarkably deft at avoiding any remarks that critiqued the US's approach, preferring instead to explain how and why the EU is doing things the way it is (and letting us draw our own conclusions).

Conspicuously absent from the discussion on ethics were any career ethicists --- or indeed any discussion of ethics whatsoever, despite the title of the keynote. Things revolved mostly around the logistics of policy, and musings about the future rather than the present. I couldn't help but feel this panel would have been a good one to throw Dr. Rumman Chowdhury into.

On a side note, it was pretty awesome to see so many powerful women on stage. Cybersecurity, like much of IT, has a shortage of women in the field, so it was a refreshing panel to attend.

The five most dangerous attacks SANS says you need to know are terrifying indeed

Perhaps the most anticipated and popular keynote at RSA every year is SANS's briefing on the biggest threats in cybersecurity for the year. In fact, they've been running the same panel for the last 15 years. And judging by the murmuring behind me, it's still a hit:

"The last panels were full of too much marketing. This panel is always good," someone said behind me just as it started.

What were the top threats? According to the SANS experts --- even though "top threats" is naturally subjective --- it broke down to the following: technical debt, sextortion, election interference, and the general offensive speed of AI in the hands of bad actors.

And they made some persuasive cases. By the end of their seven-minute explanations, I was genuinely worried about the threat of sextortion --- which can affect anyone, including colleagues --- as well as how AI and legacy code would impact my organization and the democratic process.

We'll be writing a deeper dive on each of these on our blog, because --- like the AI ethics talk today --- they deserve far more detail than this quick update can give. Until then, stay tuned.

A (literally) flashy keynote at RSAC --- and a note to check yourself for synesthesia

The keynotes on Wednesday were literally lit up at the start by the omni-talented --- and prepare for a slog --- executive director of software at JP Morgan, Hova Labs co-founder, neuroscientist, TEDMED speaker, Women Who Code advisor, and violinist Kaitlyn Hova.

"Is there anything you can't do?" the MC rightly asked her. I'm sure I wasn't the only one in the audience feeling a mite bit unskilled.

Kaitlyn told her personal story about not knowing she had synesthesia --- a crossover of the senses, like seeing colours when you hear words --- until a professor happened to mention that some people see sound. Only then did it click for her: didn't everyone else see things that way?

The presentation came with a note: with one in eight people experiencing synesthesia, you almost certainly know someone with synesthesia, even if you don't know it --- just as she didn't.

It could even be you.

So, you know, get tested! Meanwhile, Kaitlyn shared her lived experience by playing an LED-filled violin that she and her partner had made, which was an exquisite glimpse into a completely different sensory world.

RSA '24 has a star-studded cast, and today's star was Ted Lasso

If you’re a fan of Ted Lasso, you were in luck: Jason Sudeikis took the stage on Wednesday at #RSAC ‘24! He's far from the first star appearance, either: we’ve had Matthew Broderick from WarGames (if you haven’t seen it, DON’T admit that at a cybersec conference), and Alicia Keys is set to close out the whole event.

Broderick’s speech was a bit of a mixed bag, as he seemed awkward trying to relate to an audience of cybersec professionals, despite being famous for “acting like a hacker.” Jason Sudeikis had a better reception, as he shared exactly what people were after—anecdotes and behind-the-scenes gossip from the hit show Ted Lasso. Hey, give the people what they want!

For the second year, the AI Safety keynote by Microsoft was a massive hit

UPDATE: It turns out Dr. Rumman Chowdhury, who was originally scheduled to speak, left the panel due to a last-minute emergency.

Last year, Microsoft’s “Data Cowboy” Ram Shankar Siva Kumar held a panel discussion that was an absolute banger—and this year's sequel was no exception. Ram might be the founder of Microsoft’s AI Red Team, but he ran the panel like a game show host (in a good way). 

He poked and prodded experts from Google, Nvidia, the Center for AI Safety, and Harvard with complete irreverence, making things highly spicy for a cybersecurity conference. His angle was clear: get technology leaders to talk about the AI “terminator” in the room, and whether it was coming to kill us.

Those fears were shot down by famous cryptographer Bruce Schneier and Google’s VP of Security Engineering Heather Adkins. Both insisted that securing AI is the same work security professionals have been doing for decades, and Heather said the method is no different for AI than the one you’d use for Gmail.

“I think it’s important to say securing AI systems is just another way of saying we’re securing software,” Heather said. “AI is just software. If you look at it, you’ve got inputs and outputs. You’ve got the data, you’ve got the algorithms, the fine tuning, and, of course, you’ve got the use case.”

At this point, the guy next to me in the crowd was nodding, muttering a “Yeah” under his breath as if to shout, “Amen to that!”

True to form, Ram later asked why Bruce, who’d agreed along similar lines, had then signed the famous Statement on AI Risk. You might have seen it in the media—the document thousands of AI leaders put their names to, warning of the existential risk posed by AI.

Bruce admitted he’d signed it thinking the exact opposite. “It’s an embarrassing story . . . I read it thinking [AI's] not something to be worried about. . . . And that’s not the reaction the world got when it [was] published,” he said. “I missed the point of the statement when I signed it. . . . We have a lot of security problems, but AI taking over the world is not high on my list.”

And then he turned to the creator of the statement, who was also on the panel—Dan Hendrycks, founder of the Center for AI Safety—and asked:

“So, what DOES it mean?”

Which, you know, was pretty funny (in the cybersecurity world).

Speaking of which, last year’s standout on the panel had been Dr. Rumman Chowdhury, AI ethics badass (if it’s not an official title, it should be) and US Science Envoy. She had grilled her fellow panelists with similarly engaging irreverence about what they were really doing for AI safety. She was meant to return this year, but was notably replaced by Bruce Schneier. Personally, that was a disappointment, as her being on the list had made the keynote doubly a must-see.

We’ll be doing a more detailed article on the back and forth of this panel discussion on our Pluralsight blog --- with an actual deep dive into the concerns in this space, which justify more detail than this quick update --- so keep an eye out for it!

Novel C2 attack discovered that doesn't need to bypass firewalls, evades detection

You don't need to be a cybersecurity expert to know that an attack your firewalls can't stop, that evades all modern detection systems, and that looks just like your regular traffic is a terrifying thing. Especially if, by all indications, it's been in the wild for five years.

It was exactly this novel attack that Pluralsight's Director of Security Research and Curriculum, Aaron Rosenmund, demonstrated for the audience at RSA '24 on Tuesday. This came after some debate about whether he should share it at all; in the end, he decided it was better to show professionals it exists and discuss countermeasures than to leave them in ignorance.

The attack completely bypasses the protection of the network boundary and evades current detection systems by sending waveforms over the wire with no application-layer payload, leaving a truly tiny cross-section for countermeasures to detect.
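To make the detection challenge concrete, here's a minimal sketch of one heuristic a defender might try: flagging long-lived flows that carry almost no application-layer data. To be clear, this is entirely our own illustration --- the thresholds and the idea of keying on empty TCP segments are assumptions, not Aaron's published countermeasures.

```python
# Hypothetical detection heuristic --- our illustration, not Aaron's tooling:
# flag long-lived TCP flows made almost entirely of zero-payload segments,
# one way a covert channel with no application-layer payload might stand out.
from collections import defaultdict
from scapy.all import sniff, IP, TCP

flow_stats = defaultdict(lambda: {"total": 0, "empty": 0})

def inspect(pkt):
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport)
        stats = flow_stats[key]
        stats["total"] += 1
        if len(pkt[TCP].payload) == 0:  # no application-layer data
            stats["empty"] += 1
        # Arbitrary placeholder thresholds; note the pure-ACK side of a
        # bulk transfer will also trip this, so real rules need tuning.
        if stats["total"] >= 200 and stats["empty"] / stats["total"] > 0.95:
            print(f"[!] Suspicious payload-less flow: {key}")
            flow_stats.pop(key)  # alert once per flow, then reset

sniff(filter="tcp", prn=inspect, store=False)  # requires root privileges
```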

Aaron --- who goes by Ironcat --- has publicly released his GitHub repository of the C2/exfil POC tooling he used, dubbed "QuantumCat," for the community to investigate. He said it was fairly likely the attack had been in the wild since 2017. He also shared a staged plan for how cybersecurity professionals should respond to it over the next six months.

"I did this with (a bit of assistance from) ChatGPT, and it took two nights to finish. It's highly likely threat actors are already doing it," Aaron said.

A more detailed post on the attack is coming to the Pluralsight blog, and you can access the QuantumCat repository here.

Quantum computing will soon destroy encryption; some cryptography leaders say post-quantum solutions need more testing

Quantum computing --- the upcoming advancement that will make the best of our current supercomputers look like a '90s Macintosh Classic --- is widely expected to make asymmetric encryption obsolete. Needless to say, this is a huge deal, and something cryptographers are rightly worried about.
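As a quick refresher on why asymmetric encryption specifically is at risk: schemes like RSA rest on the difficulty of factoring a large modulus, and Shor's algorithm on a sufficiently large quantum computer factors that modulus efficiently. Here's a toy textbook-RSA demo (with absurdly small primes, purely for illustration) of the trapdoor that quantum factoring would break:

```python
# Toy textbook RSA --- NOT real cryptography, just the math of the trapdoor.
p, q = 61, 53                # secret primes (real keys use ~1024-bit primes)
n = p * q                    # public modulus; security = hardness of factoring n
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)      # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# Shor's algorithm factors n efficiently on a quantum computer.
# Once p and q are known, phi and d fall out immediately:
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert d_recovered == d
```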

At an RSA panel of leading cryptographers from AWS, IBM, TripleBlind, and academia --- including the famous Adi Shamir (hint: if you're wondering who that is, he's the "S" in the RSA cryptosystem he co-invented) --- some of the panelists argued that post-quantum solutions need more testing.

Part of this is because just last month, a scientific paper was released containing a method that allegedly rendered post-quantum cryptography obsolete, throwing Quantum Twitter into overdrive. While peer review established within a week that the paper contained a mistake --- unusually fast for peer review --- leading cryptographers say it has shaken awake people who may have grown complacent about questioning post-quantum solutions.

"A paper like this... has put a fire under people... who are now trying to attack (these solutions)," said Tal Rabin, Senior Principal Applied Scientist, AWS. "People were not fierce enough... against quantum."

"Absolutely. NIST is a great convener, and this is one of the most crowdsourced (examples) in the term of the input that's gotten into that," said Debbie Taylor Moore, VP and Senior Partner, Cybersecurity, IBM Consulting. "I think it's important that we don't panic, but we're happy that folks are challenging these algorithms that are moving towards being standards."

OpenAI's Head of Security shares how orgs should go about using LLMs for cybersec

If you're wondering how to implement large language models (LLMs) to improve cybersecurity in your organization, OpenAI's Head of Security, Matthew Knight, ran a workshop on exactly how to make the most of them.

Some of the techniques he suggested were:

  • Using an LLM's linguistic capabilities to write counter-intelligence briefs based on lengthy reports in other languages
  • Summarizing and finding relationships in large sets of data (e.g., the leaked Conti chat messages)
  • Assisting in writing scripts to pick up security issues in applications
  • Session summarization (to pick up threats in existing code)

He made some interesting points about scaling LLMs to scan data for signals --- the kind of task it would be cruel and unusual punishment to assign a human analyst --- thereby detecting things that would otherwise go unnoticed.
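To make that idea a bit more concrete, here's a rough two-pass sketch of the summarize-and-relate technique using OpenAI's Python SDK. The model name, prompts, and helper functions are our own assumptions for illustration, not code from the workshop:

```python
# Hypothetical sketch: summarize a pile of security artifacts (e.g., leaked
# chat logs), then ask the model to find relationships across the summaries.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def summarize_artifacts(chunks: list[str]) -> str:
    # Pass 1: summarize each artifact, noting actors, tools, infrastructure
    summaries = [
        ask("As a security analyst, summarize this artifact and note any "
            f"actors, tools, or infrastructure mentioned:\n\n{chunk}")
        for chunk in chunks
    ]
    # Pass 2: relate the summaries to each other
    return ask("Identify relationships and common threads across these "
               "summaries:\n\n" + "\n---\n".join(summaries))
```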

For a deeper breakdown of OpenAI's suggestions, check out our article: "Improving cybersecurity with LLMs: How OpenAI says to do it."

Want to learn how OpenAI is using their own LLMs to improve their security stance? Check out our article "How OpenAI is using their own AI in house for cybersecurity."

The future of cybersecurity teams? 4 digital staff for every human

I love technology. Every day, you learn something wild, especially in AI and cybersecurity. But today’s keynote speaker at RSA Conference, Jeetu Patel, Cisco’s Executive Vice President and General Manager of Security and Collaboration, nearly blew my branded socks off with his vision for what the next decade might look like in cybersec.

Specifically, our world of eight billion people might soon feel more like a world of eighty billion.

“We as humans, since the beginning of time, have worked in a world of complete scarcity,” Jeetu said. “In practical terms, all of us in the IT industry, we have a certain constrained budget . . . and we’ve been expected every year to do a little more with less. For the first time, we’re now entering a time when abundance can become a reality.”

The reason? The ability to augment human operations with AI, which is set to make a “seismic change” in how cybersecurity—and a lot of industries—operate. But, as Jeetu noted, these augmenting assistants will need a special arrangement of cybersecurity protections.

To learn more, check out our article here.

Top cybersecurity trends of 2024, according to Google research

The great thing about Google is they’ve got the resources to do some very thorough research. Based on over a thousand cybersecurity investigations and several hundred red team engagements over the past year, Kevin Mandia, CEO of Mandiant at Google Cloud, shared some interesting insights at the RSA Conference.

To learn more about these insights, such as changes in the activity of threat actors, how MFA is being undermined, and more, check out our article here.

Intel's Former Head of Cybersecurity Policy: "Policy will dictate the future of AI."

The first day of RSAC featured a talk by Dr. Amit Elazari, CEO of Open Policy Group and former Head of Cybersecurity Policy at Intel, who explained the connection between AI security, policy, and compliance. In her session “The AI Security Regulatory Landscape - Government leading the Way,” she explained how policy will dictate the future of AI, which will in turn dictate cybersecurity.

Look at US and EU AI policy as guides

Dr. Elazari pointed to the Biden administration’s AI executive order and the EU’s AI Act as examples of how government policies are shaping compliance and core security requirements. 

“You need to understand these so you can prepare your infrastructure,” she said. “Sometimes you’re waiting for the requirements from the CISO, and by the time you get them, you’re already behind. You were meant to implement them for the customer.”

And the federal requirements are coming faster than people had expected. “There [are] memos from the White House to agencies that you’ve got six months to action this,” she explained. “If you’ve got a piece of software that is somehow covered by the executive order, you should start thinking about your security requirements, because these requirements are going to trickle down the supply chain.”

Dr. Elazari also encouraged US companies to pay attention to EU legislation. “Security requirements are not just going to be dictated just by US requirements. You cannot ignore what’s going on in the EU AI Act. Cyber, privacy, and AI are being bundled, especially in Europe. Like with GDPR. . . because of the breadth of AI tech coming into Europe, this legislation will impact the whole market.”

Rethink red team requirements and data security

One of the most important things companies should be thinking of is not just how to secure a system using AI, but how to demonstrate that you’re securing it. “You should think, ‘What are the artifacts of compliance we’re going to be producing?’ It’s not just about securing something, but demonstrating it,” explained Dr. Elazari.

That may mean rethinking your red teaming requirements. “Your organization should be working with ‘friendly hackers’ [to prove you’ve met requirements]. The traditional concept of AI [red teaming] is hackers finding vulnerability issues, but now it’s also expanding to things like finding bias issues [in your AI],” she said.

“There’s also going to be a focus on data, data security, [and] data in context. With supply chains, we know the foundation of how to secure something is understanding what you have.”

More takeaways from day 1: AI safety at all levels

Some of the other key takeaways from day one? Exposing future AI threats—and learning how to prepare for them.

  • AI safety was the big topic of the day. OpenAI, Anthropic, and Meta all seemed to agree that the AI models they create often manifest capabilities they weren't trained or built for --- e.g., translating Russian 4chan slang, helping create biological weapons, or creating new forms of cyberattack.

  • Anthropic's Jason Clinton says we can expect AI that can infect other machines and autonomously replicate itself (ASL-3 grade) within the next year, and AI that can scan open-source code and create novel exploits (ASL-4 grade) within two years—so cybersecurity teams should start preparing now.

  • Anthropic also discussed the rather terrifying possibility of attackers using a master AI to call other AIs in a tree-like structure (the way Perplexity.ai does), and of your own AI (such as your web-facing chatbot) being co-opted for cybercrime.

  • Meta's Joshua Saxe and Sahana Chennabasappa talked about their open-source tools: CyberSecEval for quantifying GenAI risks, and Code Shield for mitigating them. Code Shield acts as a filter between your LLM and the code it produces to make sure it doesn't introduce vulnerabilities (see the sketch after this list). According to Joshua and Sahana’s research, the better an LLM is at coding, the worse it is at keeping that code secure.

  • Meta also highlighted that we're potentially reaching the point in the technology adoption lifecycle where the hype dies down, as the realistic set of business cases AI can be applied to narrows.
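Below is a minimal sketch of that "filter between your LLM and its code" pattern, purely to show the shape of the idea: generated code gets scanned before it's accepted. The regex rules and helper names here are our own assumptions for illustration, not Code Shield's actual API.

```python
# Illustrative sketch of filtering LLM-generated code before accepting it.
# The patterns and helpers are hypothetical stand-ins, not Code Shield's API.
import re

INSECURE_PATTERNS = {
    r"\beval\s*\(": "eval() on dynamic input",
    r"subprocess\.(run|Popen)\(.*shell\s*=\s*True": "command execution with shell=True",
    r"hashlib\.md5\(": "weak hash function (MD5)",
}

def scan_generated_code(code: str) -> list[str]:
    """Return findings for known-insecure patterns in LLM-generated code."""
    return [finding for pattern, finding in INSECURE_PATTERNS.items()
            if re.search(pattern, code)]

def filtered_generate(llm_generate, prompt: str) -> str:
    """Wrap an LLM call so insecure generations are rejected, not executed."""
    code = llm_generate(prompt)
    findings = scan_generated_code(code)
    if findings:
        raise ValueError(f"Generated code rejected: {findings}")
    return code
```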

Start building in-demand security skills

Ready to meet the future of AI head on? Start a free trial of Pluralsight Skills to build hands-on AI and security skills.

Adam Ipsen


Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
