Skip to content

Contact sales

By filling out this form and clicking submit, you acknowledge our privacy policy.

Biden's big AI executive order says a lot about the future of tech

The US policy shift leaves almost no AI stone unturned, touching everything from model creation, job security, civil rights and more. We unpack what it means.

Nov 03, 2023 • 7 Minute Read

Please set an alt value for this image...
  • Upskilling
  • News
  • AI & Data
  • Business & Leadership
  • Public Sector

On Monday, President Joe Biden issued an executive order on AI, setting the tone for how the technology will be governed in America moving forward. The order seeks to address nearly every AI issue that’s cropped up in the last twelve months. It includes testing mandates for advanced AI models to make sure they’re not weaponized, mandatory watermarking of AI-generated content, and minimizing the risk of workers losing their jobs to AI.

Even though the scope of the order is limited to the US — and specifically, the federal government — the country currently leads the world in AI, so any shifts are likely to influence the global debate on how to regulate the technology. By stating US federal government agencies can only work with companies that comply with these new standards, Biden is leveraging $694 billion in federal contracts to drive wider industry compliance.

The AI executive order is anything but short — at 20,000 words, it’s a quarter of the length of Harry Potter and the Sorcerer’s Stone — so we’ve saved you the pain and read over it for you. Below are our key takeaways and impressions of the order. 

AI will need to be more ethical and tested properly

The order states government agencies will need to perform “robust, reliable, repeatable, and standardized evaluations of AI systems.” Historically, there’s been a bit of confusion about how to even evaluate AI models, with professionals forced to approve them or risk shadow AI usage.

Agencies will also have to ensure any AI models they purchase are developed ethically, tested properly, and have watermarks on AI-generated content. This will also be an interesting demand of government agencies. Currently, the top 10 AI foundation model companies are woeful at sharing how “ethically” their AI were made — even Anthropic’s Claude, the ‘ethical AI’ company, lacks in this regard, and the only “open” thing about OpenAI is in the name.

Additionally, the US Government will be strengthening and enforcing consumer protection laws in regards to AI, hunting down those engaged in unintended bias, discrimination, privacy infringements, and “other harms from AI.”

“The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected,” the order states. “Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change.

According to Pluralsight Author and AI expert Simon Allardice, companies building AI-driven applications should be making sure their applications are as transparent as possible, in response to the executive order.

“If I were a company building AI-driven applications involved with aspects of finance, healthcare, or human resources — let’s say, a tool that has any potential impact on loans, recruitment decisions or medical access — right now I’d be very careful to be planning for explainability of any AI-driven decisions,” Simon said. 

“Similarly, we can expect new levels of control and higher expectations around data privacy in general.”

There are clear timelines for both the US government and tech companies to meet

Biden’s executive order is not just “general guidance” around non-technical aspects. It gets incredibly specific around generative AI, and what needs to be done.

“While it isn’t a ‘law’ in the conventional sense of the word, we can expect to see a great deal of legislation following quickly,” Simon said. “The document is full of dozens of requests for specific deliverables on detailed timeframes, such as new guidelines and best practices from NIST around generative AI and foundation models within 270 days, and reports and recommendations on how to detect and label synthetic content within 240 days."

However, Simon said because so much of the order is delegating details to agencies, some parts are lacking in details. For example, there is a section that states companies developing foundational models must notify the federal government if something poses a ‘serious risk to serious risk to national security, national economic security, or national public health and safety.’

“Let's say you're a company developing a new foundation model. Who decides whether it poses a serious risk?” 

AI companies must now demonstrate best practice cybersecurity

Within 90 days of the order, the Secretary of Commerce will require all companies developing foundational models to continuously report they’re taking proper security measures. This includes consistent red-teaming and cybersecurity testing, the results of which they’ll need to disclose.

According to Pluralsight’s Senior Director of Security and Generative AI skills, Aaron Rosenmund, the move was “absolutely prudent” but could reduce competition of smaller firms in the AI space.

“I felt this was a perfect move, especially when focusing on how foreign militaries may be looking to use AI or vulnerabilities in AI systems as part of their plans to disrupt technology,” he said. “For most organizations, there seems to be a reasonable expectation of free use for existing capabilities.”

The US government is very worried about AI being used for bio-weapons

A key part of the order is about assessing the threat of AI being used to assist in CBRN threats — that’s chemical, biological, radiological, or nuclear. However, Biden seems to see AI as also being the solution, much like how it can be used both offensively and defensively in cybersecurity.

The same technology that is being used to make great strides in AI drug discovery are able to be repurposed for chemical and biological weapons, according to researchers. In one fictional plague pandemic scenario, a large language model gave advice on potential biological agents to use, considering budget constraints and success factors. It suggested “obtaining and distributing Yersinia pestis–infected specimens while identifying the variables that could affect the projected death toll.”

Given AI is currently the worst it’s ever going to be, this is a very real concern. The executive order is going to pressure companies working with these tools to prove they can’t be used to inadvertently create these CBRN threats.

There’s a strong AI upskilling focus to lead the world and prevent job losses

For most people, the rise of AI comes with the fear of losing your job to a machine. According to research, 69% of people worry AI could take over their job. Developers aren’t immune either, with 45% of developers experiencing “AI Skill Threat”, or a sense of key skills being made redundant with the rise of AI.

The executive order focuses a lot on encouraging the workforce to upskill their employees to both maximize the benefits of AI and minimize job losses (On a side note, it’s almost like Biden is reading from Aaron Skonnard’s speech from Navigate last week on human intelligence). The US sees upskilling as key to continuing to dominate the AI space, and states it will be “investing in AI-related education, training, development, research, and capacity”, as well as attracting AI talent from overseas. 

The US Government will be training non-technical staff in AI as well. According to its order, “employees who do not serve in traditional technical roles, such as policy, managerial, procurement, or legal fields” would be “eligible to receive funding for programs and courses that focus on AI, machine learning, data science, or other related subject areas.”

“I was glad to see that the order does seem to fully embrace the idea of AI being a transformative shift for the workforce in general — there’s a recognition that AI is no longer just a technical skill for a technical role—it affects everyone,” Simon said.

According to Aaron Rosenmund, AI should “allow humanity as a whole to elevate the level of work” they do, not replace them. However, this would only be possible if training was made readily available to all, which the order seems to be trying to achieve.

“But for it to be unlocked in this way, training on how to work with AI tools will need to become ubiquitously available,” Aaron said. “This is potentially the largest, and most widespread requirement for upskilling of a workforce we will see in our lifetime, as AI capabilities begin to touch all aspects of our lives.”

Conclusion: Riding the wave of future change in AI

Until now, most nations have been playing catchup with AI, especially the US government. Now, if this ambitious order is enacted by the various US federal agencies, it looks like there will be more stringent requirements on AI development, and a lot more of a push to upskill everyone, regardless of background, in learning about the technology.

Also, this order likely means there’ll be a lot more jobs in cybersecurity roles to properly vet AI-based products and services, a field that was already on the top of Pluralsight’s 2023 top tech careers. In early 2023, there were only enough cybersec professionals to fill 68% of all the US’s open job positions, so getting cybersecurity certified (Perhaps with your CIISP, CCSP, or CEH) is a wise career choice.

The executive order isn’t legislation, and nothing will change overnight. But it is a strong indication of where the winds of AI are blowing, and what 2024’s industry landscape is increasingly looking like, not just for tech professionals or AI companies, but for the wider global workforce.

Adam Ipsen

Adam I.

Adam is a Senior Content Strategy Manager at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer and ISC2 Certified in Cybersecurity, Adam has contributed to mission-critical software design, including FAA-compliant airfield lighting controls. He’s passionate about making complex tech topics accessible, drawing on his knowledge in software development, AI, cloud computing, and app design. In his spare time, Adam writes science fiction exploring the future of technology.

More about this author