
AI in Software Development: Results of a pilot study of an intervention to overcome AI Skill Threat

As AI is increasingly being used in software development, AI Skill Threat is emerging as a risk to developer experience. Read the results of a pilot study of an intervention to overcome AI Skill Threat.

Mar 22, 2024 • 10 Minute Read

  • Software Development
  • AI & Data

Is AI skill threat part of the new developer experience?

In October 2023, the Developer Success Lab launched new research – aptly titled The New Developer – that looked at the sociocognitive factors predicting how software engineers can engage, adapt, and thrive as they transition to AI-assisted software work. Looking across thousands of developers, our study presented a new, evidence-based framework for helping developers, engineering managers, and leaders as they grapple with the human side of this transition: AI Skill Threat, a pervasive sense of worry and anxiety about the future of software development skills.

Because one of our core values as a research team is to put research into practice, we also launched a Generative-AI Adoption Toolkit to help teams decrease AI Skill Threat. In this post, we’re sharing results from a case study that we led here at Pluralsight, using our Toolkit to run AI Skill Threat-busting pre-mortems with three software teams. We share methods, key results, and recommendations for leaders and software teams that want to use our Generative-AI Adoption Toolkit. One key finding in our case study is that holding pre-mortems helped to increase developers’ perception of team learning culture, an attribute that our research has found is a key driver in decreasing AI Skill Threat.

This and other findings point to a need to provide software developers with:

  • access to AI tools, as well as clear guidance on how to use them

  • opportunities to engage in focused upskilling on these tools, with particular attention to evaluating their effectiveness and security

  • safe spaces to share vulnerably with leaders and teammates about their fears, anxieties, and worries during this early phase of AI-assisted coding

Research Background

As AI is increasingly being used in software development, AI Skill Threat is emerging as a risk to developer experience. AI Skill Threat occurs when developers fear that the skills they’ve used to find success as code writers may become obsolete with the introduction of AI-assisted coding, thus threatening their identities as software practitioners. AI Skill Threat can also occur as developers worry that others – for example, teammates or managers – will have inaccurate expectations for their productivity because of misconceptions about the capabilities of AI.

And this sense of threat is widespread: We found that 45% of developers showed evidence of worry, anxiety, and fear about whether they could succeed in this era with their current technical skill set. And although about 60% of developers reported that AI tools were already being used by their immediate team or in their individual work, 28% of developers also reported an overall negative opinion of the quality of AI-assisted coding.

With these statistics in mind, the Developer Success Lab paired with Pluralsight’s Technology Center of Excellence to create the Generative-AI Adoption Toolkit. The toolkit is meant to answer an important question for today’s technology leaders: How can I help my team adopt AI-assisted coding tools and practices in a human-centered and evidence-based way?

One piece of that toolkit is the Generative-AI Pre-Mortem, an adaptation of the post-mortem, which is a common software ceremony. Whereas a post-mortem analyzes why a failure happened, the pre-mortem is designed to help engineering teams surface anxieties and fears about what could happen, guiding them in defining what adoption “failure” looks like at individual, team, and organizational levels, so that they might take action today to prevent those failures from manifesting in the future.

As researchers and interventionists, we at the Developer Success Lab strive to understand the effectiveness of these interventions. And so, we launched a pilot study to do just this with the Generative-AI Pre-Mortem. We paired the pre-mortem with another important component of the toolkit – the AI Skill Threat Benchmarking Assessment, which includes empirically validated survey items around AI Skill Threat – and used it as a pre- and post-measure of the pre-mortem’s impact on teams’ aggregate AI Skill Threat.

We had two primary objectives:

  • To assess the effects of the pre-mortem ceremony on teams’ aggregate AI Skill Threat

  • To provide participating teams with insight into their current levels of AI Skill Threat, as well as recommendations for addressing and mitigating it

What We Did

We recruited teams to participate by reaching out to engineering leaders via Slack and offering to facilitate pre-mortems for their teams.

We recruited a total of three teams – two of which were large teams consisting of multiple smaller teams – with 32 total individuals. Four other teams wanted to participate, but weren’t able to fit the pre-mortem into their end-of-the-year schedule, and so their results aren’t included here.

For each team, we led a 90-minute pre-mortem ceremony based on our toolkit template. The basic structure of the ceremony asks teams to:

  • Situate themselves 12 months into the future, and imagine their worst-case scenarios around the adoption of generative AI-assisted coding for themselves, their teams, or their companies.

  • Imagine how those worst-case scenarios could have happened.

  • Analyze which of those worst-case scenarios is most high-stakes, and create action items for themselves, their teams, and their companies to prevent those scenarios from happening.

Participants were asked to complete the AI Skill Threat Benchmarking Assessment both before and after the pre-mortem. This pre-post methodology allows us to examine changes over time that occur as a result of the pre-mortem.

For each team, we ran statistical analyses of the benchmarking assessment results and analyzed the qualitative data generated during the live pre-mortem. We wrote up these results for each team and made the team report available to all team members. We also created an executive summary of all results and shared this with engineering leadership at the company.
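To make the pre/post comparison concrete, here is a minimal sketch of how a paired analysis of benchmarking scores might look in Python. The file name, column names, and the choice of a paired t-test are illustrative assumptions for this sketch, not the exact pipeline used in the study.

```python
# A minimal sketch of the kind of pre/post comparison described above.
# The file name, column names, and the use of a paired t-test are
# illustrative assumptions, not the study's actual analysis code.
import pandas as pd
from scipy import stats

# Hypothetical long-format survey export: one row per participant per wave,
# with columns "participant_id", "wave" ("pre" or "post"), and "skill_threat".
responses = pd.read_csv("benchmarking_assessment.csv")

# Pivot so each participant has a pre score and a post score side by side,
# keeping only participants who completed both waves.
paired = (
    responses.pivot(index="participant_id", columns="wave", values="skill_threat")
    .dropna(subset=["pre", "post"])
)

# Paired t-test on pre vs. post AI Skill Threat scores.
t_stat, p_value = stats.ttest_rel(paired["pre"], paired["post"])
mean_change = (paired["post"] - paired["pre"]).mean()

print(f"Mean change (post - pre): {mean_change:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With small teams like ours, a comparison like this is underpowered, which is why we treat the quantitative changes below as observational trends rather than definitive effects.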

What We Found

Several interesting insights emerged from both the qualitative and quantitative data.

44% of developers reported that they and their teammates do NOT use AI tools. We believe this points to a need to provide developers with access to tools, clear guidance on how to use them, and opportunities to upskill around them.

For those who do use AI tools, 85% reported primarily using them to generate code, 77% to conduct research or learn, and 54% to debug or test code. Very few developers reported using AI to help with writing documentation, navigating code, and doing code reviews. Surfacing where developers were finding AI tools useful helped the teams identify tasks they may not have yet considered as opportunities for AI-assisted work.

AI Skill Threat decreased after the pre-mortem, though not statistically significantly. With a sample this small, that is to be expected, but the trend suggests that discussing anxieties can decrease developers’ fears and worries about AI over time. It’s important to note that anxiety ratings should not be used as the only measure of success for interventions like these pre-mortems. For example, some teams may notice a small increase in AI Skill Threat immediately following a pre-mortem. Termed the “early spike” in clinical science, these spikes in anxiety immediately following an intervention are considered normal and are more common on teams that have not discussed these fears and worries before. In these cases, discussing fears and worries can trigger feelings of anxiety in the moment, but also encourage individuals to take action to address that anxiety, resulting in better long-term outcomes.

Learning Culture and Belonging increased after the pre-mortem, by 0.27 and 0.44 points respectively, meaning that developers came out of these sessions with a boost to their Developer Thriving.

This boost was small, likely because our sample size was small and these teams started quite high in learning culture and belonging already – nearly at the top of the scale! This means that our teams had less to “improve” in these sessions, which left less room for change. But even these teams experienced a small lift to their perceptions of shared learning culture and belonging.

Our teams still found that openly sharing and triaging examples of AI Skill Threat was a moment to lean into shared team strengths around collective learning, analyzing mistakes, and considering how to adopt AI tools in a designed, supported way as a team. While in our case study the change in learning culture and belonging was once again an observational difference with a small sample size, one way to increase the depth of these measures is for software teams to come together to run multiple pre-mortems over a longer period of time. Teams that are already high in learning and belonging can use these pre-mortems as a way to maintain their thriving cultures, while teams that find themselves low in learning and belonging can set an explicit target for measured improvements.

Several themes around developers’ fears, anxieties and worries emerged from the qualitative analysis of the pre-mortem session:

  • AI tools will introduce more complexity and bugs that make work harder, rather than faster or easier, and decrease code and product quality.

  • AI tools’ capabilities and benefits will be exaggerated, resulting in unrealistic expectations and job loss for developers.

  • AI tools will bring about the end of coding as a valuable and creative craft.

  • Failure or variability in organizational access, adoption, and guidance will impede the successful adoption of AI tools by developers.

  • It is unclear which new skills will be required for this new era of software work, leaving developers feeling threats to their career prospects and their identities as developers.

Recommendations for Engineering Leaders

Create spaces for open, honest communication around AI tool use and AI Skill Threat.

Some developers reported using AI tools despite their team not using them, or not using AI tools despite their team using them. This finding points to an opportunity for open and proactive communication between teammates around AI tool use. Practitioners could use this information as a first step to assess if there are any blockers to AI tool use on the team.

Additionally, the fears and anxieties that participants shared vulnerably during the pre-mortem were at times poignant, further supporting AI Skill Threat as a valid construct that deserves attention and intervention.

Invest in broad access, governance, and guidance on AI tool use for teams.

Folks are feeling uncertain about how they should proceed with using AI tools as part of their jobs, and are looking to leaders and the company to provide answers.

What does this potentially look like? The Pluralsight Technology Center of Excellence has spearheaded some efforts through its Prompt Engineering course clubs, the Generative AI Learnathon, and – of course – running ceremonies like the pre-mortem. Leadership could use these strategies as a starting point to gather organizational data insights before building out a more comprehensive plan.

We also believe it’s important for companies to think holistically about their strategy for team-level access to AI tools. While many developers might be experimenting as individuals, rolling out AI-assisted coding as a shared and team-level practice will give developers a supportive framework to begin experimenting with, upskilling in, and – most importantly – evaluating these tools and reporting their findings back to the organization itself.

We recommend empowering developers to act like scientists of their own experience, exploring AI tools hands-on. Our previous research has found that successfully implementing new shared practices requires developer buy-in. Developers are eager to contribute to the vetting of AI tools, and they’re eager to upskill in these technologies. They fear hasty tool adoption, and understand the role they can and should play in choosing which tools to use.

Finally, clear governance on how these tools can and should be used (e.g., what security practices to follow, which tools are recommended, whether teams should make their own decisions, etc.) should be provided to developers. We recommend that leadership create, codify, and train folks on this governance so that engineers can begin evaluating these tools without the anxiety of wondering if they’re breaking rules or introducing security threats.