GitHub Copilot review: Do AI coding tools boost dev productivity?
Seasoned engineer and author Josh Cummings reviews GitHub Copilot, exploring how it handles documentation, input validation, unit tests, and code cleanup.
Aug 27, 2024 • 12 Minute Read
As someone who’s been a Java developer since before the turn of the century, I’ve seen countless changes in programming languages and the software industry. This year it’s AI-assisted coding tools, which promise to save us time and make our lives easier—but do they actually do this, or is this just another case of overhyped tech, like Blockchain and Web3?
In this article, I’ll share my initial experience using AI in my IntelliJ IDE to see whether it enhances or complicates my workflow. I’ve chosen GitHub Copilot for this experiment, as it’s the most common AI coding assistant on the market right now.
As a long-time contributor to Spring Security, I’ll be looking into how much AI could help me to improve that non-trivial, real-life codebase. Though, since this is an experiment, I won’t be committing any Copilot-generated code.
- Setting up GitHub Copilot for the first time: Easy
- Using GitHub Copilot for writing documentation: Helpful, but not ground-breaking
- Testing GitHub Copilot with input validation: Good at following patterns
- Using GitHub Copilot for writing unit tests: Clever at following patterns
- Using GitHub Copilot for code cleanup: Helpful, but imperfect
- Final Verdict: Very helpful with regular human oversight, legal considerations
- Interested in contributing to the Pluralsight community?
Setting up GitHub Copilot for the first time: Easy
This was incredibly straightforward, since there’s a plugin for my IDE, and activating it through my GitHub account was a cinch. Also, it’s nice that I can tell it not to make any suggestions that match any public code, addressing one of my concerns about copyright and licensing.
Really, though, my favorite part is that there was no credit card needed. Thank you, GitHub, for not being a used car salesperson.
Using GitHub Copilot for writing documentation: Helpful, but not ground-breaking
Sure, documentation isn’t “coding”, but it’s one of those things that’s usually lacking in a codebase, since we developers rarely take the time to write it. After testing, I found GitHub Copilot gave me a useful head start on JavaDocs and helped with repetitive tasks and synonyms. However, while its accuracy was surprising, at times much of the help felt comparable to using a thesaurus.
Let’s dive deeper into how I tested it out. I started with a new Java feature I’m writing that needs documentation. I have a class called AnnotationSynthesizer that takes a Spring Security annotation and synthesizes it with its associated meta-annotations and placeholder values. Right now, it looks like this:
public interface AnnotationSynthesizer<A extends Annotation> {
A synthesize(AnnotatedElement element);
}
Let’s see what Copilot suggests for class-level JavaDoc:
/**
* A strategy for synthesizing an annotation from an {@link AnnotatedElement}.
*
* @param <A> the annotation type
*/
Okay, not bad. However, it does remind me a bit of a tweet about documentation that Mario Fusco posted a few years ago.
Still, I appreciate that it gives me at least a head start. Let’s see if it helps me as I try describing the idea of “synthesis” in more detail. I added this:
Synthesis generally refers to the process of taking an annotation’s meta-annotations
and placeholders, resolving them, and then
Writing “synthesis” twice in a row, though, is a bit repetitive. To fix that, I asked Copilot for a synonym, and it suggested “combining.” Nice! Now I have:
/**
* A strategy for synthesizing an annotation from an {@link AnnotatedElement}.
*
* <p>Synthesis generally refers to the process of taking an annotation’s meta-annotations
* and placeholders, resolving them, and then combining these elements into a facade of
* the raw annotation instance.</p>
*
* @param <A> the annotation type
*/
I had a moment of surprise as I started to write my next paragraph:
<p>Since the process of synthesizing
…and it suggested:
an annotation can be complex, this interface
Hey, not bad! It didn’t read my mind, but it did see where I was headed, since I was just about to say synthesis can be expensive. After a small adjustment, it continued to make some nice recommendations. In the end, GitHub Copilot helped me mostly with joining words and got me unstuck on one particular synonym.
One of the things I’m noticing is that since it is helping me with minutiae, I have more headspace to make things cleaner and nicer. I’m not forgetting to add the `<p>` tags, for example, which I usually do. The assistance is giving me space to provide the reader with more nuanced information that will hopefully help them use the class better.
I found that as I used it more with JavaDoc, it began to get smarter with its inferences. For example, I wrote:
* <p>If the given annotation can be applied to types, this class will search for annotations
* across the entire {@link MergedAnnotations.SearchStrategy type hierarchy};
… And then it wrote:
otherwise, it will only look for annotations directly attributed to the element.
Correct, Copilot! And just a little bit creepy.
Testing GitHub Copilot with input validation: Good at following patterns
Writing validation logic can be tedious and repetitive, which is exactly what AI tools are meant to help us with, right? GitHub Copilot was excellent at mimicking existing input validation already in the codebase, though I don’t know that I’d rely on it yet to create novel validation.
To test this feature with Copilot, I added the following method:
public void setTemplateDefaults(AnnotationTemplateExpressionDefaults templateDefaults) {
this.templateDefaults = templateDefaults;
}
To add validation, I typed Assert.notNull, and it offered this:
Assert.notNull(templateDefaults, "templateDefaults cannot be null");
Awesome! It suggested this because Spring Security regularly adds this validation to its setters; I doubt it would have offered it as a novel suggestion in a codebase without that pattern.
I’m excited to see how it does when I add unit tests for validating input as those are often also repetitive.
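For illustration, here’s a minimal, self-contained sketch of that setter-validation pattern and the repetitive test it implies. The class name is hypothetical, and plain Java stands in for Spring’s Assert.notNull and the AssertJ assertions the real test suite uses:

```java
// Hypothetical sketch of the guard-clause pattern Copilot mimics.
final class TemplateDefaultsHolder {

	private Object templateDefaults;

	void setTemplateDefaults(Object templateDefaults) {
		// equivalent of Assert.notNull(templateDefaults, "templateDefaults cannot be null"):
		// fail fast with a descriptive message rather than NPE later
		if (templateDefaults == null) {
			throw new IllegalArgumentException("templateDefaults cannot be null");
		}
		this.templateDefaults = templateDefaults;
	}
}
```

The matching test simply passes null and asserts on the exception message, which is exactly the kind of boilerplate an assistant should be able to autocomplete.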
Using GitHub Copilot for writing unit tests: Clever at following patterns
GitHub Copilot was both impressive and frustrating when it came to unit tests. It was excellent at recognizing and applying naming conventions, and inferring test conditions from the code. However, its inability to directly modify the class—forcing me to manually intervene—made the workflow less efficient than expected.
To test this, I had some pre-existing unit tests I’d jotted down. They didn’t follow any naming conventions and weren’t very maintainable. I started by renaming the first few from something like this:
@Test
void testOne() throws Exception {
Method method = AnnotationOnInterface.class.getDeclaredMethod("method");
// …
}
… to something like this:
@Test
void synthesizeWhenAnnotationOnInterfaceThenResolves() throws Exception {
Method method = AnnotationOnInterface.class.getDeclaredMethod("method");
// …
}
After this, I asked GitHub Copilot to rename the rest of the methods. All the tests in question followed a very similar pattern.
Unfortunately, it just printed everything out in the chat window instead of modifying the class. That is, it didn’t rename the methods inline, leaving me in a gross copy-paste scenario where I worried I’d accidentally stomp on a method it didn’t change. That said, it did a fine job detecting the pattern, which was:
@Test
void synthesizeWhenNameOfClassThenResolves() {
}
Not all the test cases are scenarios where synthesis should resolve, though. In those cases, the name should end in ThenException instead of ThenResolves. I tested Copilot to see whether it could detect this: instead of trying the copy-paste route, I removed a test method name to see if Copilot would autocomplete it and follow the pattern.
Happily, it did! On top of this, that one failure example was enough for it to recognize that the name should end in ThenException. Those tests were originally of the form:
@Test
void testNineteen() throws Exception {
Method method = ClassInheritingMultipleInheritance.class.getDeclaredMethod("method");
assertThatExceptionOfType(AnnotationConfigurationException.class)
.isThrownBy(() -> this.synthesizer.synthesize(method));
}
Copilot was able to derive the naming convention from the three tests I changed (two successful and one failure). Impressive! Lastly, there were a couple more use cases that I hadn’t covered yet, so I wrote the classes needed to simulate them:
private static class MultipleMethodInheritance implements AnnotationOnInterfaceMethod, AlsoAnnotationOnInterfaceMethod {
@Override
public String method() {
return "ok";
}
}
private interface InterfaceInheritingInterfaceAnnotation extends AnnotationOnInterface {}
private static class ClassInheritingGrandparentInterfaceAnnotation implements InterfaceInheritingInterfaceAnnotation {
@Override
public String method() {
return "ok";
}
}
Upon starting the next unit test, Copilot correctly suggested without prompting that the first class would be a failure situation. For the one after that, while it needed line-by-line prompting, it correctly ascertained not only that it would pass, but what the final test condition should be.
Honestly, Copilot impresses me with how much it can infer by looking at the structure of the document. I’m intrigued to see how much it can help people follow the prevailing writing style in an existing class.
Using GitHub Copilot for code cleanup: Helpful, but imperfect
When it comes to tidying up, GitHub Copilot was useful in suggesting improvements I hadn’t considered and correcting things I had forgotten. However, it also missed obvious improvements, removed best-practice code, and added some unnecessary clauses. Pre-existing unit tests, as well as a keen eye on your robot assistant, are required here to make sure it doesn’t slip up.
To test code cleanup, I put Copilot to work on one of the more complex classes in this feature, UniqueMergedAnnotationSynthesizer. I pulled up the chatbot again and asked it for suggestions. One example snippet that it said could be simplified was the following:
if (element instanceof Parameter) {
List<MergedAnnotation<A>> annotations = findDirectAnnotations(element);
return requireUnique(element, annotations);
}
else if (element instanceof Method method) {
return findDistinctAnnotation(method, targetClass);
}
throw new AnnotationConfigurationException("Unsupported element of type " + element.getClass());
It explained:
This could be simplified by breaking down the method into smaller, type-specific methods. This enhances readability and maintainability.
Then, it provided a suggested revision:
@Override
public MergedAnnotation<A> merge(AnnotatedElement element, Class<?> targetClass) {
if (element instanceof Parameter) {
return handleParameterElement(element);
} else if (element instanceof Method) {
return handleMethodElement((Method) element, targetClass);
} else {
throw new AnnotationConfigurationException("Unsupported element of type " + element.getClass());
}
}
private MergedAnnotation<A> handleParameterElement(AnnotatedElement element) {
List<MergedAnnotation<A>> annotations = findDirectAnnotations(element);
return requireUnique(element, annotations);
}
private MergedAnnotation<A> handleMethodElement(Method method, Class<?> targetClass) {
return findDistinctAnnotation(method, targetClass);
}
Not bad. I like the idea of breaking merge up into a smaller set of methods. I’m a little disappointed, though, that it didn’t use Java’s instanceof pattern matching (previewed in Java 14 and finalized in Java 16); it even removed one instance of it that I originally had. It also added an unnecessary else clause.
Still, it was enough to point out various improvements that I hadn’t considered. Taking inspiration from its suggestion, I changed the code to:
@Override
public MergedAnnotation<A> merge(AnnotatedElement element, Class<?> targetClass) {
if (element instanceof Parameter parameter) {
return handleParameterElement(parameter);
}
if (element instanceof Method method) {
return handleMethodElement(method, targetClass);
}
throw new AnnotationConfigurationException("Unsupported element of type " + element.getClass());
}
private MergedAnnotation<A> handleParameterElement(Parameter parameter) {
List<MergedAnnotation<A>> annotations = findDirectAnnotations(parameter);
return requireUnique(parameter, annotations);
}
private MergedAnnotation<A> handleMethodElement(Method method, Class<?> targetClass) {
return findDistinctAnnotation(method, targetClass);
}
GitHub Copilot’s next suggestion was to consolidate duplicate code, though without examples this time. I asked it for more detail, and it said that the exception handling in requireUnique and findDistinctAnnotation was duplicated. I’d forgotten to take care of that, so this was a great catch. Both methods have something that looks roughly like this:
return switch (annotations.size()) {
case 0 -> null;
case 1 -> annotations.get(0);
default -> throw new AnnotationConfigurationException("""
Please ensure there is one unique annotation of type %s attributed to %s. \
Found %d competing annotations: %s"""
.formatted(this.types, element, annotations.size(), annotations));
};
What I like about this is that the two snippets weren’t an exact match, but Copilot was still able to deduce that they were similar enough to consolidate. The similar snippet had a piece of logic that I’d forgotten to duplicate, so consolidating gave both code paths the same error handling and saved me from a future bug.
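As a sketch of what that consolidation might look like, both call sites could delegate to a single helper. The names here are stand-ins, and a generic IllegalStateException replaces Spring Security’s AnnotationConfigurationException to keep the example self-contained:

```java
import java.util.List;

// Hedged sketch of the consolidation, not the committed Spring Security code.
final class UniqueAnnotations {

	// one shared helper means both code paths get identical error handling
	static <T> T requireUnique(Object element, List<T> annotations) {
		return switch (annotations.size()) {
			case 0 -> null;
			case 1 -> annotations.get(0);
			default -> throw new IllegalStateException(
					"Please ensure there is one unique annotation attributed to %s. Found %d competing annotations: %s"
							.formatted(element, annotations.size(), annotations));
		};
	}
}
```

Now a fix to the error message, or to the zero-or-one handling, lands in one place instead of two.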
Next, I needed to replace a couple of magic numbers with named constants. The algorithm introduces the idea of “distance” to help decide whether two annotations are conflicting: method annotations are considered closer than class and interface annotations. So, I added the following code:
private static final int METHOD_SCORE = 11;
And then refactored the code to use METHOD_SCORE.
Then, Copilot suggested adding the following:
private static final int TYPE_SCORE = 13;
It was cool that Copilot could read the code, pick up the other magic number, and correctly ascertain what it was for.
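To make the idea concrete, here’s a purely hypothetical sketch of how such distance scores could be used; the real logic in UniqueMergedAnnotationSynthesizer is more involved than this:

```java
// Hypothetical illustration only: lower score means "closer" to the annotated element.
final class AnnotationDistance {

	private static final int METHOD_SCORE = 11;
	private static final int TYPE_SCORE = 13;

	static int score(boolean declaredOnMethod) {
		return declaredOnMethod ? METHOD_SCORE : TYPE_SCORE;
	}

	// annotations at the same distance compete; a closer one simply wins
	static boolean conflicts(int left, int right) {
		return left == right;
	}
}
```

Under this sketch, a method-level annotation always shadows a type-level one, and only two annotations at the same distance would trigger the “competing annotations” error shown earlier.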
Final Verdict: Very helpful with regular human oversight, legal considerations
In the end, my experiments with GitHub Copilot were interesting and eye-opening. The AI assistant helped me write Javadoc better and unit tests faster, and its code cleanup suggestions even helped me find bugs. On the other hand, some of its suggestions were outdated Java code. As long as I had unit tests in place before trying any of Copilot’s refactoring suggestions, I stayed in good shape.
While I enjoyed using GitHub Copilot for this experiment, I didn’t commit its generated code to Spring Security. This was due to questions about the legal status of the code produced by the AI assistant, and if it would affect Spring Security’s Apache 2.0 license. Depending on the project you’re working on, you may also want to investigate what the impact will be here.
Aside from those caveats, AI coding assistants like GitHub Copilot are a lot more useful and intuitive than I had imagined. If you’re a developer, I strongly recommend getting hands-on experience with these tools, and seeing what they can do to help you with your projects.
If you're looking for an expert-led course on how to use GitHub Copilot as a developer, GitHub’s Aaron Stewart has a video training course, “GitHub Copilot Fundamentals: AI Paired Programming.” It has a nearly five-star rating with over three thousand reviews and takes only two hours to watch. The course starts by covering Copilot fundamentals and finishes with a demo of building a simple game with the AI coding assistant. There’s also another article on this blog with a massive list of AI courses to choose from if you’re interested in expanding your knowledge beyond GitHub Copilot.
Interested in contributing to the Pluralsight community?
If you’ve got an article you want to write, we want to hear from you! Visit our community contribution page and register your interest.