.NET Core: The Good, the Bad, and the Ugly
This past summer, we converted one of our production webservices from WebApi2 to .NET Core. It was a fairly involved process and ended up taking us a few months. Since it was the first time we had created a production dependency on .NET Core, we moved very deliberately, taking extra time to make sure that we had lots of testing and monitoring in place. As a result of that learning, when we transitioned a second webservice to .NET Core in January, it only took us around a week. Since the difference between those two timelines is significant, especially when trying to sell the change to management, we decided to write this post to help preserve and share what we learned.
In this first post, I’ll summarize our findings about the platform itself - where it shines, where it falls down, and the little quirks that give it “character”. We’ll be following up with posts that address some of the technical hurdles we went through in converting our application, as well as some insights we gained when rolling our first application to production.
The Good
.NET Core had an uphill battle, having to contend with quick-moving, adaptable competitors like Node.js on the one side, and 15 years of stability and features in ASP.NET on the other. It made a bunch of promises (cross-platform! lightweight! blazing fast!) to help convince developers to come over. For the most part, it has delivered on those promises for us. Let’s talk about some of the biggest improvements that we’ve seen with .NET Core as we’ve deployed and run several websites.
Cross Platform
First, I want to make it absolutely clear that I believe the decision to run a website on Windows and IIS is fine. Yes, IIS is probably heavier weight than you really need, and it certainly has a bunch of features you don’t want. It’s also true that, with the license included, running Windows costs marginally more than running an equivalent Linux server, assuming you don’t need technical support. On the other hand, that difference is almost certainly smaller than the cost of developing your software, and Microsoft has an excellent track record of supporting products for years.
At Pluralsight, though, we support a wide variety of different technology solutions, allowing different teams to make decisions independently. Given the languages and frameworks we support, our most common operating system in production is Linux, and the administration of Windows is different enough that we basically have to build two different sets of tools. We’re generally willing to pay that cost, but it’s exciting when we can get the benefit of allowing developers to use tools they are efficient with, while also standardizing the operational side of things. Running .NET on Linux means that we can use our standard tools to manage instances, and still take advantage of a modern, statically-typed language with strong library support.
This has been one of .NET Core’s strongest points in our experience. Although we have run into a couple of things that are platform specific, basically all of our code has been completely cross-platform compatible. While we were in the process of migrating, we even compiled on Windows and deployed the binaries to Linux (not that we recommend this, but it did work!).
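The few platform-specific things we did hit were in the predictable places. As a generic illustration (not one of our actual cases), here’s a minimal sketch of how the base class library lets you stay portable:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        // Path.Combine chooses the correct separator for the current OS,
        // so the same binary behaves sensibly on Windows and Linux.
        var logPath = Path.Combine("logs", "myapp.log");

        // RuntimeInformation lets you guard the rare platform-specific branch.
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            Console.WriteLine($"Windows host, logging to {logPath}");
        else
            Console.WriteLine($"Non-Windows host, logging to {logPath}");
    }
}
```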
Testable
Remember those great times trying to unit test things in ASP.NET, only to find that you’ve got a dependency on a concrete implementation that’s also sealed? Maybe, like me, you broke down and decided to try using the actual object, only to find that there’s not even a public constructor? These challenges and many more are a thing of the past thanks to the design decisions of .NET Core!
There’s also much simpler integration with third-party unit testing libraries with dotnet. Both NUnit 3 and xUnit are supported “out of the box” (templates to create projects are included with the basic installation of the dotnet sdk). Even better, since .NET Core 2.1, dotnet watch is also included with the default installation, allowing you to set your tests to re-run automatically every time you make a change. If you haven’t tried this before, you’ll be amazed at how much smoother this makes the red-green-refactor cycle.
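For the uninitiated, here’s roughly what that looks like (a minimal sketch using xUnit; the Slug class is an invented stand-in for your own code) - run it once with dotnet test, or continuously with dotnet watch test:

```csharp
using Xunit;

// The code under test lives in the same file to keep the sketch self-contained.
public static class Slug
{
    public static string From(string title) =>
        title.Trim().ToLowerInvariant().Replace(' ', '-');
}

public class SlugTests
{
    [Fact]
    public void From_LowercasesAndHyphenates()
    {
        // `dotnet watch test` re-runs this assertion on every save.
        Assert.Equal("hello-net-core", Slug.From("Hello NET Core"));
    }
}
```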
Composable
One of the major goals for the .NET Core team was making it possible for you to decide what you need, and only include that in your code. They also understood the desire to replace the components they provide with alternative implementations that offer different capabilities. Interfaces everywhere and a customizable request pipeline mean that you can include only what you need, building a fast and efficient processing pipeline.
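To give a feel for what that looks like in practice, here’s a minimal sketch of a .NET Core 2.x-style application whose pipeline contains only the middleware it explicitly adds (the details are illustrative, not our production setup):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .Configure(app =>
            {
                // Only the middleware you register runs - there is no
                // hidden, mandatory pipeline underneath.
                app.UseStatusCodePages();
                app.Run(async context =>
                    await context.Response.WriteAsync("Hello from a minimal pipeline"));
            })
            .Build()
            .Run();
}
```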
Clears out Baggage
15 years of improvements mean that the code that goes into a modern ASP.NET MVC application is significantly different from the code that went into a classic ASP.NET app. In some cases, the language improvements have been shoehorned into the framework (async void methods are a great example). In others, like many of the event handlers, we’re still stuck with the way that C# 1.0 worked. By throwing out some of the unavoidable false starts that have occurred over the last decade and a half, .NET Core allows for building modern applications with the full toolset of C# 7.
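For a taste of the difference, here’s a hypothetical controller written the way .NET Core encourages - async from top to bottom, with expression-bodied members and C# 7 pattern matching (the Course types are invented for the example):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Invented stand-ins for your own domain code.
public class Course
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public interface ICourseRepository
{
    Task<Course> FindAsync(string id);
}

[Route("api/[controller]")]
public class CoursesController : Controller
{
    private readonly ICourseRepository _courses;

    public CoursesController(ICourseRepository courses) => _courses = courses;

    // Async end to end: no Task.Result, no async void, no ceremony.
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id) =>
        await _courses.FindAsync(id) is Course course
            ? Ok(course)
            : (IActionResult)NotFound();
}
```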
The Bad
Release Schedule
Quick release cycles are very much in vogue at the moment. Continuous delivery is a goal that many, including the team at Pluralsight, aspire to achieve. By releasing early and often, we can deliver higher quality software than we would otherwise, while also reducing the time it takes to solve customer problems. When this same strategy is applied to frameworks, however, a number of problems can emerge.
One of the things that we tend to undervalue as software developers is working code. In every codebase of even moderate size, there is code that gets ignored. Code that sits for years, never being touched, quietly doing its job. Too often, we deride that code as “legacy”, without recognizing that it is some of the most productive code we have, measured by return on investment. Code that does what it needs to without any interference on the part of people who “know better”.
When frameworks go through major versions every few months, though, this kind of productive code becomes challenging to support. The complexity involved in frameworks makes vulnerabilities inevitable, so staying on a version that receives security updates is vital for any internet-facing application. While the .NET Core LTS promise of at least 3 years of support is mostly acceptable, it feels like a step back from .NET Framework support, which generally ran 5+ years. The current branch, with its promise of only 3 months of support, fails to meet even the minimum timeline I would require to recommend it for production code. While I’d love to see more frameworks adopt longer support windows, it seems like the best we can do is remember that the Framework is not your friend, and keep it behind architectural boundaries.
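What might those boundaries look like? Here’s one hedged sketch (the names are invented for illustration): the business rules live in plain C# with no framework types in their signatures, and the framework is confined to thin adapters that are cheap to rewrite when a support window lapses.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Business logic: plain C#, no framework types in sight. When the
// framework churns, this code doesn't change.
public class TrialService
{
    public bool IsActive(DateTime startedUtc, DateTime nowUtc) =>
        (nowUtc - startedUtc).TotalDays < 30;
}

// Adapter: a thin controller that translates HTTP to and from the
// business logic. Framework churn is contained to shells like this.
[Route("api/trial")]
public class TrialController : Controller
{
    private readonly TrialService _trial = new TrialService();

    [HttpGet("{startedUtc}")]
    public IActionResult IsActive(DateTime startedUtc) =>
        Ok(_trial.IsActive(startedUtc, DateTime.UtcNow));
}
```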
Loss of Tooling
One of the great advantages of using a mature language and framework is the rich set of tools available to assist you in developing and monitoring applications. With .NET Core, while we haven’t ended up with a tool set that’s only 3 years old, we have lost a bunch of conveniences that are available for .NET Framework. For example, we use New Relic to monitor our production applications and servers. Fortunately, New Relic has support for .NET Core, even while running on Linux. Unfortunately, its .NET Core tooling no longer reports the memory or CPU consumed by the process.
That story, of tooling that still mostly works, but not quite as well, has been repeated several times through our .NET Core experience. The good news is that things are getting better, even if there are occasional missteps. It will just take time for the ecosystem to grow as robust and diverse as it is currently with .NET Framework.
The Ugly
The Packages
Composability is a major benefit of .NET Core. Unfortunately, there’s a dark side attached to this benefit - in an attempt to let you choose which things you include, the packages have gotten a lot smaller. How much smaller? In the Microsoft.AspNetCore.App metapackage (a tool designed to save you from the complexity brought on by making the packages so much smaller), there are 144 dependencies. That’s not counting any transitive dependencies that might exist (literally: I don’t have time to audit the transitive dependencies of 144 NuGet packages to see if they’re all accounted for).
This kind of fragmentation introduces two major problems. First, it becomes very difficult to understand what you actually depend on. Very few people (if anyone at all) are going to use all 144 packages included in the Microsoft.AspNetCore.App metapackage. Your code will only depend on a subset (how many of you are actually going to use Microsoft.AspNetCore.Authentication.Twitter, after all?). But finding out what you actually need from that giant list is next to impossible, which means you have to watch for security vulnerabilities and performance problems that come up in any of that code.
Second, packages that small present a real, significant discoverability problem. If you can’t find the code that would solve your problem, it doesn’t matter how good that code might be.
The .NET Core team has done a couple of things to try and solve the discoverability problem. Creating the Microsoft.AspNetCore.App package was part of it, dramatically reducing the number of packages you actually need to know about. The other thing they’ve done is to include a wealth of extension methods, declared in the namespace of the type they extend. While this makes the methods easy to find once you’ve already referenced the package, discovering where IApplicationBuilder.UseDeveloperExceptionPage is defined requires fortitude, determination, and a good amount of luck.
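To make that concrete with a minimal sketch: UseDeveloperExceptionPage ships in the Microsoft.AspNetCore.Diagnostics package, but it’s declared in the Microsoft.AspNetCore.Builder namespace, so once the package is referenced it appears right next to IApplicationBuilder - convenient to call, and nearly impossible to trace back to its package.

```csharp
// No "using Microsoft.AspNetCore.Diagnostics;" required - the extension
// method below is declared in the Microsoft.AspNetCore.Builder namespace,
// alongside IApplicationBuilder itself.
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Easy to call once referenced; hard to guess which package defines it.
        app.UseDeveloperExceptionPage();
    }
}
```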
“Upgrade” Path
Throwing out the baggage of 15 years of development has meant a leaner, cleaner interface for .NET Core. The downside of that decision is that switching from ASP.NET to .NET Core effectively requires you to re-write your applications. Essentially every component that interacts with the framework has changed, and needs to be replaced.
Some might suggest that .NET Core isn’t an upgrade path from ASP.NET. It’s true that Microsoft is currently supporting both ASP.NET and .NET Core, and is continuing to release new versions of both. On the other hand, it seems unlikely that the current situation will continue indefinitely. Especially with the announcement that .NET Core 3.0 will provide support for desktop applications, the only reasonable assumption is that all .NET Framework code will need to migrate to .NET Core at some point.
Fortunately, as of .NET Core 2.x, most of the APIs you are likely to use are supported. This means that code that isn’t directly tied to ASP (or whatever framework you use) is likely to work without a major re-write. There are enough changes that you still need to be careful, but not much more so than with any other major upgrade. In a future post, I’ll discuss how we architected our code to isolate framework changes from business logic (and why that’s a good idea, even if you’re not moving to .NET Core).
Conclusion
On the whole, we’ve been quite happy with the move to .NET Core. The promise of seamless cross-platform C# has largely been fulfilled, and the performance and tooling have been sufficient for our needs. It’s important to remember that all decisions come with consequences, and not all of them are benefits. Hopefully this look at the good and bad of .NET Core can help you understand if now is the right time to migrate for your organization.
This is part one of an ongoing series about our transition to .NET Core. More information can be found in part 2, and part 3.