Upgrading a Legacy Enterprise Application

Upgrading a more than 10-year-old, complex, and heavily integrated enterprise application is not only a technical task – it’s a strategic challenge.

This is how we upgraded a legacy enterprise application at our client, a major bank, from .NET Framework to .NET 8.

While the application was not particularly big, it was rather complex and heavily integrated, sending or receiving data from almost two dozen other applications.

We did not have the budget, time or authority to upgrade or even modify these external dependencies, so the goal was to preserve backward compatibility in every way possible.


Key Takeaways (If You Don’t Have the Time)

  • Technical upgrade ≠ simple version bump – when upgrading an app integrated with 20+ systems, even small changes can trigger a domino effect.
  • WCF survival tips – moving to REST, introducing CoreWCF, and applying a few creative workarounds.
  • Adopting modern best practices – dependency injection, async/await, and code-first database management.
  • Codebase cleanup – removing 15,000+ lines of unused code and fixing long-standing bugs.
  • Small details, big impact – like how a change in double formatting broke our workflow and why we had to “re-bug” it to keep things working.

The Process of Upgrading a Legacy Enterprise Application

Before we started, we had to decide on the scope of the project. The goal was a framework upgrade, with no changes in business logic. The application consisted of multiple modules:

  • a Windows service to manage scheduled tasks,
  • WCF (Windows Communication Foundation) applications to communicate with other applications and between different internal network domains,
  • and two ASP.NET MVC websites for internal and external users.

We weighed the option of migrating our frontend to a modern JavaScript framework but decided against it: time was tight, and rewriting the whole frontend offered no tangible gains. For similar reasons, we decided to keep the general architecture the same.

While a simpler .NET Framework application can be upgraded in a few clicks, in our case this was not really possible. The method we settled on was a gradual, project-by-project upgrade plan. We created a new solution and upgraded every project along the project dependency tree. For every project we created its equivalent in the new solution, copied the files, resolved third-party dependencies and refactored as needed.

This was a slow and painful process but allowed greater control over how to resolve third-party dependencies, how to change interfaces, and what to refactor.

  • We started with the cross-cutting layer, then the data access layer. The application was previously database-first, so we also made the leap to code-first for easier database schema management. We scaffolded the database and created the necessary migrations.
  • After the data access layer, we migrated the business logic layer, which was relatively painless.
  • With the core of the application done, we could finally work in parallel on the top-level modules: the Windows Service, the WCF services, and the websites.
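The database-first to code-first switch in the first step can be sketched with the EF Core CLI (the connection string, context name, and folders below are placeholders, not our actual setup):

```shell
# One-time scaffold of the existing schema into entity classes and a DbContext.
dotnet ef dbcontext scaffold "Server=.;Database=LegacyDb;Trusted_Connection=True;" \
    Microsoft.EntityFrameworkCore.SqlServer \
    --context LegacyDbContext --output-dir Entities

# Create a baseline migration so all future schema changes are code-driven.
dotnet ef migrations add InitialBaseline
dotnet ef database update
```

From this baseline onward, schema changes live in versioned migration files alongside the code instead of in hand-maintained SQL scripts.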

While upgrading an ASP.NET MVC application to ASP.NET Core MVC is relatively straightforward, the authentication and authorization layer was a custom implementation scattered across different controllers, interceptors, and the Global.asax file. We had to fully rewrite it using middleware.
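The shape of that rewrite is the standard ASP.NET Core middleware pattern; this is a minimal sketch (the class name and the single authentication check are illustrative, the real logic consolidated many scattered checks):

```csharp
// Sketch: the checks that used to live in Global.asax and controller
// interceptors are consolidated into one place in the request pipeline.
public class LegacyAuthorizationMiddleware
{
    private readonly RequestDelegate _next;

    public LegacyAuthorizationMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Reject unauthenticated callers before any controller runs.
        if (context.User.Identity is not { IsAuthenticated: true })
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }
        await _next(context);
    }
}

// Registered once in Program.cs:
// app.UseMiddleware<LegacyAuthorizationMiddleware>();
```

The advantage over the old approach is that ordering is explicit: the middleware runs at a single, known point in the pipeline for every request.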

WCF: A Tool with Several Challenges

Without a doubt, the greatest challenge of upgrading our application was WCF. Microsoft is seemingly trying to move away from WCF, so it is not fully supported in .NET 8. But WCF is still widely used in enterprise applications, and our application is no exception. It was used for querying data from other applications, publishing an interface for other applications to use, and even for communication between modules of our applications. For each use case we had to use a different solution.

  • For communication between modules, we had no external dependencies, so we could freely switch to REST API calls. Some custom logic had been implemented using WCF behaviors; we migrated it to HTTP headers and middleware.
  • For querying data from other applications, we had no choice but to keep using WCF. We regenerated the WCF clients in .NET 8, and they worked almost flawlessly. We hit one major issue: in certain combinations of binding configuration and IIS settings, the WCF client failed to identify the correct provider for Windows authentication, so we had to use a custom HTTP message handler to force the correct provider. To allow customization of the WCF client bindings, we recreated the configuration options to mirror the classic XML configuration.
  • For the interfaces published by our application, we again had no choice but to keep using WCF, so other applications could still consume these interfaces without interruption. However, unlike WCF clients, .NET 8 does not offer a built-in solution for publishing WCF services. Thankfully there is a .NET Core port of WCF, maintained by the community: CoreWCF. While it doesn’t have full functionality, it was more than enough for our purposes. We ran into a minor issue, where it could not parse the impersonation level sent by the caller, so we had to add a behavior in the configuration of the callers to set the impersonation level to none, but this could be done without code modification in the callers.
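Hosting the published interfaces on CoreWCF looks roughly like this (the service and contract names are placeholders; the real endpoint used bindings matching the legacy XML configuration):

```csharp
using CoreWCF;
using CoreWCF.Configuration;

var builder = WebApplication.CreateBuilder(args);

// CoreWCF replaces the old IIS/ServiceHost-based WCF hosting model.
builder.Services.AddServiceModelServices();

var app = builder.Build();

app.UseServiceModel(serviceBuilder =>
{
    // LegacyService / ILegacyContract stand in for the real published interface.
    serviceBuilder.AddService<LegacyService>();
    serviceBuilder.AddServiceEndpoint<LegacyService, ILegacyContract>(
        new BasicHttpBinding(), "/LegacyService.svc");
});

app.Run();
```

Keeping the endpoint address and binding identical to the old service is what lets existing callers keep working without changes on their side.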

Other Challenges

While migrating WCF was the single greatest challenge (we had to solve those problems before the project even started, as any one of them could have blocked it entirely), we also ran into many other minor issues and interesting challenges along the way.

As the codebase was more than a decade old, there was no dependency injection or async/await, but both are now considered best practice and have framework and language support in .NET 8. We refactored every database and external service call to async and refactored the internal services to use dependency injection, but this was not without difficulty. For dependency injection to work, we had to heavily refactor some services to resolve circular dependencies. We also encountered an obscure SqlClient error that slowed down async queries returning byte arrays, so we had to switch back to sync methods for those queries.
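The DI and async changes followed the standard .NET 8 patterns; a stripped-down sketch (the service, context, and entity types are illustrative):

```csharp
// Registration in Program.cs: constructor injection replaces the old
// hand-rolled service instantiation.
// builder.Services.AddScoped<IReportService, ReportService>();

public class ReportService : IReportService
{
    private readonly LegacyDbContext _db;

    // Dependencies arrive through the constructor instead of being
    // constructed inline, which is what exposed the circular dependencies.
    public ReportService(LegacyDbContext db) => _db = db;

    // Database calls became async end-to-end.
    public async Task<List<Report>> GetOpenReportsAsync(CancellationToken ct)
        => await _db.Reports.Where(r => r.IsOpen).ToListAsync(ct);
}
```

Making a call chain async is all-or-nothing: every caller up the stack has to become async too, which is why this refactor touched far more code than the line count of the actual queries would suggest.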

We also initially made the mistake of enabling nullable reference types, which by default also enabled null validation on REST API requests. This meant that when a property in a request object was null in one specific scenario, we got a validation error. The application was not originally built with nullable reference types in mind, and it was not feasible to decide which of the hundreds of parameters across our hundreds of calls could or could not be null in every possible scenario, so we decided to revert to nullable warnings until we can properly migrate to fully enabled nullable reference types.
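The revert itself is a one-line project-file change (this fragment is illustrative of the setting, not copied from our csproj):

```xml
<!-- Warnings-only nullable context: the compiler reports nullability
     issues, but non-nullable reference types are not treated as
     required in API model validation. -->
<PropertyGroup>
  <Nullable>warnings</Nullable>
</PropertyGroup>
```

This keeps the warnings visible as a to-do list for the eventual full migration without breaking request validation in the meantime.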

Before the upgrade, the application also used BinaryFormatter to serialize objects into byte arrays, but BinaryFormatter is fundamentally unsafe, and its use is heavily discouraged. In .NET 8 it could still be made to work with a feature switch, but it was safer to remove it completely from the application. We used JSON serialization instead, which has the added benefit of being readable as strings. The only problem was that the serialized byte arrays were sometimes stored in database columns and were read again – which meant that we had to migrate those with the new serialization method.
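Our replacement approach can be sketched with System.Text.Json; serializing to UTF-8 bytes keeps the existing byte[] database columns working (the payload type and class name are illustrative):

```csharp
using System.Text.Json;

// BinaryFormatter replacement: JSON serialized to UTF-8 bytes, so the
// byte[] columns in the database can keep their type unchanged.
public static class PayloadSerializer
{
    public static byte[] Serialize<T>(T value)
        => JsonSerializer.SerializeToUtf8Bytes(value);

    public static T? Deserialize<T>(byte[] data)
        => JsonSerializer.Deserialize<T>(data);
}
```

Unlike BinaryFormatter blobs, these payloads are plain UTF-8 and can be inspected as strings directly in the database, which made debugging the data migration much easier.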

One of the most annoying problems we faced was due to a bugfix in the framework. In .NET Framework, calling ToString(“F99”) on a double did not work properly: it would emit the first 15 significant digits and, regardless of the value, simply fill the remaining digits with zeros. And if the original precision was less than 15 digits, everything after it was filled with zeros as well.

.NET Framework: 1234.043 -> ToString(“F99”) -> “1234.0430000000000000000000000…”

.NET 8: 1234.043 -> ToString(“F99”) -> “1234.0438239482794242283948234…”

We read these values from an Excel file, which stores numbers with exactly 15 significant digits of precision, so this never really caused any problems, and the component we passed the values to couldn’t process more digits anyway.

However, in .NET Core this issue has been resolved, and the formatting now works as expected: all 99 digits are filled with seemingly random digits, the actual expansion of the double’s binary representation. And even with ToString(“F15”), the digits originally filled with zeros are now these random-looking digits. This caused all kinds of issues in the component we passed the values to, and in the end we had no choice but to approximate the original behavior.
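An approximation of the old behavior can be sketched like this: round the value to 15 significant digits first, then pad the fraction with zeros (a simplified sketch that ignores scientific-notation and integer-only edge cases):

```csharp
using System.Globalization;

static string LegacyF99(double value)
{
    // "G15" keeps at most 15 significant digits, mimicking the truncation
    // the old .NET Framework formatter effectively performed.
    string g15 = value.ToString("G15", CultureInfo.InvariantCulture);
    int dot = g15.IndexOf('.');
    string integerPart = dot < 0 ? g15 : g15[..dot];
    string fraction = dot < 0 ? "" : g15[(dot + 1)..];
    // Re-introduce the old zero padding out to 99 fractional digits.
    return integerPart + "." + fraction.PadRight(99, '0');
}
```

In effect we had to deliberately “re-bug” the formatting so that downstream components would keep receiving the byte-identical strings they had always been given.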

Summary: What We Learned

While we faced numerous challenges during the upgrade process and it was difficult at times, it wasn’t for nothing.

During development we learned a lot about the differences between .NET Framework and .NET Core applications, the finer details of how they behave, and some obscure corners of the framework. Microsoft did a lot to clean up and streamline how applications are built, started up, and configured. While .NET 8 is still not at full feature parity with .NET Framework, it covers a lot of use cases. And there are some great new features, like built-in dependency injection and how easy it is to create Web APIs.

We also learned a lot about the codebase, and refactored and cleaned up many inconsistencies and inefficiencies (we got rid of some 15,000 lines of code). We fixed a substantial number of older bugs we found during development and the subsequent full regression test. The result was an easier-to-maintain, more modern codebase, which I believe was worth the effort.

Do you think there’s a similar, complex legacy system at your organization that needs an upgrade? Why not discuss this over a cup of coffee?