High Availability and .NET Framework



Introduction

One thing that remains unchanged in mission-critical and industrial applications is that operational stability, safety, and security are their principal requirements. The mechanisms that increase the guarantee of stability are among the core architectural advances made possible by new technologies, specifically in software. The advent of the Microsoft .NET Framework introduced new and unique capabilities for creating high-availability systems.

.NET features, including productivity tools, development aids, class libraries, and communication standards, play a crucial role in improving a Project Development Platform. However, this article focuses on two important areas that .NET manages particularly well:

  1. Creating Intrinsically Safe Software Architecture
  2. Managing Multiple Versions of Operating Systems, Software Tools, and Projects

Software Safety

Intrinsically Safe Software

In field instrumentation, safety is guaranteed not only by internal procedures or manufacturers' warranties but primarily by the system architecture, which uses voltages and currents that are "intrinsically safe" in the environment where the instrumentation will operate. This ensures that even if a specific piece of equipment fails, the system remains protected.

When we began using the term "intrinsically safe software," we received many questions about its meaning in the context of software. It refers to applying the same concepts we have in hardware to software systems. Specifically, even if a software component fails, the system architecture and design should have intrinsic protection to ensure safety and operational integrity.

The previous generation of technology relied on C/C++, pointers, and several modules sharing the same memory area, with direct access to hardware and operating system resources. These methods, while necessary at the time, are considered intrinsically unsafe by today's standards.

New Generation vs. Previous Generation

The new generation of software utilizes computational environments such as the .NET Framework, where processes are natively isolated from each other and from the operating system, regardless of how the programmer writes the code. This approach allows better utilization of computers with multiple processor cores and ensures higher operational stability, even in the face of driver and hardware errors or failures in individual system modules. The same applies to user scripting within applications.

Previous generations relied on proprietary scripts or interpreted languages, such as JavaScript, VBScript, VBA, or proprietary expression editors. The new generation leverages modern, compiled languages like C# and VB.NET, offering control over exceptions, multi-threading, enhanced security, object orientation, and better execution control.

Interpreted languages do not allow for full code validation during development phases. Validation occurs only when the code is executed, meaning many issues are discovered only during project execution, not during technical configuration. A typical project may have hundreds to thousands of potential execution paths for the code, and exhaustive testing of all these paths is not feasible.

The ability to detect potential errors during engineering and to recover and isolate errors during runtime are critical for safety and operational stability. This level of assurance is achievable only by migrating legacy interpreted scripts to newer compiled and managed languages.
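
To make this concrete, here is a minimal, hypothetical sketch (in C#, not the platform's actual code) of how a compiled, managed runtime can isolate a user script so that a failure is caught and reported instead of stopping the whole system:

    using System;
    using System.Threading.Tasks;

    // Hypothetical sketch: each user script runs inside its own task with an
    // exception boundary, so a runtime failure in one script is contained,
    // logged, and recoverable instead of crashing the entire application.
    public static class ScriptHost
    {
        public static Task RunIsolated(string scriptName, Action scriptBody)
        {
            return Task.Run(() =>
            {
                try
                {
                    scriptBody(); // compiled, type-checked code
                }
                catch (Exception ex)
                {
                    // Recover and isolate: report the failure, keep the system running.
                    Console.WriteLine("Script '{0}' failed: {1}", scriptName, ex.Message);
                }
            });
        }
    }

Because the script body is compiled code, syntax and type errors are already caught during engineering; the exception boundary above handles whatever remains at execution time.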



Releases and Compatibility

High Availability

An important consideration regarding high availability is how to manage new releases and system life-cycle management.

In practice, most systems require updates, corrections, and enhancements after their initial deployment. One of the most common factors that jeopardize a system's availability is change management. Even if the application itself remains unchanged, installing new computers, updating operating systems, and applying software tool updates can affect system stability.

During the initial development of our platform, we spent many months studying how to better manage this issue and how virtualized programming environments like .NET and Java could assist in achieving this goal. Ultimately, we discovered that starting from a clean design allows for embedding architectural components that facilitate future releases and updates, thus improving the management of system changes and maintaining high availability.

System Availability

As an execution platform, .NET has proven superior for several reasons, including its support for multiple programming languages, easier integration with Microsoft server and client components, richer and faster graphical tools, and a highly productive development environment, among others.

Leveraging the features of the .NET platform, this article will explore three techniques used to ensure maximum compatibility and system availability, even in the event of new releases of .NET or applications using deprecated classes or methods. These techniques are:

  • Application layer on top of Microsoft .NET Framework

  • Built-in Microsoft .NET compiler supporting multi-targeting

  • Leverage side-by-side execution 

Using consolidated technologies, such as SQL databases, to store application programming and configuration also helps maintain backward compatibility as the product evolves. However, this article will focus on explaining the three techniques most relevant to new .NET versions and potential deprecated methods.



Application Layer on .NET Framework

Independent Application Layer

The first design requirement is to ensure that the software tool used to create the application is entirely in .NET managed code. For instance, our version 2014.1 was fully developed in C# using .NET Framework 4.0. The concept involves using the software tool to create final application projects, exposing most functionality not as low-level platform calls but as an "Independent Application Layer." Let us explore and understand this concept.

Except for user scripts that may include direct .NET calls—discussed in the next section—the remainder of the software tool’s functionality is designed to avoid exposing .NET directly to the engineering user. Instead, it presents a higher-level configuration tool.

Consider displays, drawings, and animations in our platform. Instead of requiring application engineers to delve into .NET programming, the platform provides a higher-level interface for enabling dynamic properties through dialogues and internal deployment of those features. Users interact with high-level drawing and animation tools rather than dealing with WPF, XAML, or .NET programming.

Thus, when an internal .NET class is changed, the software can maintain the same user settings but internally implement the feature using the new .NET classes. This approach allows us to create display drawings that run natively on both Microsoft .NET WPF and Apple iOS Cocoa.

Example

Let us use the example of the "Bevel Bitmap effect," which was deprecated in version 4.0, to illustrate this concept. Assume your application was using this effect.

According to the concept, rather than implementing the Bevel effect directly in user programming code, you would have a "Configuration checkbox to enable Bevel dynamic" available in your "DYNAMICS" interface—similar to the "SHINE - Outer Glow effect." The user would select this animation for an object, and the implementation would be handled internally by the software configuration tool.

When the Bevel class is removed from the .NET Framework, the software package would replace the deprecated implementation with another method or library that mimics the same visual effect as closely as possible, even though the internal implementation differs.

This approach ensures that the user experience remains consistent, despite changes in the underlying technology. The same concept applies to other potential issues. By having an abstraction layer between your project development and the platform, you can maintain project compatibility and manage necessary changes internally. This makes it possible to keep the project configuration compatible with future .NET releases in a way that is transparent to application engineers.
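
The following is a minimal sketch of that abstraction layer; the names DynamicsConfig and IVisualEffectRenderer are illustrative, not the product's actual API:

    // Hypothetical sketch of an "Independent Application Layer": the project only
    // stores the high-level user choices; a renderer decides which .NET classes
    // implement them on the currently supported framework.
    public class DynamicsConfig
    {
        public bool EnableBevel { get; set; }       // the "Bevel dynamic" checkbox
        public bool EnableOuterGlow { get; set; }   // the "SHINE - Outer Glow" effect
    }

    public interface IVisualEffectRenderer
    {
        void Apply(DynamicsConfig config, object displayObject);
    }

    // Implementation used while the Bevel bitmap effect was still available.
    public class LegacyBitmapEffectRenderer : IVisualEffectRenderer
    {
        public void Apply(DynamicsConfig config, object displayObject)
        {
            if (config.EnableBevel)
            {
                // ...create the now-deprecated bevel effect here
            }
        }
    }

    // Implementation for newer frameworks: mimics the bevel with supported
    // classes (for example, a pixel-shader effect), without touching the
    // stored project settings.
    public class ShaderEffectRenderer : IVisualEffectRenderer
    {
        public void Apply(DynamicsConfig config, object displayObject)
        {
            if (config.EnableBevel)
            {
                // ...approximate the same visual result with a supported effect
            }
        }
    }

Because the project persists only the DynamicsConfig selections, swapping the renderer for a new .NET release does not require changing any application.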


Multi-Targeting

Code Generation and Compiling Classes

Another significant advantage of the .NET platform is its support for code generation and compiling classes, which allows embedding a full .NET compiler within a Project Configuration tool. This capability provides complete control over how the system parses and compiles user-created scripts, including the option to compile the code for different versions of the Microsoft .NET Framework.

By default, the system will compile application scripts to match the .NET version used by the latest software tool, running them within their own AppDomain. Additionally, this control allows the system to compile scripts to an earlier .NET Framework version if necessary.
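
As a simplified sketch of this idea, the standard CodeDOM API in the .NET Framework can compile a user script at configuration time and surface errors to the engineer before the project ever runs (the assembly references and error reporting shown here are illustrative):

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    // Sketch: compile a user script during engineering so syntax and type
    // errors surface before runtime.
    public static class ScriptCompiler
    {
        public static CompilerResults Compile(string scriptSource)
        {
            using (var provider = new CSharpCodeProvider())
            {
                var parameters = new CompilerParameters { GenerateInMemory = true };
                parameters.ReferencedAssemblies.Add("System.dll");

                CompilerResults results = provider.CompileAssemblyFromSource(parameters, scriptSource);

                foreach (CompilerError error in results.Errors)
                {
                    // Reported to the engineer during configuration, not at runtime.
                    Console.WriteLine("{0} at line {1}: {2}",
                        error.IsWarning ? "Warning" : "Error", error.Line, error.ErrorText);
                }
                return results;
            }
        }
    }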

Operating System Independence

With .NET 8, you can run the exact same code on Linux, macOS, and Windows computers. 

Example

Let us explore a case scenario: imagine that you have a system created in Microsoft .NET version 4.0, and you need to migrate the system to a new .NET release 4.X. For this discussion, let us assume the application you created had scripts using methods that were deprecated in the updated .NET Framework. What now?

There are two solutions for this:

  • Solution A — If the deprecated method has a replacement similar enough to be substituted automatically, the Project Configuration tool may include an Upgrade Utility that locates those methods in the code and replaces them with the new ones.

  • Solution B — The Project Configuration tool can allow users to select the .NET TARGET of the project scripts. Just as Visual Studio can create DLLs for .NET 2.0, 3.5, 4.0, or 4.5 based on user selection, the embedded .NET compiler within the Project Development tool can offer the same choice (see the sketch after this list). Therefore, if necessary, the project can compile its scripts for two distinct .NET versions, even when using the latest software configuration tool, according to the user's selection of the TARGET. It is possible to have the same project running side-by-side with some script tasks under 4.0 and others under 4.X, as we will see in the next section.
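
A sketch of Solution B using the same CodeDOM API: the project-level TARGET selection is passed to the embedded C# compiler through the "CompilerVersion" option (the version strings are the ones the classic provider accepts, such as "v3.5" or "v4.0"):

    using System.CodeDom.Compiler;
    using System.Collections.Generic;
    using Microsoft.CSharp;

    // Sketch: one script task can be compiled for .NET 3.5 while another
    // targets 4.0, side by side in the same project, depending on the
    // TARGET selected by the user.
    public static class TargetedScriptCompiler
    {
        public static CompilerResults CompileFor(string targetVersion, string scriptSource)
        {
            var options = new Dictionary<string, string>
            {
                { "CompilerVersion", targetVersion } // e.g. "v3.5" or "v4.0"
            };

            using (var provider = new CSharpCodeProvider(options))
            {
                var parameters = new CompilerParameters { GenerateInMemory = true };
                return provider.CompileAssemblyFromSource(parameters, scriptSource);
            }
        }
    }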


Side-by-Side Execution 

Multiple Versions

Finally, .NET has a powerful feature that should be leveraged in a Project Development Platform design: it can run different versions of the same component simultaneously, or have multiple processes using different versions of the .NET Framework running concurrently.


.NET Framework

For more details, see Microsoft's article "Side-by-Side Execution in the .NET Framework."


For example, the core components included in our platform's project management and user-interface services allow different versions of the product to run concurrently on the same computer. Furthermore, when running one project, the various modules, real-time database, scripts, and communication components do not run in the same Windows process; they are isolated in their own processes, exchanging data via a WCF (Windows Communication Foundation) connection.

Similarly, device communication creates a separate Windows Process (one .NET AppDomain) for each protocol, which can also apply to different user scripts in the project—one compiled and running for .NET 4.0, and another for other releases. Both processes run side-by-side, with no conflict, each in its own domain, and both accessing real-time tags on the server process.
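
As a hypothetical sketch of how such isolated processes can exchange real-time data over WCF (the ITagService contract and the pipe address are illustrative, not the product's actual interface):

    using System.ServiceModel;

    // Sketch: a minimal WCF contract that a driver or script process could use
    // to read and write tags on the server process, with each module running
    // in its own Windows process (and, if needed, its own .NET version).
    [ServiceContract]
    public interface ITagService
    {
        [OperationContract]
        double ReadTag(string tagName);

        [OperationContract]
        void WriteTag(string tagName, double value);
    }

    public static class TagServiceClient
    {
        public static ITagService Connect()
        {
            // Named pipes keep the exchange local to the machine; TCP bindings
            // could be used for remote modules. The address is illustrative.
            var factory = new ChannelFactory<ITagService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/TagServer"));

            return factory.CreateChannel();
        }
    }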

32-bit and 64-bit

Another similar scenario is the feature that allows us to manage 32-bit and 64-bit execution within the same project configuration.

By default, when installed on a 64-bit operating system, a .NET application will leverage the platform and run all its modules natively in 64-bit mode.

However, sometimes you may need to force a specific script, communication driver, or even the graphical user displays to run in 32-bit mode.

For instance, if you are using an external code DLL or a graphical component from a third party that only works in 32-bit mode, we provide a RunModule32.exe version to allow that specific project component—such as scripts, displays, or device communication—to run in a 32-bit process, while the rest of the application runs in 64-bit mode. The communication between these processes uses WCF, ensuring complete isolation among the processes.
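
A hypothetical sketch of how a launcher might decide between the 64-bit and 32-bit hosts (RunModule32.exe is the host named above; the 64-bit host name and the per-module flag are illustrative):

    using System;
    using System.Diagnostics;

    // Sketch: modules flagged as x86-only are started through the 32-bit host
    // executable, while everything else runs in the native 64-bit process.
    public static class ModuleLauncher
    {
        public static Process Start(string moduleName, bool requires32Bit)
        {
            string host = (requires32Bit && Environment.Is64BitOperatingSystem)
                ? "RunModule32.exe"   // forces a 32-bit process for this module
                : "RunModule.exe";    // illustrative name for the default 64-bit host

            return Process.Start(new ProcessStartInfo
            {
                FileName = host,
                Arguments = moduleName,
                UseShellExecute = false
            });
        }
    }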


Reliable Application

Creating a reliable application that is manageable throughout its lifecycle is crucial and one of the top benefits of using .NET to build new software tools from the ground up, rather than migrating previous code. Designing a Project Configuration package with the goal of leveraging currently available technologies is essential.

Despite the advancements in technology, many systems still rely on code ported from DOS or on low-level Windows APIs and older versions, which can lead to the DLL-hell effect in Windows, where updates, installations of unrelated software, or changes in any part of the system can break the entire application.

By utilizing the right features provided by the .NET Framework, SQL databases, and incorporating a well-designed "Independent Application Layer," you can create systems that maintain high compatibility with previous applications while continuing to evolve in terms of features and technology.

