
High Availability and .NET Framework



Introduction

One requirement that remains constant in mission-critical and industrial applications is that operational stability, safety, and security are the principal concerns. Mechanisms that increase the guarantee of stability are among the core architectural improvements made possible by new technologies, specifically in software. The advent of the Microsoft .NET Framework brought new and unique capabilities for creating high-availability systems.

.NET offers productivity tools, development aids, class libraries, and communication standards that play a crucial role in improving a Project Development Platform. However, this article focuses on two important areas that only .NET was able to manage appropriately. Those are:

  • Creating intrinsically safe software architecture

  • Managing multiple versions of Operating Systems, Software tools, and projects



Software Safety

Intrinsically Safe Software

In field instrumentation, safety is guaranteed not only by internal procedures or manufacturers' warranties but also, and primarily, by the system architecture: it uses voltages and currents that are "intrinsically safe" for the environment in which the instrumentation operates, so even if a specific piece of equipment fails, the system remains protected.

When we started using the expression "intrinsically safe software", we received many questions about what it means for software. It simply applies to software systems the same concept we have in hardware: even when a software component fails, the system architecture and design provide intrinsic protection for its safety and operation.

The previous generation of technology used C/C++, pointers, several modules sharing the same memory area, and direct access to hardware and operating system resources. These were necessary practices given the computers and languages available at the time; however, we consider them intrinsically unsafe.

New Generation vs. Previous Generation

The new generation of software uses computational environments, such as the .NET Framework, where processes are natively isolated from one another and from the operating system, regardless of the programmer. This allows better use of computers with multiple processor cores and ensures higher operational stability, even in the face of driver and hardware errors or failures in individual modules of the system; the same applies to user scripting in applications.
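As a minimal sketch of this kind of isolation on the .NET Framework (the domain name and messages are illustrative, not the platform's actual API), a host can run a potentially faulty module in its own AppDomain, contain a failure as an exception, and unload the domain without bringing down the host process:

```csharp
using System;

class IsolationSketch
{
    static void Main()
    {
        // Create an isolated application domain for a potentially faulty module.
        AppDomain sandbox = AppDomain.CreateDomain("ModuleSandbox");
        try
        {
            // Code executed via DoCallBack runs inside the sandbox domain.
            sandbox.DoCallBack(() =>
                Console.WriteLine("Running inside: " + AppDomain.CurrentDomain.FriendlyName));
        }
        catch (Exception ex)
        {
            // A failure in the module surfaces here; the host keeps running.
            Console.WriteLine("Module failed: " + ex.Message);
        }
        finally
        {
            // Unloading discards the module's state without affecting the host.
            AppDomain.Unload(sandbox);
        }
    }
}
```

This is the mechanism the paragraph above relies on: the boundary is enforced by the runtime, not by programmer discipline.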

Previous generations used proprietary scripts or interpreted languages, such as JavaScript, VBScript, VBA, or proprietary expression editors; the new generation relies on more modern, compiled languages such as C# and VB.NET, with exception handling, multi-threading, enhanced security, object orientation, and more execution control.

You cannot perform full code validation during the development phases with interpreted languages: the final validation happens only when the code executes, which means many problems surface only while the project is running, not during engineering configuration. A typical project may have hundreds to thousands of possible execution paths through its code, and testing scenarios cannot exhaustively cover all of them.

The ability to detect potential errors during engineering, and to recover from and isolate errors during runtime, are key elements for safety and operational stability; they become possible only by fully migrating legacy interpreted scripts to modern compiled and managed languages.
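To illustrate how errors are caught at engineering time rather than at runtime, here is a sketch using the .NET Framework's CodeDOM classes (the user script text is invented for the example): a configuration tool can compile a user script in memory and report any errors before the project ever runs.

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public static class ScriptValidator
{
    // Compiles a user script in memory and returns the compiler errors,
    // so problems surface during engineering, not during execution.
    public static CompilerErrorCollection Validate(string userScript)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            return provider.CompileAssemblyFromSource(options, userScript).Errors;
        }
    }

    public static void Main()
    {
        // A user script with a typo: 'Console.WriteLin' does not exist,
        // so the compiler rejects it before the project runs.
        string bad =
            "class UserScript { static void Run() { System.Console.WriteLin(\"hi\"); } }";
        foreach (CompilerError e in Validate(bad))
            Console.WriteLine("Line {0}: {1}", e.Line, e.ErrorText);
    }
}
```

An interpreted script with the same typo would pass configuration silently and fail only when that code path executed in production.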



Releases and Compatibility

High Availability

An important question regarding high availability is how to manage new releases and the system's life cycle.

In real life, most systems require updates, corrections, and enhancements after their initial deployment, and one of the most common factors that puts a system's availability at risk is precisely change management. Even if the application itself does not change, at some point it becomes necessary to install new computers, operating system upgrades, and software tool updates.

When working on the initial development of our platform, we spent many months studying how to manage this issue better and how virtualized programming environments such as .NET and Java could help achieve this goal. In the end, we discovered that when creating a system from the ground up, starting from a clean design, you can embed architectural components that help manage future releases.

System Availability

.NET also proved to be a better platform than other execution frameworks in this scenario, for several reasons: support for multiple programming languages, easier integration with Microsoft server and client components, richer and faster graphical tools, and a highly productive development platform, among others.

Leveraging the .NET platform's features, this article explores three techniques used to ensure maximum compatibility and system availability, even when new releases of .NET appear or the application uses classes or methods deprecated in those releases. Those are:

  • Application layer on top of Microsoft .NET Framework

  • Built-in Microsoft .NET compiler supporting multi-targeting

  • Leveraging side-by-side execution

Using consolidated technologies, such as an SQL database, to store your application programming and configuration also helps keep backward compatibility as the product evolves. However, we will focus on explaining the three items most connected with new .NET versions and potentially deprecated methods.



Application Layer on .NET Framework

Independent Application Layer

The first design requirement is to ensure that the Software Tool used to create the application is written entirely in .NET managed code. For example, we fully developed our version 2014.1 in C#, on .NET Framework 4.0. The first concept is that the software tool used to create the final application projects exposes most of its functionality not as direct low-level platform calls but as an "Independent Application Layer". Let us explore and understand this concept.

Except for the Scripts created by the user, which may include direct .NET calls (we will talk about user scripts in the next section), the remainder of the software tool's functionality, as much as possible, does not expose .NET directly to the engineering user, presenting a higher-level configuration tool instead.

Think, for instance, about displays, drawings, and animations in our platform. Instead of making application engineers go deep into .NET programming, the platform provides a higher-level interface where they can enable dynamic properties using dialogs, while the system deploys those dynamic features internally. Users do not have to interact with WPF, XAML, or .NET programming; they only use our high-level drawing and animation tools.

Thus, when an internal .NET class changes, the software can keep the same user settings but implement the feature internally using the new .NET classes. This is what allows us to create a display drawing that runs natively on both Microsoft .NET WPF and Apple iOS Cocoa.
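A minimal sketch of such an abstraction layer (all type names here are hypothetical, not the product's real API): the project configuration stores only a high-level setting, and the runtime maps it to whatever rendering class the current framework provides, so the stored setting survives framework changes.

```csharp
using System;

// Hypothetical high-level setting saved with the project configuration.
// The user only checks a box; no WPF/XAML types leak into the project.
public class DynamicEffectSetting
{
    public bool OuterGlowEnabled { get; set; }
}

// Hypothetical rendering abstraction: one interface, many platform back ends.
public interface IEffectRenderer
{
    void Apply(DynamicEffectSetting setting);
}

// The current implementation can later be swapped for one using newer
// .NET classes without touching the stored project configuration.
public class WpfEffectRenderer : IEffectRenderer
{
    public void Apply(DynamicEffectSetting setting)
    {
        if (setting.OuterGlowEnabled)
            Console.WriteLine("Applying outer glow via the current rendering classes");
    }
}

class Program
{
    static void Main()
    {
        var setting = new DynamicEffectSetting { OuterGlowEnabled = true };
        IEffectRenderer renderer = new WpfEffectRenderer(); // chosen by the platform
        renderer.Apply(setting);
    }
}
```

The design choice is that the user-facing setting and the rendering implementation are decoupled: replacing `WpfEffectRenderer` does not invalidate any saved project.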

Example

Let us use the example of the "Bevel Bitmap effect", which was deprecated in .NET Framework 4.0. Assume your application was using it.

According to this concept, instead of implementing the Bevel in user programming code, you would have a "Configuration checkbox to enable the Bevel dynamic" available in your "DYNAMICS" interface, very similar to what we have for the "SHINE - Outer Glow effect". The user would select that animation for the object, and the implementation would be handled internally by the software configuration tool.

When that method is removed from the .NET Framework, the package replaces the implementation with another method or library that mimics the same visual effect for the user as closely as possible, even though the internal implementation is totally different.

The same concept applies to other potential issues. Having an abstraction layer between your project development and the platform allows you to keep project compatibility, and makes it possible to handle the changes needed to keep the project configuration compatible with future .NET releases internally and transparently to the application engineers.



Multi-Targeting

Code Generation and Compiling Classes

Another great advantage of the .NET platform is its code generation and compiling classes, which allow you to embed a full .NET compiler within a Project Configuration tool. This gives the package complete control over how the scripts created by users are parsed and compiled, even allowing the code to be compiled for another version of the Microsoft .NET Framework.

By default, the system compiles the application scripts to the same .NET version the latest software tool uses, but runs them in their own AppDomain. Also, because we control the compilation, we can compile them, if necessary, to a previous .NET Framework version.

Example

Let us explore a scenario: imagine that you have a system created on Microsoft .NET version 4.0, and you have to migrate it to a new .NET release 4.X. For this discussion, let us assume the application's scripts used methods that were deprecated in the updated .NET Framework. What now?

There are two solutions for that:


  • Solution A — If the deprecated method was replaced by another one similar enough to allow automatic substitution, the Project Configuration tool may include an Upgrade Utility that locates those methods in the code and replaces them with the new one automatically.

  • Solution B — The Project Configuration tool can let users select the .NET TARGET of the project scripts. In the same way that Visual Studio can create DLLs for .NET 2.0, 3.5, 4.0, or 4.5 based on user selection, the embedded .NET compiler inside the Project Development tool can use similar features. Therefore, if necessary, we can allow your project to compile its scripts for two distinct .NET versions, even when using the latest software configuration tool, according to the user's selection of the TARGET. It is possible to have the same project running side-by-side with some script tasks under 4.0 and some under 4.X, as we will see in the next section.
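On the .NET Framework, the CodeDOM provider makes this kind of TARGET selection straightforward: the `CompilerVersion` provider option chooses which C# compiler builds the script assembly. The sketch below assumes the corresponding framework versions are installed on the machine; the script text is invented for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public static class MultiTargetSketch
{
    // "v3.5" or "v4.0" selects which C# compiler builds the user script.
    public static CompilerResults CompileFor(string compilerVersion, string source)
    {
        var providerOptions = new Dictionary<string, string>
        {
            { "CompilerVersion", compilerVersion }
        };
        using (var provider = new CSharpCodeProvider(providerOptions))
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            return provider.CompileAssemblyFromSource(options, source);
        }
    }

    public static void Main()
    {
        string script = "class Task { public static int Run() { return 42; } }";

        // The same script can be compiled for two distinct framework targets,
        // so the resulting assemblies can later run side by side.
        foreach (string target in new[] { "v3.5", "v4.0" })
        {
            CompilerResults r = CompileFor(target, script);
            Console.WriteLine("{0}: {1} error(s)", target, r.Errors.Count);
        }
    }
}
```

This is the same mechanism described in Solution B: the tool, not the user, decides which compiler produces each script task's assembly.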


Side-by-Side Execution 

Multiple Versions

Finally, .NET has a powerful feature that must be leveraged in a Project Development Platform design: it can run different versions of the same component simultaneously, or have multiple processes using different versions of the .NET Framework running concurrently.


.NET Framework

For more details, check the Side-by-Side Execution in the .NET Framework article by Microsoft.


For example, thanks to the core components included in our platform's project management and user-interface services, different versions of the product can run concurrently on the same computer. Furthermore, when running one project, the various modules (real-time database, scripts, and communication) do not run in the same Windows process; they are isolated in their own processes, exchanging data over a WCF (Windows Communication Foundation) connection.

Similarly, device communication creates one Windows process (one .NET AppDomain) per protocol channel, and the same applies to different user scripts in the project: one can be compiled for and run on .NET 4.0, another on a different release. Both processes run side-by-side, with no conflict, each in its own domain, both accessing the real-time tags on the server process.
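A minimal sketch of the kind of WCF contract that lets these isolated processes exchange real-time tag values (the contract name and its members are hypothetical, not the product's actual interface):

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract exposed by the server process that owns the
// real-time tag database; each isolated module process is a WCF client.
[ServiceContract]
public interface ITagService
{
    [OperationContract]
    double ReadTag(string tagName);

    [OperationContract]
    void WriteTag(string tagName, double value);
}

// Server-side implementation living in its own Windows process. A real
// service would look tags up in the in-memory database; this stub
// returns a default value to keep the sketch self-contained.
public class TagService : ITagService
{
    public double ReadTag(string tagName) { return 0.0; }
    public void WriteTag(string tagName, double value) { }
}
```

Because each module talks to the server only through this contract, a crash in one module process cannot corrupt the server's memory; the failure stays on the client side of the WCF boundary.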

32 and 64 bits

Another similar scenario is the ability to manage 32-bit and 64-bit execution within the same project configuration.

By default, when installed on a 64-bit operating system, a .NET application leverages the platform automatically and runs all its modules natively in true 64-bit mode.

However, sometimes you must force one specific script, a communication driver, or even the graphical user Displays to run in 32-bit mode.

For instance, you may be consuming an external code DLL or a third-party graphical component that only works in 32-bit mode. To accomplish that, we provide a RunModule32.exe version that allows one specific project component, such as Scripts, Displays, or Device Communication, to run a particular process in 32 bits while the rest of the application runs in 64 bits. Communication between those processes uses WCF, ensuring total isolation among the processes.
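A small sketch of how a host might decide at runtime whether to delegate a component to the 32-bit helper process (RunModule32.exe is the helper mentioned above; the command-line argument is illustrative, not the product's documented syntax):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class BitnessSketch
{
    static void Main()
    {
        // .NET reports both the OS platform and the current process bitness.
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);

        // If a component depends on a 32-bit-only DLL, hand it to the
        // 32-bit host process while the rest of the system stays 64-bit.
        bool needs32BitHost = true; // e.g., a third-party DLL is 32-bit only
        if (Environment.Is64BitProcess && needs32BitHost
            && File.Exists("RunModule32.exe"))
        {
            // Hypothetical command line: the module to host is an argument.
            Process.Start("RunModule32.exe", "--module Scripts");
        }
    }
}
```

The decision stays out of the application project itself: the same project configuration runs unmodified, and only the hosting process differs.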



Reliable Application

Creating a reliable application whose life cycle is manageable is of the utmost importance, and it is one of the top benefits of using .NET and building new Software Tools from the ground up instead of migrating previous code: a Project Configuration package designed from its very first version to leverage the currently available technologies.

Even though this new technology is available, many systems still rely on code ported from DOS or on low-level Windows APIs from previous versions, producing the Windows "DLL hell" effect, where any update, any installation of unrelated software, or any change in any part of the system could break it all.

Using the right features provided by the .NET Framework and SQL databases, and adding a well-designed "Independent Application Layer", you can create systems that maintain high compatibility with previous applications without giving up the ability to keep evolving the feature set and the technology.


