
High Availability and .NET Framework



Introduction

One thing that remains unchanged in mission-critical and industrial applications is that operational stability, safety, and security are their main requirements. The mechanisms that increase the guarantee of stability are among the main architectural advances made possible by new technologies, particularly in software. The advent of the Microsoft .NET Framework brought new and unique capabilities for creating high-availability systems.

Many .NET features related to productivity tools, development aids, class libraries, and communication standards also play an important role in improving a Project Development Platform, but this article focuses on two very important areas that only became properly manageable with .NET. Those are:

  • Creating intrinsically safe software architecture
  • Managing multiple versions of Operating Systems, Software tools and projects



Intrinsically Safe Software

In field instrumentation, safety is not guaranteed only by internal procedures or manufacturers' warranties, but also, and primarily, by the system architecture: voltages and currents are kept "intrinsically safe" for the environment in which the instrumentation will operate, so that even if a specific piece of equipment fails, the system remains protected.

When we started using the expression "intrinsically safe software", we got many questions about what it means for software. It is simply the application of the same concept we have in hardware to software systems: even if a software component fails, the system architecture and design provide intrinsic protection for its safety and operation.

The previous generation of technology used C/C++, pointers, several modules sharing the same memory area, and direct access to hardware and operating system resources; these were necessary practices given the computers and languages available at the time, but we consider them intrinsically unsafe.

The new generation of software uses computational environments, such as the .NET Framework, where processes are natively isolated from one another and from the operating system, regardless of the programmer. This allows better use of computers with multiple processor cores and ensures greater operational stability, even in the face of potential driver and hardware errors or failures in individual modules of the system.
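As a simple illustration of this isolation principle (not the internal implementation of any specific product), a supervisor can host each module in its own operating system process and restart it if it fails, so a crash in one module does not bring down the rest. The executable name below is hypothetical:

    using System;
    using System.Diagnostics;

    class ModuleSupervisor
    {
        // Starts a module in its own process so that a crash in the module
        // cannot corrupt the memory of the supervisor or of other modules.
        static void StartIsolated(string exePath)
        {
            var process = new Process();
            process.StartInfo.FileName = exePath;
            process.EnableRaisingEvents = true;

            // If the module dies, log the failure and start a fresh instance;
            // the rest of the system keeps running untouched.
            process.Exited += (sender, e) =>
            {
                Console.WriteLine("Module {0} exited with code {1}; restarting.",
                                  exePath, process.ExitCode);
                StartIsolated(exePath);
            };

            process.Start();
        }

        static void Main()
        {
            // "DriverModule.exe" is a hypothetical communication-driver host.
            StartIsolated("DriverModule.exe");
            Console.ReadLine(); // keep the supervisor alive
        }
    }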

The same applies to user scripting in applications: previous generations used proprietary scripts or interpreted languages, such as JavaScript, VBScript, VBA, or proprietary expression editors; the new generation relies on modern, compiled languages such as C# and VB.NET, with exception handling, multi-threading, enhanced security, object orientation, and greater execution control.

With interpreted languages, you cannot perform complete code validation during development; the final verification happens only when execution reaches that code, which means many problems can only be found by running the project, not during the engineering configuration. A typical project may have hundreds to thousands of possible execution paths through the code, and testing cannot exhaustively exercise all of them. The ability to detect potential errors during engineering, and to recover from and isolate errors at runtime, are key elements for safety and operational stability, and they are only possible by fully migrating the legacy interpreted scripts to the new compiled and managed languages.
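A minimal sketch of the kind of engineering-time validation a compiled script language makes possible; it uses the standard System.CodeDom.Compiler classes, and the script text is only an illustrative example:

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    class ScriptValidator
    {
        static void Main()
        {
            // A user script with a typo: "Console.WritLine" does not exist.
            string userScript =
                "public class UserTask { public void Run() { System.Console.WritLine(1); } }";

            using (var provider = new CSharpCodeProvider())
            {
                var options = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results =
                    provider.CompileAssemblyFromSource(options, userScript);

                // With a compiled language the error is reported while the project
                // is being engineered, not when the faulty path runs in production.
                foreach (CompilerError error in results.Errors)
                    Console.WriteLine("Line {0}: {1}", error.Line, error.ErrorText);
            }
        }
    }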



New releases and compatibility with previous applications

A very important question regarding high availability is how to manage new releases and the life cycle of the system.

In real life, most systems require updates, corrections, and enhancements after their initial deployment; change management is precisely one of the most common factors that put system availability at risk. Even if the application itself is not changed, at some point it will be necessary to install new computers, operating system changes, and software tool updates.

During the initial development of Tatsoft FactoryStudio, we spent many months studying how to better manage that issue, and how managed execution environments such as .NET and Java could help achieve that goal. In the end, we found that when you create a system from the ground up, starting from a clean design, you can embed architectural components that help manage future releases.

Regarding the platform, .NET also proved to be the better choice among execution frameworks in this scenario, for several reasons, including support for multiple programming languages, easier integration with Microsoft server and client components, richer and faster graphical tools, and a highly productive development environment, among others.

Leveraging .NET platform features, this article explores three techniques used to ensure maximum compatibility and system availability, even when new releases of .NET arrive, or when the application uses classes or methods that are deprecated in those new releases. Those are:

  • Application layer on top of Microsoft .NET Framework
  • Built-in Microsoft .NET compiler supporting multi-targeting
  • Leverage side-by-side execution 

Using consolidated technologies, such as a SQL database, to store your application programming and configuration also helps considerably in keeping backward compatibility as the product evolves, but in this article we focus on the three items most directly connected with new .NET versions and potentially deprecated methods.



Application layer on top of Microsoft .NET Framework

The first design requirement is to make sure the software tool used to create the application is written entirely in .NET managed code. As an example, version 2014.1 of Tatsoft FactoryStudio was developed entirely in C# on .NET Framework 4.0. The first concept is that the software tool used to create the final application projects exposes most functionality not as direct low-level platform calls, but through an "Independent Application Layer". Let us explore and understand this concept.

Except for user-created scripts, which may include direct .NET calls (we will talk about user scripts in the next section), the rest of the software tool's functionality, as much as possible, does not expose .NET directly to the engineering user, presenting instead a higher-level configuration tool.

Think, for instance, about displays, drawings, and animations in our platform. Instead of making application engineers go deep into .NET programming, the platform provides a higher-level interface where dynamic properties are enabled through dialogues, and the implementation of those dynamic features is handled internally by the system. Users do not have to interact with WPF, XAML, or .NET programming; they only use our high-level drawing and animation tools.

This way, when an internal .NET class changes, the software can keep the same user configuration but implement the feature internally using the new .NET classes. That is, for instance, what even allows us to create a display drawing that runs natively both in Microsoft .NET WPF and in Apple iOS Cocoa.

Let us use the example of the "Bevel Bitmap effect", which was deprecated in .NET 4.0, and assume your application was using it.
Following this concept, instead of implementing the bevel in user programming code, you would simply have a configuration checkbox to enable the Bevel dynamic in your "DYNAMICS" interface, very similar to what we have for the "SHINE - Outer Glow effect"; the user would select that animation for the object, and the implementation would be handled internally by the software configuration tool.
When that method is removed from the .NET Framework, the package can replace the implementation with another method or library that mimics the same visual effect for the user as closely as possible, even though the internal implementation is completely different.
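A simplified sketch of what such an abstraction can look like; the interface and class names are hypothetical and only illustrate the idea that the user-facing configuration (a checkbox) stays stable while the internal implementation is swapped:

    using System;

    // Hypothetical abstraction layer: the engineering tool only stores the
    // high-level configuration (bevel enabled or not); the runtime decides
    // how to render it with whatever classes the current framework offers.
    interface IBevelRenderer
    {
        void Apply(object displayElement);
    }

    // Implementation used while the original framework class was available.
    class LegacyBevelRenderer : IBevelRenderer
    {
        public void Apply(object displayElement)
        {
            Console.WriteLine("Applying bevel with the legacy bitmap-effect class.");
        }
    }

    // Replacement introduced after the original class was deprecated; it mimics
    // the same visual result using newer effect classes.
    class ShaderBevelRenderer : IBevelRenderer
    {
        public void Apply(object displayElement)
        {
            Console.WriteLine("Applying bevel with the newer shader-based effects.");
        }
    }

    class DynamicsConfiguration
    {
        // This flag is what the user sets with the "enable Bevel" checkbox.
        public bool BevelEnabled { get; set; }

        public void Render(object displayElement, IBevelRenderer renderer)
        {
            // The user configuration never changes; only the injected renderer does.
            if (BevelEnabled)
                renderer.Apply(displayElement);
        }
    }

    class Program
    {
        static void Main()
        {
            var config = new DynamicsConfiguration { BevelEnabled = true };
            // The tool picks the renderer that matches the installed framework.
            config.Render(new object(), new ShaderBevelRenderer());
        }
    }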

The same concept applies to other potential issues. Having an abstraction layer between the project development and the platform keeps project compatibility and makes it possible to handle, internally and transparently to the application engineers, the changes needed to keep the project configuration compatible with future .NET releases.



Built-in compiler supporting multi-targeting

Another huge advantage of the .NET platform is its code generation and compilation classes, which allow you to embed a full .NET compiler within the Project Configuration tool. This gives the package complete control over how user-created scripts are parsed and compiled, even allowing the code to be compiled for another version of the Microsoft .NET Framework.

By default, the system compiles the application scripts to the same .NET version the software tool itself uses, but those scripts run in their own AppDomain and we control the compilation, which means we can compile them, if necessary, for a previous .NET Framework version.

Let us explore a scenario: imagine that you have a system created with Microsoft .NET 4.0, and you have to migrate it to a new .NET release 4.X. For this discussion, let us assume the application you created had scripts that used methods deprecated in the new .NET Framework. What now?

There are two solutions for that:

(A) If the deprecated method was replaced by another one similar enough to allow automatic substitution, the Project Configuration tool can include an Upgrade Utility that locates those methods in the code and replaces them with the new ones.
(B) The Project Configuration tool can give users the ability to select the .NET TARGET of the project scripts. In the same way that Visual Studio can create DLLs for .NET 2.0, 3.5, 4.0, or 4.5 based on the user's selection, the embedded .NET compiler inside the Project Development tool can use similar features. Therefore, if necessary, the project, even when using the latest software configuration tool, can compile its scripts for distinct .NET versions according to the user's selection of the TARGET (a sketch follows below). In fact, it is possible to have the same project run side by side with some script tasks under 4.0 and some script tasks under 4.X, as we will see in the next section.
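A minimal sketch of how an embedded compiler can honor the user's TARGET selection, assuming the classic CodeDom provider; "v3.5" and "v4.0" are the standard CompilerVersion options, while the script content and output names are only illustrative:

    using System;
    using System.CodeDom.Compiler;
    using System.Collections.Generic;
    using Microsoft.CSharp;

    class ScriptCompiler
    {
        // Compiles a user script for the framework version chosen in the
        // project configuration (for example "v3.5" or "v4.0").
        static CompilerResults CompileForTarget(string scriptSource, string targetVersion)
        {
            var providerOptions = new Dictionary<string, string>
            {
                { "CompilerVersion", targetVersion }
            };

            using (var provider = new CSharpCodeProvider(providerOptions))
            {
                var parameters = new CompilerParameters
                {
                    GenerateInMemory = false,
                    OutputAssembly = "UserScript_" + targetVersion.Replace(".", "_") + ".dll"
                };
                return provider.CompileAssemblyFromSource(parameters, scriptSource);
            }
        }

        static void Main()
        {
            string script = "public class Task1 { public int Run() { return 42; } }";

            // The same script can be built for two different targets and the
            // resulting assemblies loaded by different runtime processes.
            CompilerResults older = CompileForTarget(script, "v3.5");
            CompilerResults newer = CompileForTarget(script, "v4.0");

            Console.WriteLine("v3.5 build errors: {0}", older.Errors.Count);
            Console.WriteLine("v4.0 build errors: {0}", newer.Errors.Count);
        }
    }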



Leverage side-by-side execution 

Finally, .NET has a powerful feature that should be leveraged in a Project Development Platform design: the ability to run different versions of the same component at the same time, or to have different processes using different versions of the .NET Framework running at the same time.
For more details, see Microsoft's documentation on side-by-side execution.

Taking as an example the core components included in our platform's project management and user-interface services, different versions of the product can run at the same time on the same computer. Furthermore, when a project is running, its many modules (real-time database, scripts, communication) do not share the same Windows process; each is isolated in its own process, exchanging data over a WCF (Windows Communication Foundation) connection.

In the same way, Device communication creates one Windows process (one .NET AppDomain) for each protocol channel; the same applies to the different user scripts in the project, where one can be compiled and run for .NET 4.0 and another for a different release. Both processes run side by side, with no conflict, each in its own domain, both accessing the real-time tags on the server process.
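As an illustration of how isolated processes can exchange real-time data, a minimal WCF contract and host might look like the sketch below; the service name, address, and tag API are hypothetical, not the product's actual interfaces:

    using System;
    using System.ServiceModel;

    // Contract exposed by the server process that owns the real-time tags.
    [ServiceContract]
    public interface ITagService
    {
        [OperationContract]
        double ReadTag(string tagName);

        [OperationContract]
        void WriteTag(string tagName, double value);
    }

    public class TagService : ITagService
    {
        public double ReadTag(string tagName) { return 0.0; /* look up in the real-time database */ }
        public void WriteTag(string tagName, double value) { /* update the tag */ }
    }

    class ServerProcess
    {
        static void Main()
        {
            // Named pipes keep the exchange local and fast, while the client
            // modules (scripts, drivers, displays) stay in separate processes.
            var host = new ServiceHost(typeof(TagService),
                                       new Uri("net.pipe://localhost/TagServer"));
            host.AddServiceEndpoint(typeof(ITagService),
                                    new NetNamedPipeBinding(),
                                    "tags");
            host.Open();
            Console.WriteLine("Tag server running; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }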
Another similar scenario is the feature we have to manage 32- and 64-bit execution within the same project configuration.

By default, when installed on a 64-bit operating system, a .NET application takes advantage of the platform and runs all its modules natively in true 64-bit mode.
However, sometimes you need to force one specific script, a communication driver, or even the graphical user displays to run in 32 bits; for instance, you may be consuming an external code DLL or a third-party graphical component that only works in 32-bit mode. To accomplish that, the system provides a RunModule32.exe host that allows a specific project component, such as Scripts, Displays, or Device communication, to run its process in 32 bits while the rest of the application runs in 64 bits. Communication between those processes uses WCF, ensuring total isolation among them.
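A small sketch of how a launcher can decide whether a component needs the 32-bit host; Environment.Is64BitProcess and Environment.Is64BitOperatingSystem are standard .NET 4.0 APIs, RunModule32.exe is the host mentioned above, and the 64-bit host name and command-line argument are hypothetical:

    using System;
    using System.Diagnostics;

    class ModuleLauncher
    {
        static void Main()
        {
            Console.WriteLine("64-bit OS: {0}", Environment.Is64BitOperatingSystem);
            Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);

            // A hypothetical flag stored in the project configuration saying this
            // driver depends on a 32-bit-only third-party DLL.
            bool driverRequires32Bit = true;

            // The default (64-bit) host runs everything else; only this component
            // is redirected to the 32-bit host, and both sides still talk over WCF.
            string hostExecutable = driverRequires32Bit ? "RunModule32.exe" : "RunModule.exe";
            Process.Start(hostExecutable, "/module:DeviceCommunication");
        }
    }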



Conclusion

Creating reliable applications whose life cycles are manageable is of the utmost importance, and it is one of the top benefits of using .NET and of creating new software tools from the ground up, instead of migrating previous code, with a Project Configuration package designed from the very beginning to leverage currently available technologies.

Even though this new technology is available, many systems still rely on code ported from DOS, or on low-level Windows APIs and earlier versions that create the Windows "DLL hell" effect, where any update, any installation of unrelated software, or any change in any part of the system could break it all.

By using the right features provided by the .NET Framework and SQL databases, and adding on top of them a well-designed "Independent Application Layer", you can create systems that keep high compatibility with previous applications without giving up the ability to keep truly evolving the feature set and the technology.


