
High Availability and .NET Framework



Introduction

One feature that remains unchanged in mission-critical and industrial applications is that operational stability, safety, and security are their principal requirements. The mechanisms that increase the guarantee of stability are among the core architectural changes made possible by new technologies, specifically in software. The advent of the Microsoft .NET Framework brought new and unique capabilities for creating high-availability systems.

There are also many .NET features regarding productivity tools, development help, class libraries, and communication standards, with a crucial role in improving a Project Development Platform. However, this article focuses on two important areas that only .NET made possible to manage appropriately. Those are:

  • Creating intrinsically safe software architecture

  • Managing multiple versions of Operating Systems, Software tools, and projects



Software Safety

Intrinsically Safe Software

In field instrumentation, safety is not guaranteed only by internal procedures or manufacturers' warranties, but also, and primarily, by the system architecture, which uses voltages and currents that are "intrinsically safe" in the environment where the instrumentation will operate; even in the case of a specific equipment failure, the system is protected.

When we started using the expression "intrinsically safe software", we got many questions about its meaning for software. It simply applies to software systems the same concepts we have in hardware: even with a software component failure, the system architecture and design have intrinsic protection for their safety and operation.

The previous generation of technology used C/C++, pointers, several modules sharing the same memory area, and direct access to hardware and operating system resources — procedures that were necessary given the computers and languages available at the time. However, we consider these practices intrinsically unsafe.

New Generation vs. Previous Generation

The new generation of software uses computational environments, such as the .NET Framework, where processes are natively isolated from each other and from the operating system regardless of the programmer, allowing better use of computers with multiple processor cores and ensuring higher operational stability, even in the face of potential driver and hardware errors or failures in individual modules of the system. This also applies to user scripting in applications.

Previous generations used proprietary scripts or interpreted languages, such as JavaScript, VBScript, and VBA, or proprietary expression editors; the new generation relies on more modern, compiled languages, such as C# and VB.NET, with exception control, multi-threading, enhanced security, object orientation, and more execution control.

You cannot perform full code validation during the development phases with interpreted languages. The final validation is performed only when the code is executed, which means that many problems can be tested only while running the project, not during the engineering configuration. A typical project may have hundreds to thousands of possible execution paths for the code, and the testing scenarios cannot cover all these paths by exhaustively running all possible cases.

The ability to detect potential errors during engineering, and the ability to recover from and isolate errors during runtime, are key elements for safety and operational stability; they become possible only by fully migrating the legacy interpreted scripts to the new compiled and managed languages.
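To illustrate the runtime-recovery point above, here is a minimal sketch of how a host written in a managed, compiled language can catch and isolate a failing script task so the remaining tasks keep running. The `ScriptHost` class and its API are hypothetical illustrations, not part of any product:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical host that runs user "script tasks" and isolates failures.
// In a managed language, an unhandled exception in one task can be caught
// and contained instead of crashing the whole process.
public static class ScriptHost
{
    public static int Run(IEnumerable<Action> tasks)
    {
        int succeeded = 0;
        foreach (var task in tasks)
        {
            try
            {
                task();          // execute the compiled script task
                succeeded++;
            }
            catch (Exception ex)
            {
                // Isolate the failure: log it and keep the system running.
                Console.WriteLine("Task isolated after failure: " + ex.Message);
            }
        }
        return succeeded;
    }
}
```

Running three tasks where the second throws still completes the other two; the failure stays contained at the task boundary instead of taking down the process.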



Releases and Compatibility

High Availability

A very important question regarding high availability is how to manage new releases and the life cycle of the system.

In real life, most systems require updates, corrections, and enhancements after their initial deployment; one of the most common factors that put a system's availability at risk is precisely change management. Even if the application itself did not change, it is necessary to install new computers, operating system changes, and software tool updates at some point.

When working on the initial development of our platform, we spent many months studying how to better manage this issue, and how virtualized programming environments, such as .NET and Java, could help achieve this goal. In the end, we discovered that when creating a system from the ground up, starting from a clean design, you can embed architectural components that help manage future releases.

System Availability

Regarding the platform, .NET also proved better than other execution frameworks in this scenario for a few reasons, including support for multiple programming languages, easier integration with Microsoft server and client components, and richer and faster graphical tools in a high-productivity development platform, among others.

Leveraging the .NET platform features, this article will explore three techniques used to ensure maximum compatibility and system availability, even in the case of new .NET releases, or when the application uses classes or methods deprecated in the new platform releases. Those are:

  • Application layer on top of Microsoft .NET Framework

  • Built-in Microsoft .NET compiler supporting multi-targeting

  • Leverage side-by-side execution 

Using consolidated technologies, such as SQL databases, to store your application programming and configuration also helps keep backward compatibility when evolving the product. However, in this article, we will focus on explaining the three items most connected with the new .NET versions and potentially deprecated methods.



Application Layer on Top of Microsoft .NET Framework

Independent Application Layer

The first design requirement is to ensure that the Software Tool that creates the application is written entirely in .NET managed code. For example, we fully developed our version 2014.1 in C# code on .NET Framework 4.0. The first concept is for the software tool used to create the final application projects to expose most of the functionality not directly as low-level platform calls, but as an "Independent Application Layer". Let us explore and understand this concept.

Except for the scripts created by the user, which may include direct .NET calls (we will talk about user scripts in the next section), the rest of the software tool functionality, as much as possible, does not expose .NET directly to the engineering user, presenting a higher-level configuration tool instead.

Think, for instance, about displays, drawings, and animations in our platform. Instead of making the application engineers go deep into .NET programming, it provides a higher-level interface where you can enable dynamic properties using dialogues, with the implementation and deployment of those dynamic features handled internally by the system. Users do not have to interact with WPF, XAML, or .NET programming; they only use our high-level drawing and animation tools.

Thus, when an internal .NET class changes, the software can keep the same user configuration but internally implement the feature using the new .NET classes. This even allows us to create a display drawing that will run natively both in Microsoft .NET WPF and in Apple iOS Cocoa.

Example

Let us use the example of the "Bevel Bitmap effect", which was deprecated in version 4.0. Assume your application was using it. According to this concept, instead of implementing the Bevel in user programming code, you would simply have a "Configuration checkbox to enable the Bevel dynamic" available in your "DYNAMICS" interface, very similar to what we have as the "SHINE - Outer Glow effect"; the user would select that animation for the object, and the implementation would be done internally by the software configuration tool. When that method was removed from the .NET Framework, the package would replace the implementation with another method or library that mimics the same visual effect for the user as closely as possible, even though the internal implementation is completely different.

The same concept applies to other potential issues. Having an abstraction layer between your project development and the platform allows you to keep project compatibility, and makes it possible to handle the changes needed to keep the project configuration compatible with future .NET releases internally and transparently to the application engineers.
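A minimal sketch of the "Independent Application Layer" idea, using the Bevel example. All type and method names here are hypothetical illustrations, not the product's actual API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical "Independent Application Layer" for visual dynamics.
// The engineering user only toggles a configuration flag (e.g. "Bevel");
// which concrete class renders the effect is an internal detail that can
// be swapped when the underlying framework deprecates a class.
public interface IVisualEffect
{
    string Apply(string element); // returns a description of the rendered result
}

// Implementation backed by the old framework class (e.g. BevelBitmapEffect).
public class LegacyBevelEffect : IVisualEffect
{
    public string Apply(string element) { return element + "+bevel(legacy)"; }
}

// Replacement that mimics the same visual result with newer APIs.
public class ShaderBevelEffect : IVisualEffect
{
    public string Apply(string element) { return element + "+bevel(shader)"; }
}

public static class EffectRegistry
{
    // The project configuration stores only the effect NAME;
    // the tool decides which implementation satisfies it.
    private static readonly Dictionary<string, IVisualEffect> Map =
        new Dictionary<string, IVisualEffect> { { "Bevel", new ShaderBevelEffect() } };

    public static string Render(string effectName, string element)
    {
        return Map[effectName].Apply(element);
    }
}
```

The project file stores only the name "Bevel"; swapping `LegacyBevelEffect` for `ShaderBevelEffect` in the registry changes the internal implementation without touching any user configuration.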




Multi-Targeting

Code Generation and Compiling Classes

Another great advantage of the .NET platform is its code generation and compiling classes, which allow you to embed a full .NET compiler within the Project Configuration tool. This gives the package complete control over how the scripts created by the users are parsed and compiled, even electing to compile the code to another version of the Microsoft .NET Framework.

By default, the system compiles the application scripts to the same .NET version the latest software tool is using, but runs those scripts in their own AppDomain. Also, since we control the compilation, we can compile them, if necessary, to a previous .NET Framework version.
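As a sketch of how an embedded compiler can target a selectable framework version, the classic .NET Framework CodeDOM API (`CSharpCodeProvider` with the `CompilerVersion` provider option) supports exactly this. Note that this API is specific to the legacy .NET Framework and is not supported on modern .NET (Core); the wrapper class below is a hypothetical illustration:

```csharp
using System.CodeDom.Compiler;
using System.Collections.Generic;
using Microsoft.CSharp;

// Sketch: an embedded compiler that honors the user's TARGET selection.
// "CompilerVersion" is a CodeDOM provider option on the classic
// .NET Framework (values such as "v3.5" or "v4.0").
public static class ScriptCompiler
{
    // Pure helper: maps the selected TARGET to the provider options.
    public static Dictionary<string, string> ProviderOptionsFor(string targetVersion)
    {
        return new Dictionary<string, string> { { "CompilerVersion", targetVersion } };
    }

    // Compiles a user script in memory against the selected target.
    public static CompilerResults Compile(string source, string targetVersion)
    {
        using (var provider = new CSharpCodeProvider(ProviderOptionsFor(targetVersion)))
        {
            var parameters = new CompilerParameters
            {
                GenerateInMemory = true,    // keep the compiled script in memory
                GenerateExecutable = false  // emit a library, not an .exe
            };
            return provider.CompileAssemblyFromSource(parameters, source);
        }
    }
}
```

A configuration tool would call `ScriptCompiler.Compile(scriptSource, "v3.5")` or `"v4.0"` depending on the TARGET the user selected for that script task.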

Example

Let us explore a case scenario: imagine that you have a system created with Microsoft .NET version 4.0, and you have to migrate the system to a new .NET release 4.X. For this discussion, let us assume the application you created had scripts using methods that were deprecated in the updated .NET Framework. What now?

There are two solutions for that:

  • Solution A: If the deprecated method was replaced by another one similar enough to be replaced automatically, the Project Configuration tool may have an Upgrade Utility that locates those methods in the code and replaces them with the new one automatically.
  • Solution B: The project configuration tool can enable the users to select the .NET TARGET of the project scripts. In the same way that Visual Studio can create DLLs for .NET 2.0, 3.5, 4.0, or 4.5, based on the user selection, the embedded .NET compiler inside the Project Development tool can use similar features. Therefore, if necessary, we can enable your project to compile the scripts to two distinct .NET versions, even if using the latest software configuration tool, according to the user's selection of the TARGET.

It is possible to have the same project running side by side with some script tasks under 4.0 and some script tasks under 4.X, as we will see in the next section.


Leverage Side-by-Side Execution

Multiple Versions

Finally, .NET has a powerful feature that must be leveraged in a Project Development Platform design: it can run different versions of the same component simultaneously, or have multiple processes using different versions of the .NET Framework running concurrently.


Info: For more details, check the Side-by-Side Execution in the .NET Framework article by Microsoft.


Using, as an example, the core components we included in our platform's project management and user-interface services, different versions of the product can run concurrently on the same computer. Furthermore, when running one project, its many modules (real-time database, scripts, and communication) do not run in the same Windows process; they are isolated in their own processes, exchanging data over a WCF (Windows Communication Foundation) connection.

In the same way, we have a device communication module that creates one Windows process (one .NET AppDomain) for each protocol channel. The same can apply to different user scripts in the project: one can be compiled and run for .NET 4.0, another one for another release. Both processes run side by side, with no conflict, each one in its own domain, and both access the real-time tags on the server process.
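The process-per-channel layout can be sketched as follows; `ChannelHost.exe` and the argument names are hypothetical stand-ins for whatever host executable a platform uses:

```csharp
using System.Diagnostics;

// Sketch: one isolated host process per protocol channel.
// A fault in one driver process cannot corrupt the others; the
// processes would exchange data with the server over WCF.
public static class ChannelLauncher
{
    // Builds the start information for a channel host process,
    // including which .NET Framework version it should load.
    public static ProcessStartInfo ForChannel(string protocol, string targetFramework)
    {
        return new ProcessStartInfo
        {
            FileName = "ChannelHost.exe", // hypothetical per-channel host
            Arguments = "/protocol:" + protocol + " /clr:" + targetFramework,
            UseShellExecute = false
        };
    }
    // A supervisor would call Process.Start(ForChannel("Modbus", "v4.0"))
    // and restart the child process if it exits unexpectedly.
}
```

Because each channel is its own OS process, two channels can even load different .NET Framework versions side by side, which is the scenario described above.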

32 and 64 bits

Another similar scenario is the built-in ability to manage 32- and 64-bit execution within the same project configuration.

By default, a .NET application will leverage the operating system platform independently and run all its modules natively in true 64-bit mode when installed on 64-bit operating systems.

However, sometimes you must force one specific script, a communication driver, or even the graphical user displays to run in 32 bits. For instance, you may be consuming an external code DLL or a graphical component from a third party that only works in 32-bit mode.

To accomplish that, the system provides a RunModule32.exe version that allows one specific project component, such as Scripts, Displays, or Device Communication, to run in a particular 32-bit process while the rest of the application runs in 64 bits. The communication between those processes uses WCF, ensuring total isolation among the processes.
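The bitness selection can be sketched as a small decision routine; `RunModule32.exe` is the helper named above, while `RunModule.exe` is a hypothetical name for the default 64-bit host:

```csharp
using System;

// Sketch: choosing the 32-bit or 64-bit host for one project module.
// By default every module runs in the OS-native bitness; a module that
// depends on a 32-bit-only DLL or component is redirected to the
// 32-bit host process instead.
public static class ModuleHost
{
    public static string SelectHost(bool force32Bit)
    {
        if (force32Bit) return "RunModule32.exe";      // 32-bit host for legacy DLLs
        return Environment.Is64BitOperatingSystem
            ? "RunModule.exe"                           // native 64-bit host
            : "RunModule32.exe";                        // 32-bit OS: only option
    }
}
```

The project configuration would mark the one script or driver that needs 32 bits, and the launcher calls `SelectHost(true)` only for that module, leaving everything else in native 64-bit mode.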



Conclusion

Reliable Application

Creating a reliable application that is manageable over its life cycle is of ultimate importance, and it is one of the top benefits of using .NET: creating new Software Tools from the ground up instead of migrating previous code, with a Project Configuration package designed from its very first version to leverage the currently available technologies.

Even though this new technology is available, many systems still rely on code ported from DOS, or on Windows low-level APIs and previous versions, which creates the DLL-hell effect of Windows, where any update, any installation of another unrelated piece of software, or any change in any part of the system could break it all.

Using the right features provided by the .NET Framework and SQL databases, and adding on top of that a well-designed "Independent Application Layer", you can create systems that keep high compatibility with previous applications without denying the ability to keep truly evolving the feature set and the technology.


