Overview

This page provides detailed information about how the archiving process works, what triggers it, and the available options for archiving.


Events that Start the Archiving

The Historian Module's process to archive data is composed of three steps:

Step 1: An Event starts the request to archive a group of values.  

There are two types of events (Trigger or TagChange), configured in a HistorianTable.

Step 2: The Historian archives the values in the Target Database.

The Target Database can be a SQL database or a TagProvider configured to act as a historian.

Step 3: If Store and Forward is enabled, the data synchronization is executed.

With this option, if the Target Database is not available, the data is stored in a local database and sent to the target when it becomes available.

Trigger

The Trigger can be a Tag Value, a Tag Property, or any object from the runtime namespace (e.g., server.minute). When a value change is detected in the object, the archive request event is generated.

Only Tags of domain Server, or objects residing in server-side namespaces, can be used as Triggers, since the Historian process runs on the Server computer.

The Trigger is configured in the HistorianTable. All tags and objects connected to that HistorianTable will be requested to be archived, regardless of whether they have a new value.

Save On Change

Save On Change is a checkbox in the HistorianTable configuration.

When this checkbox is enabled, the Historian process verifies all tags connected to that HistorianTable. When a tag has a new value, the archive request event is generated.

The archive request is generated only for the Tag that has a new value. However, depending on the Historian Target Database, either only that tag or all tags in the group will be archived. Further information is included in the next section of this page.



Selecting the Target Database

The request to archive data is created either by a Trigger or a TagChange event, as defined in a HistorianTable.

Each HistorianTable has a Target Database property that defines where the data shall be archived.

The Target Database can be a SQL database (defined at the Datasets-DBs-TagHistorian object) or a TagProvider configured to act as a Historian target.


Archiving to SQL TagHistorian

The Datasets Module has a pre-defined object named TagHistorian. By default, that object points to a SQLite database, but it can easily be modified to point to any SQL source. Refer to the Datasets Module documentation for information on how to modify the target SQL database.

When archiving to the SQL database defined by the TagHistorian object, two table schemas can be used to store the data: Standard and Normalized.

Standard Tables

In this format, both the Trigger event and the TagChange event create one additional row in the database.

In this table schema, each column is named after one Tag in the HistorianTable group of tags; therefore, all tags in that group will have their values added, even if only one tag had a new value.

The Timestamp of the row is the timestamp of the Trigger object when the archive event was created by a Trigger, or the timestamp of the Tag that generated the archive request when using OnTagChange events.

All tags listed in the related HistorianTable will be stored, regardless of whether they have a new value, sharing the single timestamp defined above.

When using OnTagChange events, if many tags change value, one row will be inserted with all tags in the group, using the timestamp of the tag that created the event.

To avoid growing the database too quickly, the Time Deadband configuration prevents a new row from being created faster than the specified deadband. In this case, the new row will be created using the timestamp of the last event.

Standard tables schema

The standard SQL historian table contains the following columns:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference by the system.

  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): Date and time in Universal Time for that row, in 64-bit .NET ticks: the number of 100-nanosecond intervals (1/10th of a microsecond) that have elapsed since 12:00 A.M., January 1, 0001. This is the date/time representation used by the Microsoft .NET Framework.

  • LogType (TinyInt, 1 Byte): Auxiliary column that shows when the row was inserted: 0 = on startup, 1 = normal logging, 2 = on shutdown.

  • NotSync (Int, 4 Bytes): Auxiliary column that shows whether the data was synchronized when the Redundancy option is enabled. See Deploying Redundant Systems.

  • TagName (Float, 8 Bytes): Column automatically created using the name of each tag as the column title. It stores the data value using double precision.

  • _TagName_Q (Float, 8 Bytes): Column automatically created for the data quality of each tag, using the OPC quality specification.

Typically, you can associate up to 200 tags with each historian table, but that number depends on how many columns your target database allows. The tags should be defined in the same table when they have similar storing rates and process dynamics because the entire row must be saved in order to save a tag in the table. 
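
For illustration only, the sketch below shows how a standard table for a group with two hypothetical tags (Temperature and Pressure) might be declared in SQL, along with a query that converts the ticks column to a readable date. The real schema is generated automatically by the module, and the exact DDL and date functions vary by database; the conversion shown uses SQL Server syntax.

    -- Sketch only: the Historian generates the real schema automatically.
    -- "HistorianTable1", "Temperature", and "Pressure" are hypothetical names.
    CREATE TABLE HistorianTable1 (
        ID BIGINT PRIMARY KEY,       -- row identifier
        UTCTimeStamp_Ticks BIGINT,   -- .NET ticks: 100-ns intervals since 0001-01-01
        LogType TINYINT,             -- 0 = startup, 1 = normal logging, 2 = shutdown
        NotSync INT,                 -- redundancy synchronization flag
        Temperature FLOAT,           -- value of tag "Temperature" (double precision)
        _Temperature_Q FLOAT,        -- OPC quality of tag "Temperature"
        Pressure FLOAT,              -- value of tag "Pressure"
        _Pressure_Q FLOAT            -- OPC quality of tag "Pressure"
    );

    -- Reading the data: convert .NET ticks to a date/time (SQL Server syntax).
    -- 621355968000000000 is the tick count at the Unix epoch (1970-01-01 UTC).
    SELECT DATEADD(SECOND,
                   CAST((UTCTimeStamp_Ticks - 621355968000000000) / 10000000 AS INT),
                   '1970-01-01') AS UtcTime,
           Temperature, _Temperature_Q
    FROM HistorianTable1;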

Normalized Tables

Normalized tables are used only with OnTagChange events. If that table schema is selected, the Trigger option is disabled in the HistorianTable configuration.

In this table schema, each row contains only the TimeStamp, the ID, and the Value of the Tag that generated the archive event.

Normalized tables schema

The TagsDictionary table has the following default columns:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference by the system.

  • TagName (NVarchar): The name of each tag configured for "normalized" databases on the Historian.

  • NotSync (Integer, 4 Bytes): Not used in this release; it was created for future changes and new features.

The system will automatically create four more tables as follows:

  • TableName_BIT
  • TableName_FLOAT
  • TableName_NTEXT
  • TableName_REAL

The schema for these tables is:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference by the system.

  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): Date and time in Universal Time for that row, in 64-bit .NET ticks: the number of 100-nanosecond intervals (1/10th of a microsecond) that have elapsed since 12:00 A.M., January 1, 0001. This is the date/time representation used by the Microsoft .NET Framework.

  • ObjIndex (Integer, 4 Bytes): The foreign key referencing the ID column of the TagsDictionary table.

  • ObjValue (Bit, Float, NText, or Real, depending on the table): The value of the tag at the specified timestamp.

  • ObjQuality (TinyInt, 1 Byte): The quality of the tag at the specified time, using the OPC quality specification.

  • NotSync (Int, 4 Bytes): Not used in this release; it was created for future changes and new features.

It is important to remember that the normalized database cannot be synchronized through the Redundancy option.
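
As an illustration of the normalized layout, the sketch below declares a TagsDictionary and one of the four value tables in SQL, and shows how one tag's history can be retrieved by joining them. The module creates the real tables automatically; the NVarchar size and the tag name "Temperature" are assumptions for the example.

    -- Sketch only: the Historian generates the real schema automatically.
    CREATE TABLE TagsDictionary (
        ID BIGINT PRIMARY KEY,       -- tag identifier
        TagName NVARCHAR(255),       -- tag name (size here is an assumption)
        NotSync INT                  -- reserved for future use
    );

    -- One value table is created per data type: _BIT, _FLOAT, _NTEXT, _REAL.
    CREATE TABLE TableName_FLOAT (
        ID BIGINT PRIMARY KEY,       -- row identifier
        UTCTimeStamp_Ticks BIGINT,   -- .NET ticks
        ObjIndex INT,                -- foreign key to TagsDictionary.ID
        ObjValue FLOAT,              -- tag value at that timestamp
        ObjQuality TINYINT,          -- OPC quality
        NotSync INT                  -- reserved for future use
    );

    -- Retrieving the history of one tag by joining the dictionary:
    SELECT v.UTCTimeStamp_Ticks, v.ObjValue, v.ObjQuality
    FROM TableName_FLOAT v
    JOIN TagsDictionary d ON d.ID = v.ObjIndex
    WHERE d.TagName = 'Temperature'
    ORDER BY v.UTCTimeStamp_Ticks;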


Archiving to an ExternalTags Historian

When archiving to an ExternalTags Historian, the schemas are defined by the system configured in the ExternalTags. This means that when data is archived into these historians, the structural organization, naming conventions, and other specifics are determined by the ExternalTags settings.

About Providers:

The Providers essentially act as intermediaries between the software and the external data historian systems. They interpret and translate data formats, protocols, and other communication specifics to ensure seamless data archiving and retrieval.

Specifics:

  1. CanaryLabs: A robust data historian system that's optimized for real-time data collection and analytics. When archiving to CanaryLabs, the data is stored in a highly compressed format that facilitates faster retrieval and analytics.

  2. InfluxDB: An open-source time series database designed for high-availability and real-time analytics. InfluxDB is particularly useful when working with large sets of time-series data where timely data retrieval is of the essence.

  3. GE Proficy: A comprehensive platform that provides real-time data collection and advanced analytics capabilities. GE Proficy is a scalable system designed to integrate and analyze vast amounts of industrial data.

On the Historian tab, navigate to TargetDBs and click on the "+" icon to add a new entry.


Configuring a Historian TargetDB:

  • Name: Enter a descriptive name for the TargetDB.

  • Description: Provide a brief description or note regarding this specific TargetDB.

  • Store and Forward: This setting determines if the data will be temporarily stored (and forwarded later) in case the direct archiving to the historian fails, ensuring no data loss.

  • Target Type: Define the type or nature of the target. This could be related to the specific kind of data or its use-case.

  • Target Provider: Choose the external data historian system you wish to archive to. Options include CanaryLabs, GE Proficy, and InfluxDB.

  • Station: Input the connection string specific to the chosen Target Provider. This ensures proper communication and data archiving to the external system. Configure your Provider by clicking the three-dot ("...") button, and always test your connection.


Using Store and Forward

When the Store and Forward option is disabled, archive request events are sent directly to the Target Database as the events occur.

There is a built-in protection when using the SQL-Dataset-TagHistorian target with Normalized tables. In this case, the new rows are buffered and included in the database every 5 seconds.


Store and Forward process

When receiving data archive requests, the Historian module will try to store the data in the Target Database; if that fails, it will store the data in a local database, automatically created using SQLite.

Every 5 seconds, the process tries to copy the rows inserted while the Target Database was not accessible from the local SQLite database to the Target Database, in blocks of at most 250 rows.

All Historian tables are verified for a maximum of 4 seconds. If there is not enough time to process all tables, the verification resumes in the next 5-second cycle.

If the copy process is successful, the rows are deleted from the temporary SQLite database; once it is empty, the database file itself is deleted.

When an application queries data and the Target Database is not available, the system searches the temporary SQLite database for the data.

This is a summary of the steps to execute the database synchronization:

  1. The temporary local SQLite database is accessed, checking all tables for the NotSync column flag (rows not yet synchronized), with a SELECT limit of 250 rows.

  2. The result of the SELECT query (up to 250 rows) is inserted into the Target Database.

  3. After successful completion of the insert in the Target Database, the rows are deleted from the local SQLite cache.

If many tables with a large amount of data are to be synchronized, the availability of this data in the main database may take some time, depending on the insertion performance of both the main database and the local SQLite database. However, after a certain period, the data will become available. On average, steps 1 to 3 take around 1 second per table.
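
A minimal sketch of one synchronization cycle, expressed as SQL against the local SQLite cache, is shown below. It reuses the hypothetical table HistorianTable1 from the earlier sketch; the NotSync flag value is an assumption, and the actual logic is internal to the Historian module.

    -- Step 1: select up to 250 rows not yet synchronized from the local cache
    -- (NotSync = 1 is an assumed flag value; the real semantics are internal).
    SELECT * FROM HistorianTable1 WHERE NotSync = 1 LIMIT 250;

    -- Step 2: the module inserts the selected rows into the Target Database
    -- over its own connection; no SQL is shown here because the target varies.

    -- Step 3: after a successful insert, remove the same block from the cache.
    DELETE FROM HistorianTable1
    WHERE ID IN (SELECT ID FROM HistorianTable1 WHERE NotSync = 1 LIMIT 250);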

Another important consideration is the volume of data. For large projects with a significant amount of data, it is recommended to use more robust databases such as SQL Server or Oracle, as they offer better performance and can handle high data volumes. SQLite, by contrast, has a practical limit of 10 GB and limited performance. Therefore, using the "Keep Local Copy" functionality on large projects means expecting SQLite to replicate the entire history of these large databases with 100% availability. This functionality works well for smaller data models, or when immediate synchronization with the main database is not necessary, especially if the main database is used by other projects or third-party software and may experience occasional unavailability.


