Overview

This page describes how the archiving process works, the available triggers, and the options for archiving data.

Archiving Process Overview

The Historian module archives data in three steps:

  1. An event triggers the request to archive a group of values. You configure two types of events (Trigger or Tag Change) when creating a HistorianTable.

  2. The Historian archives the values in the Storage Location after the trigger. You can use SQL databases or a Tag Provider when configuring the Storage Location.

  3. If you enable the Store and Forward feature, the system executes the data synchronization. This option stores data in a local database if the configured database is unavailable and sends it to the target when it becomes available.
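The three-step flow above can be sketched in a few lines of Python. This is a conceptual illustration only; all class and function names (`archive`, `UnavailableStorage`, `LocalBuffer`) are hypothetical and are not part of the platform's actual API.

```python
# Conceptual sketch of the archiving pipeline. All names here are
# hypothetical illustrations, not the platform's API.

class UnavailableStorage:
    """Simulates a Storage Location (target database) that is offline."""
    def insert(self, timestamp, values):
        raise ConnectionError("target database unavailable")

class LocalBuffer:
    """Simulates the local Store and Forward database."""
    def __init__(self):
        self.rows = []
    def insert(self, timestamp, values):
        self.rows.append((timestamp, values))

def archive(timestamp, values, storage, local_buffer):
    # Step 2: try to archive the values in the Storage Location.
    try:
        storage.insert(timestamp, values)
        return "stored"
    except ConnectionError:
        # Step 3: with Store and Forward enabled, keep the data locally
        # until the target database becomes available again.
        local_buffer.insert(timestamp, values)
        return "buffered"
```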

The following sections provide additional details about each step.


Triggering Events

In the platform, two possible actions can initiate the archiving process. You can configure a Trigger based on a Tag or always save the Tags' value changes using the Save on Change option.

Trigger

You have three options to define as triggers in the Historian module:

  • A Tag value.
  • A Tag property.
  • Any object from the runtime namespace, such as Server.minute.

Whenever the object's value changes, the system creates an archive request event.


Triggers are limited to Tags in the Server domain or objects in server-side namespaces, because the Historian process runs exclusively on the Server computer.

You can choose one Trigger for each HistorianTable. When the trigger happens, all current values of Tags and objects connected to that Historian Table will be archived, regardless of whether or not they have a new value.

Save On Change

When creating or editing a HistorianTable, you can set the Save On Change option as the Trigger.

When you enable Save On Change, the Historian module continuously monitors all Tags connected to each HistorianTable. When a Tag's value changes, an archive request event is generated, and only the Tag whose value changed is archived.
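The difference between the two event types can be sketched as follows. This is a minimal illustration with hypothetical function names, not the platform's API: a Trigger archives every Tag in the HistorianTable, while Save On Change archives only the Tag that changed.

```python
# Hypothetical sketch of the two archiving behaviors.

def on_trigger(table_tags):
    """Trigger event: archive the current value of every Tag in the
    HistorianTable, whether or not it has a new value."""
    return dict(table_tags)

def on_tag_change(table_tags, changed_tag):
    """Save On Change event: archive only the Tag whose value changed."""
    return {changed_tag: table_tags[changed_tag]}
```

For example, with Tags `Temp` and `Level` in the same HistorianTable, a Trigger event archives both values, while a change in `Temp` alone archives only `Temp`.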



Archiving Data

After the archive request is created, the platform determines how to store the data based on the Storage Location of the current HistorianTable. You configure this option when creating the HistorianTable.

Archiving behaves differently depending on whether you use a SQL database or a TagProvider as the Historian.

Archiving to SQL database (Tag Historian)

The Datasets module has a pre-defined object named TagHistorian. By default, it uses a SQLite database, but you can choose other databases. See HistorianTables to learn how.

When archiving to the SQL database defined by the TagHistorian object, you can choose between the Standard and Normalized table schemas.

Standard Tables

If you use standard tables, both Trigger and Tag Change events result in a single additional row in the database. Each column in the table corresponds to a Tag in the HistorianTable group, ensuring that all tags receive an entry, even if only one Tag has a new value.

The row's timestamp is determined by the Trigger object when the archive event is triggered. For the OnTagChange event, if there is only one tag in the table, it retrieves the tag's timestamp for the row. If there are two or more tags in the table, the timestamp will reflect the execution time of the code, which is always slightly later than the tag timestamp.

All tags listed in the associated HistorianTable are stored, independent of whether they have new values, sharing a single timestamp as defined earlier. In the case of OnTagChange events involving multiple tag value changes, a single row is inserted with all tags in the group, utilizing the timestamp of the Tag that triggered the event.


Avoid rapid database growth

To prevent rapid database growth, you can use the Time Deadband configuration to ensure that a new row is not created every time a Tag's value changes. The system does not archive a new value for the Tag until the deadband time has elapsed. After the deadband, the new row is generated using the timestamp of the last event.
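The deadband logic can be sketched as a simple filter. This is a hypothetical helper for illustration, not the platform's implementation; it assumes event times are expressed in seconds.

```python
class TimeDeadband:
    """Suppresses archive requests that arrive before the configured
    deadband (in seconds) has elapsed since the last archived event.
    Hypothetical sketch, not the platform's implementation."""

    def __init__(self, deadband_seconds):
        self.deadband = deadband_seconds
        self.last_archived = None

    def should_archive(self, event_time):
        # The first event is always archived; later events are archived
        # only after the deadband has elapsed.
        if self.last_archived is None or \
                event_time - self.last_archived >= self.deadband:
            self.last_archived = event_time
            return True
        return False
```

With a 5-second deadband, a change at t=0 is archived, a change at t=3 is suppressed, and a change at t=6 is archived again.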


Standard Tables Schema

The following table describes all existing columns from a Standard SQL Table:

  • ID (BigInt, 8 Bytes): The primary key, used as a reference within the system.
  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): Date and time in Universal Time, represented in 64-bit .NET ticks. The value is based on 100-nanosecond intervals since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework standard.
  • LogType (TinyInt, 1 Byte): Auxiliary column indicating the insertion event: 0 = startup, 1 = normal logging, 2 = shutdown.
  • NotSync (Int, 4 Bytes): Auxiliary column indicating whether the data was synchronized when the Redundancy option is enabled. See Deploying Redundant Systems.
  • TagName (Float, 8 Bytes): Automatically generated column with the tag name as its title, storing data values in double precision.
  • _TagName_Q (Float, 8 Bytes): Automatically generated column for the data quality of each tag, following the OPC quality specification.

You can usually assign up to 200 tags to each HistorianTable. However, the exact number can vary depending on how many columns your target database can accommodate. As a best practice, define tags in the same table if they have similar storing rates and process dynamics.
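If you query a standard table directly, the UTCTimeStamp_Ticks column must be converted from 64-bit .NET ticks. A minimal Python sketch of the conversion, assuming the ticks are UTC:

```python
from datetime import datetime, timedelta

TICKS_PER_MICROSECOND = 10  # one .NET tick is 100 nanoseconds

def ticks_to_datetime(ticks):
    """Convert 64-bit .NET ticks (100 ns intervals since
    0001-01-01 00:00:00) to a Python datetime."""
    return datetime(1, 1, 1) + timedelta(
        microseconds=ticks // TICKS_PER_MICROSECOND)

def datetime_to_ticks(dt):
    """Convert a Python datetime back to .NET ticks."""
    delta = dt - datetime(1, 1, 1)
    microseconds = (delta.days * 86_400_000_000
                    + delta.seconds * 1_000_000
                    + delta.microseconds)
    return microseconds * TICKS_PER_MICROSECOND
```

For reference, midnight on January 1, 2001 (UTC) corresponds to 631139040000000000 ticks.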

Normalized Tables

Normalized tables archive data only on Tag Change events. If you check the Normalized option when creating or editing the HistorianTable, the Trigger option is disabled.

In this table schema, each row stores only the timestamp, the ID, and the value of the Tag that generated the archive event.

Normalized Tables Schema

  • ID (BigInt, 8 Bytes): Primary key used as a reference within the system.
  • TagName (NVarchar): The names of the Tags configured for normalized storage in the Historian.
  • NotSync (Int, 4 Bytes): Not used in this release; reserved for future changes and new features.

The system automatically creates four more tables:

  • TableName_BIT
  • TableName_FLOAT
  • TableName_NTEXT
  • TableName_REAL

The following table describes the schemas used by the created tables.

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference by the system.
  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): Date and time in Universal Time, expressed in 64-bit .NET ticks. The value represents 100-nanosecond intervals since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework standard.
  • ObjIndex (Int, 4 Bytes): Foreign key referencing the ID column in the TagsDictionary table.
  • ObjValue (Bit, Float, NText, or Real, depending on the table): The value of the Tag at the specified timestamp; the data type matches the table's suffix.
  • ObjQuality (TinyInt, 1 Byte): The quality of the tag at the specified time, based on the OPC quality specification.
  • NotSync (Int, 4 Bytes): Not used in this release; reserved for future changes and new features.

It is not possible to synchronize a normalized database using the Redundancy option.
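The normalized layout described above can be reproduced in a minimal SQLite sketch. Table and column names follow this page (using the main table, which acts as the tags dictionary, plus one of the value tables); treat this as an illustration of the relationship between the tables, not the platform's exact DDL.

```python
# Minimal sqlite3 sketch of the normalized table layout. Names follow
# this page; this is an illustration, not the platform's exact DDL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (          -- main table / tags dictionary
        ID INTEGER PRIMARY KEY,
        TagName NVARCHAR,
        NotSync INTEGER
    );
    CREATE TABLE Table1_FLOAT (    -- one of the per-type value tables
        ID INTEGER PRIMARY KEY,
        UTCTimeStamp_Ticks BIGINT,
        ObjIndex INTEGER REFERENCES Table1(ID),
        ObjValue FLOAT,
        ObjQuality TINYINT,
        NotSync INTEGER
    );
""")
conn.execute("INSERT INTO Table1 (ID, TagName) VALUES (1, 'Temperature')")
conn.execute(
    "INSERT INTO Table1_FLOAT (UTCTimeStamp_Ticks, ObjIndex, ObjValue,"
    " ObjQuality) VALUES (638000000000000000, 1, 21.5, 192)")

# Join the value table back to the tag names through ObjIndex:
row = conn.execute("""
    SELECT t.TagName, v.ObjValue
    FROM Table1_FLOAT AS v JOIN Table1 AS t ON v.ObjIndex = t.ID
""").fetchone()
print(row)  # ('Temperature', 21.5)
```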

Archiving Externally using a TagProvider

When archiving data externally using a TagProvider, the external system defines the schemas. It determines the structural organization, naming conventions, and other specific settings.

You need to specify the Protocol when adding a new Storage Location using a TagProvider. The Protocol acts as an intermediary between the solution you build with the platform and external data historian systems, interpreting and translating data formats, protocols, and other communication specifics to ensure seamless data archiving and retrieval. Currently, the platform provides three protocol options to connect using TagProviders:

  • CanaryLabs: A robust data historian system that's optimized for real-time data collection and analytics. When archiving to CanaryLabs, the data is stored in a highly compressed format that facilitates faster retrieval and analytics.

  • InfluxDB: An open-source time-series database designed for high availability and real-time analytics. InfluxDB is particularly useful when working with large sets of time-series data where timely data retrieval is of the essence.

  • GE Proficy: A comprehensive platform that provides real-time data collection and advanced analytics capabilities. GE Proficy is a scalable system that integrates and analyzes vast amounts of industrial data.

You can use the Store and Forward feature when configuring a new StorageLocation using a TagProvider.


Using Store and Forward

The Store and Forward feature ensures you will not lose data if the system can't connect with the external database.

When you define a StorageLocation using a TagProvider and disable Store and Forward, archive request events are sent directly to the external database as they occur, regardless of whether a working connection exists. A built-in protection exists for SQL Dataset Tag Historian targets with Normalized tables, which buffers new rows and inserts them into the database every five seconds.

Store and Forward Process

When the Historian module receives an archive request, it tries to store the data in the Storage Location. If unsuccessful, it stores the data in a locally created SQLite database. The Historian module then attempts to copy the buffered rows (inserted while the Target database was inaccessible) from the local SQLite database to the Target Database every 5 seconds, in blocks of at most 250 rows.

HistorianTables are verified within a 4-second window. If not all tables are processed in time, the verification continues in the next 5-second cycle. If the copy process to the StorageLocation succeeds, meaning the connection was reestablished, the copied rows are removed from the temporary SQLite cache. If the temporary SQLite database is empty after the process, it is deleted.
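The synchronization cycle can be sketched as follows. This is a hypothetical helper, not the platform's implementation: each cycle forwards at most one block of 250 buffered rows and removes them from the local cache only when the copy succeeds.

```python
# Hypothetical sketch of one Store and Forward synchronization cycle.

BLOCK_SIZE = 250  # maximum rows copied per cycle, as described above

class FakeTarget:
    """Stand-in for the Storage Location, for demonstration only."""
    def __init__(self):
        self.rows = []
    def insert_many(self, rows):
        self.rows.extend(rows)

def sync_cycle(local_rows, target, block_size=BLOCK_SIZE):
    """Forward one block of buffered rows; return the rows left."""
    block = local_rows[:block_size]
    if not block:
        return local_rows           # nothing to synchronize
    try:
        target.insert_many(block)   # copy to the Storage Location
    except ConnectionError:
        return local_rows           # target still unavailable; keep cache
    return local_rows[block_size:]  # drop forwarded rows from the cache
```

For example, with 600 buffered rows, one successful cycle forwards 250 rows and leaves 350 in the local cache for later cycles.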

In applications with a high volume of data and several tables to synchronize, data availability in the StorageLocation (the external database) may take some time. The synchronization speed depends on the insertion performance of both the main database and the local SQLite database. In most applications, the Store and Forward synchronization process takes up to 1 second per table.

Due to these possible synchronization restrictions, consider the following points when choosing the database system for your solution:

  • For large projects with significant data volumes, it's recommended to use robust databases like SQL Server or Oracle for better performance.
  • SQLite has a 10 GB limit and limited performance, making it suitable only for smaller data models. The Keep a Local Copy feature works well for projects that don't require immediate synchronization, especially if the main database experiences occasional unavailability due to other projects or third-party software usage.
