
Overview

This page describes the Historian module's archiving process. You'll find information regarding how the archiving process works, available triggers, and the options for archiving data.




Archiving Process Overview

The Historian module's process to archive data is composed of three steps:

  1. An event triggers the request to archive a group of values. You configure two types of events (Trigger or Tag Change) when creating a HistorianTable.
  2. The Historian archives the values in the Storage Location after the trigger. You can use SQL databases or a Tag Provider when configuring the Storage Location.
  3. If you enable the Store and Forward feature, the system executes the data synchronization. This option stores data in a local database if the configured database is unavailable and sends it to the target when it becomes available.

In the following sections, you'll find additional details about each step.


Triggering Events

In the platform, two possible actions can initiate the archiving process. You can configure a Trigger based on a Tag or always save the Tags' value changes using the Save on Change option.

Trigger

You have three options to define as triggers in the Historian module:

  • A Tag value.
  • A Tag property.
  • Any object from the runtime namespace, such as Server.minute.

Whenever there's a change in the object's value, it creates an archive request event.



Triggers are limited to Tags falling under the domain of the Server or objects situated in server-side namespaces to ensure compatibility with the Historian process. This restriction exists because the Historian process operates exclusively on the Server computer.

You can choose one Trigger for each HistorianTable. When the trigger happens, all current values of Tags and objects connected to that HistorianTable are archived, regardless of whether or not they have a new value.

Save On Change

When creating or editing a HistorianTable, you set the Save On Change option as the Trigger.

When you enable Save On Change, the Historian module continuously verifies all Tags connected to each HistorianTable. As a Tag changes its value, an archive request event is generated for the Tag whose value changed. Depending on the Storage Location and table schema, either only that Tag or all Tags in the group are archived; the following sections describe this behavior.
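For illustration only, the following Python sketch models the difference between the two event types; the class and function names (HistorianGroup, archive_request) are hypothetical and not the platform's API. A Trigger event requests all Tags in the HistorianTable group, while a Save On Change event requests only the Tag whose value changed.

```python
# Minimal sketch of the two archiving events (hypothetical names, not the platform's API).
from datetime import datetime, timezone

class HistorianGroup:
    def __init__(self, tags):
        self.tags = tags  # current values of all Tags in this HistorianTable

    def on_trigger(self):
        # Trigger event: every Tag in the group is requested, changed or not.
        return self.archive_request(dict(self.tags), source="Trigger")

    def on_tag_change(self, tag_name, new_value):
        # Save On Change event: only the Tag that changed is requested.
        self.tags[tag_name] = new_value
        return self.archive_request({tag_name: new_value}, source="TagChange")

    def archive_request(self, values, source):
        # The request carries one timestamp shared by all values it contains.
        return {"timestamp": datetime.now(timezone.utc), "source": source, "values": values}

group = HistorianGroup({"Temperature": 21.5, "Pressure": 1.2})
print(group.on_trigger())                    # archives Temperature and Pressure
print(group.on_tag_change("Pressure", 1.3))  # archives only Pressure
```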


Archiving Data

After the archive request is created, the system checks how the data will be stored, depending on the StorageLocation of the current HistorianTable. You configure this option when creating the HistorianTable.

Archiving the data differs depending on whether you use a SQL database or a TagProvider as a Historian.

Archiving to SQL database (Tag Historian)

The Datasets module has a predefined database object named TagHistorian (in Datasets / DBs). By default, a SQLite database is used, but you can choose other databases. Access the HistorianTables documentation to learn how to do it.

When archiving to the SQL database defined by the TagHistorian object, you can choose between the Standard and Normalized table schemas.

Standard Tables

If you use standard tables, both the Trigger and Tag Change events result in a single additional row in the database. Each column in the table corresponds to a Tag in the HistorianTable group, ensuring that all Tags in that group receive an entry, even if only one Tag has a new value.

The row's timestamp is determined by the timestamp of the Trigger object when the archive event is triggered, or by the timestamp of the Tag that initiated the archive request in the case of OnTagChange events. All the Tags listed in the associated HistorianTable are stored, independent of whether they have new values, sharing a single timestamp as defined earlier. In the case of OnTagChange events involving multiple Tag value changes, a single row is inserted with all Tags in the group, utilizing the timestamp of the Tag that triggered the event.


Info
Avoid exponential database growth

To prevent rapid database growth, you can use the Time Deadband configuration to ensure that a new row is not created every time a Tag's value changes. The system will not archive a new Tag value until the deadband time has elapsed. After the deadband, the new row is generated using the timestamp of the last event.
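The following Python sketch illustrates the deadband rule under stated assumptions (a 5-second deadband and a hypothetical on_value_change helper; the real setting lives in the HistorianTable configuration): changes arriving faster than the deadband do not create a new row, and the next row uses the timestamp of the last event.

```python
# Illustrative Time Deadband check (hypothetical helper, not the platform's API).
from datetime import datetime, timedelta, timezone

DEADBAND = timedelta(seconds=5)      # example deadband of 5 seconds (assumption)
_last_row_time = None                # timestamp of the last archived row
_last_event_time = None              # timestamp of the most recent value change

def on_value_change(event_time: datetime) -> bool:
    """Return True when a new row may be created for this change."""
    global _last_row_time, _last_event_time
    _last_event_time = event_time
    if _last_row_time is None or event_time - _last_row_time >= DEADBAND:
        # After the deadband elapses, the new row uses the last event's timestamp.
        _last_row_time = _last_event_time
        return True
    return False                     # change arrived too soon; no new row yet

now = datetime.now(timezone.utc)
print(on_value_change(now))                          # True, first row
print(on_value_change(now + timedelta(seconds=2)))   # False, inside the deadband
print(on_value_change(now + timedelta(seconds=6)))   # True, deadband elapsed
```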


Standard Tables Schema

The following list describes all existing columns of a Standard SQL Table:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference within the system.
  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): Date and time in Universal Time for that row, represented in 64-bit .NET ticks. The value is based on the number of 100-nanosecond intervals that have elapsed since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework standard.
  • LogType (TinyInt, 1 Byte): Auxiliary column indicating the insertion event: 0 = startup, 1 = normal logging, 2 = shutdown.
  • NotSync (Int, 4 Bytes): Auxiliary column showing whether the data was synchronized when the Redundancy option is enabled. See Deploying Redundant Systems.
  • TagName (Float, 8 Bytes): Automatically generated column with the Tag name as the title, storing data values using double precision.
  • _TagName_Q (Float, 8 Bytes): Automatically generated column for the data quality of each Tag, following the OPC quality specification.


You can usually assign up to 200 Tags to each HistorianTable. However, the exact number can vary depending on how many columns your target database can accommodate. As a best practice, define Tags in the same table when they have similar storing rates and process dynamics, because the entire row must be saved in order to save a Tag in the table.
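As a hedged illustration of this schema, the sqlite3 sketch below creates a Standard table for a hypothetical HistorianTable with two Tags (Temperature and Pressure) and computes UTCTimeStamp_Ticks as .NET ticks; the table and Tag names are assumptions, not values created by the platform.

```python
# Illustrative Standard table for a HistorianTable with two Tags (assumed names).
import sqlite3
from datetime import datetime, timezone

# .NET ticks: 100-nanosecond intervals since 12:00 A.M., January 1, 0001 (UTC).
def to_net_ticks(dt: datetime) -> int:
    epoch = datetime(1, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds() * 10_000_000)

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE HistorianTable1 (
        ID                 INTEGER PRIMARY KEY,   -- BigInt primary key
        UTCTimeStamp_Ticks INTEGER,               -- 64-bit .NET ticks
        LogType            INTEGER,               -- 0=startup, 1=normal logging, 2=shutdown
        NotSync            INTEGER,               -- redundancy synchronization flag
        Temperature        REAL,  _Temperature_Q  REAL,
        Pressure           REAL,  _Pressure_Q     REAL
    )
""")

# One Trigger or Tag Change event inserts a single row covering the whole group.
conn.execute(
    "INSERT INTO HistorianTable1 "
    "(UTCTimeStamp_Ticks, LogType, NotSync, Temperature, _Temperature_Q, Pressure, _Pressure_Q) "
    "VALUES (?, 1, 0, 21.5, 192, 1.2, 192)",   # 192 = OPC 'good' quality
    (to_net_ticks(datetime.now(timezone.utc)),),
)
print(conn.execute("SELECT * FROM HistorianTable1").fetchall())
```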

Normalized Tables

Normalized tables archive data only after an On Tag Change event. If you check the Normalized feature when creating or editing the HistorianTable, the Trigger option is disabled in the HistorianTable configuration.

In this table schema, each row contains only the TimeStamp, the ID, and the Value of the Tag that generated the archive event.

Normalized Tables Schema

The first table, TagsDictionary, contains the following columns:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference within the system.
  • TagName (NVarchar): The name of each Tag configured for normalized storage on the Historian.
  • NotSync (Integer, 4 Bytes): Not used in this release. It was created for future changes and new features.

The system automatically creates four more tables, as follows:

  • TableName_BIT
  • TableName_FLOAT
  • TableName_NTEXT
  • TableName_REAL

The following list describes the schema used by the created tables:

  • ID (BigInt, 8 Bytes): The primary key of the table, used as a reference by the system.
  • UTCTimeStamp_Ticks (BigInt, 8 Bytes): The date and time in Universal Time for each row, expressed in 64-bit .NET ticks. The value represents the number of 100-nanosecond intervals that have elapsed since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework's time standard.
  • ObjIndex (FK, Integer, 4 Bytes): Foreign key referencing the ID column in the TagsDictionary table, establishing the relationship between the value and its Tag.
  • ObjValue (Bit, Float, NText, or Real, depending on the table): The value of the Tag at the specified timestamp. The data type varies based on the associated table.
  • ObjQuality (TinyInt, 1 Byte): Indicates the quality of the Tag at the specified time, based on the OPC quality specification.
  • NotSync (Int, 4 Bytes): Currently not utilized in this release. Reserved for future changes and new features.
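For illustration, the sqlite3 sketch below creates the TagsDictionary table and one of the typed value tables (TableName_FLOAT) and joins them to read a value back; the table and Tag names are assumptions used only to show the relationship between the columns.

```python
# Illustrative Normalized schema: TagsDictionary plus one typed value table (assumed names).
import sqlite3
from datetime import datetime, timezone

def to_net_ticks(dt):
    # 100-nanosecond intervals since January 1, 0001 (UTC), as used by .NET.
    return int((dt - datetime(1, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10_000_000)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TagsDictionary (
        ID      INTEGER PRIMARY KEY,   -- BigInt primary key
        TagName TEXT,                  -- NVarchar tag name
        NotSync INTEGER                -- reserved for future use
    );
    CREATE TABLE TableName_FLOAT (
        ID                 INTEGER PRIMARY KEY,
        UTCTimeStamp_Ticks INTEGER,
        ObjIndex           INTEGER REFERENCES TagsDictionary(ID),  -- FK to the dictionary
        ObjValue           REAL,      -- value of the Tag that changed
        ObjQuality         INTEGER,   -- OPC quality
        NotSync            INTEGER
    );
""")

conn.execute("INSERT INTO TagsDictionary (ID, TagName, NotSync) VALUES (1, 'Temperature', 0)")
conn.execute(
    "INSERT INTO TableName_FLOAT (UTCTimeStamp_Ticks, ObjIndex, ObjValue, ObjQuality, NotSync) "
    "VALUES (?, 1, 21.5, 192, 0)",
    (to_net_ticks(datetime.now(timezone.utc)),),
)
rows = conn.execute(
    "SELECT d.TagName, f.UTCTimeStamp_Ticks, f.ObjValue "
    "FROM TableName_FLOAT f JOIN TagsDictionary d ON f.ObjIndex = d.ID"
).fetchall()
print(rows)
```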


Info

It is not possible to synchronize a normalized database using the Redundancy option.

Archiving Externally Using a TagProvider

When archiving data externally using a TagProvider, the external system defines the schemas. It determines the structural organization, naming conventions, and other specific settings.

You need to specify the Protocol to add a new Storage Location using a TagProvider. The Protocol is an intermediary between the solution you build with the platform and the external data historian systems. It interprets and translates data formats, protocols, and other communication specifics to ensure seamless data archiving and retrieval. Currently, the platform provides three protocol options to connect using TagProviders:

  • CanaryLabs: A robust data historian system that's optimized for real-time data collection and analytics. When archiving to CanaryLabs, the data is stored in a highly compressed format that facilitates faster retrieval and analytics.

  • InfluxDB: An open-source time-series database designed for high availability and real-time analytics. InfluxDB is particularly useful when working with large sets of time-series data where timely data retrieval is of the essence.

  • GE Proficy: A comprehensive platform that provides real-time data collection and advanced analytics capabilities. GE Proficy is a scalable system that integrates and analyzes vast amounts of industrial data.

You can use the Store and Forward feature when configuring a new StorageLocation using a TagProvider.


Using Store and Forward

The Store and Forward feature ensures you will not lose data if the system can't connect with the external database.

When you define a StorageLocation using a TagProvider and disable Store and Forward, the archive request events are sent directly to the external database as the events occur, independent of an existing working connection. A built-in protection exists for SQL Tag Historian targets with Normalized tables: in this case, new rows are buffered and inserted into the database every five seconds.

Store and Forward Process

When the Historian module receives an archive request, it tries to store the data in the Storage Location. If unsuccessful, it stores the data in a local database, automatically created using SQLite. After an unsuccessful attempt, the Historian module attempts to copy data from the local SQLite database (the rows inserted while the target database was inaccessible) to the Target Database every 5 seconds, in maximum blocks of 250 rows.

All HistorianTables are verified within a 4-second window. If not all tables are processed in time, the verification continues in the next 5-second cycle. If the copy process to the StorageLocation succeeds, meaning the connection was reestablished, the copied rows are removed from the temporary SQLite cache. If the temporary SQLite database is empty after the process, the database file itself is deleted.

When an application queries data and the Target Database is not available, the system searches the temporary SQLite database for the data.

This is a summary of the steps to execute the database synchronization:

  • The temporary local SQLite database is accessed, checking all tables for the NotSync column flag (not-synchronized rows), with a select limit of 250.
  • The result of the Select query (up to 250 rows) is inserted into the Target Database.
  • After successful completion of the insert into the Target Database, the rows are deleted from the local SQLite cache.
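The following Python sketch is a simplified model of one synchronization cycle, under stated assumptions: a local SQLite cache with a single table named HistorianTable1, a NotSync flag marking rows still to be sent, and a stand-in insert_into_target function in place of the real Target Database. The real module also spreads the verification of all HistorianTables across the 4-second window described above.

```python
# Simplified Store and Forward synchronization cycle (illustrative only).
# Assumptions: local SQLite cache table 'HistorianTable1' with a NotSync flag,
# and insert_into_target() standing in for the Target Database insert.
import sqlite3

BLOCK_SIZE = 250      # maximum rows copied per cycle
CYCLE_SECONDS = 5     # the module retries every 5 seconds

def insert_into_target(rows):
    # Stand-in for the insert into the Target Database; a real implementation
    # would raise if the connection were still unavailable.
    print(f"forwarded {len(rows)} row(s) to the Target Database")

def sync_once(local: sqlite3.Connection) -> None:
    # 1) Select up to 250 not-synchronized rows from the local cache.
    rows = local.execute(
        "SELECT rowid, * FROM HistorianTable1 WHERE NotSync = 1 LIMIT ?",
        (BLOCK_SIZE,),
    ).fetchall()
    if not rows:
        return
    # 2) Insert the block into the Target Database.
    insert_into_target(rows)
    # 3) On success, delete the copied rows from the local SQLite cache.
    local.executemany(
        "DELETE FROM HistorianTable1 WHERE rowid = ?", [(r[0],) for r in rows]
    )
    local.commit()

# Minimal demonstration with one buffered row.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE HistorianTable1 (UTCTimeStamp_Ticks INTEGER, Value REAL, NotSync INTEGER)")
local.execute("INSERT INTO HistorianTable1 VALUES (638000000000000000, 21.5, 1)")
sync_once(local)   # forwards the row and removes it from the cache
# A real process would repeat sync_once(local) every CYCLE_SECONDS seconds.
```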

In applications with a high volume of data and several tables to be synchronized, the data availability in the StorageLocation (external database) may take some time. The synchronization speed depends on the insertion performance of both the main database and the local SQLite database. In most applications, the Store and Forward synchronization process takes up to 1 second per table.

Due to the possible synchronization restrictions, it's essential to consider the following points when deciding which database system to use in your solution:

  • For large projects with significant data volumes, it's recommended to use robust databases like SQL Server or Oracle for better performance.
  • SQLite has a 10 GB limit and limited performance, making it suitable for smaller data models. The Keep a Local Copy feature works well for projects not requiring immediate synchronization, especially if the main database experiences occasional unavailability due to other projects or third-party software usage.
