Search Results
- PLC Shift MQTT Implementation Details
In this blog post we take a closer look at how PLC Shift uses MQTT and Sparkplug B. We'll start with some MQTT and Sparkplug B basics and then look at how the node and device lifecycle works. After that, we'll discuss publishing and subscribing to tag data. We'll finish up by looking at how PLC Shift uses the DRECORD functionality in the Cirrus Link Recorder module to export record-based data, like flow computer history.

MQTT

MQTT is an open messaging protocol that is lightweight and flexible. By "lightweight" we mean that it's easily implemented and works on low-cost, low-performance devices, and by "flexible" we mean that because the MQTT protocol itself has no strict requirements for the format of the payload, it's suitable for moving many different types of data. MQTT messages always have a topic, which is a string, and a payload, which can be anything from a string, to JSON, to binary data.

MQTT is fundamentally a publish/subscribe protocol. Clients publish messages to a broker and have no idea about subscribers that may be listening. In the same way, clients only ever subscribe to messages from the broker and do not communicate with publishing clients directly. A broker is required, and using a broker decouples publishers and subscribers so that they need no knowledge of one another.

Because the payload can be anything, MQTT is extremely flexible, and it's easy to get started with your own messaging scheme. However, this flexibility means that with MQTT alone, there is no real interoperability between products that weren't designed to use the same messaging scheme. This is where Sparkplug B comes in. Sparkplug B is a binary serialization mechanism that formats messages in a known way. This allows products that use MQTT and the Sparkplug B serialization format to communicate with one another.

Sparkplug B Payloads

Sparkplug B is built on top of Google's extremely popular protocol buffers technology. Because of its popularity, serialization and deserialization libraries for protobufs are available in pretty much any programming language imaginable. PLC Shift apps are written in C#, so we use Google's Grpc.Tools library. The library compiles a .proto file, which describes the structure of the serialized data, into C# code. The .proto file for Sparkplug B is available on GitHub.

The proto compiler takes a message definition, like the Metrics message type below, and turns it into a C# class definition. We populate an instance of the metrics class with the data that we want to send, and the protobufs library then turns that instance into a sequence of bytes that we can transmit over TCP/IP, MQTT, or some other transport mechanism. On the receiving side, the protobufs library takes the sequence of bytes that was transmitted and turns it back into data. The receiver can be implemented in some other programming language, as long as that language has protobufs support. To be clear, it's not possible to send a Metrics message directly; this is merely an example to illustrate how protobufs work. All Sparkplug B messages are a serialized "Payload" message. This message type can contain multiple Metrics as well as other nested data types.
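As a rough illustration of that round trip, here is a minimal C# sketch. This is not PLC Shift's actual code: the class and property names (Payload, Payload.Types.Metric, DoubleValue) are what the Google.Protobuf code generator typically produces for the Sparkplug B .proto, and the exact namespace depends on how the file is compiled, so treat them as assumptions.

// Sketch only: build a Sparkplug B payload with one metric, serialize it to bytes
// (which become the MQTT message payload), and parse it back on the other side.
using Google.Protobuf;

var payload = new Payload
{
    Timestamp = (ulong)DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
    Seq = 0
};

payload.Metrics.Add(new Payload.Types.Metric
{
    Name = "Inputs/dp",
    Alias = 0x0011000A,   // arbitrary example alias; see the alias scheme below
    Timestamp = payload.Timestamp,
    Datatype = 10,        // Sparkplug B data type code for Double
    DoubleValue = 25.3
});

byte[] bytes = payload.ToByteArray();          // publish these bytes via MQTT
Payload decoded = Payload.Parser.ParseFrom(bytes);   // receiving side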
Sparkplug B Node and Device Lifecycle

Sparkplug B is not just a serialization and deserialization mechanism, however. It also brings some statefulness to MQTT communications. Sparkplug B has the concepts of nodes and devices. A node is analogous to an edge computer, and a device is analogous to a physical sensor, RTU, or some other standalone object (I'm trying hard not to use the word "device" here!). What's important to understand is that devices belong to nodes, and that nodes are the top-level object in the hierarchy. One node owns multiple devices. In PLC Shift, a device, which owns apps, is equivalent to a Sparkplug B node, and the apps under that device are equivalent to Sparkplug B devices. The naming here is unfortunate, but there are only two levels of hierarchy, so the complexity is manageable.

Complete information on the lifecycle of nodes and devices can be found in the official Sparkplug specification. What follows is a summary of the behavior, and some details are omitted.

When a node first comes online and connects to a broker, it sends an NBIRTH message. This message contains a mix of static and variable information about all the node-level metrics. Static information includes things like the name of the metric, its 64-bit alias, units, and range. Variable information includes the value of the metric, its quality, and other information that will change during the time that the node is connected. Changing the static information requires a new NBIRTH to be issued. The Sparkplug B aware host keeps a map of metric names to aliases, and further updates of metrics only require the alias. This means that we don't need to send the metric name with every transmission, which is good, but it also makes it a bit harder to get those metric aliases as a subscriber unless you receive the BIRTH message or use some other mechanism.

Aliases must be unique across all metrics that the node owns, so this constraint also applies to device-level metrics: no two device-level metrics that are owned by the same node can have the same alias. For PLC Shift specifically, where each device (aka Sparkplug B node) can own multiples of the same app, and thus the same tag IDs at the app level, we use a scheme whereby the top 32 bits of the metric alias come from a value that is unique per app, and the bottom 32 bits come from the specific parameter's ID. Each parameter in a PLC Shift app has an ID that is unique and fixed for the app. Each app parameter in PLC Shift that has its "Publish to MQTT" option selected is mapped to a Sparkplug B metric.

After a node issues an NBIRTH message, it issues DBIRTH messages on behalf of all the devices that it owns. A DBIRTH message is very similar to an NBIRTH message and contains a list of metrics, with each metric having a mix of static and variable information. Just as with a node, changes to the static information require a new DBIRTH message to be issued. For PLC Shift, the device issues DBIRTHs for all the apps that have tags that are published via MQTT.

To update node metric values, a node sends NDATA messages. To update device metric values, the node sends DDATA messages on behalf of the connected devices. When a device is ready to go offline, it issues a DDEATH message. This allows the Sparkplug B host to release resources owned by that device and otherwise clean up. When a node is ready to go offline, it first issues DDEATH messages for all the connected devices, and then issues its own NDEATH message.
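Before moving on, here is a small sketch of the 64-bit alias scheme described above. The helper names and the way the per-app value is derived are illustrative assumptions; PLC Shift's actual implementation isn't shown in this post.

// Sketch: top 32 bits identify the app instance (derived from its unique name),
// bottom 32 bits are the parameter's fixed ID.
static ulong BuildMetricAlias(uint appNameHash, uint parameterId)
{
    return ((ulong)appNameHash << 32) | parameterId;
}

// Recovering the parameter ID on the receiving side:
static uint ParameterIdFromAlias(ulong alias) => (uint)(alias & 0xFFFFFFFF);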
Devices and nodes may not always go offline cleanly, however, such as when there's a communications outage or power failure. In this case, the MQTT Last Will and Testament feature can be used to indicate that a node and all its devices went offline in an ungraceful fashion.

For PLC Shift, when a new app is added to the device, a DBIRTH is issued for just that app, assuming that the app has some data that is published via MQTT. When an app is deleted, the device issues a DDEATH for the app. When the app is updated and some static information changes, like the units for a parameter or the number of parameters that are published, a DDEATH is issued followed by a DBIRTH with the updated information.

Sparkplug B Topics

Publishing

In the previous section, we discussed the Sparkplug B node and device lifecycle. When we say that a node sends an NBIRTH message, or a DDATA message, what we're really saying is that the PLC Shift runtime sends a Sparkplug B encoded payload to a specific topic. Topics are strings that the broker uses to decide what type of message it's receiving and thus what to do with the message. From the Sparkplug B specification, NBIRTH messages should be published on a topic that looks like:

namespace/group_id/NBIRTH/edge_node_id

The namespace for Sparkplug B is always "spBv1.0". For PLC Shift, the group_id is configurable at the PLC Shift device level using the Export Settings->MQTT Group Name configuration setting and has a default value of "plc-shift". NBIRTH indicates the type of Sparkplug B message. The spec has a list of all the legal values here. For PLC Shift, the edge_node_id is either the Device Name or the Device Export Name. The Device Name is used if the Device Export Name is empty. Note that both of these are cleaned of characters that are illegal in MQTT topic names.

From the Sparkplug B specification, DDATA messages should be published on a topic that looks like:

namespace/group_id/DDATA/edge_node_id/device_id

The beginning of this topic is very similar to the NBIRTH topic, but it has one extra field: device_id. For PLC Shift apps, this is either the app's name (User Configured Name) or the App Export Name. Both of these are configurable at the app level. The User Configured Name is used when the App Export Name is empty.

Subscribing

Up to now, we've shown how to publish data from nodes or devices to the broker using MQTT and Sparkplug B. To subscribe to changes to metrics from the host to the node, or from the host to a device, we subscribe to *CMD topics. Specifically:

namespace/group_id/NCMD/edge_node_id
namespace/group_id/DCMD/edge_node_id/device_id

The NCMD topic is used to subscribe to node-level metrics and the DCMD topic is used to subscribe to device-level metrics. The values in the topics are the same as described above. In the PLC Shift runtime, when a message is received on one of those topics, the Sparkplug B payload is deserialized and the new value is processed. Only PLC Shift parameters that have their "Subscribe MQTT" option set will be updated. The metric in the payload is matched to the parameter using the alias if the alias is non-zero, or by the parameter name if the alias is 0. See our PLC Shift Apps - Cloud Deployment blog post for more details on how to update parameters in apps using Node-RED.
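To make the topic layout concrete, here is a small C# sketch. The helper class and the example names are illustrative, not PLC Shift code.

// Sketch of how the Sparkplug B topics described above are assembled.
static class SparkplugTopics
{
    const string Namespace = "spBv1.0";

    public static string NodeTopic(string messageType, string groupId, string edgeNodeId) =>
        $"{Namespace}/{groupId}/{messageType}/{edgeNodeId}";

    public static string DeviceTopic(string messageType, string groupId, string edgeNodeId, string deviceId) =>
        $"{Namespace}/{groupId}/{messageType}/{edgeNodeId}/{deviceId}";
}

// Example results (illustrative names):
// NodeTopic("NBIRTH", "plc-shift", "edge")            -> spBv1.0/plc-shift/NBIRTH/edge
// DeviceTopic("DDATA", "plc-shift", "edge", "gf-01")  -> spBv1.0/plc-shift/DDATA/edge/gf-01
// DeviceTopic("DCMD",  "plc-shift", "edge", "gf-01")  -> spBv1.0/plc-shift/DCMD/edge/gf-01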
Data Records

PLC Shift apps can send tabular, record-based data using MQTT and Sparkplug B. At the time this blog was written, this is not yet part of the official spec, and is only supported by the Cirrus Link Recorder module. However, this is a very powerful feature that expands the capabilities of Sparkplug B from working with just streaming data to also being able to handle record-based data. Record-based data typically has a single timestamp and then multiple columns for each value that occurred at that time. Cirrus Link explains how this works in their Recorder application note.

The idea is fairly straightforward, and is very similar to publishing streaming data. A DRECORD payload consists of a list of metrics. A JSON example is below. The serialized payload is published to a topic that looks like:

spBv1.0/group_id/DRECORD/edge_node_id/device_id

Specifically for PLC Shift, an app may be configured to publish record data but not streaming data, so the mechanism to publish records is as follows:

- When record data is ready, the node publishes an NBIRTH using the device name with _RECORDS appended. This creates a unique node topic. No metrics are required in this NBIRTH message, because no streaming values will be sent.
- For each app that has record data, the node publishes a DBIRTH using the app name with _RECORDS appended. This creates a unique device topic. No metrics are required in this DBIRTH message, because no streaming values will be sent.
- The node publishes serialized record payloads to the DRECORD topic as required, until there are no more records left to upload.
- The node issues a DDEATH for the records device.
- The node issues an NDEATH for the records node.

This mechanism allows record-based data to be uploaded in the background without interfering with streaming data upload. Streaming data can be sent immediately on change and doesn't get held up because records are being uploaded. Code snippets below show how a record and a single column value are encoded by PLC Shift in C#.

Conclusion

MQTT is a lightweight and flexible communications protocol that implements a publish/subscribe model. Sparkplug B is a serialization mechanism that leverages Protocol Buffers technology to bring interoperability to the MQTT protocol. Sparkplug B isn't just a serialization mechanism, however; it also brings some statefulness to the MQTT protocol through the birth and death mechanism. MQTT and Sparkplug B have recently been extended to allow for transmission of record-based data. The combination of MQTT and Sparkplug B is an excellent way to publish data to external systems. PLC Shift has a great MQTT implementation that allows for the publishing of high-resolution contextualized data with just a few configuration settings.
- Reliability of PLC Shift Apps
PLC Shift applications are written in the C# programming language and use Microsoft's .NET runtime technology. This means that the exact same code can be used to target any operating system where .NET runs, including Microsoft Windows and many common Linux distributions. It also means that the same code can be used on different processor architectures, including ARM and x86. With typical embedded systems and programs, code is written in the C or C++ programming languages, and it's usually difficult to run and test the program except on the specific hardware that it's written for. This is also true for PLC programs written in IEC 61131 languages. Additionally, C# and .NET have sophisticated automated testing frameworks. In PLC programming languages this type of functionality is simply not available, or testing is a manual process of forcing some set of variables and logging the outputs. In other words, using .NET makes it easier to run our applications anywhere, which makes it far easier to implement automated testing for our applications, which ultimately results in high reliability.

Automated Testing

For every application that we release, we have also written a corresponding series of tests. Because all of our applications use our automation framework, we have built software that scripts the process of setting input variables and configuration parameters, running the app, and then validating that the outputs are exactly what we expected. We run these tests as fast as possible, which means that it's possible to test how an app runs over many days in a few seconds or minutes. Automated testing is especially valuable when we add features or make changes to an existing application. We can be assured that the changes we made did not break any existing functionality. A summary of the test suite for the PLC Shift Gas Flow app at version 1.4 is shown below. There are 66 tests in total, and the entire test suite takes about 2.5 minutes to execute.

Test Script Overview

We use what we call a test script to actually run tests. A script is really just more code, but it allows us to do a few different things, including:

- Set a test to run for a specific amount of time. This is the simulated time that a test runs for, not the actual time that it takes to run the test.
- Set inputs to a specific value at a specific time.
- Print values from the app to the console.
- Verify that the values generated by the app match the expected values. If the values don't match, stop the test and indicate a failure.

The code below shows an example of a script for the PLC Shift Plunger Lift app. In this case, we are testing that the app correctly detects a normal plunger arrival. The script does the following things:

- Set the test to run for 15 minutes.
- Set the value of Flow Conditions->Off Time and Shut In Conditions->On Time to 240s. Also, set the value of Plunger Condition->Plunger Exists to true. All other values for this app remain at their default values.
- Set the casing, tubing, and line pressure values as shown. This isn't required for this test.
- Set up asserts to verify that the control states are as expected at 2 and 242 seconds after the test is started.
- Increment the plunger arrival counts at 592 seconds into the test.
- Set up asserts to verify that the Plunger Arrived tag is set and the plunger arrival counts match the expected counts. These are all checked at 593 seconds into the test.

The last line actually executes the script against the app runtime code. Any errors stop execution.
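The original script isn't reproduced in this extract, so here is a minimal C# sketch of what a scripted test of that shape could look like. The AppTestScript type, its method names, the tag paths, the pressure values, and the control state names are all illustrative stand-ins, not the PLC Shift test framework.

// Illustrative only: a tiny stand-in "test script" API showing the shape of the
// plunger lift test described above (set simulated duration, schedule inputs,
// assert expected outputs at specific simulated times, then run).
using System;
using System.Collections.Generic;

var test = new AppTestScript { Duration = TimeSpan.FromMinutes(15) };  // simulated time, not wall-clock

// Configuration for this scenario; everything else stays at defaults (tag paths illustrative).
test.SetInput("Flow Conditions/Off Time", 240, atSeconds: 0);
test.SetInput("Shut In Conditions/On Time", 240, atSeconds: 0);
test.SetInput("Plunger Condition/Plunger Exists", true, atSeconds: 0);

// Casing, tubing, and line pressures (values made up; not required for this test).
test.SetInput("Inputs/Casing Pressure", 500.0, atSeconds: 0);
test.SetInput("Inputs/Tubing Pressure", 300.0, atSeconds: 0);
test.SetInput("Inputs/Line Pressure", 150.0, atSeconds: 0);

// Expected control states at 2 s and 242 s (state names are illustrative).
test.AssertEqual("Status/Control State", "Shut In", atSeconds: 2);
test.AssertEqual("Status/Control State", "Flowing", atSeconds: 242);

// Simulate a plunger arrival at 592 s and verify the app detects it at 593 s.
test.SetInput("Inputs/Plunger Arrival Count", 1, atSeconds: 592);
test.AssertEqual("Status/Plunger Arrived", true, atSeconds: 593);
test.AssertEqual("Status/Plunger Arrival Count", 1, atSeconds: 593);

test.Run();   // in a real framework, any failed assert would stop the test here

record ScriptStep(int AtSeconds, string Tag, object Value, bool IsAssert);

class AppTestScript
{
    private readonly List<ScriptStep> _steps = new();
    public TimeSpan Duration { get; set; }

    public void SetInput(string tag, object value, int atSeconds) =>
        _steps.Add(new ScriptStep(atSeconds, tag, value, IsAssert: false));

    public void AssertEqual(string tag, object expected, int atSeconds) =>
        _steps.Add(new ScriptStep(atSeconds, tag, expected, IsAssert: true));

    public void Run()
    {
        // A real framework would step the app's run loop through the simulated
        // duration, applying inputs and checking asserts; this stub just lists the steps.
        foreach (var step in _steps)
            Console.WriteLine($"{step.AtSeconds,5}s {(step.IsAssert ? "ASSERT" : "SET")} {step.Tag} = {step.Value}");
    }
}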
Although the test is set up to run for 15 minutes, this test takes just under 1 second to run on our development computer. The output generated by this test is shown below. This is almost the same information that you would see in the application's diagnostic log on a live system. Looking at the timestamps, the app thinks that the test started at 19:44 and ran until 19:59, which matches our 15-minute run time, even though the test only actually took 1 second to run.

Gas Flow Contract Hour

Automated testing is very flexible and powerful. In the test script snippet below, the Contract Hour parameter is generated randomly for each test run. Once we know what the contract hour is, the test can calculate the time from the start of the test to the next contract day, and then validate that the accumulated volumes are correct at the end of the contract day and that the accumulated volumes correctly reset to 0 at the start of the new contract day. Using random values allows us to really exercise the system over a wide variety of use cases.

Gas Flow Calculation Accuracy

The Alberta Energy Regulator (AER) publishes a set of guidelines for electronic flow measurement known as AER Directive 17. Section 4.3.6.2 has various test cases that electronic flow measurement devices must meet. We use our automated test framework to verify that our calculated measurements are well within the allowable range for all 15 test cases. The results are summarized below. Let us know if you want the configurations so that you can run your own tests. In all cases, the allowable deviation is +/-0.25%. The deviation is much less than this in all cases. Test case 6 with an upstream tap has the worst deviation, at about 0.004%. This is still more than 60 times better than the allowable deviation of 0.25%.

Conclusion

PLC Shift applications use modern software development methodologies to ensure high reliability. We can test various operating modes, randomly generate inputs, do long-term testing, and really exercise our apps automatically. As a customer, you can be assured of quality right from the start.
- The Case for Apps in Automation
What are apps?

We use the term "apps", which is short for "applications", to refer to standardized and configurable programs that integrate with programmable logic controllers (PLCs), remote terminal units (RTUs), or other real-time controllers. Reverity's PLC Shift apps run on a Linux-based computer and synchronize tag data with the controller once per second using real-time industrial protocols like Modbus, EtherNet/IP, and others. Apps extend the functionality of the controller; they do not replace the controller. The controller is still responsible for process safety, communicating with IO, and other functions that PLCs are good at. The figure below shows a basic PLC Shift system. A more sophisticated architecture can be found on our website.

Why do we need apps?

PLCs and RTUs are programmable devices, which means that they are inherently very flexible. However, they are not programmed using general-purpose programming languages, but rather using IEC 61131-3 languages. These languages are very well suited to real-time control, but they are not suitable for general-purpose programming. Additionally, PLCs are designed to provide guarantees around the time it takes for a program to execute. This makes sense for hard real-time applications like motion control, but the paradigm is wrong for general-purpose computing, like pulling power pricing from a 3rd party like OpenADR using a RESTful interface, which may take many seconds to execute. The advantages of each type are listed below.

PLC Advantages
- Fully programmable for great flexibility
- Deterministic hard real-time control
- Hardened IO
- Safety rated
- High reliability, including redundant architectures

App Advantages
- Standardized solutions to common problems
- Math-heavy algorithms and optimization
- Make app configuration changes without modifying PLC logic
- Vendor supportable
- Scalable and flexible deployment

Modern automation systems need to do many things outside of real-time control. In addition to communicating with external APIs as mentioned previously, functionality like recipe management, batching, data logging, machine learning, optimization, and data export is well suited to apps, but poorly suited to PLCs.

As a specific example, let's look at a gas flow computer. A flow computer calculates temperature- and pressure-corrected flow using algorithms designed by the American Gas Association (AGA). Calculating corrected flow in a PLC is fairly straightforward if you have function blocks that encapsulate the required algorithms. However, a flow computer does a lot more than just calculate corrected flow. To meet regulatory and operational requirements, a flow computer also has to:

- Generate hourly and daily accumulations
- Log configuration changes and operational changes (event log)
- Log alarms (alarm log)
- Average inputs and generate hourly and daily records
- Make history and logs available to other systems
- Keep history and logs in memory for later retrieval

This functionality is very poorly suited to PLCs, which is why you won't see a gas flow computer implemented directly in a PLC. A gas flow computer is an extreme case, but there are any number of processes that PLCs are used for that would benefit from the app approach, especially once optimization, history, and other complicated features are layered in. With PLC Shift apps, we combine the best features of general-purpose computers and apps with the best features of PLCs.
We free each system to do the things that it is good at without burdening it with the things that it is not good at. This leads to robust, configurable, and scalable systems that are easy to deploy and maintain.

What are the benefits of apps?

Configuration vs. Custom Code

A PLC needs a program to do anything useful. Writing a reliable, fully featured, and documented program is a time-consuming task that requires a very specialized skill set. It is possible to write PLC programs in a modular fashion, but it's difficult, and the resulting programs are not truly modular like general-purpose applications. In other words, the configuration of the PLC program is usually mixed in with the execution, because the PLC programming paradigm does not allow for any other approach. To modify the configuration of the program in the PLC, the program itself must be modified. With multiple sites, the program in each PLC ends up being slightly different to account for site-to-site variations, even when the PLCs at each site are ostensibly doing the same thing.

Apps, however, are configurable. Each configurable parameter has an explanation of what it does, and each value is validated before being accepted by the app. Validation is not just limited to making sure that a value is within a certain range; it also includes cross-validation, where any configuration value is validated against all other values to make sure that the new value is legal in the context of the current operating state. The screen grab below shows that the methane portion of the gas composition cannot be changed to 0.8 because this would cause the sum of all components to not be exactly 1.0. The change remains in the pending state. If another component is changed such that the sum becomes 1.0, all of the pending changes will be accepted and the flow computer will calculate compressibility based on the new gas composition. Validation of this type is very difficult to do in a PLC.

The end result is that configurable applications make it easy to implement complicated processes quickly in your automation system. To change the behavior of the app, only the configuration of the app needs to be modified, not the PLC program itself. Apps allow you to build reliable, repeatable, and configurable systems quickly with minimal custom code.

Scalable Integration

Standardization is key to building out scalable integration with other systems, such as SCADA, enterprise historians, or data lakes. Without standardization, each integration needs to be tweaked with site-specific information. It doesn't take many sites before this becomes overwhelming and stops working. In the previous section, we showed that the PLC programming paradigm results in a lack of standardization. With fully custom code, sites that are doing the same thing invariably end up with different programs. The opposite is true for configurable applications, which are inherently standardized. This makes it easy to move the data that is generated by apps into other systems, with no site-specific tweaking required. With PLC Shift apps specifically, because we have so much context around the application and the data that it's generating, we can even automate the creation of data models.

Contextualized Data and Automated Export

Check out our blog post on Levels of Context for some background on this. Configurable and standardized apps are the best way to easily implement all of the levels of context described in that post, especially the high-level behavioral context.
PLC Shift apps can export contextualized data to the cloud or other systems with just a bit of configuration. This eliminates tedious and error-prone manual tag mapping. Data is immediately useful for analysis, with no further processing required. PLC Shift can also export record-based data, like flow computer history, directly to the cloud or to other systems. This is trivial to set up compared to legacy approaches like custom drivers and 3rd party polling engines.

Diagnostic Logging

When things go wrong, as they inevitably do, how quickly can you figure out what's going on and get back online? PLC ladder logic is extremely easy to debug visually, which is why it remains popular. However, as a PLC ladder program gets larger and more complicated, it gets harder and harder to debug. PLC Shift apps generate sophisticated diagnostic logs and configuration change logs that make it easy to understand what's gone wrong, which makes it easy to get back on track. We also understand our programs in great detail, and we support you to help you get back online. All configuration changes are logged automatically and include a source, which makes it easy to understand when a change happened and where it came from.

Conclusion

Apps extend the functionality of PLCs, but they do not replace them. Combining apps with PLCs allows each system to do what it's inherently good at, which ultimately leads to full-featured, reliable, and scalable automation systems. With our in-house automation application framework, we can deliver fully featured applications for your specific needs in weeks, not months or years. Get in touch and let us know what you need!
- PLC Shift Apps - Cloud Deployment
PLC Shift apps are tag-based and can run on most Debian-based Linux systems. They can also publish and subscribe to tags using MQTT transport with Sparkplug B payloads. This raises some interesting possibilities, like running PLC Shift apps in a data center instead of at the edge. Reasons to do this include:

- Less hardware at the edge, reducing power consumption and complexity.
- Increased cyber security, as a data center is easier to secure than computers in the field, which are susceptible to attacks based on physical access.

Of course, this requires robust and reliable communications, but communications reliability is increasing every year, and end users continue to make investments in communications infrastructure.

This demo requires PLC Shift version 1.5 or later.

PLC to Gas Flow Computer

In this system, we use a Rockwell Automation CompactLogix PLC to generate the differential pressure (dp), static pressure (sp), and temperature (temp) inputs that are needed by the gas flow computer. We use the PLC Shift Datalogger app to pull data from the PLC once per second using the EtherNet/IP protocol. The data logger then publishes the values to the MQTT broker whenever the values change. We use Node-RED to subscribe to the values that are generated by the data logger and republish them to a topic that the gas flow app expects. This is required because the data logger publishes data on a DDATA topic, but the gas flow computer subscribes on a DCMD topic. The gas flow app calculates corrected flow and exports all of the flow history to the cloud.

Node-RED must have the node-red-contrib-mqtt-sparkplug-plus package installed. This makes it easy to work with Sparkplug B payloads. We start by subscribing to the topic spBv1.0/plc-shift/DDATA/edge/dl-test-mqtt/ in Node-RED. In this case, the Sparkplug B group ID is "plc-shift", the device name of the PLC Shift device is "edge", and the app name is "dl-test-mqtt". All of these values are configurable in PLC Shift Manager. The Datalogger app publishes Sparkplug B metrics to this topic. Metrics are serialized into the message's payload using the Sparkplug B binary format.

Each value that is published via Sparkplug B has a 64-bit unsigned integer alias. We use this alias to determine whether each published metric is the dp, sp, or temp, then apply the tag name that the gas flow computer expects, and also zero out the alias. We zero out the alias so that the subscribing application matches the tag by name instead of by alias; when the alias is not zero, the PLC Shift runtime will try to match by alias instead. Each parameter in the PLC Shift app has a unique parameter ID, which is where the alias comes from. The parameter ID isn't shown anywhere, but we can tell you what it is, so contact us at support@reverity.io if you need help. We'll fix this oversight in a future release.

We then publish the updated metrics to the spBv1.0/plc-shift/DCMD/edge/gf-test-mqtt topic. The gas flow application is subscribed to this topic and sees the updated values. The code in the Node-RED function is shown below, and the entire flow as well as the app configurations can be downloaded here.
// If this message has no metrics, pass it through unchanged.
if (msg.payload.metrics === undefined) {
    return msg;
}

var metrics = msg.payload.metrics;

// Match each metric by the low 32 bits of its alias (the parameter ID),
// apply the tag name that the gas flow app expects, and zero the alias
// so that the subscriber matches by name instead of by alias.
for (var i = 0; i < metrics.length; i++) {
    if (metrics[i].alias.low == 1116468) {
        metrics[i].alias = 0;
        metrics[i].name = "Inputs/temp";
    } else if (metrics[i].alias.low == 1116268) {
        metrics[i].alias = 0;
        metrics[i].name = "Inputs/dp";
    } else if (metrics[i].alias.low == 1116368) {
        metrics[i].alias = 0;
        metrics[i].name = "Inputs/sp";
    }
}

// Republish the modified payload.
var newMsg = { payload: msg.payload };
return newMsg;

We're using the PLC Shift Datalogger app to read the tag values from the PLC using EtherNet/IP and to publish them via MQTT, but this app can only move data from the PLC to MQTT. It cannot move data from MQTT to the PLC. If bidirectional data flow is required, then another piece of software, like Ignition Edge, should be used at the edge instead of the PLC Shift Datalogger.

The animation below shows tag values from the data logger at the top and tag values in the gas flow app below. Values are polled from the PLC by the data logger and published to the broker using MQTT. The gas flow app has subscribed to the tag values via MQTT. The values change in the data logger app and then shortly thereafter appear in the gas flow app as inputs. Everything is kept in sync, and API requirements for one-second update times are met. The PLC program that is generating the tag values includes some noise, so that the tag values are always changing and are realistic.

Conclusion

PLC Shift applications are quite flexible, and many different deployment architectures are possible, including deployment to the data center. Note that using MQTT is only one option; we could also poll the PLC directly from the data center using EtherNet/IP, assuming that latency and communications reliability requirements are met. We didn't show containerized deployment, but preliminary testing shows that it's possible. We still need to do some work here to create a polished solution. PLC Shift adapts to your requirements, instead of the other way around. Let us know what you're trying to do, and we'll do our best to make it work.
- Levels of Context for Industrial Data
Industrial processes are continuously generating data. Careful analysis of this data can lead to greater production, reduced downtime, lower maintenance costs, and reduced energy usage, as well as other desirable outcomes. However, to truly be valuable, data must be contextualized before it can be analyzed. This means that to fully understand the data and to make use of it, we need not just the values, but also metadata like units, and the relationships between the various pieces of data. We use the term "context" to describe the totality of this information that surrounds any single piece of data. Without this context, useful, reliable, and scalable data analysis is simply not possible. Context is key to making the data generated by your industrial systems useful. We'll start by discussing the different types of context, then look at the consequences of a lack of context, and finally consider how to apply and preserve context from the data source onward and otherwise properly contextualize data.

Types of Context

We can break context down into the levels listed below, ordered from the simplest to the most complicated.

Descriptive

This type of context describes the data and can be captured in the tag name. For example, a tag with the descriptive name "16-05-064-13W4.100.PT-101" makes it pretty clear where the data is coming from, as long as the name matches our tag naming convention. We know from our naming rules that it's a pressure value. Descriptive context also includes the data type. For example, if we know that this value is a floating-point value, we can also be sure that the value is continuous.

Metadata

Metadata typically includes units and quality, as well as other descriptive things like location.

Hierarchy

Hierarchy describes how a piece of data is related to another piece of data. For example, a hierarchical model can make it clear whether two pieces of data are on the same well, or on the same well pad, or if they are completely unrelated. Tag names can imply hierarchy, but because tag names are limited in length, they cannot express the depth of a corporate hierarchy. Note that hierarchy is not fixed and can often be viewer dependent. Different types of users will use the same data in different ways and will want data to be organized in a way that meets their specific needs.

Behavior

This is the most complex type of context and describes the behavior of our system. For example, when trying to optimize a process, we need to understand the operating state of the process before analyzing the data.

Lack of Context

When data has missing context or is contextualized improperly, the following issues arise.

Analyzing the Wrong Data

When the descriptive data is wrong, the wrong data is analyzed. This isn't a theoretical concern. Common industrial protocols like Modbus have no concept of a tag or a tag name, so a tag name is applied to a value when the value is retrieved from the controller. If a program is updated and a Modbus register now holds a different value than the one that was originally expected, the data at the analytics level can easily be incorrect. An incorrect tag name, or a mismatch between the tag name and the underlying value, is a critical error that makes the data worthless and analysis impossible.
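To illustrate the point, here is a small C# sketch of the kind of tag-to-register binding that has to live outside the protocol. The tag names, register addresses, and units shown are illustrative only.

// Sketch: with Modbus there is no tag name or unit on the wire, so the binding
// between a register address and a tag lives in some other system's configuration.
var bindings = new[]
{
    new TagBinding("16-05-064-13W4.100.PT-101", RegisterAddress: 40001, Units: "kPa"),
    new TagBinding("16-05-064-13W4.100.TT-101", RegisterAddress: 40003, Units: "degC"),
};

// If the PLC program is later changed so that register 40001 holds something else,
// nothing in the protocol flags the mismatch: the poller keeps applying the old
// tag name and units to whatever value now lives at that address.

record TagBinding(string TagName, int RegisterAddress, string Units);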
Missing or Incorrect Metadata

Much like with descriptive data, most commonly used industrial protocols like Modbus have no capacity to attach units or quality to a value. This means that the system must be explicitly programmed to make this data available, or metadata must be attached to values at some other level in the system. As a result, units can be wrong and quality data may not exist, which means that the data cannot be used for any useful analysis.

Behavior

A specific example of behavioral context is knowing when a system is in a manual override state instead of a fully automatic state. Any analysis of this system should understand the operating state of the system so that data generated during specific operating states can be excluded. Without this behavioral context, analysis can lead to incorrect conclusions.

Applying Context

We know what context is and why it's valuable. How can we best contextualize data without errors over the lifecycle of the system? Doing this correctly requires specific knowledge of the system, but here are some general guidelines to follow.

- Minimize the number of systems that data passes through, especially if context is destroyed and must be recreated at each stage in the system. As much as possible, push data directly from the place where it's generated to analytical systems.
- Prefer protocols that support sending context with data. For example, MQTT transport with Sparkplug B payloads allows sending data values with units and quality, and tag names are preserved when data is sent. Compare this to polling data via Modbus, which only provides a raw value. A tag name, units, and quality must be added later, which can lead to errors.
- Standards are essential when trying to apply behavioral context. Using standardized programs across your system means that behavior is identical at all of your sites, which means that structured data analysis is possible without accounting for a bunch of corner cases and exceptions. Note that this doesn't mean that each site needs to be identical. Instead of modifying your automation programs directly for the specifics of each site, deploy configurable programs with known behavior. This allows for site-specific automation, but standardized data at the analytics level. Scalable systems require strict adherence to standards.

Conclusion

Moving data from industrial systems to analytical systems is a solved problem. However, applying context to that data and making the data useful in modern analytics systems requires additional work. The first step in the process is understanding what type of context each piece of data needs. Some of this is obvious, like units, whereas some is less obvious, like behavioral context. Applying context correctly and without errors over the lifecycle of the system is just as much work as collecting the data in the first place, but properly applied context unlocks the true value of the data.
- PLC Shift Gas Flow Computer - How Many Runs?
The PLC Shift Gas Flow computer is a software-only solution that calculates pressure- and temperature-corrected gas flow. It implements the AGA-3 1992, AGA-3 2013, AGA-8 1994, and AGA-8 2017 Part 1 and Part 2 algorithms. It also generates and stores the data required for regulatory compliance, like an event log, alarm log, configuration change log, and others.

A common question that we get asked about our gas flow computer is, "How many runs can your flow computer do?". Our answer, "It depends", is quite unsatisfying for you, the end user. But it really does depend, and some of the factors that it depends on are:

- The performance of the CPU that you are running our software on.
- How quickly sensors can be polled. Using Modbus RTU at 9600 baud and polling sensors sequentially limits how quickly data can get into the flow computer.
- What other software the computer you are using is running. We recommend either dedicating the computer to PLC Shift or using a very fast computer.
- The amount of RAM on the computer that you are using.
- The architecture of the CPU (ARM vs. x86).

The PLC Shift flow computer is different from existing solutions because it will run on most Debian Linux-based systems, whereas conventional flow computers are tied to a specific type of custom hardware. This means that you can scale our system by adding hardware, and hardware can be purchased from a variety of vendors, which matters in today's world where component shortages are rampant.

In an effort to provide a better answer to the question of how many flow runs we can support on a single piece of hardware, we've tested a system where we add runs to see how execution times are affected. All tests are run with the following standard configuration, except where noted:

- Raspberry Pi 4 hardware with 2 GBytes of RAM.
- PLC Shift Gas Flow app with configuration version 1.3.
- All gas flow runs are configured to use the AGA-3 2013 algorithm.
- All gas flow runs are configured to use the AGA-8 1994 compressibility algorithm, except in a few tests as noted below.
- All flow data (events, alarms, configuration, minute, hour, and day) is exported to Azure table storage.
- Gas flow applications are configured to poll a simulated Schneider Electric 4102 multivariable sensor (MVS) directly using the Modbus TCP protocol.
- Resource usage is monitored using the "htop" program.

Raspberry Pi is not just a toy for hobbyists. Many companies are making industrial computers out of Raspberry Pi compute modules, including:

- Kunbus RevPi
- OnLogic Factor 200
- Elastel EG500
- Strato Pi

Base System

The same table is used for all tests:

- Run Count is the number of active gas flow runs.
- CPU (%) is the CPU utilization of the PLC Shift runtime as seen in htop.
- RAM (%) is the amount of RAM consumed by the PLC Shift process as seen in htop. The test Pi has 2 GBytes of RAM.
- Manager Connected indicates whether the PLC Shift Manager is connected. When the manager is connected, tag data is constantly synchronized with the manager, and there is a noticeable increase in CPU utilization. The PLC Shift Manager is not expected to be connected during normal operations.
- Exec Time (ms) is the execution time for all applications and is pulled from the Total Execution Time Average (Minute) status parameter at the device level. This is the time it takes for all applications to execute their run loop. Apps are executed sequentially. Data export is a background task and does not interfere with the execution of applications.
PLC Shift Runtime

One Flow Run, Manager Connected

One Flow Run

Significant drop in CPU usage when the PLC Shift Manager is not connected.

One Flow Run, AGA-8 2017 Part 2 (GERG) Compressibility

CPU and RAM usage is stable, but execution time has increased by 10 ms. GERG is algorithmically complex, and the PLC Shift Gas Flow computer executes the compressibility calculation once per second for maximum accuracy.

Two Flow Runs

Going from one run to two, execution time increases by about 3 ms, with a 0.5% increase in CPU usage and no change in RAM.

Two Flow Runs, AGA-8 2017 Part 2 (GERG) Compressibility

Going from one run to two with GERG enabled, execution time increases by about 10 ms, with a 0.6% increase in CPU usage and no change in RAM. Executing a run with GERG enabled consistently takes about 10 ms, whereas a run with AGA-8 1994 enabled takes about 3 ms to execute. GERG is roughly 3x slower.

Eight Flow Runs

Total execution time is 20 ms for 8 runs, or around 2.5 ms per run.

Sixteen Flow Runs

Total execution time is 43 ms for 16 runs, or around 2.7 ms per run.

Thirty-Two Flow Runs

Total execution time is 80 ms for 32 runs, or around 2.5 ms per run.

Desktop Computer

The execution time for a single run on a desktop computer with an AMD 3900X processor and 32 GBytes of RAM is about 0.4 ms, or about 6 times faster than on a Raspberry Pi 4. With GERG enabled, the execution time is about 1.2 ms, which is around 3x slower than AGA-8 1994 and about 8 times faster than a Raspberry Pi. The execution time for 32 runs on the same desktop computer is 8.1 ms, or around 10x faster than on the Raspberry Pi 4.

By moving from ARM to a powerful x86 desktop processor, execution times can be greatly reduced, which means that the real answer to "How many runs can you do?" is "What type of hardware do you have?". This is not a simple answer with a single number like other flow computers give, but it gives you, the user, the flexibility to tune your system as needed for whatever comes up.

Conclusion

It's possible to configure a PLC Shift system with as many gas flow runs as you need by choosing the right type of hardware. If you only need a few runs, save money by using a low-cost ARM computer. If you need many runs, use a desktop-class processor.





