Understanding Industrial Protocols in the Context of IoT and Cloud

In-Short

CaveatWisdom

Caveat: To take advantage of the latest technologies like Generative AI on the Cloud, data is being ingested from different sources into the Cloud. When it comes to real-time industrial data, it’s important to understand the nature of the data and its flow from its source on the shop floor of the industry to its destination in the cloud.

Wisdom: To understand the nature of data and its flow, we need to understand the protocols involved at different levels of data flow, like Modbus, Profibus, EtherCAT, DNP3, OPC, MQTT, etc.

 

In-Detail

Before we jump into IoT and the Cloud, it’s important to understand that sophisticated industrial automation systems, which include many sensors, instruments, actuators, PLCs, SCADA, etc., existed decades before the advent of Cloud and IoT technologies.

History

Industry 1.0 started with the advent of machines powered by steam engines, which replaced tools powered by human labour. This was during the 1760s.

Industry 2.0 started when machines were powered by electricity, which made production more efficient. This was during the 1870s.

Industry 3.0 started when machines were controlled by computers (Programmable Logic Controllers – PLCs) and SCADA (Supervisory Control and Data Acquisition) systems. This was during the 1970s.

Industry 4.0 started with the advent of Cloud and IoT technologies from the year 2011; this enabled analysing huge amounts of industrial data together with enterprise data.

Automation in industries is implemented with the help of Sensors, Instruments, PLCs, Actuators, Relays and SCADA systems.

Protocols:

To establish communication between sensors, instruments, PLCs and SCADA systems, and also to support their products, many major industrial automation companies like Schneider Electric, Siemens, Allen-Bradley, GE, Mitsubishi, etc., have developed industrial protocols like Modbus, Profibus, EtherCAT, DNP3, etc. If we go to any industry like refineries, cement plants, wind farms, etc., we find automation systems and instruments working on these protocols.

Modbus: Modbus is a data communication protocol that allows devices to communicate with each other over networks and buses. Modbus can be used over serial, TCP/IP, and UDP, and the same protocol can be used regardless of the connection type.
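
For illustration, here is a minimal Node.js sketch of reading holding registers from a Modbus TCP device with the open-source modbus-serial package; the device IP address, unit ID and register addresses are assumptions and will differ for every instrument.

const ModbusRTU = require("modbus-serial"); // npm install modbus-serial

const client = new ModbusRTU();

async function readRegisters() {
  // Assumed device address; Modbus TCP usually listens on port 502
  await client.connectTCP("192.168.1.10", { port: 502 });
  client.setID(1); // Assumed unit/slave ID of the instrument
  // Read 2 holding registers starting at address 0 (the register map is device specific)
  const result = await client.readHoldingRegisters(0, 2);
  console.log("Raw register values:", result.data);
  client.close(() => console.log("connection closed"));
}

readRegisters().catch(console.error);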

Profibus: Profibus is a fieldbus communication standard for industrial automation that allows devices like sensors, controllers, and actuators to share process values. It’s a digital network that connects field sensors to control systems. Profibus is used in many industries, including manufacturing, process industries, and factory automation.

Many of these industrial protocols are synchronous in nature with a client-server architecture; they are designed to operate within the plant network, delivering data with sub-millisecond latency for machine operations.

When industrial systems became more and more complex, with multiple providers of equipment in a single plant, the major industrial automation companies formed an organization called the OPC Foundation to define a standard protocol called OPC that is interoperable across the major industrial protocols. Initially, OPC DA stood for “OLE for Process Control Data Access”, based on Microsoft’s OLE (Object Linking and Embedding) technology, which was used for communication between applications in the Windows ecosystem. OPC DA has become a legacy protocol (still used in many old industries), and OPC UA has evolved, which is interoperable with multiple operating systems; today the acronym OPC stands for “Open Platform Communications” and UA stands for “Unified Architecture”. You can find more information at the OPC Foundation.

As a first step to ingest data into the cloud, we need to convert the industrial protocols to OPC UA. We can write driver software for that in .NET or Java using the OPC standard from the OPC Foundation, or we can use software from companies who have already done it. There are many providers of this OPC software, and open-source implementations are also available. Some major providers are Kepware OPC Server and MatrikonOPC.

The OPC servers poll the data from the different devices through industrial protocol drivers and make the data available over OPC to the clients.
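
As a rough illustration, the sketch below uses the open-source node-opcua package to connect to an OPC UA server and read one tag; the endpoint URL and node id are assumptions, and a production driver would add security settings, subscriptions and reconnection logic.

const { OPCUAClient, AttributeIds } = require("node-opcua"); // npm install node-opcua

async function readTag() {
  const client = OPCUAClient.create({ endpointMustExist: false });
  // Assumed endpoint of an OPC UA server running on the plant network
  await client.connect("opc.tcp://opcserver.local:4840");
  const session = await client.createSession();
  // Assumed node id exposed by the server for a boiler temperature tag
  const dataValue = await session.read({
    nodeId: "ns=2;s=Boiler1.Temperature",
    attributeId: AttributeIds.Value,
  });
  console.log("Value:", dataValue.value.value);
  await session.close();
  await client.disconnect();
}

readTag().catch(console.error);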

These industrial protocols, including OPC, are heavier, that is, the data packet size is larger, and because of their synchronous nature it is difficult to maintain a reliable connection over the internet or to send data over long distances to a remote server in another geography. This is the reason for the invention of the lightweight MQTT protocol by Andy Stanford-Clark (IBM) and Arlen Nipper (then working for Eurotech, Inc.), who authored the first version of the protocol in 1999.

Because of MQTT’s lightweight nature and pub-sub model, it has been widely adopted for IoT (Internet of Things) after the advent of cloud computing.
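
A minimal publish/subscribe sketch with the open-source mqtt.js package is shown below; the broker URL, topic name and payload are assumptions made purely for illustration.

const mqtt = require("mqtt"); // npm install mqtt

// Assumed broker address; with AWS this would typically be the IoT Core endpoint secured with TLS and certificates
const client = mqtt.connect("mqtt://broker.example.com:1883");

client.on("connect", () => {
  client.subscribe("plant1/boiler1/temperature");
  // Publish a small JSON payload; MQTT payloads are just bytes, so the format is up to us
  client.publish(
    "plant1/boiler1/temperature",
    JSON.stringify({ value: 78.5, unit: "C", ts: Date.now() })
  );
});

client.on("message", (topic, message) => {
  console.log(`Received on ${topic}: ${message.toString()}`);
});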

As a second step to ingest data into the cloud, we must convert OPC to MQTT. The IoT SiteWise OPC UA collector does this for us by becoming a client to the OPC server, subscribing to or polling data from the OPC server, and then converting it to MQTT. The IoT SiteWise OPC UA collector is a component of AWS IoT Greengrass, an edge runtime that helps build, deploy and manage IoT applications on the devices.

Important Points to Note:

Data is transitioning from synchronous industrial protocols to an asynchronous IoT protocol, due to which there could be latency for the real-time data in the cloud.

  1. Key decisions to control the machines should be made at the factory level; this can be achieved by running Lambda functions on IoT Greengrass.
  2. Important data processing for tasks such as predictive and preventive maintenance can be done with ML inference components in IoT Greengrass.
  3. After data is ingested into the cloud, tasks like analytics, visualization and integrations with other services such as Generative AI apps and SAP systems can be done in the Cloud.

Running Containers on AWS as per Business Requirements and Capabilities

We can run containers on the AWS Cloud with EKS, ECS, Fargate, Lambda, App Runner, Lightsail, OpenShift or just on EC2 instances. In this post I will discuss how to choose the AWS service based on our organization’s requirements and capabilities.

In-Short

CaveatWisdom

Caveat: Meeting the business objectives and goals can become difficult if we don’t choose the right service based on our requirements and capabilities.

Wisdom:

  1. Understand the complexity of your application based on how many microservices it has and how they interact with each other.
  2. Estimate how your application scales as the business grows.
  3. Analyse the skillset and capabilities of your team and how much time you can spend on administration and learning.
  4. Understand the policies and priorities of your organization in the long-term.

In-Detail

You may wonder why we have many services for running the containers on AWS. One size does not fit all. We need to understand our business goals and requirements and our team capabilities before choosing a service.

Let us understand each service one by one.

All the services discussed below require knowledge of building container images with Docker and running them.

Running Containers on Amazon EC2 Manually

You can deploy and run containers on EC2 instances manually if you have just 1 to 4 applications, like a website or a processing application, without any scaling requirements.

Organization Objectives:

  1. Run just 1 to 4 applications on the cloud with high availability.
  2. Have full control at the OS level.
  3. Have a steady workload all the time without any scaling requirements.

Capabilities Required:

  1. Team should have a full understanding of AWS networking at the VPC level, including load balancers.
  2. Configure and run a container runtime like the Docker daemon.
  3. Deploy application containers manually on the EC2 instances by accessing them through SSH.
  4. Knowledge of maintaining the OS on the EC2 instances.

The cost is predictable if there is no scaling requirement.

The disadvantages of this option are:

  1. We need to keep the OS and Docker updated manually.
  2. We need to constantly monitor the health of running containers manually.

What if you don’t want the headache of managing EC2 instances and monitoring the health of your containers yourself? – Enter Amazon Lightsail

Running Containers with Amazon Lightsail

The easiest way to run containers is Amazon Lightsail. To run containers on Lightsail, we just need to define the power of the node (EC2 instance) required and the scale, that is, how many nodes. If the number of container instances is more than one, Lightsail copies the container across the multiple nodes you specify. Lightsail uses ECS under the hood and manages the networking for us.

Organization Objectives:

  1. Run multiple applications on the cloud with high availability.
  2. Infrastructure should be fully managed by AWS with no maintenance.
  3. Have a steady workload and scale dynamically when there is a need.
  4. Minimal and predictable cost with bundled services including load balancer and CDN.

Capabilities Required:

  1. Team should just have knowledge of running containers.

Lightsail can scale dynamically, but the scaling has to be managed manually; we cannot implement auto scaling based on triggers like an increase in traffic.

What if you need more features like building a CI/CD pipeline or integration with a Web Application Firewall (WAF) at the edge locations? – Enter AWS App Runner

 

Running Containers with AWS App Runner

AWS App Runner is one more easy service to run containers. We can implement auto scaling and secure the traffic with AWS WAF and other services like private endpoints in a VPC. App Runner connects directly to the image repository and deploys the containers. We can also integrate with other AWS services like CloudWatch, CloudTrail and X-Ray for advanced monitoring capabilities.

Organization Objectives:

  1. Run multiple applications on the cloud with high availability.
  2. Infrastructure should be fully managed by AWS with no maintenance.
  3. Auto Scale as per the varying workloads.
  4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.

Capabilities Required:

  1. Team should just have knowledge of running containers.
  2. AWS knowledge of services like WAF, VPC and CloudWatch is required to handle the advanced requirements.

App Runner supports full-stack web applications, including front-end and backend services. At present App Runner supports only stateless applications; stateful applications are not supported.

What if you need to run the containers in a serverless fashion, i.e., an event driven architecture in which you run the container only when needed (invoked by an event) and pay only for the time the process runs to service the request? – Enter AWS Lambda.

Running Containers with AWS Lambda

With Lambda, you pay only for the time your container function runs, in milliseconds, and for how much RAM you allocate to the function; if your function runs for 300 milliseconds to process a request, you pay only for that time. You need to build your container image with a base image provided by AWS. The base images are open source, made by AWS, and preloaded with a language runtime and the other components required to run a container image on Lambda. If we choose our own base image, we need to add the appropriate runtime interface client for our function so that it can receive invocation events and respond accordingly.
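
As a sketch, a Node.js handler packaged into a container image could look like the following; the Dockerfile (not shown) would typically start FROM an AWS-provided base image such as public.ecr.aws/lambda/nodejs and COPY this file in, and the event shape here is an assumption.

// index.js – handler packaged into the container image
// The AWS Node.js base images already include the runtime interface client,
// so the handler is invoked exactly like a regular zip-packaged Lambda function.
exports.handler = async (event) => {
  // Assumed event shape coming from API Gateway or a queue
  const payload = event.body ? JSON.parse(event.body) : event;
  console.log("Processing request:", payload);

  // ... business logic for the containerized function goes here ...

  return {
    statusCode: 200,
    body: JSON.stringify({ message: "processed", received: payload }),
  };
};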

Organization Objectives:

  1. Run multiple applications on the cloud with high availability.
  2. Infrastructure should be fully managed by AWS with no maintenance.
  3. Auto Scale as per the varying workloads.
  4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
  5. Implement event-based architecture.
  6. Pay only for the requests processed, without paying for idle time of the apps.
  7. Seamlessly integrate with other services like API Gateway where throttling is needed.

Capabilities Required:

  1. Team should just have knowledge of running containers.
  2. Team should have a deep understanding of AWS Lambda, event-based architectures on AWS and other AWS services.
  3. Existing applications may need to be modified to handle the invocation events and integrate with the runtime interface clients provided by the Lambda base images.

We need to be aware of the limitations of Lambda: it is stateless, the maximum time a Lambda function can run is 15 minutes, and it provides only temporary storage for buffer operations.

What if you need more transparency, i.e., access to the underlying infrastructure, while the infrastructure is still managed by AWS? – Enter AWS Elastic Beanstalk.

Running Containers with AWS Elastic Beanstalk

We can run any containerized application on AWS Elastic Beanstalk, which will deploy and manage the infrastructure on your behalf. We can create and manage separate environments for development, testing, and production use, and deploy any version of our application to any environment. We can do rolling deployments or Blue/Green deployments. Elastic Beanstalk provisions the infrastructure, i.e., VPC, EC2 instances and load balancers, with CloudFormation templates developed with best practices.

For running containers, Elastic Beanstalk uses ECS under the hood. ECS provides the cluster running the Docker containers, and Elastic Beanstalk manages the tasks running on the cluster.

Organization Objectives:

  1. Run multiple applications on the cloud with high availability.
  2. Infrastructure should be fully managed by AWS with no maintenance.
  3. Auto Scale as per the varying workloads.
  4. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
  5. Implement multiple environments for development, staging and production.
  6. Deploy with strategies like Blue / Green and Rolling updates.
  7. Access to the underlying instances.

Capabilities Required:

  1. Team should just have knowledge of running containers.
  2. Foundational knowledge of AWS and Elastic Beanstalk is enough.

What if you need to implement a more complex microservices architecture with advanced functionality like service mesh and orchestration? – Enter Amazon Elastic Container Service directly.

Running Containers with Amazon Elastic Container Service (Amazon ECS)

When we want to implement a complex microservices architecture with orchestration of containers, ECS is the right choice. Amazon ECS is a fully managed service with built-in best practices for operations and configuration. It removes the headache and complexity of managing the control plane and gives the option to run our workloads anywhere, in the cloud and on premises.

ECS gives two launch types to run tasks: Fargate and EC2. Fargate is a serverless option with low overhead with which we can run containers without managing infrastructure. EC2 is suitable for large workloads which require consistently high CPU and memory.

A task definition in ECS is the blueprint of our microservice, and a task created from it can run one or more containers. We can run tasks manually for applications like batch jobs, or with the service scheduler, which maintains the desired scheduling strategy for long-running stateless microservices. The service scheduler spreads containers across multiple Availability Zones by default, using task placement strategies and constraints.
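
To make the task idea concrete, here is a hedged sketch using the AWS SDK for JavaScript (v3) that registers a small task definition and runs it on Fargate; the names, image URI, role ARN, subnet and security group IDs are placeholders, not values from the post.

const {
  ECSClient,
  RegisterTaskDefinitionCommand,
  RunTaskCommand,
} = require("@aws-sdk/client-ecs"); // npm install @aws-sdk/client-ecs

const ecs = new ECSClient({ region: "us-west-2" });

async function runOnFargate() {
  // Register the blueprint (task definition) for the microservice
  const taskDef = await ecs.send(new RegisterTaskDefinitionCommand({
    family: "orders-service",                       // assumed name
    requiresCompatibilities: ["FARGATE"],
    networkMode: "awsvpc",
    cpu: "256",
    memory: "512",
    executionRoleArn: "arn:aws:iam::123456789012:role/ecsTaskExecutionRole", // placeholder, needed to pull from ECR
    containerDefinitions: [{
      name: "orders",
      image: "123456789012.dkr.ecr.us-west-2.amazonaws.com/orders:latest",   // placeholder image
      portMappings: [{ containerPort: 8080 }],
      essential: true,
    }],
  }));

  // Run one task from that definition on Fargate
  await ecs.send(new RunTaskCommand({
    cluster: "demo-cluster",                        // assumed cluster name
    taskDefinition: taskDef.taskDefinition.taskDefinitionArn,
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0123456789abcdef0"],      // placeholder subnet
        securityGroups: ["sg-0123456789abcdef0"],   // placeholder security group
        assignPublicIp: "DISABLED",
      },
    },
  }));
}

runOnFargate().catch(console.error);

In practice a long-running microservice would be wrapped in an ECS service instead of a one-off RunTask call, so the scheduler can keep the desired count of tasks running.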

Organization Objectives:

  1. Run complex microservices architecture with high availability and scalability.
  2. Orchestrate the containers as per complex business requirements.
  3. Integrate with AWS services seamlessly.
  4. Low learning curve for the team which can take advantage of cloud.
  5. Infrastructure should be fully managed by AWS with no maintenance.
  6. Auto Scale as per the varying workloads.
  7. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
  8. Implement complex DevOps strategies with managed services for CI/CD pipelines.
  9. Access to the underlying instances for some applications and at the same time have a serverless option for some other workloads.
  10. Implement service mesh for microservices with a managed service like App Mesh.

Capabilities Required:

  1. Team should have knowledge of running containers.
  2. Intermediate level of understanding of AWS services is required.
  3. Good knowledge of ECS orchestration and scheduling configuration will add much value.
  4. Optionally, developers should have knowledge of service mesh implementation with App Mesh if it is required.

What if you need to migrate existing on-premises container workloads running on Kubernetes to the Cloud, or if the organization’s policy is to adopt open-source technologies? – Enter Amazon Elastic Kubernetes Service.

 

Running Containers with Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon EKS is a fully managed service for the Kubernetes control plane, and it gives the option to run workloads on self-managed EC2 instances, managed node groups of EC2 instances, or the fully managed serverless Fargate service. It removes the headache of managing and configuring the Kubernetes control plane, with built-in high availability and scalability. EKS runs the upstream Kubernetes releases published by the CNCF community, so all the workloads presently running on on-premises Kubernetes (K8s) will work on EKS. It also gives the option to extend the same EKS console to on premises with EKS Anywhere.
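
For example, a small AWS SDK sketch (with an assumed cluster name) can fetch the cluster endpoint and certificate that kubectl and other Kubernetes tooling need; day-to-day workloads would still be deployed with standard Kubernetes manifests and kubectl.

const { EKSClient, DescribeClusterCommand } = require("@aws-sdk/client-eks"); // npm install @aws-sdk/client-eks

const eks = new EKSClient({ region: "us-west-2" });

async function describeCluster() {
  // "demo-eks" is an assumed cluster name
  const { cluster } = await eks.send(new DescribeClusterCommand({ name: "demo-eks" }));
  console.log("Status:", cluster.status);
  console.log("API endpoint:", cluster.endpoint);
  console.log("Kubernetes version:", cluster.version);
  // cluster.certificateAuthority.data holds the base64 CA certificate used in a kubeconfig entry
}

describeCluster().catch(console.error);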

Organization Objectives:

  1. Adopt open-source technologies as a policy.
  2. Migrate existing workloads on Kubernetes.
  3. Run complex microservices architecture with high availability and scalability.
  4. Orchestrate the containers as per complex business requirements.
  5. Integrate with AWS services seamlessly.
  6. Infrastructure should be fully managed by AWS with no maintenance.
  7. Auto Scale as per the varying workloads.
  8. Implement high security features like traffic filtering and isolating workloads in a private secured environment.
  9. Implement complex DevOps strategies with managed services for CI/CD pipelines.
  10. Access to the underlying instances for some applications and at the same time have a serverless option for some other workloads.
  11. Implement service mesh for microservices with a managed service like App Mesh.

Capabilities Required:

  1. Team should have knowledge of running containers.
  2. An intermediate level of understanding of AWS services is required, and a deep understanding of networking on AWS for Kubernetes will help a lot; you can read my previous blog here.
  3. The learning curve with Kubernetes is high, and the team should spend sufficient time learning it.
  4. Good knowledge of EKS orchestration and scheduling configuration.
  5. Optionally, developers should have knowledge of service mesh implementation with App Mesh if it is required.
  6. Team should have knowledge of handling Kubernetes updates; you can refer to my vlog here.

 

Running Containers with Red Hat OpenShift Service on AWS (ROSA)

If the organization manages its existing workloads on Red Hat OpenShift and wants to take advantage of the AWS Cloud, then we can migrate easily to Red Hat OpenShift Service on AWS (ROSA), which is a managed service. We can use ROSA to create Kubernetes clusters using the Red Hat OpenShift APIs and tools, and have access to the full breadth and depth of AWS services. We can also access Red Hat OpenShift licensing, billing, and support all directly through AWS.

 

I have seen many organizations adopt multiple services to run their container workloads on AWS. It is not necessary to stick to one kind of service; in a complex enterprise architecture it is recommended to keep all options open and adapt as the business needs change.

Interweaving Purpose-Built Databases in the Microservices Architecture

It is best practice to have a separate database for each microservice based on its purpose. In this post we will understand how to analyse the purpose based on a scenario and choose the right database.

In-Short

CaveatWisdom

Caveat: We can easily run into cost overruns if we do not choose the right database and design it properly based on the purpose of our application.

Wisdom:

  1. Understand the access patterns (queries) which you make on your database.
  2. Understand how your database storage scales: will it be in terabytes or petabytes?
  3. Analyse what is most important for your application among Consistency, Availability and Partition Tolerance.
  4. Choose Purpose-Built databases on AWS cloud based on Application Purpose.

In-Detail

Scenario

Let us consider we are developing a loan processing application for a Bank. The Loan could be an Auto Loan, Home Loan, Personal Loan or any other loan.

Requirements of Scenario

  1. Customer visits the loan application portal, reviews all the loan products and interest rates and applies for a loan. The portal should give a smooth experience to the customer without any latency. Once the application is submitted, it should be acknowledged immediately and queued for processing.
  2. The bank expects a huge volume of loan applications across regions from its marketing efforts. Loan application data should be stored in a scalable database, and as it is very important data it should be replicated to multiple regions for high availability.
  3. While processing the loan, the creditworthiness of the customer has to be analysed.
  4. Customer profiles and loan documents should be stored with a content management system in a secured, scalable database.
  5. Based on the creditworthiness of the customer, the loan documents should be sent for final manual approval.
  6. A loan account for the customer should be created and the loan transactions data should be maintained in it. We also need to do ad hoc queries on the transactions data in relation to floating interest rates and repayment schedules for generating different statements and reports.
  7. Loan application data should be sent to a data warehouse for marketing and customer analytics. In the same data warehouse, data from other sources and products will be ingested to improve marketing strategies and showcase relevant products to customers.
  8. Immutable records of loan application events should be maintained for regulatory and compliance purposes, and these records should also be securely shared with the insurer of the asset created by the customer with the loan.

Note: The architecture is simplified for the purpose of discussion; a real production architecture could be much more complex.

Architecture Brief

The customer-facing loan application portal is a static website hosted on Amazon S3 with CloudFront integration. API calls are made to API Gateway, which passes the request data to Lambda functions for processing. A fanout mechanism with a combination of SNS and SQS is adopted to process and ingest data into multiple databases in parallel. The process workflow, including manual approval, is handled by AWS Step Functions. SNS is used for internal notifications and SES is used to inform the customer of the status of the loan by email.
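
The fanout publish step might look roughly like the sketch below, where the intake Lambda acknowledges the application and publishes it to an SNS topic, and the subscribed SQS queues deliver copies to the different downstream processors; the topic ARN and message shape are assumptions.

const { SNSClient, PublishCommand } = require("@aws-sdk/client-sns"); // npm install @aws-sdk/client-sns

const sns = new SNSClient({ region: "us-west-2" });

// Intake Lambda: acknowledges the application and fans it out via SNS -> SQS
exports.handler = async (event) => {
  const application = JSON.parse(event.body); // request body from API Gateway (assumed shape)

  await sns.send(new PublishCommand({
    TopicArn: "arn:aws:sns:us-west-2:123456789012:loan-applications", // placeholder topic ARN
    Message: JSON.stringify(application),
    MessageAttributes: {
      loanType: { DataType: "String", StringValue: application.loanType || "UNKNOWN" },
    },
  }));

  return { statusCode: 202, body: JSON.stringify({ status: "ACCEPTED" }) };
};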

The Command Query Responsibility Segregation (CQRS) pattern is adopted in the above architecture, with separate Lambda functions for ingesting (writing) data and for reading it.
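
In code, the segregation can be as simple as two separate Lambda handlers, one on the command (write) path and one on the query (read) path; the table name and key shape below are assumptions, and in practice the two handlers would be deployed as separate functions.

const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, PutCommand, GetCommand } = require("@aws-sdk/lib-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({ region: "us-west-2" }));
const TABLE = "LoanApplications"; // assumed table name

// Command side: ingest a new loan application
exports.commandHandler = async (event) => {
  const application = JSON.parse(event.body);
  await ddb.send(new PutCommand({ TableName: TABLE, Item: application }));
  return { statusCode: 202, body: JSON.stringify({ status: "QUEUED" }) };
};

// Query side: read the current state of an application
exports.queryHandler = async (event) => {
  const { Item } = await ddb.send(new GetCommand({
    TableName: TABLE,
    Key: { applicationId: event.pathParameters.id }, // assumed partition key
  }));
  return { statusCode: 200, body: JSON.stringify(Item || {}) };
};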

Analysing Purpose and Choosing the Database

  1. When handling customer queries, especially at the time of acquiring a new customer or selling a new product to an existing customer, the response time should be very low. To give a very good experience to the customer, all general frequent query responses, session state and data which can afford to be stale should be cached. Amazon ElastiCache for Redis is a managed distributed in-memory data store built for exactly this kind of purpose. It gives a high-performance, microsecond-latency caching solution and comes with Multi-AZ capability for high availability (see the cache-aside sketch after this list).
  2. Loan application data will be mostly key-value pairs like loan amount, loan type, customer id, etc. As per requirement 2, a huge volume of loan data has to be stored and retrieved for processing, and at the same time it should be replicated to multiple regions for very high availability. Amazon DynamoDB is a key-value store database which can give single-digit millisecond latency even at petabyte scale. It has an inherent capability to replicate the data to other regions with Global Tables and DynamoDB Streams enabled. So, it is suitable for storing loan application data and triggering the loan-processing Lambda function with DynamoDB Streams.
  3. As per requirement 3 of the scenario, the creditworthiness of the customer has to be analysed before arriving at a decision on sanctioning the loan amount to the customer. It is assumed that the bank collects data about customers from various sources and also maintains the data of its relationships with its existing customers. Creditworthiness is calculated on many factors, especially the history of the relationship the customer has had with the bank through various products like savings accounts and credit cards, along with income and repayment history. When it comes to querying the relationships and analysing the data, we need a graph database. Amazon Neptune is a fully managed graph database service that works with highly connected datasets. It can scale to handle billions of relationships and lets you query them with millisecond latency. It stores data items as vertices of the graph, and the relationships between them as edges. Loan application data can be ingested into Amazon Neptune and creditworthiness can be analysed.
  4. As per requirement 4, customer profiles and loan documents should be maintained with a content management system. Loan documents contain critical legal information which can change based on various products, and the documents can differ based on the law of the different states in which the bank operates. To address these requirements, the schema of the database should be dynamic. We may need to query and process these documents within milliseconds, so we need a NoSQL document database that scales for the content management system of loan documents. Amazon DocumentDB (with MongoDB compatibility) is a fully managed database service which supports both instance-based clusters that can scale up to 128 TB and Elastic Clusters that can scale even to petabytes of data. We can put loan documents with dynamic schema as JSON documents in DocumentDB and use MongoDB drivers to develop our application against it. Additionally, signed and scanned copies of the documents can be maintained in an S3 bucket, with a reference to the document in DocumentDB.
  5. As per requirement 6, a loan account should be opened for the customer where the transactions data is maintained. Here we need to maintain the integrity of the transactions data with a fixed schema and Online Transaction Processing (OLTP). SQL is more suitable for doing ad hoc queries for generating statements, so a relational database is the right fit for this purpose. Amazon RDS, which supports six SQL database engines (Aurora, MySQL, PostgreSQL, MariaDB, Oracle and MS SQL Server), is a managed service for relational databases. Amazon RDS manages backups, software patching, automatic failure detection, and recovery, which are tedious manual tasks if we maintain the database ourselves, so we can focus on our application development instead. If we are comfortable with MySQL or PostgreSQL, we can choose Amazon Aurora based on version compatibility. Aurora gives more throughput than the standard MySQL and PostgreSQL engines as it uses a clustered storage volume that is native to the cloud.
  6. As per requirement 7, loan application data has to be stored for marketing and customer analytics. The data warehouse also stores data from multiple sources, which could run into petabytes. The data can be analysed with machine learning algorithms which can help in targeted marketing. Amazon Redshift is a fully managed, petabyte-scale data warehouse service built around a group of nodes called a cluster. The Amazon Redshift service manages provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine, which can run one or more databases. We can run SQL commands for analysis on the database, and Amazon Redshift supports SQL client tools connecting through Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC). Amazon Redshift also gives a serverless option where we need not provision any clusters; it automatically provisions data warehouse capacity and scales the underlying resources, and we pay only when the data warehouse is in use. We can use Amazon Redshift ML to train and deploy machine learning models with SQL, or use Amazon SageMaker to train the model with data in Amazon Redshift for customer analytics.
  7. As per requirement 8, immutable records of loan application events should be maintained for regulatory and compliance purposes. We need a ledger database to maintain immutable records and also to securely share the data with other stakeholders. The insurer who is insuring the asset created out of the loan taken by the customer may need this data for insurance purposes. With Amazon Quantum Ledger Database (Amazon QLDB) we can maintain all the activities with respect to the loan in an immutable and cryptographically verifiable transaction log owned by the bank. We can track the history of credits and debits in loan transactions and also verify the data lineage of an insurance claim on the asset. Amazon QLDB is a fully managed service and we pay only for what we use.
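
As referenced in point 1 above, a cache-aside sketch for the portal's frequent reads with ElastiCache for Redis could look like the following, using the open-source ioredis client; the Redis endpoint, key names and TTL are assumptions for illustration.

const Redis = require("ioredis"); // npm install ioredis

// Assumed ElastiCache for Redis endpoint reachable inside the VPC
const redis = new Redis({ host: "loans-cache.example.cache.amazonaws.com", port: 6379 });

async function getInterestRates() {
  const cached = await redis.get("rates:home-loan");
  if (cached) {
    return JSON.parse(cached); // cache hit: low-latency response for the portal
  }
  // Cache miss: read from the system of record (placeholder function), then populate the cache
  const rates = await loadRatesFromDatabase();
  await redis.set("rates:home-loan", JSON.stringify(rates), "EX", 300); // expire after 5 minutes
  return rates;
}

async function loadRatesFromDatabase() {
  // Placeholder for a query against the relational store
  return { product: "HOME_LOAN", rate: 8.5, updatedAt: new Date().toISOString() };
}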

In this post I have discussed how to choose a purpose-built database based on the purpose of our application. I will discuss the design and implementation of these databases in future posts.

Query Lambda for RDS MySQL Private Database

Github link https://github.com/getramki/QueryLambda.git

It is important to create a database in private subnets in a VPC and not expose it to the internet; however, it is then challenging to connect to the private database instance to create the initial schema and seed the database. This Query Lambda addresses this concern. This repo contains code for a Lambda function written in NodeJS and a SAM template to deploy it.

The Lambda function follows the best practices of getting the database secrets from Secrets Manager and using a Lambda layer for the MySQL package.
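
For reference, a minimal sketch of what such a handler can look like is shown below; the actual code in the repo may differ, and the secret is assumed to contain host, username and password fields.

// Sketch only – see the repo for the actual implementation
const mysql = require("mysql"); // provided through the Lambda layer
const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

const sm = new SecretsManagerClient({ region: "us-west-2" });

exports.handler = async (event) => {
  // Fetch database credentials from Secrets Manager using the secret name passed in the event
  const { SecretString } = await sm.send(new GetSecretValueCommand({ SecretId: event.secret }));
  const creds = JSON.parse(SecretString); // assumed to contain host, username and password

  const connection = mysql.createConnection({
    host: creds.host,
    user: creds.username,
    password: creds.password,
    database: event.dbname,
  });

  // The mysql package is callback based, so wrap the query in a Promise
  const results = await new Promise((resolve, reject) => {
    connection.query(event.querystr, (err, rows) => (err ? reject(err) : resolve(rows)));
  });
  connection.end();
  return results;
};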

Prerequisites

An AWS account and an IAM user with the necessary permissions for creating Lambda functions, the AWS CLI, the SAM CLI, an IAM user configured with the necessary programmatic permissions, and an RDS MySQL database in a VPC. Please install and configure the above before going further.

  • You can incur charges in your AWS account by following the steps below
  • The code will deploy in the us-west-2 region; change it wherever necessary if deploying in another region

After downloading the repo, change directory in the terminal to the repo directory and follow these steps:

  • Change directory into the Layer/nodejs folder and run
npm install mysql --save 

or manually create the Lambda function, create a layer, and add it to the Lambda function

  • Create a secret in Secrets Manager (in the same region) for the RDS MySQL database you have created

Lambda Function Usage

Once the Lambda is deployed, you can use the testing feature built into the Lambda console to interact with the database. The function expects three inputs: Query String – querystr, Database Name – dbname, and the Secrets Manager secret name – secret

You can configure test events as follows

{"querystr": "CREATE DATABASE sampledb2", "dbname": "sampledb", "secret": "dbsecret"}
{"querystr": "CREATE TABLE customers (name VARCHAR(255), address VARCHAR(255))", "dbname": "sampledb","secret": "dbsecret"}
{"querystr": "INSERT INTO customers (name, address) VALUES ('Rama', 'Whitefield Bangalore')", "dbname": "sampledb", "secret": "dbsecret"}
{"querystr": "SELECT * FROM customers","dbname2": "sampledb","secret": "dbsecret"}