Documentation

United Manufacturing Hub - the open source manufacturing system

About The Project

The United Manufacturing Hub is an open-source solution for extracting and analyzing data from manufacturing plants and sensors. The Hub includes both software and hardware components to enable the plug-and-play retrofit of production plants as well as the integration of existing machine PLCs and IT systems. The result is an end-to-end solution for various questions in manufacturing, such as the optimization of production through OEE analysis, preventive maintenance through condition analysis, and quality improvement through stop analysis.

  • Open. Open-source (see LICENSE) with open and well-documented standard interfaces (MQTT, REST, etc.)
  • Scalable. Horizontal scaling incl. fault tolerance through Docker / Kubernetes / Helm. Edge devices can be quickly set up and configured in large numbers.
  • Flexible. Flexible deployment options, from public cloud (Azure, AWS, etc.) to on-premise server installations to Raspberry Pis, everything is possible. Free choice of programming language and systems to be connected through central message broker (MQTT).
  • Tailor-made for production. Pre-built apps for manufacturing. Use of established automation standards (OPC/UA, Modbus, etc.). Quick connection of production assets either by retrofit or by connecting to existing interfaces.
  • Community and support. Enterprise support and community for the whole package. Built exclusively on well-documented software components with a large community.
  • Information Security & Data Protection. Implementation of the central protection goals of information security. High confidentiality through e.g. end-to-end encryption, flexible provisioning options and principle of least privilege. High integrity through e.g. ACID databases and MQTT QoS 2 with TLS. High availability through e.g. use of Kubernetes and (for SaaS) a CDN.

Demo

1 - Getting Started

The guide is split into three parts (installation, connecting machines and creating dashboards, and using it in production), preceded by an introduction to the technologies used.

1.1 - 0. Understanding the technologies

Strongly recommended. This section gives you an introduction to the technologies used. A rough understanding of these technologies is fundamental for installing and working with the system. Additionally, this article provides further learning materials for certain technologies.

The materials presented below are usually taught in a 2-3 h workshop session on a live production shopfloor at the Digital Capability Center Aachen. You can find the outline further below.

Introduction into IT / OT

The goal of this chapter is to create a common ground on IT / OT technologies and review best practices for using IIoT technologies. The target group is people coming from IT, OT and engineering.

Introduction

IIoT sits at the intersection of IT and OT.

History: IT & OT were typically separate silos but are currently converging into IIoT

Operational Technology (OT)

OT connects its own set of various technologies to create highly reliable and stable machines

OT is the hardware and software used to manage, monitor and control industrial operations. Its tasks range from monitoring critical assets to controlling robots on the shopfloor. It basically keeps machines and factories running and producing the required product.

Typical responsibilities:
  • Monitoring processes to ensure best product quality
  • Controlling machine parameters
  • Automation of mechanical and controlling processes
  • Connecting machines and sensors for communication
  • Maintenance of machines and assets
  • Certifying machines for safety and compliance
  • Retrofitting assets to increase functionality
  • And many more…
Typical vendors for Operational Technology:

The concepts of OT are close to electronics, with a strong focus on human and machine safety

  • Process control. Designing a stable process which creates the desired output with continuously changing inputs; external and internal factors influence the process but are not allowed to change the result. Example: controlling a refrigerator based on its internal temperature.
  • Sensor technology. Using various sensor types to measure pressure, force, temperature, velocity, etc.; converting sensor signals to digital outputs, interpreting the signals and generating insights. Examples: a light barrier counting parts on a conveyor belt; a vibration sensor for process control in CNC machining.
  • Automation. Using hardware and software to automate repetitive or dangerous tasks; reducing reaction time and increasing speed to increase productivity. Example: a robot assembling smartphones.
  • Reliability and safety. Ensuring that neither humans nor machines are damaged in case of unforeseen events; regular checks, maintenance and certification of crucial assets. Examples: emergency stop buttons for worker safety; regular maintenance to prevent machine breakdowns.

OT focuses on handling processes with highest possible safety for machines and operators

High importance in OT:
  • Reliability & safety: malfunctions can result in extreme damage to humans and property
  • Maintainability & standards: machines typically run for 20-30 years, sometimes even 50+ years
  • Certifications: legally required certifications for safety and optional certificates for reliability

Of lesser importance:

  • User experience: the operator will be trained anyway, so intuitive user interfaces (UIs) are not required
  • Quick development cycles, e.g., agile: can result in missing important safety elements and damaging workers or machines
  • IT security: 20+ year old machines were not designed with cyber security in mind

Nobody wants to build a nuclear reactor using agile “move fast, break things” principles

Typical device architecture and situation for the OT. (1) Programmable Logic Controller; (2) Human Machine Interface; (3) Supervisory Control and Data Acquisition

Fundamentals 1: Programmable Logic Controller (PLC)

The Programmable Logic Controller (PLC) is the heart of every modern machine: it stores and runs the machine's program. It is a PC built to industrial standards and does not require a monitor, keyboard or other peripherals to function properly. It collects sensor data and calculates complex algorithms to control actuators.

Background:
  • Very old machines use only relays (electric switches) to control actuators and sensors
  • PLCs were introduced because they are more reliable and flexible than purely electrical parts
  • The logic of simple switches is still very present in OT programming
Programming languages:
  • The various suppliers like Siemens, Rockwell, Bosch etc. offer different programming languages
  • PLCs can be programmed with graphical elements or with code
  • Machine vendor programs are not always openly accessible and do not allow changes (loss of warranty)
Communication protocols:
  • PLC manufacturers use different communication protocols and functional standards, which limits interoperability
  • Newer protocols like Profinet or Ethernet/IP are easy to connect to an IT network (if the interface is open). Others like Profibus require additional hardware and implementation effort

Fundamentals 2: PLCs & PLC programming

Fundamentals 3: Process control using PLCs

Information Technology (IT)

IT connects millions of devices and manages their data flows

IT is the hardware and software used to connect thousands of devices in a network and manage their exchange of information. The purpose is to enable data storage and its usage for business and operations. Tasks range from connecting simple telephones to managing complex global networks.

Typical responsibilities:
  • Setting up phones, PCs, printers and other office hardware
  • Monitoring devices and networks for security breaches
  • Maintaining local servers
  • Configuration of business systems e.g. ERP/SAP
  • Updating devices to ensure IT security
  • Setting up local networks and WiFi
  • Implementing business solutions like automation
  • And many more…
Typical vendors:

The concepts of IT focus on digital data and networks

  • Data storage and analytics. Data has to be managed and stored in a manner that allows quick access and deriving insights to improve business KPIs; terabytes of data without contextualization do not have any business value. Example: aggregating sales data and calculating KPIs every quarter.
  • Device management. Remote device management allows the monitoring and updating of devices; blocking and updating devices reduces security risks and malicious actions. Example: updating and restarting a computer remotely.
  • Network security. Policies, processes and practices like firewalls and two-factor authentication adopted to prevent cyber attacks; limiting risk by limiting the number of accesses and rights of users (e.g., not all users are admins, and users are only granted access when it is required for their work). Example: limiting internet access to specific services.
  • Scalability. New software and functionality can be installed and rolled out with only a few clicks; updating existing solutions does not always require new hardware as in OT. Example: handing out Microsoft Office to all employees.

What is important in IT? What is not important?

High importance in IT:

  • Quick development cycles, e.g., agile: good user experience is more important than a perfectly designed app
  • Scalability: apps need to handle millions of users at the same time (e.g., Google, Netflix)
  • User experience: if something is unintuitive, people tend not to use it

Of lesser importance:

  • Reliability & safety: hardware is redundant, so if one component fails another can take over; the consequences of hardware failures are smaller
  • Maintainability & standards: standards are usually best practices and might change over time; there are no hard-written norms
  • Certifications: certifications are therefore not legally required

Nobody wants to build an app for years just so that the end-user removes it within 30 seconds

Fundamentals 1: Networking

Fundamentals 2: Cloud and Microservices

The term cloud refers to servers and the software running on them. These servers can be used to compute data, e.g., process a customer order or simulate the weather, and at the same time store it. This data can be accessed around the globe simultaneously at high speed, which enables a centralized “single source of truth”.

Cloud products:
  • Cloud providers offer their advanced analytics and machine learning capabilities to reduce the time needed to generate insights (Platform as a Service - PaaS)
  • Storage and computational power can be booked flexibly and used freely
  • Out of the box applications running in the browser without installation
Microservices:
  • Small standalone blocks, each running only a small function
  • Whenever one microservice block crashes, the rest are unaffected (high stability)
  • One solution can be composed of multiple already available microservices
Scalability:
  • Microservice blocks can be flexibly turned on and off depending on the user requirements
  • Easy scalability allows customers to only pay what they use
  • Single solutions can be deployed and accessed globally without installation on each personal computer

Fundamentals 3: How microservices are built: Docker in 100 seconds

Fundamentals 4: How to orchestrate IT stacks: Kubernetes in 100 seconds

Fundamentals 5: Typical network setups in production facilities

Typical network setups in production facilities

Industrial Internet of Things (IIoT)

What it’s all about

Why is digital transformation relevant now?

Technology advancements have lowered the barriers to industrial IoT. The benefits of IIoT are real and sizable.

How can manufacturing organizations capture value at scale?

A digital transformation in manufacturing requires an orchestrated approach across the dimensions of business, organization and technology. A holistic framework focuses on full value capture through a focus on return on investment, capability building and the technical IIoT ecosystem.

Value created through digital transformation

Cases following a digital transformation approach show great impact on, e.g., throughput, production efficiency, gross margin and quality across various industries.

A full digital transformation of manufacturing needs to consider business, technology and organization

Business: Impact Drive Solutions
  • Impact comes from a top-down prioritized portfolio of use cases to address highest value first
  • Digital transformations need to have a return-on-investment mindset
Organization: New way of working and dedicated approach to skills and capabilities
  • Digital transformations are multiyear, company-wide journeys requiring a central transformation engine and a value capture approach
  • Innovative digital-capability-building approaches allow the rapid upskilling of thousands of employees
Technology: Cloud enabled platforms and ecosystem (focus of the United Manufacturing Hub)
  • Form a comprehensive, secure, affordable and scalable technology infrastructure based on IoT platform architectures
  • Build and lead a focused ecosystem of technology partners

IIoT sits at the intersection of IT and OT

IIoT sits at the intersection of IT and OT.

Architecting for scale

Architecting for scale.

Best-practices 1: Avoid common traps in the IIoT space

See also Open source in Industrial IoT: an open and robust infrastructure instead of reinventing the wheel

Avoid lock-in effects
  • Use open and freely accessible standards
  • Use open-source whenever reasonable
  • When acquiring new software, hardware or machines define contracts on data property, security plans and service levels
  • Check for and request full and understandable documentation of each device and interface
Use both of best worlds (IT and OT)

Look into the other world to get alternative solutions and inspiration e.g.

  • Grafana dashboard instead of a built in HMI
  • Industrial PCs instead of Raspberry Pis
  • Secure crucial connections with firewalls (e.g. pfSense)
Avoid quick fixes
  • Use industry-wide IT/OT standards wherever applicable before developing on your own
  • Invest enough time for architecture basics and the big picture to avoid rework in the near future
  • Always document your project, even when it seems unnecessary at the moment

Best-practices 2: Protocols which allow communication of IT and OT systems

Coming from IT: MQTT

MQTT

  • A lightweight protocol with low bandwidth and power requirements, which is the leading standard for IoT applications
  • All devices connect to an MQTT broker, which stores and distributes information to the devices that have subscribed to it (similar to a social network)
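
As a minimal sketch of this publish/subscribe pattern (assuming a broker on localhost and the Node.js mqtt package; topic and payload are taken from the sensorconnect example later in this guide):

// Minimal MQTT publish/subscribe sketch using the Node.js "mqtt" package.
// The broker address is a placeholder - adjust it to your own setup.
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  // Subscribe first, then publish a sample sensor reading.
  client.subscribe("ia/raw/2020-0102/0000005898845/X01/210-156");
  client.publish(
    "ia/raw/2020-0102/0000005898845/X01/210-156",
    JSON.stringify({ timestamp_ms: Date.now(), distance: 16 })
  );
});

client.on("message", (topic, payload) => {
  // Every subscriber to this topic receives the message from the broker.
  console.log(topic, JSON.parse(payload.toString()));
});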
Coming from IT: REST API

REST API

  • Standard web application interface which is used on almost every website
  • Can be used to request information from the web, e.g., weather data or the distance between two cities (Google Maps), or to request actions, e.g., saving a file in the database
Coming from OT: OPC/UA

OPC/UA

  • Standard interface for automation and machine connectivity
  • Highly complex protocol with a wide range of capabilities but low user-friendliness

Best-practices 3: Reduce complexity in machine connection with tools like Node-RED

See also Node-RED in Industrial IoT: a growing standard

Best-practices 4: Connect IT and OT securely using a Demilitarized Zone (DMZ)

See also Why are our networks open by default and how do I protect my valuable industrial assets?

Architecture

See also Architecture

Example projects

See also Examples

1.2 - 1. Installation

This section explains how the system (edge and server) can be set up for development and testing environments.

There are three options to setup a development environment:

  1. using a separate device in combination with k3OS and our installation script (preferred). This requires an external device and is a fully automated installation.
  2. using minikube (recommended for developers working on the core functionalities of the stack). This method allows you to install the stack on your device and is semi-automated.
  3. manual installation (recommended for production environments, if you want to have fine grained control over the installation steps). This can be executed either on an external device or on your device.

The focus of this article is to provide all necessary information to install it in a compressed tutorial. There are footnotes providing additional information on certain steps, that might be new to certain user groups.

Option 1: using a separate device in combination with k3OS and our installation script

Note: this content is also available as an in-person workshop with an experienced facilitator guiding the participants through the installation and answering questions. Contact us for more information!

Prerequisites

This installation method requires some prior setup:

  • an edge device with x86 architecture. We recommend using the K300 from OnLogic
  • the latest version of k3OS 1 installed on a bootable USB-stick 2.
  • a computer with SSH / SFTP client 3 and Lens (for accessing the Kubernetes cluster) installed. We recommend a laptop with an Ethernet port or with an Ethernet adapter.
  • local LAN (with DHCP) available via at least two Ethernet cables and access to the internet 4
  • a computer monitor connected to the edge device
  • a keyboard connected to the edge device

Installation

This step is also available via a step-by-step video: TODO

k3OS

  1. Insert your USB-stick with k3OS into your edge device and boot from it 5
  2. Install k3OS. When asked for a cloud-init file, enter this URL and confirm: https://www.umh.app/development.yaml. If you are paranoid or want to set up devices for production, you can copy the file, modify it and host it yourself. Here is the template

This process takes around 15-20 minutes, depending on your internet connection. During this time, no further information about the installation status will be visible on the device's output (the information on the computer screen).

Getting access to the device

To verify whether the installation worked and to access Grafana (the dashboard) and Node-RED, we will first enable SSH via password authentication, fetch the login details for Kubernetes and then log in via Lens.

Step 1: Login

The login console will look “messed up” due to the logs of the installation process from the steps above.

Immediately after start. Nothing is messed up yet.

"Messed up" login screen

You can “clean it up” by pressing enter twice.

You can also immediately proceed by entering the default username rancher (do not forget to press enter) and the default password rancher to log in.

Logging into k3OS using the username `rancher`

Logging into k3OS using the password `rancher`

After a successful login you should see the current IP address of the device on your computer screen.

Successfully logged into k3OS

Step 2: Enable SSH password authentication

Enable SSH password authentication in k3OS 6. This step is no longer necessary, as the automated setup script enables it automatically, and can therefore be skipped. This paragraph only exists to remind you that this setting is not the default behavior of k3OS and should be deactivated in production environments.

For production environments we recommend authenticating with a certificate instead, which is enabled by default. This can be achieved by modifying the cloud-init file and linking to a public key stored in your GitHub account.

Step 3: Connect via SSH

Connect via SSH 3 from your laptop to the edge device. The IP address is shown on the computer screen of your edge device (see also step 1). If it is not visible anymore, you can view the current IP address using ip addr.

Username: rancher Password: rancher

Step 4: Getting Kubernetes credentials

Execute cat /etc/rancher/k3s/k3s.yaml in your SSH session on your laptop to retrieve the Kubernetes credentials. Copy the content of the result into your clipboard.

Execute `cat /etc/rancher/k3s/k3s.yaml`

Copy the content

Connect with the edge device using the software Lens and the Kubernetes credentials from your clipboard.

Add a new cluster in Lens

Select `Paste as text`

Paste from the clipboard

Ensure that you have adjusted the IP in the Kubernetes credentials with the IP of the edge device.
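
By default, the copied k3s.yaml points at the loopback interface; a minimal excerpt of the line to change (the IP is a placeholder for your edge device's address):

# Excerpt from /etc/rancher/k3s/k3s.yaml
clusters:
- cluster:
    server: https://192.168.1.2:6443   # default is https://127.0.0.1:6443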

You have now access to the Kubernetes cluster!

Verifying the installation and extracting credentials to connect with the dashboard

The installation is finished when all pods are “Running”. You can check this by clicking on Pods on the left side.

Click in Lens on Workloads and then on Pods

Select the relevant namespaces `factorycube-server` and `factorycube-edge`

everything should be running

Some credentials are automatically generated by the system. Among them are the login credentials for Grafana. You can retrieve them by clicking on “Secrets” on the left side in Lens. Then search for a secret called “grafana-secret” and open it. Press “decode” and copy the password into your clipboard.

Press on the left side in Lens on Configuration and then Secret.

Then select grafana-secret

Then click on the eye on the right side of adminpassword to decode it

Opening Grafana and Node-RED

Grafana is now accessible by opening the following URL in your browser: http://<IP>:8080 (e.g., http://192.168.1.2:8080). You can login by using the username admin and password from your clipboard.

Node-RED is accessible by opening the following URL in your browser: http://<IP>:1880/nodered.

Once you have access, you can proceed with the second article Connecting machines and creating dashboards.

Option 2: using minikube

This option is only recommended for developers. Therefore, the installation is targeted for them and might not be as detailed as option 1.

Prerequisites

  • minikube installed according to the official documentation
  • repository cloned using git clone https://github.com/united-manufacturing-hub/united-manufacturing-hub.git or downloaded and extracted using the download button on GitHub.
  • helm 7 and kubectl 8 installed

Steps

  1. Start minikube using minikube start. If minikube fails to start, see the drivers page for help setting up a compatible container or virtual-machine manager.

    Output of the command `minikube start`

  2. If everything went well, kubectl is now configured to use the minikube cluster by default. kubectl version should look like in the screenshot.

    Expected output of `kubectl version`

  3. Go into the cloned repository and into the folder deployment/factorycube-edge
  4. Execute: kubectl create namespace factorycube-edge && kubectl create namespace factorycube-server

    Expected output of `kubectl create namespace`

  5. Execute the following command to get an example development configuration: curl https://docs.umh.app/examples/factorycube-server/development_values.yaml --output development_values.yaml

    Output of curl

  6. Install factorycube-edge by executing the following command: helm install factorycube-edge . --values ./development_values.yaml -n factorycube-edge

    Output of `helm install`

  7. Go to the factorycube-server folder, e.g. cd ../factorycube-server
  8. Install factorycube-server by executing the following command: helm install factorycube-server . -n factorycube-server

    Output of `helm install`

Now go grab a coffee and wait 15-20 minutes until all pods are “Running”.

Now you should be able to see the cluster using Lens

Option 3: manual installation

For a manual installation, we recommend that you take a look at the installation script, follow the commands in it manually, and adjust them where needed.


  1. See also our guide: What is semantic versioning ↩︎

  2. See also our guide: How to flash an operating system on a USB-stick ↩︎

  3. See also our guide: How to connect via SSH ↩︎

  4. See also our guide: How to setup a development network ↩︎

  5. See also our guide: How to boot from a USB-stick ↩︎

  6. See also our guide: Enabling k3os password authentication ↩︎

  7. Can be installed using the following command: export VERIFY_CHECKSUM=false && curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 get_helm.sh && ./get_helm.sh ↩︎

  8. Can be installed on Ubuntu using the following command: sudo apt-get install kubectl ↩︎

1.3 - 2. Connecting machines and creating dashboards

This section explains how the United Manufacturing Hub is used in practice

1. Extract data using factorycube-edge

The basic approach for data processing on the local hardware is to extract data from various data sources (OPC/UA, MQTT, REST), extract the important information, and then make it available to the United Manufacturing Hub via a predefined interface (MQTT). For this data processing on the local hardware we use Node-RED.

To extract and pre-process the data from different data sources we use the open-source software Node-RED. Node-RED is a low-code programming tool for event-driven applications.

If you haven’t worked with Node-RED yet, here is some good documentation from Node-RED!

Here you can download the flow

General Configuration

Basically, three pieces of information must be communicated to the system. For more information feel free to check this article. These three pieces of information must be set in the system via the green configuration node in Node-RED, so that the data can be assigned exactly to an asset:

The customer ID to be assigned to the asset: customerID

The location where the asset is located: location

The name of the asset: AssetID
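
Assuming, for example, the customerID factoryinsight, the location aachen and the AssetID testmachine (all placeholder values), the standardized data points described below would be published under topics such as:

Topic: ia/factoryinsight/aachen/testmachine/count

The exact topic layout is defined in the UMH datamodel article.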

Furthermore, under the general settings you will find the state logic, which determines the machine state with the help of the activity and detectedAnomaly topics. For more information feel free to check this article.

Inputs:

With the help of the inputs you can tap into different data sources, for example:

Interaction with sensorconnect (plug-and-play connection of IO-Link sensors):

With the help of Sensorconnect, different sensors can be connected quickly and easily via an IFM gateway. The sensor values are automatically extracted from the software stack and made available via MQTT.

To get a quick and easy overview of the available MQTT messages and topics we recommend the MQTT Explorer. If you don’t want to install any extra software, you can use the MQTT-in node to subscribe to all available topics by subscribing to # and then direct the messages of the MQTT-in nodes into a debug node. You can then display the messages in the Node-RED debug window and get information about the topics and available data points.

Topic structure: ia/raw/<transmitterID>/<gatewaySerialNumber>/<portNumber>/<IOLinkSensorID>

Example for ia/raw/

Topic: ia/raw/2020-0102/0000005898845/X01/210-156

This means that the transmitter with the serial number 2020-0102 has an ifm gateway connected to it with the serial number 0000005898845. This gateway has the sensor 210-156 connected to its first port, X01.

{
"timestamp_ms": 1588879689394, 
"distance": 16
}
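
As a sketch of how such a message could be handled in a Node-RED function node (assuming the MQTT-in node is configured to parse the JSON payload; field names follow the example above):

// Sketch of a Node-RED function node parsing a raw sensorconnect message.
// Topic layout: ia/raw/<transmitterID>/<gatewaySerialNumber>/<portNumber>/<IOLinkSensorID>
const parts = msg.topic.split("/");
const sensorID = parts[5]; // e.g. "210-156"

// msg.payload, e.g. { timestamp_ms: 1588879689394, distance: 16 }
msg.payload = {
    timestamp_ms: msg.payload.timestamp_ms,
    distance: msg.payload.distance,
};
msg.sensorID = sensorID;
return msg;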

Extract information and make it available to the outputs:

In order for the data to be processed easily and quickly by the United Manufacturing Hub, the input data (OPC/UA, Siemens S7) must be prepared and converted into a standardized data format (MQTT topics). For a deep explanation of our MQTT data model check here and here.

The four most important data points:

  • Information whether the machine is running or not: /activity
  • Information about anomalies or concrete reasons for a machine standstill: /detectedAnomaly
  • The produced quantity: /count
  • An interface to communicate any process value to the system (e.g. temperature or energy consumption): /processvalue

Using the information from the topics /activity and /detectedAnomaly, the statelogic node calculates the discrete machine state: it first checks whether the machine is running; if the machine is not running, the machine state is set equal to the last /detectedAnomaly, analogous to the state model. The discrete machine state is then made available again via the /state topic.
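
A strongly simplified sketch of that logic in a Node-RED function node (this is not the actual statelogic implementation; payload shapes and state names are assumptions):

// Simplified sketch of the state logic described above - payload shapes
// and state names are assumptions, not the real statelogic node.
let lastAnomaly = context.get("lastAnomaly") || null;

if (msg.topic.endsWith("/detectedAnomaly")) {
    // Remember the last anomaly, e.g. "pause".
    context.set("lastAnomaly", msg.payload.detectedAnomaly);
    return null; // nothing to emit yet
}

if (msg.topic.endsWith("/activity")) {
    const running = msg.payload.activity === true;
    msg.topic = msg.topic.replace("/activity", "/state");
    msg.payload = {
        timestamp_ms: msg.payload.timestamp_ms,
        // If the machine is not running, fall back to the last anomaly.
        state: running ? "running" : (lastAnomaly || "unknown stop"),
    };
    return msg;
}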

Implementation example: You would like to determine the output and machine condition of a filling machine.

Used Sensors:

  • Lightbarrier for counting the bottles
  • A button bar via which the machine operator can inform the system that he is on break, for example
  1. Extract, via the MQTT-in node, the information from the light barrier about whether a bottle was produced. If a bottle was produced, send a message to the /count topic, analogous to the MQTT data model.
  2. Use the output_to_activity node to turn the information “a bottle was produced” into the information “the machine is running”, e.g., if a bottle is produced every X seconds, set the activity equal to true, analogous to the MQTT data model.
  3. Use the information from the button bar to tell the system why the machine is not running, e.g., whenever button 3 is pressed, send pause to the detectedAnomaly node, analogous to the MQTT data model.

Now the machine status is automatically determined and communicated to the United Manufacturing Hub for further analysis, for example of speed losses.
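
As an illustration of step 1, a function node converting a light-barrier pulse into a /count message might look like this (the topic and the trigger condition are assumptions; the payload follows the data model described above):

// Sketch: turn a light-barrier pulse into a /count message.
// Assumes msg.payload is truthy whenever a bottle passes the barrier.
if (msg.payload) {
    return {
        topic: "ia/factoryinsight/aachen/fillingmachine/count", // placeholder asset
        payload: { timestamp_ms: Date.now(), count: 1 },
    };
}
return null; // no bottle detected, emit nothing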

TODO: #63 add example Flow for data processing

Testing:

With the help of the testing flows you can test your entire system or simply simulate some sample data for visualization.

See also DCC Aachen example in our showcase.

2. Create dashboards using factorycube-server

TODO

1.4 - 3. Using it in production

This section explains how the system can be set up and run safely in production

This article is split up into two parts:

The first part will focus on factorycube-edge and the Industrial Automation world. The second part will focus on factorycube-server and the IT world.

factorycube-edge

The world of Industrial Automation is heavily regulated, as very often not only expensive machines are controlled, but also machines that can potentially injure a human being. Here is some information that will help you set it up in production (not legal advice!).

If you are unsure about how to setup something like this, you can contact us for help with implementation and/or certified devices, which will ease the setup process!

Hardware & Installation, Reliability

One key component in Industrial Automation is reliability. Hardware needs to be carefully selected according to your needs and standards in your country.

When changing things at the machine, you need to ensure that you are not voiding the warranty or the CE certification of the machine. Even just installing something in the electrical rack and/or connecting to the PLC can do that! And these are not just unnecessary regulations; they are actually important:

PLCs can be pretty old and usually do not have much capacity for IT applications. Therefore, it is essential when extracting data not to overload the PLC's capabilities by requesting too much data. We strongly recommend testing the performance and closely watching the CPU and RAM usage of the PLC.

This is the reason we sometimes install additional sensors instead of plugging into the existing ones. And sometimes this alone is enough to get the relevant KPIs out of the machine, e.g., the Overall Equipment Effectiveness (OEE).

Network setup

To ensure the safety of your network and PLC, we recommend a network setup like the following:

Network setup with the machine network, the internal network and the PLC network separated from each other

The reason we recommend this setup is to ensure the security and reliability of the PLC and to follow industry best practices, e.g., the “Leitfaden Industrie 4.0 Security” from the VDMA (Verband Deutscher Maschinen- und Anlagenbau) or Rockwell.

Additionally, we take more advanced steps than actually recommended (e.g., preventing almost all network traffic to the PLC), as we have very often seen that machine PLCs are not set up according to best practices and the PLC manufacturer's manuals by system integrators, or even the machine manufacturer, due to a lack of knowledge. Default passwords are not changed and ports are not closed, which results in unnecessary attack surfaces.

Also, updates are almost never installed on a machine PLC, resulting in well-known security holes remaining in the machines for years.

Another argument is a pretty practical one: in Industry 4.0 we see more and more devices being installed on the shopfloor that require access to machine data. Our stack will not be the only one accessing and processing data from the production machine. There might be entirely different solutions out there that need real-time access to the PLC data. Unfortunately, a lot of these devices are proprietary and sometimes even ship with hidden remote access features (very common among Industrial IoT startups, unfortunately…). We created the additional DMZ around each machine to prevent one solution / hostile device at one machine from being able to access the entire machine park. There is only one component (usually Node-RED) communicating with the PLC and sending the data to MQTT. If there is a hostile device somewhere, it will have very limited access by default unless specified otherwise, as it can get all required data directly from the MQTT data stream.

Our certified device “machineconnect” will have that network setup by default. Our certified device “factorycube” has a slightly different network setup, which you can take a look at here.

Other useful commands

Quick setup on k3OS:

  1. export VERIFY_CHECKSUM=false && curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 get_helm.sh && ./get_helm.sh
  2. curl -L https://github.com/united-manufacturing-hub/united-manufacturing-hub/tarball/v0.4.2 | tar zx && mv $(find . -maxdepth 1 -type d -name "united-manufacturing-hub*") united-manufacturing-hub
  3. helm install factorycube-edge /home/rancher/united-manufacturing-hub/deployment/factorycube-edge --values "/home/rancher/CUSTOM.yaml" --kubeconfig /etc/rancher/k3s/k3s.yaml

factorycube-server

In general, the factorycube-server installation is tailored strongly to the environment it is running in. Therefore, we can only provide general guidance on setting it up.

WARNING: THIS SECTION IS STILL IN WORK, PLEASE ONLY USE AS A ROUGH START. WE STRONGLY RECOMMEND CONTACTING US IF YOU ARE PLANNING ON USING IT IN PRODUCTION ENVIRONMENT AND WITH EXPOSURE TO THE INTERNET

Example deployment on AWS EKS

To give you a better idea, this section explains an example production deployment on AWS EKS.

Preparation

General

  • Use Cloudflare as DNS and firewall. This will provide an additional security layer on top of all your HTTP/HTTPS applications, e.g., factoryinsight or Grafana

AWS

  • Set up an AWS EKS cluster using eksctl
  • Set up an S3 bucket and an IAM user
  • Add an IAM policy to the user (assuming the bucket is called umhtimescaledbbackup)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::umhtimescaledbbackup/*",
                "arn:aws:s3:::umhtimescaledbbackup"
            ]
        }
    ]
}

If you do not add the IAM policy, you might get an ACCESS DENIED error in the pgbackrest pod.

Kubernetes

  • Create a namespace called dev2
  • Later, use the release name dev2. If you use a different release name, you might need to adjust dev2 in the following aws_eks.yaml file
  • Set up nginx-ingress-controller (e.g., using the bitnami helm chart)
  • Set up external-dns
  • Set up cert-manager and create a certificate issuer called letsencrypt-prod (see also link)
  • To enable backups using S3 buckets, create a secret called dev2-pgbackrest with the following content:
kind: Secret
apiVersion: v1
metadata:
  name: dev2-pgbackrest
  namespace: dev2
data:
  PGBACKREST_REPO1_S3_BUCKET: <redacted>
  PGBACKREST_REPO1_S3_ENDPOINT: <redacted>
  PGBACKREST_REPO1_S3_KEY: <redacted>
  PGBACKREST_REPO1_S3_KEY_SECRET: <redacted>
  PGBACKREST_REPO1_S3_REGION: <redacted>
type: Opaque

aws_eks.yaml

We recommend the following values to get your journey to production started (using release name dev2 and namespace dev2):


### factoryinsight ###
factoryinsight:
  enabled: true
  image: unitedmanufacturinghub/factoryinsight
  replicas: 2
  redis:
    URI1: dev2-redis-node-0.dev2-redis-headless:26379
    URI2: dev2-redis-node-1.dev2-redis-headless:26379
    URI3: dev2-redis-node-2.dev2-redis-headless:26379
  db_host: "dev2-replica"
  db_port: "5433"
  db_password: "ADD_STRONG_PASSWORD_HERE"
  ingress:
    enabled: true
    publicHost: "api.dev2.umh.app"
    publicHostSecretName: "factoryinsight-tls-secret"
    annotations:
      external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
      cert-manager.io/cluster-issuer: "letsencrypt-prod" 
  resources:
    limits:
       cpu: 1000m         
    requests:
       cpu: 200m      


### mqtt-to-postgresql ###
mqtttopostgresql:
  enabled: true
  image: unitedmanufacturinghub/mqtt-to-postgresql
  replicas: 2
  storageRequest: 1Gi

### timescaleDB ###
timescaledb-single:
  enabled: true
  replicaCount: 2
  
  image:
    # Image was built from
    # https://github.com/timescale/timescaledb-docker-ha
    repository: timescaledev/timescaledb-ha
    tag: pg12-ts2.0-latest
    pullPolicy: IfNotPresent 
  
  backup:
    enabled: true
  
  persistentVolumes:
    data:
      size: 20Gi 
    wal:
      enabled: true
      size: 5Gi
  
### grafana ###
grafana:
  enabled: true

  replicas: 2

  image:
    repository: grafana/grafana
    tag: 7.5.9
    sha: ""
    pullPolicy: IfNotPresent

  service:
    type: ClusterIP 

  ingress:
    enabled: true
    annotations: 
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
    labels: {}
    path: /

    # pathType is only for k8s > 1.19
    pathType: Prefix

    hosts:
      - dev2.umh.app 

    tls: []

  ## Pass the plugins you want installed as a list.
  ##
  plugins: 
      - grafana-worldmap-panel
      - grafana-piechart-panel
      - aceiot-svg-panel
      - natel-discrete-panel
      - isaozler-paretochart-panel
      - williamvenner-timepickerbuttons-panel
      - agenty-flowcharting-panel
      - marcusolsson-dynamictext-panel
      - factry-untimely-panel
      - cloudspout-button-panel 


  ## Grafana's primary configuration
  ## NOTE: values in map will be converted to ini format
  ## ref: http://docs.grafana.org/installation/configuration/
  ##
  grafana.ini:
    paths:
      data: /var/lib/grafana/data
      logs: /var/log/grafana
      plugins: /var/lib/grafana/plugins
      provisioning: /etc/grafana/provisioning
    analytics:
      check_for_updates: true
    log:
      mode: console
    grafana_net:
      url: https://grafana.net
    database:
      host: dev2 
      user: "grafana"
      name: "grafana"
      password: "ADD_ANOTHER_STRONG_PASSWORD_HERE"
      ssl_mode: require
      type: postgres

  ## Add a separate remote image renderer deployment/service
  imageRenderer:
    # Enable the image-renderer deployment & service
    enabled: true
    replicas: 1

####################### nodered #######################
nodered:
  enabled: true 
  tag: 1.2.9
  port: 1880
  storageRequest: 1Gi
  timezone: Europe/Berlin
  serviceType: ClusterIP
  ingress:
    enabled: true
    publicHost: "nodered.dev2.umh.app"
    publicHostSecretName: "nodered-tls-secret"
    annotations:
      external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
  settings: |-  
    module.exports = {
        // the tcp port that the Node-RED web server is listening on
        uiPort: process.env.PORT || 1880,
        // By default, the Node-RED UI accepts connections on all IPv4 interfaces.
        // To listen on all IPv6 addresses, set uiHost to "::",
        // The following property can be used to listen on a specific interface. For
        // example, the following would only allow connections from the local machine.
        //uiHost: "127.0.0.1",
        // Retry time in milliseconds for MQTT connections
        mqttReconnectTime: 15000,
        // Retry time in milliseconds for Serial port connections
        serialReconnectTime: 15000,
        // The following property can be used in place of 'httpAdminRoot' and 'httpNodeRoot',
        // to apply the same root to both parts.
        httpRoot: '/nodered',
        // If you installed the optional node-red-dashboard you can set its path
        // relative to httpRoot
        ui: { path: "ui" },
        // Securing Node-RED
        // -----------------
        // To password protect the Node-RED editor and admin API, the following
        // property can be used. See http://nodered.org/docs/security.html for details.
        adminAuth: {
            type: "credentials",
            users: [
                {
                    username: "admin",
                    password: "ADD_NODERED_PASSWORD",
                    permissions: "*"
                }
            ]
        },
        
        functionGlobalContext: {
            // os:require('os'),
            // jfive:require("johnny-five"),
            // j5board:require("johnny-five").Board({repl:false})
        },
        // `global.keys()` returns a list of all properties set in global context.
        // This allows them to be displayed in the Context Sidebar within the editor.
        // In some circumstances it is not desirable to expose them to the editor. The
        // following property can be used to hide any property set in `functionGlobalContext`
        // from being list by `global.keys()`.
        // By default, the property is set to false to avoid accidental exposure of
        // their values. Setting this to true will cause the keys to be listed.
        exportGlobalContextKeys: false,
        // Configure the logging output
        logging: {
            // Only console logging is currently supported
            console: {
                // Level of logging to be recorded. Options are:
                // fatal - only those errors which make the application unusable should be recorded
                // error - record errors which are deemed fatal for a particular request + fatal errors
                // warn - record problems which are non fatal + errors + fatal errors
                // info - record information about the general running of the application + warn + error + fatal errors
                // debug - record information which is more verbose than info + info + warn + error + fatal errors
                // trace - record very detailed logging + debug + info + warn + error + fatal errors
                // off - turn off all logging (doesn't affect metrics or audit)
                level: "info",
                // Whether or not to include metric events in the log output
                metrics: false,
                // Whether or not to include audit events in the log output
                audit: false
            }
        },
        // Customising the editor
        editorTheme: {
            projects: {
                // To enable the Projects feature, set this value to true
                enabled: false
            }
        }
    }


##### CONFIG FOR REDIS #####
redis:
  enabled: true
  cluster:
    enabled: true
    slaveCount: 2
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: bitnami/redis
    tag: 6.0.9-debian-10-r13
  master:
    extraFlags:
    - --maxmemory 4gb
    persistence:
      size: 8Gi
    resources:
      limits:
        memory: 4Gi
      requests:
        cpu: 100m
        memory: 1Gi
  podDisruptionBudget:
    enabled: true
    minAvailable: 2
  slave:
    persistence:
      size: 8Gi
    resources:
      limits:
        memory: 4Gi
      requests:
        cpu: 100m
        memory: 1Gi

##### CONFIG FOR VERNEMQ #####

vernemq:
  enabled: true
  AclConfig: |-
     pattern write ia/raw/%u/#
     pattern write ia/%u/#
     pattern $SYS/broker/connection/%c/state

     user TESTING
     topic ia/#
     topic $SYS/#
     topic read $share/TESTING/ia/#

     user ia_nodered
     topic ia/#     
  CACert: |-
        ADD CERT
  Cert: |-
        ADD CERT
  Privkey: |-
        ADD CERT
  image:
    pullPolicy: IfNotPresent 
    repository: vernemq/vernemq
    tag: 1.11.0
  replicaCount: 2 
  service:
    annotations:
      prometheus.io/port: "8888"
      prometheus.io/scrape: "true"
      external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
      external-dns.alpha.kubernetes.io/hostname: mqtt.dev2.umh.app
    mqtts:
      enabled: true
      nodePort: 8883
      port: 8883
    mqtt:
      enabled: false
    type: LoadBalancer

Further adjustments

VerneMQ / MQTT

  • We recommend setting up a PKI infrastructure for MQTT (see also prerequisites) and adding the certificates to vernemq.CACert and the following fields in the helm chart (by default there are highly insecure certificates there)
  • You can adjust the ACL (access control list) by changing vernemq.AclConfig
  • If you are using the VerneMQ binaries in production you need to accept the VerneMQ EULA (which disallows using it in production without contacting them)

Redis

  • The password is generated once during setup and stored in the secret redis-secret

Nodered

  • We recommend disabling external access to Node-RED entirely and spawning a separate Node-RED instance for every project (to avoid having one node crash all flows)
  • You can change the configuration in nodered.settings
  • We recommend setting a password for accessing the web interface in nodered.settings. See also the official tutorial from Node-RED

MinIO

We strongly recommend changing all passwords and salts specified in values.yaml.

2 - Concepts

The software of the United Manufacturing Hub is designed as a modular system. Our software serves as a basic building block for connecting and using various hardware and software components quickly and easily. This enables flexible use and thus the possibility to create comprehensive solutions for various challenges in the industry.

Architecture

Edge-device / IoT gateway

As the central hardware component we use an edge device which is connected to different data sources and to a server. The edge device is an industrial computer on which our software is installed. The customer can either use the factorycube offered by us or their own IoT gateway.

More information about our certified devices can be found on our website

Examples:

  • Factorycube
  • Cubi

Data acquisition

The data sources connected to the edge device provide the foundation for automatic data collection. The data sources can be external sensors (e.g. light barriers, vibration sensors), input devices (e.g. button bars), Auto-ID technologies (e.g. barcode scanners), industrial cameras and other data sources such as machine PLCs. The wide range of data sources allows the connection of all machines, either directly via the machine PLC or via simple and fast retrofitting with external sensors.

More information can be found in the technical documentation of the edge helm chart factorycube-edge

Examples:

  • sensorconnect
  • barcodereader

Data processing

The software installed on the edge device receives the data from the individual data sources. Using various data processing services and Node-RED, the incoming data is pre-processed and forwarded to the connected server via the MQTT broker.

More information can be found in the technical documentation of the edge and server helm chart factorycube-edge factorycube-server

Examples:

  • node-red

Data storage

The data forwarded by the edge device can either be stored on the customer’s servers or, in the SaaS version, in the United Cloud hosted by us. Relational data (e.g. data about orders and products) as well as time series data in high resolution (e.g. machine data like temperature) can be stored.

More information can be found in the technical documentation of the server helm chart factorycube-server

Examples:

  • TimescaleDB

Data usage

The stored data is automatically processed and provided to the user via a Grafana dashboard, or to other computer programs via a REST interface. For each data request, the user can choose between raw data and various pre-processed data such as OEE, MTBF, etc., so that every user (even without programming knowledge) can quickly and easily compose personally tailored dashboards with the help of modular building blocks.

More information can be found in the technical documentation of the server helm chart factorycube-server

Examples:

  • Grafana
  • factoryinsight

Practical implications

Edge devices

Typically you have multiple data sources like sensorconnect or barcodereader, each packaged in a Docker container. They all send their data to the MQTT broker. You can then process the data in Node-RED by subscribing to the data sources via MQTT, processing the data, and writing it back.

Server

Database access

The database on the server side should never be accessed directly by a service except mqtt-to-postgresql and factoryinsight. Instead, these services should be modified to include the required functionalities.

2.1 - Node-RED in Industrial IoT: a growing standard

How an open-source tool is establishing itself in a highly competitive environment against billion dollar companies

Using Node-RED and UaExpert to extract data from the PLC of a Saw

Most people know Node-RED from the areas of smart home or programming introductions (those workshops where you connect things with microcontrollers). Yet, very few people realize that it is frequently used in manufacturing as well.

For those of you that do not know it yet, here is the official self-description from the Node-RED website:

Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.

It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single-click.

And the best thing: it is open-source

The project started in early 2013 in IBM’s research centers. In 2016 it was one of the founding projects of the JS Foundation. Since the release of version 1.0 in 2019 it has been considered safe for production use.

A self-conducted survey in the same year showed that of 515 respondents, 31.5% use Node-RED in manufacturing, and of 868 respondents, 24% said they had created a PLC application with it 1. Also, 24.2% of 871 respondents said that they use InfluxDB in combination with Node-RED. The reason we think that TimescaleDB is better suited for the Industrial IoT than InfluxDB is described in this article.

But how widespread is it really in manufacturing? What are these users doing with Node-RED? Let’s deep dive into that!

Usage of Node-RED in Industry

Gathering qualitative data on the industry usage of specific solutions is hard to almost impossible, as very few companies are open about the technologies they use. However, we can still gather quantitative data, which strongly indicates heavy usage for data extraction and processing in various industries.

First, it comes preinstalled on more and more automation systems like PLCs. Wikipedia has a really good overview here (and it checks out!). Siemens, in particular, is starting to use it more often; see also Node-RED with the SIMATIC IOT2000 or the Visual Flow Creator.

Furthermore, various so-called “nodes” are available that can only be used in manufacturing environments, e.g., to read out data from specific devices. These nodes also have quite impressive download numbers.

Some examples:

We’ve talked with the former developer of two of these nodes, Klaus Landsdorf from the German company Iniationware, which offers companies support in the topics of OPC-UA, Modbus, BACnet and data modeling.

Klaus confirmed our hypothesis:

We get many requests from German hardware manufacturers that rely on Node-RED and on these industry-specific nodes like OPC-UA. The OPC-UA project was sponsored by just two small companies with roughly 5% of the development costs in the case of the IIoT OPC-UA contribution package. But in view of using the package and testing it across multiple industrial manufacturing environments to ensure high stability, we had many and also big companies aboard. In education we have a great response from ILS, because they are using the Iniationware package node-red-contrib-iiot-opcua to teach their students about OPC-UA essentials. Unfortunately, just a few companies understand the idea of commercial backing for open-source software companies through yearly subscriptions, which could save a lot of money for each of them. Do it once, do it stable, and share the payment in open-source projects! That would bring a stable community and contribution packages for specific industrial needs like LTS versions. Simplified: it needs a bit of money to make money in the long term, as well as to provide stable and up-to-date Node-RED packages.

It is also described in the community as production-ready and is used quite frequently. In a topic discussing the question of production readiness, a user by the name of SonoraTechnical says:

Although anecdotal, just Friday, I was speaking to an engineer at a major OPC Software Vendor who commented that they see Node-RED frequently deployed by industrial clients and even use it internally for proving out concepts and technology.

Another one, by the name of gemini86, explains the advantages compared with commercial solutions:

I’m also (very much) late to the party on this, but I work in manufacturing and use AB, Siemens, Codesys, etc. I also use Node-RED for SCADA and database bridging. Our site has well pumps in remote areas where data and commands are sent over 900mhz ethernet radios, and Node-RED handles the MQTT <> modbusRTU processing. Node-RED has been as stable and quick, if not quicker than any Siemens or AB install with comparable network functionality. In fact, I struggled to get my S7-1200 to properly communicate with modbusRTU devices at all. I was completely baffled by their lack of documentation on getting it to work. Their answer? “Use profibus/profinet.” So, I myself prefer Node-RED for anything to do with serial or network communications.

Last but not least, it is very frequently used in scientific environments. There are over 3,000 research papers available on Google Scholar on the usage of Node-RED in industrial environments!

Therefore, it is safe to say that it is widespread, with a growing number of users in industry. But what exactly can you do with it? Let us give some examples of how we are using it!

What you can do with it

The United Manufacturing Hub relies on Node-RED as a tool to

  1. Extract data from production machines using various protocols (OPC/UA, Modbus, S7, HTTP, TCP, …)
  2. Processing and unifying data points into our standardized data model
  3. Customer-specific integrations into existing systems, e.g., MES or ERP systems like SAP or Oracle
  4. Combining data from various machines and triggering actions (machine to machine communication or, in short, M2M)
  5. Creating small interactive and customer-specific dashboards to trigger actions like specifying stop reasons

Let’s explain each one by going through them step-by-step:

1. Extract data from production machines using various protocols

One central challenge of Industrial IoT is obtaining data. The shopfloor is usually fitted out with machines from various vendors and of different ages. As there is little or no standardization in protocols or semantics, the data extraction process needs to be customized for each machine.

With Node-RED, various protocols are available as so-called “nodes” - from automation protocols like OPC/UA (see earlier) to various IT protocols like TCP or HTTP. For any other automation protocol, you can use PTC Kepware, which supports over 140 various PLC protocols.

2. Processing and unifying data points into our standardized data model

Node-RED was originally developed for “visualizing and manipulating mappings between MQTT topics” 2, and this is what we are still using it for today. All the data points that have been extracted from various production machines now need to be standardized to match our data model: the machine state needs to be calculated, the machines’ output converted from various formats into a simple /count message, etc.

More information about this can be found in our data model for Industrial IoT.

Example of working with the United Manufacturing Hub. Everything is flow-based.

3. Customer-specific integrations into existing systems

It is not just good for extracting and processing data. It is also very good for pushing this processed data back into other systems, e.g., MES or ERP systems like Oracle or SAP. These systems usually have REST APIs; here is an example: the REST API for the Oracle ERP.

As the customer implementations of those systems usually differ, the resulting APIs mostly differ as well. Therefore, one needs a tool that can quickly adapt to those APIs. And Node-RED is perfect for this.

4. Machine to machine communication

The AGV automatically gets the finished products from one machine and brings them to empty stations, which is a good example of M2M.

As a result of our data architecture, machine-to-machine communication (M2M) is enabled by default. The data from all edge devices is automatically sent to a central MQTT broker and is available to all connected devices (that have been allowed access to that data).

It is easy to gather data from various machines and trigger additional actions, e.g., to trigger the Automated Guided Vehicle (AGV) to fetch material from the production machine when one station is empty of material.

And the perfect tool to set those small triggers is, as you might have guessed, Node-RED.

5. Creating small interactive and customer-specific dashboards

Example of a dashboard using node-red-dashboard. It features a multi-level stop reason selection and the visualization of production speed.

Sometimes the machine operators need time-sensitive dashboards to retrieve real-time information or to interact with the system. As many companies still do not have a good and reliable internet connection or even network infrastructure, one cannot wait until the website is fully loaded to enter a stop reason. Therefore, sometimes it is crucial to have a dashboard as close to the machine as possible (and not sitting somewhere in the cloud).

For this one, you can use the node-red-dashboard node, which allows you to easily create dashboards and interact with the data via MQTT.

Bonus: What not to do: process control

However, we strongly recommend NOT using it to intervene in the production process, e.g., for process control or for ensuring safety mechanisms, for two reasons:

  1. IT tools and systems like Node-RED are not designed to ensure the safety of machines or people, e.g., guaranteed time-sensitive reactions to a triggered safety alert.
  2. It would also be almost impossible to get that certified and approved, due to reason 1. For these aspects, very good and safe tools, like PLCs or NCs, already exist in the automation world.

Summary

The slogan: “The best things in life are free” also applies in manufacturing:

Node-RED is on the same level as “professional” closed-source and commercial solutions and is used by thousands of researchers and hundreds of daily users in various manufacturing industries.

It is included and enabled in every installation of the United Manufacturing Hub - in the cloud and on the edge.

More information on how we use the system can be found in our Quick Start.


  1. https://nodered.org/about/community/survey/2019/ ↩︎

  2. https://nodered.org/about/ ↩︎

2.2 - The UMH datamodel / MQTT

All events or subsequent changes in production are transmitted via MQTT in the following data model

Introduction

All events or subsequent changes in production are transmitted via MQTT in the following data model. This ensures that all participants are always informed about the latest status.

The data model in the MQTT Broker can be divided into four levels. In general, the higher the level, the lower the data frequency and the more the data is prepared.

If you do not know the idea of MQTT (important keywords: “broker”, “subscribe”, “publish”, “topic”), we recommend reading the wikipedia article first.

All MQTT messages consist of one JSON object with at least two elements in it.

  1. timestamp_ms: the number of milliseconds since 1970-01-01 (also called a UNIX timestamp in milliseconds)
  2. <valueName>: a value

Some messages might deviate from it, but this will be noted explicitly. All topics are to be written in lower case only!
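As a minimal illustration (this snippet is not part of the official interfaces; the broker address, topic, and value name are placeholders), such a message could be published from Python with the paho-mqtt library:

import json
import time

import paho.mqtt.publish as publish  # assumption: paho-mqtt is installed

# every message is one JSON object with at least timestamp_ms and one value
payload = {
    "timestamp_ms": int(time.time() * 1000),  # UNIX timestamp in milliseconds
    "temperature": 35.4,  # <valueName>: a value
}

# topics are lower case only; broker address is a placeholder
publish.single(
    "ia/testcustomer/testlocation/testasset/processValue",
    json.dumps(payload),
    hostname="localhost",
)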

1st level: Raw data

This level contains all raw data, which is not yet contextualized, i.e., assigned to a machine. This is, in particular, all data from sensorconnect.

Topic: ia/raw/

All raw data coming in via sensorconnect.

Topic structure: ia/raw/<transmitterID>/<gatewaySerialNumber>/<portNumber>/<IOLinkSensorID>

Example for ia/raw/

Topic: ia/raw/2020-0102/0000005898845/X01/210-156

This means that the transmitter with the serial number 2020-0102 has one ifm gateway connected to it with the serial number 0000005898845. This gateway has the sensor 210-156 connected to the first port X01.

{
    "timestamp_ms": 1588879689394,
    "distance": 16
}

Topic: ia/rawImage/

All raw data coming in via cameraconnect.

Topic structure: ia/rawImage/<TransmitterID>/<MAC Address of Camera>

image_id: a unique identifier for every image acquired
image_bytes: base64-encoded image in JPG format (in bytes)
image_height: height of the image in pixels
image_width: width of the image in pixels
image_channels: number of color channels included (Mono: 1, RGB: 3)

Example for ia/rawImage/

Topic: ia/rawImage/2020-0102/4646548

This means that the transmitter with the serial number 2020-0102 has one camera connected to it with the serial number 4646548.

{
	"timestamp_ms": 214423040823,
	"image":  {
		"image_id": "<MACaddress>_<timestamp_ms>",
		"image_bytes": 3495ask484...,
		"image_height": 800,
		"image_width": 1203,
		"image_channels": 3
	}
}

Example for decoding an image and saving it locally with OpenCV

import base64
import cv2
import numpy as np

im_bytes = base64.b64decode(incoming_mqtt_message["image"]["image_bytes"])  # incoming_mqtt_message: parsed JSON payload
im_arr = np.frombuffer(im_bytes, dtype=np.uint8)  # im_arr is a one-dimensional Numpy array
img = cv2.imdecode(im_arr, flags=cv2.IMREAD_COLOR)  # decode the JPG bytes into an image
cv2.imwrite(image_path, img)  # image_path: local target path, e.g. "image.jpg"

2nd level: contextualized data

In this level the data is already assigned to a machine.

Topic structure: ia/<customerID>/<location>/<AssetID>/<Measurement> e.g. ia/dccaachen/aachen/demonstrator/count.

An asset can be a step, machine, plant or line. It uniquely identifies the smallest location necessary for modeling the process.

By definition all topic names should be lower case only!

/count

Topic: ia/<customerID>/<location>/<AssetID>/count

Here a message is sent every time something has been counted. This can be, for example, a good product or scrap.

count in the JSON is an integer. scrap in the JSON is an optional integer meaning that scrap pieces of count are scrap. If not specified, it is 0 (all produced goods are good).

Example for /count

{
    "timestamp_ms": 1588879689394, 
    "count": 1
}

/scrapCount

Topic: ia/<customerID>/<location>/<AssetID>/scrapCount

Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. The algorithm starts with the count entry directly before timestamp_ms and iterates step by step back in time, setting the existing counts to scrap until a total of scrap products have been scrapped (a sketch of this back-iteration follows after the example below).

Important notes:

  • You can specify a maximum of 24h to be scrapped to avoid accidents
  • (NOT IMPLEMENTED YET) If counts do not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, these 2 would be ignored. See also #125
  • (NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap. //TODO

scrap in the JSON is an integer.

Example for /scrapCount

{
    "timestamp_ms": 1588879689394, 
    "scrap": 1
}
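A minimal sketch of this back-iteration in Python (illustrative only, not the actual implementation in the UMH microservices; the in-memory list of count entries is an assumption):

def apply_scrap(counts, timestamp_ms, scrap):
    """Mark the latest `scrap` produced pieces before `timestamp_ms` as scrap.

    counts is assumed to be a list of dicts like
    {"timestamp_ms": ..., "count": ..., "scrap": ...}.
    """
    remaining = scrap
    # iterate backwards in time, starting directly before timestamp_ms
    for entry in sorted(counts, key=lambda e: e["timestamp_ms"], reverse=True):
        if entry["timestamp_ms"] >= timestamp_ms:
            continue  # only count entries before the scrapCount message
        if timestamp_ms - entry["timestamp_ms"] > 24 * 3600 * 1000:
            break  # a maximum of 24h may be scrapped
        if remaining <= 0:
            break  # everything requested has been scrapped
        scrapped = min(entry["count"] - entry["scrap"], remaining)
        entry["scrap"] += scrapped
        remaining -= scrapped
    return counts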

/barcode

Topic: ia/<customerID>/<location>/<AssetID>/barcode

A message is sent here each time a barcode scanner connected to the transmitter via USB reads a barcode (via the barcodescanner microservice).

barcode in the JSON is a string.

Example for /barcode

{
    "timestamp_ms": 1588879689394, 
    "barcode": "16699"
}

/activity

Topic: ia/<customerID>/<location>/<AssetID>/activity

A message is sent here every time the machine runs or stops (independent of whether it runs slowly or fast, or of the reason for a stop; this is covered in state).

activity in the JSON is a boolean.

Example for /activity

{
    "timestamp_ms": 1588879689394, 
    "activity": True
}

/detectedAnomaly

Topic: ia/<customerID>/<location>/<AssetID>/detectedAnomaly

A message is sent here each time a stop reason has been identified automatically or by input from the machine operator.

detectedAnomaly in the JSON is a string.

Example for /detectedAnomaly

{
    "timestamp_ms": 1588879689394, 
    "detectedAnomaly": "maintenance"
}

/addShift

Topic: ia/<customerID>/<location>/<AssetID>/addShift

A message is sent here each time a new shift is started.

timestamp_ms_end in the JSON is an integer representing a UNIX timestamp in milliseconds for the end of the shift.

Example for /addShift

{
    "timestamp_ms": 1588879689394, 
    "timestamp_ms_end": 1588879689395
}

/addOrder

Topic: ia/<customerID>/<location>/<AssetID>/addOrder

A message is sent here each time a new order is started.

product_id in the JSON is a string representing the current product name. order_id in the JSON is a string representing the current order name. target_units in the JSON is an integer representing the number of target units to be produced (in the same unit as count).

Attention:

  1. the product needs to be added before adding the order. Otherwise, this message will be discarded (see the ordering sketch after /addProduct below).
  2. one order is always specific to that asset and can, by definition, not be used across machines. For this case, one would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100m batch gets split up into multiple pieces)

Example for /addOrder

{
    "product_id": "Beierlinger 30x15",
    "order_id": "HA16/4889",
    "target_units": 1
}

/addProduct

Topic: ia/<customerID>/<location>/<AssetID>/addProduct

A message is sent here each time a new product is added.

product_id in the JSON is a string representing the current product name. time_per_unit_in_seconds in the JSON is a float specifying the target time per unit in seconds.

Attention: See also notes regarding adding products and orders in /addOrder

Example for /addProduct

{
    "product_id": "Beierlinger 30x15",
    "time_per_unit_in_seconds": 0.2
}
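Since the product must exist before an order can reference it (see the Attention notes above), a client would publish the two messages in this order. A hedged sketch with the paho-mqtt library (broker address and IDs are placeholders):

import json

import paho.mqtt.publish as publish  # assumption: paho-mqtt is installed

base = "ia/testcustomer/testlocation/testasset"

# 1. add the product first ...
publish.single(
    base + "/addProduct",
    json.dumps({"product_id": "Beierlinger 30x15",
                "time_per_unit_in_seconds": 0.2}),
    hostname="localhost",
)

# 2. ... then the order referencing that product; otherwise the
# addOrder message would be discarded
publish.single(
    base + "/addOrder",
    json.dumps({"product_id": "Beierlinger 30x15",
                "order_id": "HA16/4889",
                "target_units": 1}),
    hostname="localhost",
)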

/startOrder

Topic: ia/<customerID>/<location>/<AssetID>/startOrder

A message is sent here each time a new order is started.

order_id in the JSON is a string representing the order name.

Attention:

  1. See also notes regarding adding products and orders in /addOrder
  2. When startOrder is executed multiple times for an order, the last used timestamp is used.

Example for /startOrder

{
    "timestamp_ms": 1588879689394,
    "order_id": "HA16/4889",
}

/endOrder

Topic: ia/<customerID>/<location>/<AssetID>/endOrder

A message is sent here each time an order is ended.

order_id in the JSON is a string representing the order name.

Attention:

  1. See also notes regarding adding products and orders in /addOrder
  2. When endOrder is executed multiple times for an order, the last used timestamp is used.

Example for /endOrder

{
    "timestamp_ms": 1588879689394,
    "order_id": "HA16/4889"
}

/processValue

Topic: ia/<customerID>/<location>/<AssetID>/processValue

A message is sent here every time a process value has been prepared. The key (<valueName>) must be named uniquely.

<valueName> in the JSON is an integer or float representing a process value, e.g., temperature.

Note: as <valueName> is an integer or float, booleans like “true” or “false” are not possible. Please convert them to integers, e.g., “true” –> 1, “false” –> 0 (see the small example below).

Example for /processValue

{
    "timestamp_ms": 1588879689394, 
    "energyConsumption": 123456
}
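For example, a boolean signal read from a PLC could be converted before publishing (illustrative Python; the tag name and the boolean source are made up):

import json
import time

running = True  # boolean read from the PLC (assumption)

payload = {
    "timestamp_ms": int(time.time() * 1000),
    # booleans are not allowed in processValue, so convert to 0/1
    "running": int(running),
}
message = json.dumps(payload)  # -> {"timestamp_ms": ..., "running": 1}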

/productImage

All data coming from /rawImageClassification that was published on the server. Same content as /rawImageClassification, only with a changed topic.

Topic structure: ia/<customer>/<location>/<assetID>/productImage

3rd level: production data

This level contains only highly aggregated production data.

/state

Topic: ia/<customerID>/<location>/<AssetID>/state

A message is sent here each time the asset changes its status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.

state in the JSON is an integer according to this datamodel.

Example for /state

{
    "timestamp_ms": 1588879689394, 
    "state": 10000
}

/cycleTimeTrigger

Topic: ia/<customerID>/<location>/<AssetID>/cycleTimeTrigger

A message should be sent under this topic whenever an assembly cycle is started.

currentStation in the JSON is a string. lastStation in the JSON is a string. sanityTime_in_s in the JSON is an integer.

Example for /cycleTimeTrigger

{
  "timestamp_ms": 1611170736684,
  "currentStation": "1a",
  "lastStation": "1b",
  "sanityTime_in_s": 100
}

/uniqueProduct

Topic: ia/<customerID>/<location>/<AssetID>/uniqueProduct

A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.

  • UID: Unique ID of the current single product
  • isScrap: Information whether the current product is of poor quality and will be sorted out
  • productID: the product that is currently produced
  • begin_timestamp_ms: Start time
  • end_timestamp_ms: Completion time
  • stationID: If the asset has several stations, you can also classify here at which station the product was created (optional)

Example for /uniqueProduct

{
  "begin_timestamp_ms": 1611171012717,
  "end_timestamp_ms": 1611171016443,
  "productID": "test123",
  "UID": "161117101271788647991611171016443",
  "isScrap": false,
  "stationID": "1a"
}

/scrapUniqueProduct

Topic: ia/<customerID>/<location>/<AssetID>/scrapUniqueProduct

A message is sent here each time a unique product has been scrapped.

UID: Unique ID of the current single product.

Example for /scrapUniqueProduct

{
  "UID": "161117101271788647991611171016443"
}

4th level: Recommendations for action

/recommendations

Topic: ia/<customerID>/<location>/<AssetID>/recommendations

Shopfloor insights are recommendations for action that require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.

  • recommendationUID: Unique ID of the recommendation. Used to subsequently deactivate a recommendation (e.g. if it has become obsolete)
  • recommendationType: The ID / category of the current recommendation. Used to narrow down the group of people
  • recommendationValues: Values used to form the actual recommendation set

Example for /recommendations

{
    "timestamp_ms": 1588879689394,
    "recommendationUID": 3556,
    "recommendationType": 8996,
    "enabled": True,
    "recommendationValues": 
    {
        "percentage1": 30, 
        "percentage2": 40
    }
}

in development

/qualityClass

A message is sent here each time a product is classified. Example payload:

The qualityClasses 0 and 1 are defined by default:

qualityClass | Name | Description | Color under which this “state” is automatically visualized by the traffic light
0 | Good | The product does meet the quality requirements | Green
1 | Bad | The product does not meet the quality requirements | Red

The qualityClasses 2 and higher are freely selectable:

qualityClass | Name | Description | Color under which this “state” is automatically visualized by the traffic light
2 | Cookie center broken | Cookie center broken | Freely selectable
3 | Cookie has a broken corner | Cookie has a broken corner | Freely selectable

{
    "timestamp_ms": 1588879689394,
    "qualityClass": 1
}

/detectedObject

in progress (Patrick)

Under this topic, a detected object is published from the object detection. Each object is enclosed by a rectangular field in the image. The position and dimensions of this field are stored in rectangle. The type of detected object can be retrieved with the keyword object. Additionally, the prediction accuracy for this object class is given as confidence. The requestID is only used for traceability and assigns each recognized object to a request/query, i.e. to an image. All objects with the same requestID were detected in one image capture.

{
    "timestamp_ms": 1588879689394,
    "detectedObject": {
        "rectangle": {
            "x": 730,
            "y": 66,
            "w": 135,
            "h": 85
        },
        "object": "fork",
        "confidence": 0.501
    },
    "requestID": "a7fde8fd-cc18-4f5f-99d3-897dcd07b308"
}

/cycleTimeScrap

Under this topic a message should be sent whenever an assembly at a certain station should be aborted because the part has been marked as defective.

{
    "timestamp_ms": 1588879689394,
    "currentStation": "StationXY"
}

TimescaleDB structure

Here is a scheme of the timescaleDB structure:

2.3 - Available states for assets

This data model maps various machine states to relevant OEE buckets.

Introduction

This data model is based on the following specifications:

  • Weihenstephaner Standards 09.01 (for filling)
  • Omron PackML (for packaging/filling)
  • EUROMAP 84.1 (for plastic)
  • OPC 30060 (for tobacco machines)
  • VDMA 40502 (for CNC machines)

Additionally, the following literature is respected:

  • Steigerung der Anlagenproduktivität durch OEE-Management (Focke, Steinbeck)

Abbreviations

  • WS –> “TAG NAME”: Valuename (number)
  • PackML –> Statename (number)
  • EUROMAP –> Statusname (number)
  • Tobacco –> ControlModeName (number)
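To make such a mapping concrete, here is a minimal Python sketch of a lookup table from PackML state names to UMH state numbers. The entries follow the examples in the sections below and are illustrative, not a complete or official mapping:

# map vendor-specific (here: PackML) state names to UMH state numbers
# (excerpt based on the examples in this article)
PACKML_TO_UMH = {
    "Execute": 10000,    # ProducingAtFullSpeedState
    "Stopping": 20000,   # ProducingAtLowerThanFullSpeedState
    "Aborting": 20000,
    "Stopped": 40000,    # UnspecifiedStopState
    "Idle": 40000,
    "Suspended": 90000,  # MaterialIssueOtherState
    "Held": 220000,      # TechnicalOtherStop
}

def to_umh_state(packml_state: str) -> int:
    # fall back to UnknownState (30000) if no mapping exists
    return PACKML_TO_UMH.get(packml_state, 30000)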

ACTIVE (10000-29999)

The asset is actively producing.

10000: ProducingAtFullSpeedState

The asset is running on full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

The asset is NOT running at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

NOT INCLUDED FOR NOW:

  • WS_Prog_Step: all

UNKNOWN (30000-59999)

The asset is in an unspecified state.

30000: UnknownState

We do not have any data for that asset (e.g. connection to PLC aborted).

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but we do not know why (yet).

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around 5 minutes), but we do not know why (yet).

MATERIAL (60000-99999)

The asset has issues with materials.

60000: InletJamState

The machine does not perform its intended function due to a lack of material flow in the infeed of the machine detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the condition of lack in the inlet refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine detected by the sensor system of the control system (machine stop). This condition can only occur in machines that have two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but is recorded separately due to its importance for visualization and technical reporting.

Examples for CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information about which material is lacking)
  • PackML/Tobacco: Suspended

PROCESS (100000-139999)

The asset is in a stop which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently being emptied, e.g. to prevent mold in food products over long breaks like the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

The machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

OPERATOR (140000-159999)

The asset is stopped because of the operator.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is on a break. Note: this is different from a planned shift, as it could count towards performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

PLANNING (160000-179999)

The asset is stopped as it is planned to stop (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

TECHNICAL (180000-229999)

The asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE

2.4 - Digital Shadow - track and trace

A system of features allowing tracking and tracing of individual parts through the production process.

Digital shadow is still in development and not yet deployable.

Introduction

Goal: In order to gain detailed insight into the production process and into the produced products, we needed a system of features to acquire and access information gained by scanners, sensors, etc. This allows better quality assurance and enables production improvements.

Solution: We send MQTT messages containing a timestamp with a single value, like a scanned ID or a measured value, from our edge devices to the MQTT broker and contextualize them with microservices. The gained data is pushed into a database by the MQTT-to-postgres microservice. After that, factoryinsight provides an interface for formatted data, providing maximal usability for BI tools like Tableau. The data is made available to a Tableau server via an SQL database.

Overall Concept

This is the overview of the digital shadow concept.

The following chapters are going through the concept from left to right (from the inputs of the digital shadow to the outputs).

Data Input

Data can be sent as JSON in an MQTT message to the central MQTT broker. UMH recommends sticking to the data definition of the UMH datamodel for the topics and messages, but the implementation is client-specific and can be modeled for the individual problem. The relevant input data for the digital shadow is on Level 1 and Level 2.

Example 1, raw, Level 1 data:

Topic: ia/rawBarcode/2020-0102/210-156
Topic structure: ia/rawBarcode/<transmitterID>/<barcodeReaderID>

{
"timestamp_ms": 1588879689394, 
"barcode": "1284ABCtestbarcode"
}

Example 2, sensorValue, Level 2:

Topic: ia/testCustomerID123/testLocationID123/testAssetID123/processValue
Topic structure: ia/<customerID>/<location>/<AssetID>/processValue

{
"timestamp_ms": 1588879689394, 
"torque": 5.345
}

Contextualization + Messages for MQTT-to-postgres

Now the information is available at the MQTT broker and, because of that, to all subscribed services. But we still need to contextualize the information, meaning: we want to link the gained data to specific products, because right now we only have asset-specific values with timestamps. We are using two different kinds of product IDs for that: AIDs and UIDs (the identifiers are explained in detail later).

First, microservices (stateless if possible) should be used to convert messages under a raw topic into messages under processValue or processValueString. This typically only requires resending the message under the appropriate topic or breaking messages with multiple values apart into single ones.

There are four specific kinds of messages regarding the digital shadow which need to be sent to the MQTT-to-postgres microservice:

  • productTag topic: MQTT Messages containing one specific datapoint for one asset (specified in the MQTT topic): ia/<customerID>/<location>/<AssetID>/productTag
{
"timestamp_ms": 215348452385,
"AID": "14432504350",
"name": "torque",
"value": 2.12
}
  • productTagString topic: because we also want to send strings, we need an MQTT topic for strings: ia/<customerID>/<location>/<AssetID>/productTagString
{
"timestamp_ms": 1243204549,
"AID": "32493855304",
"name": "QualityClass",
"value": "Quality2483"
}
  • addParentToChild: to describe relations between products and the states of those products in production, MQTT-to-postgres expects MQTT messages under the topic ia/<customerID>/<location>/<AssetID>/addParentToChild. One message always contains one child AID and one parent AID. This specifies which parent product was used to generate the child.
{
"timestamp_ms": 124387,
"childAID": "23948723489",
"parentAID": "4329875"
}
  • uniqueProduct: to indicate the generation of a new product or a new product state, send an MQTT message to MQTT-to-postgres under the topic ia/<customerID>/<location>/<AssetID>/uniqueProduct. The is_scrap entry in the uniqueProduct MQTT message is always false, unless we are sure that the part is actually scrap. If we don’t know whether a product is scrap or not, the is_scrap flag is set to false.

    There are two cases of when to send a message under the uniqueProduct topic:

    • The exact product doesn’t have a UID yet (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify the placeholder asset = “storage” in the MQTT message for the uniqueProduct topic.
    • The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.
{
  "begin_timestamp_ms": 1611171012717,
  "end_timestamp_ms": 1611171016443,
  "product_id": "test123",
  "is_scrap": false,
  "uniqueProductAlternativeID": "12493857-a"
}

Generating the contextualized messages

The goal is to convert messages under the processValue and the processValueString topics, containing all relevant data, into messages under the topics productTag, productTagString and addParentToChild. The latter messages contain AIDs, which hold the contextualization information - they are tied to a single product.

The implementation of the generation of the above-mentioned messages with contextualized information is up to the user and depends heavily on the specific process. To help with this, we want to present a general logic and discuss its advantages and disadvantages:

General steps:

  1. Make empty containers for the predefined messages to MQTT-to-postgres when the first production step takes place.
  2. Fill the containers step by step as relevant messages come in.
  3. If a container is full, send it.
  4. If the message from the first production step for the next product is received before the container is full, send the container anyway and set the missing fields to null. Also send an error message. (A minimal code sketch of this logic follows after the example process below.)

Example process:

  1. parent AID 1 scanned (specifically the AID, explained in detail later) -> barcode sent under the processValueString topic
  2. screws fixed -> torque processValue sent
  3. child AID scanned -> barcode processValueString sent
  4. parent AID 2 scanned -> barcode processValueString sent

Example of generating a message under productTagString topic containing the measured torque value for the Example process:

  • when the parent AID is scanned: make an empty container for the message, because scanning the parent AID is the first step
{
"timestamp_ms": 
"AID":
"name": "torque",
"value":
}
  • when the torque value comes in: fill in value and timestamp
{
"timestamp_ms": 13498435234,
"AID":
"name": "torque",
"value": 1.458
}
  • when the child AID comes in: fill it in:
{
"timestamp_ms": 13498435234,
"AID": "34258349857",
"name": "torque",
"value": 1.458
}

Now the container is full: send it away.
Important: always send the uniqueProduct message first, and afterwards the messages for the related productTag/productTagString and the messages on the addParentToChild topic.
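A minimal sketch of this container logic in Python (illustrative only; MQTT I/O and error reporting are simplified, and the topic is a placeholder):

import json

def new_container():
    # step 1: empty container, created when the first production step occurs
    return {"timestamp_ms": None, "AID": None, "name": "torque", "value": None}

def handle(container, field, value, publish):
    # step 2: fill the container step by step as relevant messages come in
    container[field] = value
    if all(v is not None for v in container.values()):
        # step 3: container full -> send it as a productTag message
        publish("ia/testcustomer/testlocation/testasset/productTag",
                json.dumps(container))
        return new_container()
    return container

def flush(container, publish):
    # step 4: the next product started before the container was full ->
    # send it anyway (missing fields serialize as null) plus an error message
    publish("ia/testcustomer/testlocation/testasset/productTag",
            json.dumps(container))
    return new_container()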

Advantages and disadvantages of presented process

Pro | Con
simple | not stateless
general usability is good | might need a lot of different containers if the number of e.g. productTag messages gets too big

Identifiers

At this point, it makes sense to talk about uniqueProductIDs and uniqueProductAlternativeIDs, in short UIDs and AIDs. The concept behind these different types of IDs is crucial for understanding the data structures presented later. Both UID and AID identify a single product. The UID is generated for every state a product was/is in and is mainly important for the database. The AID, on the other hand, might come from a physical label or a written product number. It is usually the relevant ID for engineers and for production planning. Physical labels stay the same after assembly (the same AID can be related to multiple different UIDs). If we have multiple labels on one part, we can also choose one of them for the AID.

AID’s and UID’s are stored in combination one-to-one in the uniqueProductTable (timescaleDB).

Definition of when to change the UID

If we can move a product from point “A” in the production to point “B” or back without causing problems from a process perspective, the UID of the product should stay the same (for example, if the product only gets transported between point “A” and “B”).

If moving the object causes problems (e.g. moving a not-yet-tested object into the bin of “tested products”), the object should have gotten a new UID on its regular way.

Example 1: Testing

Even though testing a product doesn’t change the part itself, it changes its state in the production process:

  • it gets something like a virtual “certificate”
  • the value increases because of that

-> Make a new UID.

Example 2: Transport

Monitored transport from China to Germany (this would be a significant distance: transport data would be useful to include in the digital shadow)

  • parts value increases
  • transport is separately paid
  • not easy to revert

-> Make a new UID

Life of a single UID

Type | creation UID | death UID
without inheritance at creation | topic: storage/uniqueProduct | /addParentToChild (UID is parent)
with inheritance at creation | topic: <asset>/uniqueProduct + addParentToChild (UID is child) | /addParentToChild (UID is parent)

MQTT messages under the productTag topic should not be used to indicate transport of a part. If transport is relevant, change the UID (-> send a new MQTT message to MQTT-to-postgres under the uniqueProduct topic).

Example process to show the usage of AID’s and UID’s in the production:

Explanation of the diagram:

Assembly Station 1:

  • ProductA and ProductB are combined into ProductC
  • Because ProductA and ProductB have not been “seen” by the digital shadow yet, they get a new UID and asset = “storage” assigned (a placeholder asset for unknown/unspecified origin).
  • After ProductC is produced, it gets a new UID and Assy1 assigned as its asset, because it is the child at Assembly Station 1.
  • The AID of the child can always be freely chosen out of the parent AIDs. The AID of ProductA (“A”) is a physical label. Because ProductB doesn’t have a physical label, it gets a generated AID. For ProductC (the child) we can now choose either the AID of ProductA or of ProductB. Because “A” is a physical label, it makes sense to use the AID of ProductA.

MQTT messages to send at Assembly 1:

  • uniqueProduct message for ProductA origin, with asset = storage, under the topic: ia/testcustomer/testlocation/storage/uniqueProduct

    {
      "begin_timestamp_ms": 1611171012717,
      "end_timestamp_ms": 1611171016443,
      "product_id": "test123",
      "is_scrap": false,
      "uniqueProductAlternativeID": "A"
    }
    
  • uniqueProduct message for ProductB origin, with asset = storage, under the topic: ia/testcustomer/testlocation/storage/uniqueProduct

    {
      "begin_timestamp_ms": 1611171012717,
      "end_timestamp_ms": 1611171016443,
      "product_id": "test124",
      "is_scrap": false,
      "uniqueProductAlternativeID": "B"
    }
    
  • uniqueProduct message for ProductC, with asset = Assy1, under the topic: ia/testcustomer/testlocation/Assy1/uniqueProduct

    {
      "begin_timestamp_ms": 1611171012717,
      "end_timestamp_ms": 1611171016443,
      "product_id": "test125",
      "is_scrap": false,
      "uniqueProductAlternativeID": "A"
    }
    
  • addParentToChild message describing the inheritance from ProductA to ProductC, under the topic: ia/testcustomer/testlocation/Assy1/addParentToChild

    {
    "timestamp_ms": 124387,
    "childAID": "A",
    "parentAID": "A"
    }
    
  • addParentToChild message describing the inheritance from ProductB to ProductC, under the topic: ia/testcustomer/testlocation/Assy1/addParentToChild

    {
    "timestamp_ms": 124387,
    "childAID": "A",
    "parentAID": "B"
    }
    
  • productTag message for e.g. a measured process value like the temperature,under the topic: ia/testcustomer/testlocation/Assy1/productTag

    {
    "timestamp_ms": 1243204549,
    "AID": "A",
    "name": "temperature",
    "value": 35.4
    }
    

Now ProductC is transported to Assembly Station 2. Because it is a short transport that doesn’t add value etc., we do not need to produce a new UID for ProductC after the transport.

Assembly Station 2:

  • ProductC stays the same (in the sense that it keeps its UID before and after the transport), because of the easy transport.
  • ProductD is new and not produced at Assembly Station 2, so it gets asset = “storage” assigned.
  • ProductC and ProductD are combined into ProductE. ProductE gets a new UID. Both AIDs are physical. We again freely choose the AID we want to use (AID C was chosen, maybe because after the assembly of ProductC and ProductD, the AID label on ProductD is not accessible while the AID label on ProductC is).

Assembly Station 3:

  • At Assembly Station 3, ProductE comes in and is turned into ProductF.
  • ProductF gets a new UID and keeps the AID of ProductE. It now gets Assy3 assigned as its asset.

Note that the uniqueProduct MQTT message for ProductD would not be sent under the topic of Assembly2 as the asset, but for example under storage. The convention is that every part never seen by the digital shadow “comes” from storage, even though the UID and the related uniqueProduct message are created at the current station.

Batches of parts

If, for example, a batch of screws is supplied to one asset with only one datamatrix code (one AID) for all screws together, there will only be one MQTT message under the topic uniqueProduct created for the batch, with one AID, a newly generated UID and the default supply asset storage.

  • The batch AID is then used as the parent for MQTT messages under the topic addParentToChild. (-> mqtt-to-postgres will repeatedly fetch the same parent UID for the inheritanceTable)
  • The batch AID only changes when a new batch AID is scanned.

MQTT-to-postgres

The MQTT-to-postgres microservice now uses the MQTT messages it gets from the broker and writes the information into the database. The microservice is not use-case-specific, so the user just needs to send it the correct MQTT messages.

MQTT-to-postgres now needs to generate UIDs and save the information in the database, because the database uses UIDs to store and link all the generated data efficiently. Remember that the incoming MQTT messages are contextualized with AIDs.

We can divide the tasks of MQTT-to-postgres into three (regarding the digital shadow):

  • Use the MQTT messages under the topic uniqueProduct, which give us the AID and the asset, and make an entry in the uniqueProductTable containing the AID and a newly generated UID.

    1. Generate a UID (with snowflake: https://en.wikipedia.org/wiki/Snowflake_ID)
    2. Store the new UID and all data from the uniqueProduct MQTT message in the uniqueProductTable
  • Use the productTag and productTagString topic MQTT messages. The AID and the asset ID are used to look for the uniqueProduct the messages belong to. The value information is then stored with the UID in TimescaleDB (a lookup sketch follows after this list).

    1. Look in the TimescaleDB uniqueProductTable for the uniqueProduct with the same asset and AID as in the productTag message (the child)
    2. Once found, get the UID of the child (that is why it is important to send the uniqueProduct message before sending productTag/productTagString)
    3. Write the value information, with the found UID instead of the AID, into the productTagTable/productTagStringTable
  • Use the addParentToChild message. Retrieve the child UID by using the child AID and the asset. Get the parent UIDs by finding the last time the parents’ AIDs were stored in the uniqueProductTable.

    1. Look in the TimescaleDB uniqueProductTable for the uniqueProduct with the same asset and AID as written in the child of the addParentToChild message
    2. Look in the TimescaleDB uniqueProductTable across all other assets for the last time the AID of each parent was used and get the UID
    3. Write the UID of the child and the UIDs of the parents into the productInheritanceTable
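For illustration, the child lookup from the second task could look roughly like this in Python. The table and column names are invented for this sketch and do not necessarily match the real schema of mqtt-to-postgres:

import psycopg2  # assumption: direct SQL access, for illustration only

def get_child_uid(conn, asset_id: str, aid: str):
    """Look up the UID of the child for a productTag message.

    Table and column names are placeholders, not the actual schema.
    """
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT uniqueProductID FROM uniqueProductTable
            WHERE asset_id = %s AND uniqueProductAlternativeID = %s
            ORDER BY begin_timestamp_ms DESC
            LIMIT 1
            """,
            (asset_id, aid),
        )
        row = cur.fetchone()
        return row[0] if row else None  # None: uniqueProduct not stored yet

# usage sketch: conn = psycopg2.connect("dbname=factoryinsight")  # placeholder DSN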

Possible Problems:

  • The uniqueProduct MQTT message of the child has to be sent before we can store productTag or productTagString messages.
  • All uniqueProducts of one step at one asset need to be stored before we can process addParentToChild messages. This means we also need to send possible parent uniqueProduct MQTT messages (asset = storage) beforehand.

SQL database structure (timescaleDB)

The structure of the timescaleDB might be changed in the future.

Four tables are especially relevant:

  • uniqueProductTable contains entries with a pair of one UID and one AID, plus other data.
  • productTagTable and productTagStringTable store information referenced to the UIDs in the uniqueProductTable. Everything from individual measurements to quality classes is stored there.
  • productInheritanceTable contains pairs of child and parent UIDs. The table as a whole thereby contains the complete inheritance information of each individual part. One entry describes one edge of the inheritance graph.

The new relevant tables are dotted, the uniqueProductTable changes are bold in the timescaleDB structure visualization.

Factoryinsight + Rest API

To make the relevant data from the digital shadow available, we need to provide new REST APIs. Factoryinsight is the microservice doing that task. It accepts specific requests, accesses the timescale database, and returns the data in the desired format.

Implemented functionality for digital shadow

The following function returns all uniqueProducts for a specific asset in a specified time range. One datapoint contains one childUID, its AID, and all parent AIDs regarding the asset. All uniqueProductTags and uniqueProductTagStrings (value and timestamp) for the childUID are returned in the same datapoint.

get /{customer}/{location}/{asset}/uniqueProductsWithTags from <timestamp1> to <timestamp2> (in RFC 3339 format).
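A hedged example of calling this endpoint from Python with the requests library; the base URL, the authentication (omitted), and the exact query parameter names are assumptions for this sketch:

import requests  # assumption: factoryinsight is reachable over HTTP

url = ("http://factoryinsight.local/api/v1/"
       "testcustomer/testlocation/testasset/uniqueProductsWithTags")
params = {
    "from": "2021-08-24T13:00:00.000Z",  # RFC 3339 timestamps
    "to": "2021-08-24T14:00:00.000Z",
}
response = requests.get(url, params=params, timeout=10)
data = response.json()  # columnNames + datapoints, as shown below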

Example Return with two data points:

{
  "columnNames":
  [
    "UID",
    "AID",
    "TimestampBegin",
    "TimestampEnd",
    "ProductID",
    "IsScrap",
    "torque2",
    "torque1",
    "torque3",
    "torque4",
    "VH_Type123",
    "Gasket_Type123"
  ],
  "datapoints":
  [
    [
      2,
      "57000458",
      1629807326485,
      null,
      15,
      false,
      5.694793469033914,
      5.500782656464146,
      5.868141105450906,
      5.780416969961664,
      "57000458",
      "12000459"
    ],
    [
      6,
      "57000459",
      1629807443961,
      null,
      15,
      false,
      5.835010327979067,
      5.9666619086350945,
      5.425482064635844,
      5.6943075975030535,
      "57000459",
      "12000460"
    ]
  ]
}

Implemented logic of factoryinsight to achieve the functionality

  1. Get all productUID’s and AID’s from uniqueProductTable within the specified time and from the specified asset.
  2. Get all parentUID’s from the productInheritanceTable for each of the selected UID’s.
  3. Get the AID’s for the parentUID’s from the uniqueProductTable.
  4. Get all key, value pairs from the productTagTable and productTagStringTable for the in step 1 selected UID’s.
  5. Return all parent AID’s, the child UID and AID, all the productTag and all the productTagString values.

SQL Database to connect to Tableau server

For the digital shadow functionality, we need to give the Tableau server access to the data. Because the Tableau server can’t directly connect to the REST API, we need to either use a database in between or a Tableau web data connector. We were advised against the Tableau web data connector (general info about Tableau web data connectors: https://help.tableau.com/current/pro/desktop/en-us/examples_web_data_connector.htm). Because of that, we implemented an SQL database. We used timescaleDB because it is open source, works well with node-red and with Tableau, and is fast with time-series data, which makes it the best choice for the task. According to the structure overview at the beginning of this article, we are using node-red to fetch the required data from the REST API of factoryinsight and push it into the timescaleDB. The database can then be accessed by the Tableau server.
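For illustration, the same bridge logic sketched in Python (in the actual setup this runs as a node-red flow; URL, database DSN, and table layout are placeholders):

import psycopg2
import requests

# fetch the prepared data from the factoryinsight REST API
data = requests.get(
    "http://factoryinsight.local/api/v1/testcustomer/testlocation"
    "/testasset/uniqueProductsWithTags",
    params={"from": "2021-08-24T13:00:00Z", "to": "2021-08-24T14:00:00Z"},
    timeout=10,
).json()

# push it into the intermediate timescaleDB for the Tableau server
with psycopg2.connect("dbname=digitalshadow") as conn, conn.cursor() as cur:
    for row in data["datapoints"]:
        # one flat row per datapoint, in the column order of "columnNames"
        placeholders = ", ".join(["%s"] * len(row))
        cur.execute(
            f"INSERT INTO uniqueproductswithtags VALUES ({placeholders})", row
        )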

Industry Example

To test the digital shadow functionality and display its advantages we implemented the solution in a model factory.

This graphic displays the events and the resulting MQTT messages that MQTT-to-postgres receives.

Long term: planned features

We plan to integrate further functionalities into the digital shadow. Possible candidates are:

  • multiple new REST APIs to use the digital shadow more flexibly
  • detailed performance analysis and subsequent optimization to enable the digital shadow for massive production speed and complexity
  • a buffer in the MQTT-to-postgres microservice: if productTag/productTagString messages arrive at the microservice before the corresponding uniqueProduct message has been written to the database, the tags should be stored until the uniqueProduct message arrives. A buffer could hold productTag/productTagString messages and regularly try to write them to the database.

2.5 - Open source in Industrial IoT: an open and robust infrastructure instead of reinventing the wheel.

How we are keeping up with the established players in Industrial IoT and why we believe the United Manufacturing Hub is changing the future of Industrial IoT and Industry 4.0 with the help of Open Source.

Image author: Christopher Burns from Unsplash

How do we keep up with the big players in the industry despite limited resources and small market share? The best way to do this is to break new ground and draw on the collective experience of organizations and their specialists instead of trying to reinvent the wheel.

The collaborative nature of open source enables companies and individuals alike to turn their visions into reality and keep up with established players such as Siemens, Microsoft, and Rockwell, even without a large number of programmers and engineers. This is the path we are taking at United Manufacturing Hub.

Open source software has long since outgrown the insider stage and has become a veritable trend that is becoming the standard in more and more industries. Many applications that are common and intensively used in the IT world (e.g. Kubernetes, TensorFlow, f-prime by NASA 1) have emerged from a collaborative approach and are available for free.

Open source on Mars: the Mars helicopter Ingenuity relies heavily on open-source components like f-prime. Image author: JPL/NASA

Typically, these applications are not ready for production or Industry 4.0 use out of the box. Some, such as Grafana, are intended for completely different industries (observability & monitoring).

However, the source code of these software projects is freely accessible to everyone and can be individually adapted to specific needs. Thus, applying them in the Industrial IoT is no problem either. Some of those applications have been programmed over decades 2 by several thousand developers and are continuously developed further 3.

The status quo

Today, it is common to develop proprietary Industrial IoT programs and software platforms - the opposite of open source.

One reason is that companies do not want foreign code written into their applications and want to offer the customer a self-made, end-to-end solution.

It is common for a team of over 20 or even 30 people to be assigned to develop a dashboard or IoT gateway, with the focus on a pretty-looking (usually self-branded) user interface (UI) and design. Existing open source solutions or automation standards are rarely built upon.

Self-developed, in-house architectures are often strongly influenced by company-specific know-how and therefore usually also favor the company’s own products and services in their interfaces.

The result: the wheel is often reinvented in both the software and hardware areas. The resulting architectures create a lock-in effect that leads to a dependency of the manufacturing companies on their software and hardware suppliers.

Reinventing the wheel: The software world

In our opinion, good examples in the category “reinvented the wheel” from the software world are:

  1. Self-developed visualizations such as visualizations from InfluxDB, PI Vision from OSIsoft or WAGO IoT Cloud Visualization (instead of Grafana).

  2. Flow-based low code / no code apps such as Wire Graph by Eurotech (instead of node-red)

  3. The bulk of Industrial IoT platforms claiming to be a “one-stop solution.” Such platforms are trying to cover every aspect from data acquisition through processing to visualization with in-house solutions (instead of relying on established technologies and just filling the gaps in the stack).

Both Grafana and node-red are highly professional solutions in their respective fields, which have already been used in various software projects for several years. Orchestrating such specialized applications means that proven and tested solutions can be put to good use.

Reinventing the wheel: The hardware world

There are numerous examples in the Industrial IoT hardware world where there is a conscious or unconscious deviation from established industry standards of the automation industry.

We have particularly noticed this with vendors in the field of Overall Equipment Effectiveness (OEE) and production overviews. Although they usually have very good dashboards, they still rely on self-developed microcontrollers combined with consumer tablets for the hardware (instead of established automation standards such as a PLC or an industrial edge PC). In this case, the microcontroller, usually called an IoT gateway, is considered a black box, and the end customer only gets access to the device in rare cases.

The advantages cannot be denied:

  1. the system is easy to use,
  2. usually very inexpensive,
  3. and requires little prior knowledge.

Unfortunately, these same advantages can also become disadvantages:

  1. the in-house system integrator and regular suppliers are not able to work with the system, as it has been greatly simplified.
  2. all software extensions and arising problems, such as integrating software like an ERP system with the rest of the IT landscape, must be discussed with the respective supplier. This creates a one-sided market power (see also Lock-In).

Another problem that arises when deviating from established automation standards: a lack of reliability.

Normally, the system always needs to work, because failures lead to production downtime (the operator must report the problem). The machine operator just wants to press a button to get a stop reason or the desired information. He does not want to deal with WLAN problems, browser updates, or updated privacy policies on the consumer tablet.

The strongest argument: Lock-In

In a newly emerging market, it is especially important for a manufacturing company not to make itself dependent on individual providers. Not only to be independent if a product/company is discontinued but also to be able to change providers at any time.

Particularly pure SaaS (Software-as-a-Service) providers should be handled with caution:

  • A SaaS offering typically uses a centralized cloud-based server infrastructure for multiple customers simultaneously. By its very nature, this makes it difficult to integrate into the IT landscape, e.g., to link with the MES system installed locally in the factory.
  • In addition, a change of provider is practically only possible with large-scale reconfiguration/redevelopment.
  • Lastly, there is a concern regarding the data ownership and security of closed systems and multiple SaaS offerings.

Basically, and exaggerating slightly to make the point: it is important to prevent highly sensitive production data with protected process parameters from getting to foreign competitors.

One might think that the manufacturing company is initially entitled to all rights to the data - after all, it is the company that “produced” the data.

In fact, according to the current situation, there is no comprehensive legal protection of the data, at least in Germany, if this is not explicitly regulated by contract, as the Verband der deutschen Maschinenbauer (VDMA) (Association of German Mechanical Engineering Companies) admits 4.

Even when it comes to data security, some people feel queasy about handing over their data to someone else, possibly even a US startup. Absolutely rightly so, says the VDMA, because companies based in the USA are obliged to allow US government authorities access to the data at any time 5.

An open source project can give a very good and satisfactory answer here:

United Manufacturing Hub users can always develop the product further without the original developers, as the source code is fully open and documented.

All subcomponents are fully open and run on almost any infrastructure, from the cloud to a Raspberry Pi, always giving the manufacturing company control over all its data.

Interfaces with other systems are either included directly, greatly simplifying their development, or can be retrofitted without being tied down to specific programming languages.

Unused potential

In the age of Industry 4.0, the top priority is for companies to operate as efficiently as possible by taking full advantage of their potential.

Open source software, unlike classic proprietary software, enables this potential to be fully exploited. Resources and hundreds of man-hours can be saved by using free solutions and standards from the automation industry.

Developing and offering a proprietary dashboard or IoT gateway that is reliable, stable, and free of bugs is wasting valuable time.

Another hundred, if not a thousand, man-hours are needed until all relevant features such as single sign-on, user management, or logging are implemented. Thus, it is not uncommon that even large companies, the market leaders in the industry, do not operate efficiently, and the resulting products are in the 6-to-7-digit price range.

But the efficiency goes even further:

Open source solutions also benefit from the fact that a community is available to help with questions. This service is rarely available with proprietary solutions. All questions and problems must be discussed with the multi-level support hotline instead of simply Googling the solution.

And so, unfortunately, most companies take a path that is anything but efficient. But isn’t there a better way?

United Manufacturing Hub’s open source approach.

Who says that you have to follow thought patterns or processes that everyone else is modeling? Sometimes it’s a matter of leaving established paths, following your own convictions, and initiating a paradigm shift. That is the approach we are taking.

We cannot compete with the size and resources of the big players. That is why we do not even try to develop in one or two years, with a team of 20 to 30 programmers what large companies have developed in hundreds of thousands of hours.

But that’s not necessary, because the resulting product is unlikely to keep up with the open source projects or established automation standards. That is why the duplicated work is not worth the struggle.

The open source software code is freely accessible and thus allows maximum transparency and, at the same time, security. It offers a flexibility that is not reached by programs developed in the traditional way. By using open source software, the United Manufacturing Hub takes an efficient path of development. It allows us to offer a product of at least equal value, but with considerably lower development costs.

Example OEE dashboard created in Grafana

Simplicity and efficiency in the age of Industrial IoT.

At United Manufacturing Hub, we combine open source technologies with industry-specific requirements. To do this, we draw on established software such as Docker, Kubernetes or Helm 1 and create, for example, data models, algorithms, and KPIs (e.g. the UMH data model, the factoryinsight and mqtt-to-postgresql components) that are needed in the respective industries.

By extracting all data from machine controls (OPC/UA, etc.), we ensure the management and distribution of data on the shop floor. Also, if additional data is needed, we offer individual solutions using industry-specific certified sensor retrofit kits, for example at a steel manufacturer. More on this in one of the later parts of this series.

Summary

Why should we reinvent the wheel when we can focus our expertise on the areas we can provide the most value to our customers?

Leveraging open source solutions allows us to provide a stable and robust infrastructure that enables our customers to meet the challenges of Industrial IoT.

Because, in fact, manufacturing and Industrial IoT are not about developing new software at the drop of a hat. It is more about solving individual problems and challenges. This is done by drawing on a global network of experts who have developed special applications in their respective fields. These applications allow all hardware and software components to be quickly and easily integrated into the overall architecture through a large number of interfaces.


  1. For the common technologies see also Understanding the technologies. ↩︎

  2. https://www.postgresql.org/docs/current/history.html ↩︎

  3. https://github.com/kubernetes/kubernetes ↩︎

  4. Leitfaden Datennutzung. Orientierungshilfe zur Vertragsgestaltung für den Mittelstand. Published by VDMA in 2019. ↩︎

  5. *Digitale Marktabschottung: Auswirkungen von Protektionismus auf Industrie 4.0*. Published by VDMA’s Impulse Foundation in 2019. ↩︎

2.6 - Why we chose timescaleDB over InfluxDB

TimescaleDB is better suited for the Industrial IoT than InfluxDB: it is stable, mature, and failure-resistant; it uses SQL, a very widely known query language; and you need a relational database for manufacturing anyway

Introduction

The introduction and implementation of an Industrial IoT strategy is already complicated and tedious. There is no need to put unnecessary obstacles in the way through lack of stability, new programming languages, or more databases than necessary. You need a piece of software that you can trust with your company’s most important data.

We are often asked why we chose timescaleDB instead of InfluxDB. Both are time-series databases suited for large amounts of machine and sensor data (e.g., vibration or temperature).

We started with InfluxDB (probably due to its strong presence in the home automation and Grafana communities) and then ended up with timescaleDB based on three arguments. In this article, we would like to explain our decision and provide background information on why timescaleDB makes the most sense for the United Manufacturing Hub.

Argument 1: Reliability & Scalability

A central requirement for a database: it cannot lose or corrupt your data. Furthermore, as a central element in an Industrial IoT stack, it must scale with growing requirements.

TimescaleDB

TimescaleDB is built on PostgreSQL, which has been continuously developed for over 25 years and has a central place in the architecture of many large companies like Uber, Netflix, Spotify or reddit. This has created a fault-tolerant database that can scale horizontally across multiple servers. In short: it is boring and works.

InfluxDB

In contrast, InfluxDB is developed by a relatively young startup that has raised 119.9 M USD in funding (as of 2021-05-03) but still does not have 25+ years of expertise to fall back on.

On the contrary: Influx has completely rewritten the database twice in the last 5 years 1 2. Rewriting software can fix fundamental issues or add exciting new features, but it is usually associated with breaking changes in the API and new unintended bugs. The result is additional migration projects, which take time and risk system downtime or data loss.

Due to its massive funding, we get the impression that they add quite a lot of exciting new features and functionalities (e.g., their own visualization tool). However, after testing, we noticed that stability suffers under these new features.

In addition, Influx only offers the horizontally scalable version of the database in the paid edition, which will scare off companies wanting to use it on a larger scale, as they become fully dependent on the provider of that software (vendor lock-in).

Summary

With databases, the principle applies: Better boring and working than exciting and unreliable.

We can also strongly recommend an article by timescaleDB.

Argument 2: SQL is better known than flux

The second argument refers to the query language, i.e., the way information can be retrieved from the database.

SQL (timescaleDB)

TimescaleDB, like PostgreSQL, relies on SQL, the de facto standard language for relational databases. Advantages: a query language established for over 45 years, which almost every programmer knows or has used at least once. Any problem? No problem, just Google it, and some smart person has already solved it on Stack Overflow. Integration with PowerBI? A standard interface that’s already integrated!

SELECT time, (memUsed / procTotal / 1000000) AS value
FROM measurements
WHERE time > now() - INTERVAL '1 hour';

Example SQL query returning the memory used per running process (in MB) for the last hour.
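TimescaleDB also extends SQL with time-series helpers such as time_bucket. A short sketch against the same illustrative measurements table (the table and column names are assumptions of this example, not a fixed schema):

SELECT time_bucket('5 minutes', time) AS bucket,
       avg(memUsed / procTotal / 1000000) AS avg_mem_mb
FROM measurements
WHERE time > now() - INTERVAL '1 hour'
GROUP BY bucket
ORDER BY bucket;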

flux (InfluxDB)

InfluxDB, on the other hand, relies on the homegrown flux, which is supposed to simplify time-series data queries. It sees time-series data as a continuous stream upon which functions, calculations, and transformations are applied 3.

Problem: as a programmer, you have to rethink a lot because the language is flow-based and not based on relational algebra. It takes some time to get used to it, but it is still an unnecessary hurdle for those not-so-tech-savvy companies who already struggle with Industrial IoT.

From our own experience, we can also say that the language quickly reaches its limits. In the past, we had to work with additional Python scripts that extracted the data from InfluxDB via Flux, processed it, and then wrote it back again.

// Memory used (in bytes)
memUsed = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    r._measurement == "mem" and
    r._field == "used"
  )

// Total processes running
procTotal = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    r._measurement == "processes" and
    r._field == "total"
    )

// Join memory used with total processes and calculate
// the average memory (in MB) used for running processes.
join(
    tables: {mem:memUsed, proc:procTotal},
    on: ["_time", "_stop", "_start", "host"]
  )
  |> map(fn: (r) => ({
    _time: r._time,
    _value: (r._value_mem / r._value_proc) / 1000000
  })
)

Example Flux code for the same query as the SQL code above.

Summary

In summary, InfluxDB puts unnecessary obstacles in the way of not-so-tech-savvy companies with flux, while PostgreSQL relies on SQL, which just about every programmer knows.

We can also strongly recommend the blog post by timescaleDB on exactly this topic.

Argument 3: relational data

Finally, the argument that is particularly important for production: Production data is more relational than time-series based.

Relational data is, simply put, all table-based data that you can store in Excel in a meaningful way, for example, shift schedules, orders, component lists, or inventory.

Relational data. Author: AutumnSnow, License: CC BY-SA 3.0

TimescaleDB provides this by default through the PostgreSQL base, whereas with InfluxDB, you always have to run a second relational database like PostgreSQL in parallel.

If you have to run two databases anyway, you can reduce complexity and directly use PostgreSQL/timescaleDB.
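A short sketch of what this enables: time-series and relational data can be combined in a single SQL query (the tables and columns here are purely illustrative, not the UMH schema):

-- produced pieces per order: join time-series counts with a relational order table
SELECT o.order_id, sum(c.count) AS produced
FROM counts c
JOIN orders o ON c.time BETWEEN o.begin_time AND o.end_time
GROUP BY o.order_id;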

Not an argument: Performance for time-series data

Often the duel between timescaleDB and InfluxDB is fought on the performance level. Both databases are efficient, and 30% better or worse does not matter if both databases are 10x-100x faster 4 than classical relational databases like PostgreSQL or MySQL on time-series workloads.

Even if it is not decisive, there is strong evidence that timescaleDB is actually more performant. Both databases regularly compare their performance against other databases, yet InfluxDB never compares itself to timescaleDB. TimescaleDB, however, has published a detailed performance comparison with InfluxDB.

Summary

Who do you trust more? The nerdy and boring accountant, or the good-looking one with 25 exciting new tools?

The same goes for databases: Boring is awesome.


  1. https://www.influxdata.com/blog/new-storage-engine-time-structured-merge-tree/ ↩︎

  2. https://www.influxdata.com/blog/influxdb-2-0-open-source-beta-released/ ↩︎

  3. https://www.influxdata.com/blog/why-were-building-flux-a-new-data-scripting-and-query-language/ ↩︎

  4. https://docs.timescale.com/latest/introduction/timescaledb-vs-postgres ↩︎

3 - Examples

This section is an overview of the various showcases that we have already done. For every showcase, it provides a quick summary including a picture. More details can be found in the subsequent documents.

Metalworking industry

Flame cutting & blasting

Retrofitting of 11 flame cutting machines and blasting systems at two locations using sensors, barcode scanners and button bars to extract and analyze operating data.

See also the detailed documentation.

Identification of the optimization potential of two CNC milling machines

Two-month analysis of CNC milling machines and identification of optimization potentials. Automatic data acquisition coupled with interviews of machine operators and shift supervisors revealed various optimization potentials.

See also the detailed documentation.

Textile industry

Cycle time monitoring

See also the detailed documentation.

Retrofitting of weaving machines for OEE calculation

Retrofitting of weaving machines that do not provide data via the PLC to extract operating data. Subsequent determination of the OEE and detailed breakdown of the individual key figures.

See also the detailed documentation.

Filling & packaging industry

Performance management in a brewery

Retrofit of a bottling line for different beer types. Focus on the identification of microstop causes and exact delimitation of the bottleneck machine.

See also the detailed documentation.

Retrofit of a Japanese pharmaceutical packaging line

Retrofit of a Japanese pharmaceutical packaging line for automatic analysis of microstop causes as well as to relieve the machine operators of data recording.

See also the detailed documentation.

Quality monitoring in a filling line

quality monitoring

TODO: #69 add short description for DCC quality check

See also the detailed documentation.

Semiconductor industry

Identification of optimization potential in the context of the COVID-19 crisis

Use of the factorycube for rapid analysis of bottleneck stations. The customer was thus able to increase the throughput of critical components for ventilators within the scope of COVID-19.

See also the detailed documentation.

3.1 - Flame cutting & blasting

This document describes the flame cutting & blasting use case

Profile

| category | answer |
| --- | --- |
| Industry | Steel Industry |
| Employees | >1000 |
| Number of retrofitted sites | 2 |
| Project duration | 6 months |
| Number of retrofitted machines | 11 |
| Types of machines retrofitted | Plasma cutting machines, oxyfuel cutting machines, shot blasting machines |

Photos

Challenges

Lack of transparency about production processes

  • Lead times are unknown
  • No comparison between target and actual times
  • Duration and causes of machine downtimes are unclear

Heterogeneous machinery and machine controls from different manufacturers

  • Only minimal connection of the machines to the ERP system
  • Manual input of production data into the ERP system
  • Machine controls are partially locked by manufacturers
  • Machine controls use different protocols

Reliable production planning and quotation generation not possible

  • No data on past and current machine utilization available
  • Quotation preparation is done with theoretical target times, as no information about actual times is available

Solution

Integration

TODO: #68 add integration for flame cutting

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352
  • ifm AL1350

Light barriers

Light barriers are installed on cutting machines and are activated when the cutting head is lowered and in use. Used to measure machine conditions, cycle times and piece counts.

Models:

  • ifm O5D100 (Optical distance sensor)
  • ifm O1D108 (Optical distance sensor)

Vibration sensors

Vibration sensors are installed on the beam attachments and detect the machine condition via vibration. Used to measure machine conditions.

Model:

  • ifm VTV122 (vibration transmitter)

Button bar

The button bar is operated by the machine operator in case of a machine standstill. Each button is assigned a reason for the standstill. Used to identify the causes of machine downtime.

Model:

Barcode scanner

Barcode scanners are used to scan production orders, which contain the target times. Used to scan target times for target/actual comparison.

Model:

  • Datalogic PowerScan PD9531
  • Datalogic USB Cable Straight 2m (CAB-438)

Implemented dashboards

The customer opted for our SaaS offering. We created the following dashboards for the client.

Default navigation options from Grafana, which we modified to allow custom menus.

  1. Customizable menu lets you quickly navigate between dashboards
  2. In the time selection you can adjust the times for the current dashboard

Plant-manager dashboard

  1. Dashboard for the plant manager / shift supervisor, which gives an overview of the production in the factory
  2. For each machine the current machine status
  3. For each machine, the overall equipment effectiveness / OEE for the selected time period
  4. For each machine, a timeline showing the machine statuses in color
  5. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order

Machine deep dive

  1. Dashboard for the machine operator / shift supervisor, which displays the details for a machine
  2. The current machine status with time stamp
  3. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  4. An overview of the accumulated duration of each stop reason
  5. A timeline where the machine states are color coded
  6. A timeline where the shifts become visible
  7. A timeline where the orders are displayed
  8. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order
  9. Overview of the number of individual stop reasons

Cross-factory dashboard

  1. Dashboard for the cross-factory manager, who can use this to obtain an overview of the sites
  2. The overall equipment effectiveness / OEE for the selected time period for all machines.
  3. The minimum overall equipment effectiveness / OEE for the selected time period for machine type A.
  4. The average overall equipment effectiveness / OEE for the selected time period for machine type A
  5. The maximum overall equipment effectiveness / OEE for the selected period for machine type A
  6. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order
  7. Export function as .csv

3.2 - Brewery

This document describes the brewery use case

Profile

| category | answer |
| --- | --- |
| Industry | Brewery |
| Employees | ~150 |
| Number of retrofitted sites | 1 |
| Project duration | 3 months |
| Number of retrofitted machines | 8 |
| Types of machines retrofitted | Entire filling line (filler, labeler, palletizer, etc.) |

Photos

Challenges

Lack of transparency about production processes

  • Duration and causes of machine downtimes are unclear
  • High proportion of smaller microstops with unknown cause
  • Exclusively reactive maintenance, as data on the condition of the components is lacking

Moving bottleneck

  • Since the production process is highly interlinked, a stoppage of a single machine can lead to a stoppage of the entire line
  • The “bottleneck machine” is difficult to identify, as it can move over the course of a shift and is hard to spot with the naked eye

High effort to collect data as part of the introduction of a continuous improvement process

  • Changeover times must be recorded manually with a stopwatch and are still not very standardized
  • No data on past and current machine utilization available
  • Maintenance actions are recorded manually; there is no automatic system to log, store and visualize error codes from the machines

Solution

Integration

At the beginning, a “BDE entry program” (BDE: Betriebsdatenerfassung, i.e., production data acquisition) was carried out together with a lean consultancy to identify optimization potentials and to present our solution. For this purpose, the [factorycube] was installed at the filler within a few hours, in combination with the tapping of electrical signals from the control system and button strips. Connecting the PLC interfaces was initially out of the question for time and cost reasons. After the customer decided on a permanent solution, the factorycube was dismounted.

All machines were equipped with the “Weihenstephaner Standards”, a protocol commonly used in the German brewery industry, and were already connected within a machine network. Therefore, the installation was pretty straightforward using our enterprise plugin for that protocol and one central server.

Installed hardware

Server

Implemented dashboards

The customer opted for our SaaS offering. We created the following dashboards for the client.

Default navigation options from Grafana, which we modified to allow custom menus.

  1. Customizable menu lets you quickly navigate between dashboards
  2. In the time selection you can adjust the times for the current dashboard

Plant-manager dashboard

Dashboard for the plant manager / shift supervisor, which gives an overview of the production in the factory

  1. For each machine the current machine status
  2. For each machine, the overall equipment effectiveness / OEE for the selected time period
  3. For each machine, a timeline showing the machine statuses in color

Performance cockpit

Dashboard for the supervisor to get an overview of the machine

  1. The current machine status
  2. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  3. The average changeover time
  4. The average cleaning time
  5. A timeline where the machine states are color coded
  6. A timeline where the shifts become visible
  7. A timeline with the machine speed
  8. Overview of the number of individual stop reasons, excluding technical defects as they are not relevant for the shift

Maintenance cockpit

Dashboard for the head of maintenance to get an overview of the machine

  1. The current machine status
  2. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  3. The MTTR (mean time to repair), an important key figure for maintenance
  4. The MTBF (mean time between failures), an important key figure for maintenance
  5. A timeline where the machine states are color coded
  6. A timeline where the process value “bottle lock open/close” is visualized. This helps the maintenance manager to isolate the cause of a problem more precisely.
  7. A timeline with the machine speed
  8. An overview of the accumulated duration of each stop reason, that is relevant for maintenance
  9. Overview of the number of individual stop reasons, that is relevant for maintenance

3.3 - Semiconductor

This document describes the semiconductor use case

Profile

| category | answer |
| --- | --- |
| Industry | Semiconductor industry |
| Employees | >1000 |
| Number of retrofitted sites | 1 |
| Project duration | 2 months |
| Number of retrofitted machines | 1 |
| Types of machines retrofitted | Dispensing robot |

Photos

Challenges

Increasing demand could not be fulfilled

  • the demand for the product, which was required for ventilators, increased by over 1000% due to the COVID-19 crisis
  • the production was struggling to keep up with the ramp-up

Production downtime needed to be avoided at all costs

  • production downtime would have meant not fulfilling the demand

A quick solution was needed

  • to meet the demand, the company needed a quick solution and could not accept months of project time

Solution

Integration

We were given a 2-hour time slot by the company to install the sensors, from the moment we entered the factory until we left (including safety briefings and ESD compliance checks). With the help of videos, we got an overview beforehand and created a sensor plan. During the time slot, we used the machine operator’s break to install all the sensors and verified the data during the subsequent machine run. Via VPN, we were then able to access the device and fine-tune the configuration.

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Ultrasonic sensor

picture TODO

The ultrasonic sensor was used to measure whether the robot was currently moving and thus whether the machine was running.

Models:

  • TODO

Proximity sensor

Proximity sensors were used to detect if the product was ready for operator removal. Together with the ultrasonic sensors, we were able to measure whether the machine was standing because the machine operator had not removed the product and was therefore not available.

Models:

  • ifm KQ6005

Button bar

The button bar is operated by the machine operator in case of a machine standstill. Each button is assigned a reason for the standstill. Used to identify the causes of machine downtime.

Model:

  • Self-made, based on Siemens TODO

Implemented dashboards

The customer opted for our SaaS offering and an additional analysis of the data.

Dashboard screenshot

The customer opted for the SaaS solution and required only a very simple dashboard, as most insights were gained from a detailed analysis. The dashboard includes the functionality to export data as .csv.

Additional analysis

The data was exported as .csv and then analyzed in Python & Excel. Together with interviews of the operators and supervisors, we extracted multiple insights, including optimization potential through alignment of the work processes and improvement of changeovers through single-minute exchange of die (SMED).

3.4 - Cycle time monitoring in an assembly cell

This document describes the cycle time monitoring use case

Profile

An assembly cell was retrofitted to measure and optimize cycle times. Customizable textile wristbands are produced in the assembly cell.

Photos of the machines

Challenges

Lack of information about production performance

  • Cycle times are unknown
  • Bottleneck of the assembly cell cannot be identified
  • No information about productivity of individual employees
  • Piece counts are not documented
  • No comparison between target and actual performance

Lack of transparency about downtimes

  • Frequency and duration of downtimes of the assembly cell are not recorded
  • Causes of downtime are often unknown and not documented

Connection of assembly cell to conventional systems not possible

  • Sewing machines do not have machine controls that could be connected

Solution

Integration

TODO: #66 Add integration for assembly analytics

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Light barriers

Light barriers are installed on the removal bins and are activated when the employee removes material. Used to measure cycle time and material consumption.

Models:

  • ifm O5D100 (Optical distance sensor).

Proximity sensor

Proximity sensors on the foot switches of sewing machines detect activity of the process. Used to measure cycle time.

Models:

  • ifm KQ6005

Barcode scanner

The barcode scanner is used to scan the wristband at the beginning of the assembly process. Used for process start and product identification.

Model:

  • Datalogic PowerScan PD9531
  • Datalogic USB Cable Straight 2m (CAB-438)

Implemented dashboards

The customer opted for a combination of our SaaS offering with the building kit (and thus an on-premise option). The customer decided to go for PowerBI as a dashboard and connected it to factoryinsight via the REST API.

Used node-red flows

With the help of the Assembly Analytics nodes, it is possible to measure the cycle time of assembly cells and thus continuously improve their efficiency, in a similar way to machines.

Here is an exemplary implementation of those nodes:

There are 2 stations with a total of 4 cycles under consideration:

Station 1 (AssemblyCell1):

1a: Starts with scanned barcode and ends when 1b starts

1b: Starts with a trigger at the pick to light station and ends when station 1a starts

Station 2 (AssemblyCell2):

2a: Starts when the foot switch at the 2nd station is pressed and ends when 2b starts

2b: Starts when the quality check button is pressed and ends when 2a starts.

Assumptions:

  • Unrealistically long cycle times are filtered out (cycle times over 20 seconds).
  • There is a button bar between the stations to end the current cycle and mark that product as scrap. The upper 2 buttons terminate the cycle of AssemblyCell1 and the lower ones that of AssemblyCell2. The aborted cycle creates a product that is marked as scrap.

Nodes explained:

  • Assembly Analytics Trigger: Cycles can be started with the help of the “Assembly Analytics Trigger” software module.

  • Assembly Analytics Scrap: With the help of the software module “Assembly Analytics Scrap”, existing cycles can be aborted and the produced good can be marked as “scrap”.

  • Assembly Analytics Middleware: With the help of the software module “Assembly Analytics Middleware”, the outputs of the software modules described above are processed into “unique products”.

Here you can download the flow described above

3.5 - Quality monitoring in a bottling line

This document describes the quality monitoring use case

Profile

A bottling line for filling water bottles was retrofitted with an artificial intelligence quality inspection system. With the help of a camera connected to an ia: factorycube, the bottles are checked for quality defects and sorted out by a pneumatic device in the event of a defect.

Photos of the machines

Challenges

Manual visual inspection causes high costs

  • Each individual bottle is checked for quality defects by an employee
  • One employee is assigned to each shift exclusively for quality inspection

Customer complaints and claims due to undetected quality defects

  • Various quality defects are difficult to detect with the naked eye and are occasionally overlooked

No data on quality defects that occur for product and process improvement

  • Type and frequency of quality defects are not recorded and documented
  • No data exists that can be analyzed to derive improvement measures for product and process optimization

Solution

Integration

TODO: #67 Add integration for DCC quality check

Installed hardware

factorycube

A machine learning model runs on the factorycube, which evaluates and classifies the images. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Light barriers

A light barrier identifies the bottle and sends a signal to the factorycube to trigger the camera.

Models:

  • ifm O5D100 (Optical distance sensor)

Camera

A camera takes a picture of the bottle and sends it to the factorycube.

Models:

  • Allied Vision (Mako G-223)

Detectable quality defects

Automated action

As soon as a quality defect is detected, the defective bottle is automatically sorted out by the machine.

3.6 - Pharma packaging

This document describes the pharma packaging use case

Profile

| category | answer |
| --- | --- |
| Industry | Pharma industry |
| Employees | |
| Number of retrofitted sites | |
| Project duration | |
| Number of retrofitted machines | |
| Types of machines retrofitted | |

TODO: #70 add pharma packaging case

3.7 - Weaving

TODO

Profile

| category | answer |
| --- | --- |
| Industry | |
| Employees | |
| Number of retrofitted sites | |
| Project duration | |
| Number of retrofitted machines | |
| Types of machines retrofitted | |

TODO: #71 add weaving case

3.8 - CNC Milling

This document describes the CNC milling use case

TODO #65

4 - Tutorials

This section has tutorials and other documents that do not fit into the other categories.

4.1 - FAQ

This document gives answers to the frequently asked questions

I cannot login into ia: factoryinsight / Grafana although I’ve entered my credentials correctly several times

The username in Grafana is case-sensitive. That means, if the user is Jeremy.T...@... and you enter jeremy.t...@..., you will get a failed login message.

4.2 - General

This category contains tutorials for general IT / OT topics

4.2.1 - Setting up the PKI infrastructure

This document describes how to create and manage the certificates required for MQTT

Prerequisites

This tutorial assumes you are using Ubuntu and have installed easy-rsa using sudo apt-get install easy-rsa

Initially setting up the infrastructure

Create a new directory and go into it, e.g.

mkdir ~/mqtt.umh.app/
cd ~/mqtt.umh.app/

Enable batch mode of easyrsa with export EASYRSA_BATCH=1

Setup basic PKI infrastructure with /usr/share/easy-rsa/easyrsa init-pki

Copy the default configuration file with cp /usr/share/easy-rsa/vars.example pki/vars and edit it to your liking (e.g. adjust EASYRSA_REQ_… and CA and cert validity)

Build the CA using export EASYRSA_REQ_CN=YOUR_CA_NAME && /usr/share/easy-rsa/easyrsa build-ca nopass. Replace YOUR_CA_NAME with a name for your certificate authority (CA), e.g., UMH CA

Create the server certificate by using the following commands (exchange mqtt.umh.app with your domain!):

/usr/share/easy-rsa/easyrsa gen-req mqtt.umh.app nopass
/usr/share/easy-rsa/easyrsa sign-req server mqtt.umh.app 

Copy the private key pki/private/mqtt.umh.app.key and the public certificate pki/issued/mqtt.umh.app.crt together with the root CA pki/ca.crt to the configuration of the MQTT broker.

Adding new clients

Create new clients with the following commands (remember to change TESTING to the planned MQTT client id): export EASYRSA_REQ_CN=TESTING && /usr/share/easy-rsa/easyrsa gen-req $EASYRSA_REQ_CN nopass && /usr/share/easy-rsa/easyrsa sign-req client $EASYRSA_REQ_CN nopass
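To check that a freshly issued client certificate actually chains up to your CA, you can verify it with openssl (assuming the example client id TESTING from the command above):

openssl verify -CAfile pki/ca.crt pki/issued/TESTING.crt

If the output ends with OK, the certificate was signed by your CA.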

4.2.2 - Edge networking

The UMH stack features a sophisticated system to be integrated into any enterprise network. Additionally, it enforces multiple barriers against attacks by design. This document should clear up any confusion.

factorycube

The factorycube (featuring the RUT955) consists of two separate networks:

  1. internal
  2. external

The internal network connects all locally connected machines, sensors and miniPCs with each other. The external network is “the connection to the internet”. The internal network can access the external network, but not the other way around, unless firewall rules (“port forwarding”) are explicitly set.
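As a conceptual sketch of such a port-forwarding rule (generic iptables syntax rather than the RUT955 web interface; the interface name and IP address are assumptions):

# forward TCP port 8080 arriving on the external interface (eth1)
# to a webserver on the internal network
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8080 \
  -j DNAT --to-destination 192.168.1.10:80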

Example components in internal network

  • Laptop for setting up
  • Router
  • miniPC
  • ifm Gateways
  • Ethernet Cameras

Example components in external network

  • Router (with its external IP)
  • the “Internet” / server

4.2.3 - How to install an operating system from a USB-stick

This article explains how to install an operating system from a bootable USB-stick.

Prerequisites

Steps

  1. Plug the USB-stick into the device
  2. Reboot
  3. Press the button to go into the boot menu. This step is different for every device and is described in the hardware manual. If you do not want to look it up, you could try smashing the following buttons during booting (before the operating system is loaded) and hope for the best: F1, F2, F11, F12, Delete
  4. Once you are in the boot menu, select to boot from the USB-stick

4.2.4 - How to connect with SSH

This article explains how to connect with an edge device via SSH

For Windows

We recommend MobaXTerm. TODO

For Linux

For Linux you can typically use the inbuilt commands to connect with a device via SSH. Connect using the following command:

ssh <username>@<IP>, e.g., ssh rancher@<IP-of-the-device>.

Connect via SSH

There will be a warning saying that the authenticity of the host can’t be established. Enter yes to continue with the connection.

Warning message: The authenticity of host 'xxx' can't be established.

Enter the password and press enter. The default password of the auto setup will be rancher.

Successfully logged in via SSH

4.2.5 - How to flash an operating system onto a USB-stick

There are multiple ways to flash an operating system onto a USB-stick. We will present the method of using balenaEtcher.

Prerequisites

  • You need a USB-stick (we recommend USB 3.0 for better speed)
  • You need an OS image in the *.iso format. For k3OS you could choose for example this version

Steps

Download balenaEtcher: www.balena.io/etcher/

Insert USB-stick and open balenaEtcher

Select the downloaded *.iso by clicking on "Flash from file" (the screen might look different based on your operating system)

Select the USB-stick by clicking on "Select target"

Select "Flash"

It will flash the image on the USB-stick

You are done!

These steps are also available as a YouTube tutorial from the user kilObit.

4.2.6 - Versioning in IT

This article explains how version numbers are typically structured in IT.

In IT, Semantic Versioning has established itself as the standard to describe versions. It uses the format MAJOR.MINOR.PATCH, e.g., 1.0.0.

MAJOR is incremented when making incompatible API changes.

MINOR is incremented when adding functionality in a backwards-compatible manner.

PATCH is incremented when making backwards-compatible bug fixes.

If the version is followed by a ‘-’ sign, it is a pre-release and not yet stable. Therefore, the latest stable version is the highest version available that is not a pre-release, i.e., has no ‘-’ sign. For example, 1.0.0-alpha is a pre-release of 1.0.0, and 1.0.0 < 1.1.0 < 2.0.0.

More information can be found in the specification of Semantic Versioning 2.0.

4.3 - k3OS

This category contains tutorials for installing, using and maintaining k3OS

4.3.1 - How to add additional SSH keys in k3OS

This article explains how to add an additional SSH key to k3OS, so that multiple people can access the device

Prerequisites

  • Edge device running k3OS
  • SSH access to that device
  • SSH / SFTP client
  • Public and private key suited for SSH access

Tutorial

  1. Access the edge device via SSH
  2. Go to the folder /home/rancher/.ssh and edit the file authorized_keys
  3. Add your additional public SSH key there, for example as shown below
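A minimal example (the key file path and the key itself are placeholders):

# on your own machine: display the public key you want to add
cat ~/.ssh/id_ed25519.pub

# on the edge device: append that key to the authorized keys of the rancher user
echo "ssh-ed25519 AAAA... user@laptop" >> /home/rancher/.ssh/authorized_keys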

4.3.2 - How to enable SSH password authentication in k3OS

This article explains how to enable the classic username / password authentication for SSH in k3OS

DANGER: NOT RECOMMENDED FOR PRODUCTION! USE DEFAULT BEHAVIOR WITH CERTIFICATES INSTEAD

By default, k3OS allows SSH connections only using certificates. This is a much safer method than using passwords. However, we realized that most mechanical engineers and programmers are overwhelmed with the creation of a public key infrastructure. Therefore, it might make sense to enable password authentication in k3OS for development mode.

Prerequisites

  • Edge device running k3OS
  • Physical access to that device

Tutorial

  1. Access the edge device via computer screen and keyboard and log in with username rancher and password rancher
  2. Set the value PasswordAuthentication in the file /etc/ssh/sshd_config to yes and restart the service sshd. You can use the following command:
sudo vim /etc/ssh/sshd_config -c "%s/PasswordAuthentication  no/PasswordAuthentication  yes/g | write | quit" && sudo service sshd restart

4.3.3 - How to fix certificate not yet valid issues

curl might fail and not download helm as the certificate is not yet valid. This happens especially when you are in a restricted network and the edge device is not able to fetch the current date and time via NTP.

Issue 1

While executing the command export VERIFY_CHECKSUM=false && curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 get_helm.sh && ./get_helm.sh on k3OS you might get a error message like this:

curl: (60) SSL certificate problem: certificate is not yet valid
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

Checking the time with date results in a timestamp from 2017. There are two possible solutions.

Possible solution 1: configure NTP

The time is not configured properly. It can happen that your NTP server is blocked (especially if you are inside a university network).

You can verify that by entering sudo ntpd -d -q -n -p 0.de.pool.ntp.org. If you get a result like this, then it is definitely blocked:

ntpd: '0.de.pool.ntp.org' is 62.141.38.38
ntpd: sending query to 62.141.38.38
Alarm clock

We recommend using the NTP server from the local university or ask your system administrator. For the RWTH Aachen / FH Aachen you can use sudo ntpd -d -q -n -p ntp1.rwth-aachen.de as specified here

Possible solution 2: set hardware clock via BIOS

Go into the BIOS and set the hardware clock of the device manually.

Issue 2

k3OS reports errors due to the hardware date being in the past (e.g., 01.01.2017). Example: during startup the k3s certificates are generated, however, still using the hardware time. Even after setting the time manually via NTP, you cannot connect with k3s, as the certificates created during startup are not valid anymore. Setting the time is not persisted across reboots.

Steps to Reproduce

  1. Install k3os, without updating BIOS clock
  2. Install UMH
  3. helm will fail at the “Install factorycube-server” step due to outdated certificates.

Possible solution

Load a cloud-init file with ntp_servers added during the OS install. You can use the one at https://www.umh.app/development.yaml

Be careful: you need to host it on an HTTP server (not HTTPS), as you would otherwise get other certificate issues while fetching it.
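If you host your own file, the relevant part of such a k3OS cloud-init file could look like this (a minimal sketch; the NTP servers are examples, adjust them to your network):

#cloud-config
k3os:
  ntp_servers:
    - 0.de.pool.ntp.org
    - 1.de.pool.ntp.org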

4.3.4 - How to install k3OS

This article explains how to install k3OS on an edge device using the United Manufacturing Hub installation script.

Prerequisites

  • edge device with keyboard and computer screen successfully booted from a USB-stick (see also Installation)
  • you should see on the computer screen the screen below (it will automatically continue after 10 seconds with the installation, so do not worry if you only see it for some seconds)

boot menu of k3OS

Tutorial

Wait until k3OS is fully started. You should see the screen below:

k3OS installer fully booted

Enter rancher and press enter to login.

k3OS installer fully booted with rancher as username

You should now be logged in.

Pro tip: Execute lsblk and identify your hard drive (e.g., by its size). It will prevent playing Russian roulette in a later step.

Now type in sudo k3os install to start the installation process.

entered `sudo k3os install`

You are now prompted to select what you want to install. Select 1 and press enter or just press enter (the stuff in brackets [] is the default configuration if you do not specify anything and just press enter).

Install to disk

At this step the system might ask you to select your hard drive. One of the devices sda or sdb will be your hard drive and the other the USB-stick you booted from. If you do not know which one your hard drive is, you need to play Russian roulette and select one device. If you find out later that you accidentally installed onto the USB-stick, repeat the installation process and use the other device.

After that, select y when you are asked for a cloud-init file

Configure system with cloud-init file

Now enter the URL of your cloud-init file, e.g., the one mentioned in the Installation guide.

Press enter to continue.

Specify the cloud-init file

example (do not use this URL)

Confirm with y and press enter.

Confirm installation with `y`

Confirm installation with `y`

If the installation fails with not being able to fetch the cloud-init file, check the URL and the network configuration

If the installation fails with expired or untrusted certificates (curl: (60) SSL certificate problem: certificate is not yet valid or similar), check out this guide.

The device will then reboot. You might want to remove the USB-stick to prevent booting from the USB-stick again.

If the following screen appears you did everything correct and k3OS was successfully installed.

4.4 - Node-RED

This category contains tutorials for Node-RED

4.4.1 - Fixing broken Node-RED flows

This tutorial shows how you can recover Node-RED flows that are stuck in an endless loop of crashing.

Prerequisites

  • Node-RED in a crash loop because of one misconfigured node (especially azure-iot-hub and python3-function)
  • Node-RED installed as part of the United Manufacturing Hub (either as factorycube-edge or factorycube-server)

Tutorial

The solution is to boot Node-RED in safe mode by changing the environment variable NODE_RED_ENABLE_SAFE_MODE to true.

After 0.6.1

  1. Open Lens and connect to the cluster (you should know how to do it if you followed the Getting Started guide)
  2. Select the namespace, where Node-RED is in (factorycube-edge or factorycube-server)
  3. Go to Apps –> Releases and select your Releases
  4. Search for NODE_RED_ENABLE_SAFE_MODE and change the value from false to true
  5. Save

Using this method you might encounter the error message “RELEASE not found” on the top right while saving in Lens. We recommend following this guide to fix the “RELEASE not found” message while updating a release in Lens

Before 0.6.1

  1. Open Lens and connect to the cluster (you should know how to do it if you followed the Getting Started guide)
  2. Select the namespace, where Node-RED is in (factorycube-edge or factorycube-server)
  3. Select the StatefulSet Node-RED. A popup should appear on the right side.
  4. Press the edit button

    Press the edit button / pen symbol on top right of the screen

  5. Find the line
env:
    - name: TZ
      value: Berlin/Europe

and change it to this:

env:
    - name: TZ
      value: Berlin/Europe
    - name: NODE_RED_ENABLE_SAFE_MODE
      value: "true"
  6. Check whether it says "true" and not true (use quotation marks!)
  7. Press Save
  8. Terminate the pod manually if necessary (or if you are impatient)
  9. Node-RED should now start in safe mode. This means that it will boot, but will not execute any flows.
  10. Do your changes, fix the nodes
  11. Do steps 1 - 5 again, but now set NODE_RED_ENABLE_SAFE_MODE to false

4.5 - United Manufacturing Hub

This category includes tutorial specific for the United Manufacturing Hub

4.5.1 - Certified devices

This section contains tutorials related to our commercial certified devices.

4.5.1.1 - How to use machineconnect

This document explains how to install and use machineconnect

Purpose

machineconnect is our certified device used for the connection of PLCs and installed in the switch cabinet of the machine.

The idea behind machineconnect is to protect the PLC and all components with an additional firewall. Therefore, they are not accessible from outside of machineconnect unless explicitly configured in the firewall.

Features

  • Industrial Edge Computer
    • With DIN rail mounting and 24V
    • Vibration resistant according to IEC 60068-2-27, IEC 60068-2-64 / MIL-STD-810, UNECE Reg.10 E-Mark, EN50155
    • Increased temperature range (-25°C ~ 70°C)
  • Open source core installed and provisioned according to customer needs (e.g. MQTT certificates) in production mode (using k3OS)
  • Additional security layer for your PLC by using OPNsense (incl. Firewall, Intrusion Detection, VPN)
  • 10 years of remote VPN access via our servers included

Physical installation

  1. Attach wall mounting brackets to the chassis
  2. Attach DIN Rail mounting brackets to the chassis
  3. Clip system to the DIN Rail
  4. Connect with 24V power supply
  5. Connect Ethernet 1 with WAN / Internet
  6. Connect Ethernet 3 with local switch (if existing). This connection will be called from now on “LAN”.
  7. (optional, see connection to PLC. If skipped please connect the PLC to Ethernet 3) Connect Ethernet 2 with PLC. This connection will be called from now on “PLC network”.

Verify the installation by turning on the power supply and checking whether all Ethernet LEDs are blinking.

Connection to the PLC

There are two options to connect the PLC. We strongly recommend Option 1, but in some cases (the PLC has a fixed IP and communicates with engine controllers or an HMI whose IP addresses you cannot change) you need to go for Option 2.

Option 1: The PLC retrieves its IP via DHCP

  1. Configure the PLC to retrieve the IP via DHCP
  2. Configure OPNsense to give out the same IP for the MAC address of the PLC on LAN. Go to Services –> DHCPv4 –> LAN and add the PLC under “DHCP static mappings for this device”

Option 2: The PLC has a static IP, which cannot be changed

  1. Adding a new interface for the PLC network, e.g. “S7”.
  2. Adding a new gateway (see screenshot. Assuming 192.168.1.150 is the IP of the PLC and the above created interface is called “S7”)
  3. Adding a new route (see screenshot and assumptions of step 2)
  4. Changing NAT to “Manual outbound NAT rule generation” (see screenshot and assumptions of step 2)
  5. Add firewall rule to the PLC interface (see screenshot and assumptions of step 2)
  6. Add firewall rule to LAN allowing interaction between LAN network and PLC network

If you are struggling with these steps and have bought a machineconnect from us, feel free to contact us!

Next steps

After doing the physical setup and connecting the PLC you can continue with part 3 of the getting started guide.

4.5.2 - Setting up the documentation

This document explains how to get started with the documentation locally
  1. Clone the repo
  2. Go to /docs and execute git submodule update --init --recursive to download all submodules
  3. git init && git add . && git commit -m "test" (yes it is quite stupid, but it works)
  4. Startup the development server by using sudo docker-compose up --build

4.5.3 - Working with the system practically

Three step implementation plan for using the factorycube

Please ensure that you have read the safety information and manuals before proceeding! Failure to do so can result in damages to the product or serious injuries.

Create a sensor plan

Before installing anything, please create a sensor plan. Take the layout of your selected line and add:

  • Electricity sockets
  • Internet connections (bring your own)
  • Your planned sensors
  • The position of the factorycube
  • Your planned cabling

Discuss and install

Set up a meeting with your line engineers and discuss your plan. Then install everything according to plan. Ensure that all sensors and cables are mounted tightly and that they do not interfere with the production process.

Supply your factorycube with power and turn it on

Plug in the power cable to turn the factorycube on. After a few seconds the ia: factorycube should be lit up.

Connect your factorycube to the internet

If you want to use the cloud dashboard, you must first connect the factorycube to the internet.

You need:

  • your credentials
  • an Ethernet cable (provided)
  • a laptop which is not connected to any VPN

For a network overview of the Factorycube, click here

Instructions to login

Connect the factorycube to your computer via an Ethernet cable using the IO-Link port (not the “Internet” port) on the factorycube.

Open the following website on your browser: http://172.16.x.2 (The X stands for the last number(s) of the serial number. e.g. 2019_0103 -> x=3 or 2019_0111 -> x=11)

Enter your credentials according to the information in the customer area. The username is always “admin”

3 ways to connect to the internet: WiFi, 3G/4G or Ethernet

Further information on how to connect the factorycube with the internet can be found in the official router manual

Instructions to setup WiFi

  • Select “Network” → “Wireless/Wlan”. If necessary remove the old Wireless station point
  • Click on “Add” next to wireless station mode
  • Click on “Start Scan” next to wireless station mode
  • Click on the network of your choice
  • “join network”
  • Afterwards enter your credentials and confirm

The computer should now be connected to the internet.

Instructions to setup 3G/4G

  • Insert the SIM-card (if a SIM-card is already provided in the ia: factorycube, skip this step)
  • For the installation of a SIM card please contact our experts
  • Select “Network” → “Mobile”
  • Adjust the settings under the “General” tab as follows:
  • Save your settings

The computer should now be connected to the internet.

Instructions to set up connection via Ethernet

  • Plug the Ethernet cable into the device’s “Internet” port and the other side into the network access port
  • Select “Network” –> “WAN”
  • Select “Wired” as your main WAN
  • Click save

The computer should now be connected to the internet. You can now use the entire United Manufacturing Hub edge stack. For more information, take a look at the getting started guide for edge devices.

Outro

Closely monitor the data and verify over the next few days whether the data is plausible. Things that can go wrong here:

  • Sensors not mounted properly and no longer calibrated
  • The operators are using the line differently from what was discussed before (e.g., doing a changeover and removing the sensors)

4.5.4 - How to setup a development network

This article explains how to set up a development network to work with an edge device

4.5.5 - How to update the stack / helm chart

This article explains how to update the helm chart, so that you can apply changes to the configuration of the stack or to install newer versions

Prerequisites

none

Tutorial

  1. Go to the folder deployment/factorycube-server or deployment/factorycube-edge
  2. Execute helm upgrade factorycube-server . --values "YOUR VALUES FILE" --kubeconfig /etc/rancher/k3s/k3s.yaml -n YOUR_NAMESPACE

This is basically your installation command with install exchanged for upgrade. You need to replace “YOUR VALUES FILE” with the path of your values.yaml, e.g. /home/rancher/united-manufacturing-hub/deployment/factorycube-server/values.yaml, and you need to adjust YOUR_NAMESPACE to the correct namespace name. If you did not specify any namespace during the installation, you can use the namespace default. If you are using factorycube-edge instead of factorycube-server, you need to adjust that as well.
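A filled-in example, using the example paths from above and the default namespace (adjust both to your setup):

helm upgrade factorycube-server . \
  --values /home/rancher/united-manufacturing-hub/deployment/factorycube-server/values.yaml \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  -n default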

4.5.6 - How to work with MinIO

This article explains how you can access MinIO for development and administration purposes

Please also take a look at our guide on how to use it in production!

Default settings / development setup

By default, MinIO is exposed outside of the Kubernetes cluster using a LoadBalancer. The default credentials for the console are minio:minio123. When using minikube, you can access the LoadBalancer service by running minikube tunnel and then accessing the external IP on the selected port.
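For example (assuming MinIO runs in the default namespace of your minikube cluster):

# terminal 1: expose LoadBalancer services locally
minikube tunnel

# terminal 2: look up the external IP and port of the MinIO console service
kubectl get svc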

4.5.7 - Working with Helm and Lens

This article explains how to work with Helm and Lens, especially how to update the configuration or how to do software upgrade

Changing the configuration / updating values.yaml

using Lens GUI

Note: if you encounter the issue “not found”, please go to the troubleshooting section further down.

using CLI / kubectl in Lens

To override single entries in values.yaml you can use the --set flag, for example like this: helm upgrade factorycube-edge . --namespace factorycube-edge --set nodered.env.NODE_RED_ENABLE_SAFE_MODE=true

Troubleshooting

TODO

5 - Developers

This section has all technical documents and API specifications

This repository contains multiple folders and sub-projects:

  • /golang contains software developed in Go, especially factoryinsight and mqtt-to-postgresql and their corresponding tests (-environments)
  • /deployment contains all deployment related files for the server and the factorycube, e.g. based on Kubernetes or Docker, sorted in separate folders
  • /sensorconnect contains sensorconnect
  • /barcodereader contains barcodereader
  • /python-sdk contains a template and examples to analyze data in real-time on the edge devices using Python, Pandas and Docker. It is deprecated, as we switched to [node-red], and is only published for reference.
  • /docs contains the entire documentation and API specifications for all components including all information to buy, assemble and setup the hardware

5.1 - factorycube-server

The architecture of factorycube-server

factoryinsight

factoryinsight is an open source REST API written in Go that fetches manufacturing data from a timescaleDB database and calculates various manufacturing KPIs before delivering them to a user visualization, e.g. [Grafana] or [PowerBI].

Features:

  • OEE (Overall Equipment Effectiveness), including various options to investigate OEE losses (e.g. analysis over time, microstop analytics, changeover deep-dives, etc.)
  • Various options to investigate OEE losses further, for example stop analysis over time, microstop analytics, paretos, changeover deep-dives or stop histograms
  • Scalable, microservice oriented approach for Plug-and-Play usage in Kubernetes or behind load balancers (including health checks and monitoring)
  • Compatible with important automation standards, e.g. Weihenstephaner Standards 09.01 (for filling), Omron PackML (for packaging/filling), EUROMAP 84.1 (for plastic), OPC 30060 (for tobacco machines) and VDMA 40502 (for CNC machines)

The openapi documentation can be found here

mqtt-to-postgresql

The tool to store incoming MQTT messages in the postgres / timescaleDB database.

Technical information and usage can be found in the documentation for mqtt-to-postgresql

grafana-auth

Proxies requests from Grafana to various backend services while authenticating the Grafana user. Technical information and usage can be found in the documentation for grafana-proxy

grafana-plugins

Contains our grafana datasource plugin and our input panel

5.1.1 - grafana-plugins

Our grafana plugins

umh-datasource

Translates REST calls to factoryinsight into Grafana datapoints. Technical information and usage can be found in the documentation for umh-datasource

umh-factoryinput-panel

Allows creation of REST requests to the factoryinput service. Technical information and usage can be found in the documentation for umh-factoryinput-panel

5.1.1.1 - factoryinput-panel

Documentation of factoryinput-panel

This microservice is still in development and is not considered stable for production use.

Getting started

The UMH factoryinput panel allows you to easily send MQTT messages into the UMH stack from a Grafana panel.

Requirements

  • A United Manufacturing Hub stack
  • External IP or URL of the grafana-proxy server.
    • In most cases it is the same IP as your Grafana dashboard

Installation

If you have installed the UMH stack as described in our quick start tutorial, then this plugin is already installed on your Grafana installation

If you want to develop this Panel further, please follow the instructions below

Build from source

  1. Go to united-manufacturing-hub/grafana-plugins/umh-factoryinput-panel

  2. Install dependencies

yarn install

  3. Build plugin in development mode or run in watch mode

yarn dev

  4. Build plugin in production mode (not recommended due to Issue 32336)

yarn build

  5. Move the resulting dist folder into your grafana plugins directory

  • Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  • Linux: /var/lib/grafana/plugins

  6. Rename the folder to umh-factoryinput-panel

  7. Enable development mode to load unsigned plugins

  8. Restart your grafana service

Usage

Prerequisites

  1. Open your Grafana instance
  2. Log in
  3. Open your Profile and check if your organization name inside Grafana matches the rest of your UMH stack

Creating a new Panel

  1. Create a new Dashboard or edit an existing one
  2. Click “Add an empty panel”
  3. On the right sidebar switch the Visualization to “Button Panel”
  4. Fill out the fields inside “REST Integration”
    1. URL
      • http://{URL to your grafana-proxy}/api/v1/factoryinput/
      • Example:
        • http://172.21.9.195:2096/api/v1/factoryinput/
    2. Location
      • Location of your Asset
    3. Asset
      • Name of the Asset
    4. Value
      • MQTT prefix
        • Example prefixes:
          • count
          • addShift
          • modifyShift
    5. Customer
      • Your organization name
    6. Payload
      • JSON encoded payload to send as the MQTT message payload (see the example below)
  5. Modify any additional options as you like
  6. When you are finished customizing, click on “Apply”
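For illustration, a payload for the count prefix could look like the following. The field names follow the UMH MQTT data model, but treat this as a sketch and check the data model documentation for your version:

{
  "timestamp_ms": 1588879689394,
  "count": 1
}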

Example Panel

Notes

  1. Clicking the button will immediately send the MQTT message through our HTTP->MQTT stack. Please don’t send queries modifying data you might need later!

Common pitfalls

  • Pressing the button just changes the cog to a warning sign
    1. Open your network inspector view (Ctrl+Shift+I on Chrome)
    2. Press the button again
    3. If no request appears, then you haven’t filled out all required fields
    4. Your request shows:
      • 403
        • Make sure the customer field is set to your grafana organization name
      • 400
        • Your request was incorrectly formatted
        • Check that the URL is in the format specified above
        • Check if your payload contains valid JSON
          • You can validate your payload here
        • Check that the Value field matches a valid MQTT command

Technical information

Below you will find a schematic of this flow, through our stack

License

5.1.1.2 - umh-datasource

What is United Manufacturing Hub Datasource?

UMH Datasource provides a Grafana 8.x compatible plugin, allowing easy data extraction from the UMH factoryinsight microservice.

Installation

Build from source

  1. Clone the datasource repo git@github.com:united-manufacturing-hub/united-manufacturing-hub-datasource.git
  2. Install dependencies
yarn install
  3. Build plugin in development mode or run in watch mode
yarn dev
  4. Build plugin in production mode (not recommended due to Issue 32336)
yarn build
  5. Move the resulting dist folder into your grafana plugins directory
  • Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  • Linux: /var/lib/grafana/plugins
  6. Rename the folder to umh-datasource
  7. Enable development mode to load unsigned plugins
  8. Restart your grafana service

From Grafana’s plugin store

TODO

Usage

  1. Open Grafana and login

  2. Open umh-datasource’s settings

  3. Configure your customer name & API Key (automatically configured in Helm deployment)

  4. Configure your server url:

    URL: URL/IP:Port of grafanaproxy

    http://{URL}/api/v1/factoryinsight/

    e.g.:

    http://172.21.9.195:2096/api/v1/factoryinsight/

  5. Click “Save & Test”

5.1.2 - factoryinput

Documentation of factoryinput

This microservice is still in development and is not considered stable for production use.

This program provides a REST endpoint to send MQTT messages via HTTP requests. It is typically accessed via grafana-proxy.

Environment variables

This chapter explains all used environment variables.

FACTORYINPUT_USER

Description: Specifies the admin user for the REST API

Type: string

Possible values: all

Example value: jeremy

FACTORYINPUT_PASSWORD

Description: Specifies the password for the admin user for the REST API

Type: string

Possible values: all

Example value: changeme

VERSION

Description: The version of the API to host. Currently, only 1 is a valid value

Type: int

Possible values: 1

Example value: 1

CERTIFICATE_NAME

Description: Certificate for MQTT authorization or NO_CERT

Type: string

Possible values: all

Example value: NO_CERT

MY_POD_NAME

Description: The pod name. Used only for tracing, logging and MQTT client id.

Type: string

Possible values: all

Example value: app-factoryinput-0

BROKER_URL

Description: the URL of the broker. It can be prefixed with either ssl:// or mqtt://, or it needs to specify the port explicitly, e.g., :1883

Type: string

Possible values: all

Example value: tcp://factorycube-server-vernemq-local-service:1883

CUSTOMER_NAME_{NUMBER}

Description: Specifies a user for the REST API

Type: string

Possible values: all

Example value: jeremy

CUSTOMER_PASSWORD_{NUMBER}

Description: Specifies the password for the user for the REST API

Type: string

Possible values: all

Example value: changeme

LOGGING_LEVEL

Description: Specifies the logging level. Everything except DEVELOPMENT will be parsed as production (including not set)

Type: string

Possible values: DEVELOPMENT, PRODUCTION

Example value: PRODUCTION

MQTT_QUEUE_HANDLER

Description: Number of queue workers to spawn. If not set, it defaults to 10

Type: uint

Possible values: 0-65535

Example value: 10

Other

  1. Run the program
    • Either using go run github.com/united-manufacturing-hub/united-manufacturing-hub/cmd/factoryinput
    • Or using the Dockerfile
      • Open a terminal inside united-manufacturing-hub
      • Run docker build -f ./deployment/factoryinput/Dockerfile .
      • Look for the image SHA
        • Example: => => writing image sha256:11e4e669d6581df4cb424d825889cf8989ae35a059c50bd56572e2f90dd6f2bc
      • Run the image, passing the environment variables before the image SHA
        • docker run -e VERSION=1 .... 11e4e669d6581df4cb424d825889cf8989ae35a059c50bd56572e2f90dd6f2bc

REST API
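
The full REST reference is not part of this printable view. As a rough orientation, the following hedged curl sketch shows how a request routed through grafana-proxy could look; the final path segments {customer}/{location}/{asset}/{value} are an assumption inferred from the panel fields documented above, the IP and port are taken from the earlier examples, the addShift payload follows the MQTT datamodel documentation, and the Authorization placeholder stands for whatever credentials your Grafana instance accepts:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <credentials accepted by your Grafana instance>" \
  -d '{"timestamp_ms": 1625000000000, "timestamp_ms_end": 1625003600000}' \
  "http://172.21.9.195:2096/api/v1/factoryinput/dccaachen/aachen/weaving_machine_2/addShift"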

5.1.3 - factoryinsight

This document describes the usage of factoryinsight including environment variables and the REST API

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

Go to the root folder of the project and execute the following command:

sudo docker build -f deployment/factoryinsight/Dockerfile -t factoryinsight:latest . && sudo docker run factoryinsight:latest 

Environment variables

This chapter explains all used environment variables.

POSTGRES_HOST

Description: Specifies the database DNS name / IP-address for postgresql / timescaleDB

Type: string

Possible values: all DNS names or IP

Example value: factorycube-server

POSTGRES_PORT

Description: Specifies the database port for postgresql

Type: int

Possible values: valid port number

Example value: 5432

POSTGRES_DATABASE

Description: Specifies the database name that should be used

Type: string

Possible values: an existing database in postgresql

Example value: factoryinsight

POSTGRES_USER

Description: Specifies the database user that should be used

Type: string

Possible values: an existing user with access to the specified database in postgresql

Example value: factoryinsight

POSTGRES_PASSWORD

Description: Specifies the database password that should be used

Type: string

Possible values: all

Example value: changeme

FACTORYINSIGHT_USER

Description: Specifies the admin user for the REST API

Type: string

Possible values: all

Example value: jeremy

FACTORYINSIGHT_PASSWORD

Description: Specifies the password for the admin user for the REST API

Type: string

Possible values: all

Example value: changeme

VERSION

Description: The version of the API used (currently unused)

Type: int

Possible values: all

Example value: 1

DEBUG_ENABLED

Description: Enables debug logging

Type: string

Possible values: true,false

Example value: false

REDIS_URI

Description: URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-0.factorycube-server-redis-headless:26379

REDIS_URI2

Description: Backup URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-1.factorycube-server-redis-headless:26379

REDIS_URI3

Description: Backup URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-2.factorycube-server-redis-headless:26379

REDIS_PASSWORD

Description: Password for accessing redis sentinel

Type: string

Possible values: all

Example value: changeme

REST API
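
The detailed REST reference is not part of this printable view. As a rough orientation, the following hedged curl sketch shows how a query could look; the path structure {customer}/{location}/{asset}/{value} and the RFC3339 from/to query parameters are assumptions (consult the source code or the online reference for the authoritative API), and host and credentials are placeholders taken from the FACTORYINSIGHT_USER / FACTORYINSIGHT_PASSWORD examples above:

curl -u "jeremy:changeme" \
  "http://localhost/api/v1/dccaachen/aachen/weaving_machine_2/count?from=2021-06-01T00:00:00.000Z&to=2021-06-30T23:59:59.999Z"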

5.1.4 - grafana-proxy

Documentation of grafana-proxy

This microservice is still in development and is not considered stable for production use.

Getting started

This program proxies requests to backend services if the requesting user is logged into Grafana and part of the organization they request data for.

Either using go run github.com/united-manufacturing-hub/united-manufacturing-hub/cmd/grafana-auth or using the Dockerfile:
  • Open a terminal inside united-manufacturing-hub/deployment/grafana-auth
  • Run docker build -f ./Dockerfile ../..

Environment variables

This chapter explains all used environment variables.

FACTORYINPUT_USER

Description: Specifies the user for the REST API

Type: string

Possible values: all

Example value: jeremy

FACTORYINPUT_KEY

Description: Specifies the password for the user for the REST API

Type: string

Possible values: all

Example value: changeme

FACTORYINPUT_BASE_URL

Description: Specifies the DNS name / IP-address to connect to factoryinput

Type: string

Possible values: all DNS names or IP

Example value: http://factorycube-server-factoryinput-service

FACTORYINSIGHT_BASE_URL

Description: Specifies the DNS name / IP-address to connect to factoryinsight

Type: string

Possible values: all DNS names or IP

Example value: http://factorycube-server-factoryinsight-service

LOGGING_LEVEL

Description: Optional variable, if set to “DEVELOPMENT”, it will switch to debug logging

Type: string

Possible values: Any

Example value: DEVELOPMENT

JAEGER_HOST

Description: Optional variable, Jaeger tracing host

Type: string

Possible values: all DNS names or IP

Example value: http://jaeger.localhost

JAEGER_PORT

Description: Optional variable, Port for Jaeger tracing

Type: string

Possible values: 0-65535

Example value: 9411

DISABLE_JAEGER

Description: Optional variable, disables Jaeger if set to 1 or true

Type: string

Possible values: Any

Example value: 1

Notes

Grafana-Proxy accepts all CORS requests. For authenticated requests, you must send your Origin header, otherwise CORS will fail.

Access-Control-Allow-Headers: content-type, Authorization
Access-Control-Allow-Origin: $(REQUESTING_ORIGIN) or *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: *
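
For illustration, a hedged curl sketch of an authenticated request: the IP and port are taken from the examples earlier in this documentation, the path values are examples only, and the Authorization placeholder stands for whatever credentials your Grafana instance accepts:

curl -i \
  -H "Origin: http://172.21.9.195:3000" \
  -H "Authorization: <credentials accepted by your Grafana instance>" \
  "http://172.21.9.195:2096/api/v1/factoryinsight/dccaachen/aachen/weaving_machine_2/count"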

5.1.5 - mqtt-to-blob

The following guide describes how to catch data from the MQTT broker and push it to the MIN.io blob storage

This microservice is still in development and is not considered stable for production use.

Getting started

The following guide describes how to catch data from the MQTT broker and push it to the MIN.io blob storage. Go to the root folder of the project and execute the following command:

sudo docker-compose -f ./deployment/mqtt-to-blob/docker-compose.yml build && sudo docker-compose -f ./deployment/mqtt-to-blob/docker-compose.yml up 

Environment variables

This chapter explains all used environment variables.

BROKER_URL

Description: Specifies the DNS name / IP-address of the MQTT broker

Type: string

Possible values: all DNS names or IP

Example value: 127.0.0.1

BROKER_PORT

Description: Specifies the port for the MQTT-Broker. In most cases it is 1883.

Type: int

Possible values: valid port number

Example value: 1883

MINIO_URL

Description: Specifies the DNS name / IP-address of the MIN.io server.

Type: string

Possible values: all DNS names or IP

Example value: play.min.io

MINIO_ACCESS_KEY

Description: Specifies the key to access the MIN.io server. Can be seen as the username used to log in.

Type: string

Possible values: an existing / just created user with access to the specified server

Example value: testuser

MINIO_SECRET_KEY

Description: Specifies the MIN.io Server password that should be used

Type: string

Possible values: all

Example value: changeme

TOPIC

Description: Specifies the MQTT topic to listen to. To subscribe to all topics, use ‘#’ instead of a specific topic.

Type: string

Possible values: all

Example value: /test/umh

BUCKET_NAME

Description: Specifies the bucket in MIN.io in which the data is stored.

Type: string

Possible values: all

Example value: testbucket
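
Putting the variables together, a minimal environment file for the docker-compose stack above could look like the following sketch (all values are the example values from this page; replace them with your own):

BROKER_URL=127.0.0.1
BROKER_PORT=1883
MINIO_URL=play.min.io
MINIO_ACCESS_KEY=testuser
MINIO_SECRET_KEY=changeme
TOPIC=/test/umh
BUCKET_NAME=testbucket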

5.1.6 - mqtt-to-postgresql

Documentation of mqtt-to-postgresql

mqtt-to-postgresql subscribes to the MQTT broker (in the stack this is VerneMQ), parses incoming messages on the topic “ia/#” and stores them in the postgresql / timescaleDB database (if they conform to the datamodel).
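
For illustration, a hedged sketch of publishing a message that conforms to the datamodel (assuming the mosquitto clients are installed, the broker is reachable on localhost:1883, and the count message format from the MQTT datamodel documentation; customer/location/asset are example values):

mosquitto_pub -h localhost -p 1883 \
  -t "ia/dccaachen/aachen/weaving_machine_2/count" \
  -m '{"count": 1, "timestamp_ms": 1625000000000}'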

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

docker-compose -f ./deployment/mqtt-to-postgresql/docker-compose-mqtt-to-postgresql-development.yml --env-file ./.env up -d --build

Environment variables

This chapter explains all used environment variables.

POSTGRES_HOST

Description: Specifies the database DNS name / IP-address for postgresql / timescaleDB

Type: string

Possible values: all DNS names or IP

Example value: factorycube-server

POSTGRES_PORT

Description: Specifies the database port for postgresql

Type: int

Possible values: valid port number

Example value: 5432

POSTGRES_DATABASE

Description: Specifies the database name that should be used

Type: string

Possible values: an existing database in postgresql

Example value: factoryinsight

POSTGRES_USER

Description: Specifies the database user that should be used

Type: string

Possible values: an existing user with access to the specified database in postgresql

Example value: factoryinsight

POSTGRES_PASSWORD

Description: Specifies the database password that should be used

Type: string

Possible values: all

Example value: changeme

DRY_RUN

Description: Enables dry run mode (doing everything, even “writing” to database, except committing the changes.)

Type: string

Possible values: true,false

Example value: false

REDIS_URI

Description: URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-0.factorycube-server-redis-headless:26379

REDIS_URI2

Description: Backup URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-1.factorycube-server-redis-headless:26379

REDIS_URI3

Description: Backup URI for accessing redis sentinel

Type: string

Possible values: All valid URIs

Example value: factorycube-server-redis-node-2.factorycube-server-redis-headless:26379

REDIS_PASSWORD

Description: Password for accessing redis sentinel

Type: string

Possible values: all

Example value: changeme

MY_POD_NAME

Description: The pod name. Used only for tracing, logging and MQTT client id.

Type: string

Possible values: all

Example value: app-mqtttopostgresql-0

MQTT_TOPIC

Description: Topic to subscribe to. Only set for debugging purposes, e.g., to subscribe to a certain message type. Default usually works fine.

Type: string

Possible values: all possible MQTT topics

Example value: $share/MQTT_TO_POSTGRESQL/ia/#

5.2 - factorycube-edge

sensorconnect

This tool automatically finds connected ifm gateways (e.g., the AL1350 or AL1352), extracts all relevant data and pushes the data to an MQTT broker. Technical information and usage can be found in the documentation for sensorconnect

cameraconnect

This tool automatically identifies connected cameras network-wide that support the GenICam standard and makes them usable. Each camera requires its own container. The camera acquisition can be triggered via MQTT. The resulting image data gets pushed to the MQTT broker. Technical information and usage can be found in the documentation for cameraconnect

barcodereader

This tool automatically detects connected USB barcode scanners and sends the data to an MQTT broker. Technical information and usage can be found in the documentation for barcodereader

mqtt-bridge

This tool acts as an MQTT bridge to handle bad internet connections. Messages are stored in a persistent queue on disk. This allows using the factorycube-edge in remote environments with bad internet connections. It will even survive restarts (e.g., an internet failure followed by a power failure an hour later). We developed it after testing multiple MQTT brokers and their bridge functionalities (date of testing: 2021-03-15) without finding a proper solution.

nodered

This tool is used to connect PLCs and to process data. See also Getting Started, or take a look into the official documentation

emqx-edge

This tool is used as a central MQTT broker. See emqx-edge documentation for more information.

5.2.1 - barcodereader

This is the documentation for the container barcodereader.

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

Go to the root folder of the project and execute the following command:

sudo docker build -f deployment/barcodereader/Dockerfile -t barcodereader:latest . && sudo docker run --privileged -e "DEBUG_ENABLED=True" -v '/dev:/dev' barcodereader:latest 

All connected devices will be shown; the device in use is marked with “Found xyz”. After every scan, the MQTT message will be printed.

Environment variables

This chapter explains all used environment variables.

DEBUG_ENABLED

Description: Deactivates MQTT and only prints the barcodes to stdout

Type: bool

Possible values: true, false

Example value: true

CUSTOM_USB_NAME

Description: If your barcodereader is not in the supported list of devices, you must specify the name of the USB device here

Type: string

Possible values: all

Example value: Datalogic ADC, Inc. Handheld Barcode Scanner

MQTT_CLIENT_ID

Description: The MQTT client id to connect with the MQTT broker

Type: string

Possible values: all

Example value: weaving_barcodereader

BROKER_URL

Description: The MQTT broker URL

Type: string

Possible values: IP, DNS name

Example value: ia_mosquitto

Example value 2: localhost

BROKER_PORT

Description: The MQTT broker port. Only unencrypted ports are allowed here (default: 1883)

Type: integer

Possible values: all

Example value: 1883

CUSTOMER_ID

Description: The customer ID, which is used for the topic structure

Type: string

Possible values: all

Example value: dccaachen

LOCATION

Description: The location, which is used for the topic structure

Type: string

Possible values: all

Example value: aachen

MACHINE_ID

Description: The machine ID, which is used for the topic structure

Type: string

Possible values: all

Example value: weaving_machine_2
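
Putting the variables together, a run command for a production-like setup could look like the following sketch (all values are the example values from this page, combined with the image built in the getting-started section; replace them with your own):

sudo docker run --privileged -v '/dev:/dev' \
  -e "DEBUG_ENABLED=False" \
  -e "MQTT_CLIENT_ID=weaving_barcodereader" \
  -e "BROKER_URL=ia_mosquitto" \
  -e "BROKER_PORT=1883" \
  -e "CUSTOMER_ID=dccaachen" \
  -e "LOCATION=aachen" \
  -e "MACHINE_ID=weaving_machine_2" \
  barcodereader:latest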

5.2.2 - cameraconnect

This docker container automatically detects cameras in the network and makes them accessible via MQTT. The MQTT output is specified in the MQTT documentation

This microservice is still in development and is not considered stable for production use.

Getting started

Using the Helm chart

By default, cameraconnect is deactivated in factorycube-edge. First, you need to enable it in the factorycube-edge values. Then you need to create a folder on the node in /home/rancher/gentl_producer and move your GenTL producer files (*.cti) and all required libraries into that folder. Then apply your settings to the Helm chart with helm upgrade.

Alternatively, you can enable it directly during installation: helm install ....... --set 'cameraconnect.enabled=true'

Or override it in development_values.yaml

Furthermore, you need to adjust MQTT_HOST to the externally exposed MQTT IP (e.g., the IP of your node). Usually you could use the Kubernetes-internal DNS, but cameraconnect runs with hostMode = true and therefore needs to access the broker from outside the cluster.

Development setup

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

  1. Specify the environment variables, e.g. in a .env file in the main folder or directly in the docker-compose
  2. execute sudo docker-compose -f ./deployment/cameraconnect/docker-compose.yaml up -d --build

Environment variables

This chapter explains all used environment variables.

CUBE_TRANSMITTERID

Description: The unique transmitter id. This will be used for the creation of the MQTT topic. ia/raw/TRANSMITTERID/…

Type: string

Possible values: all

Example value: 2021-0156

MQTT_HOST

Description: The MQTT broker URL

Type: string

Possible values: IP, DNS name

Example value: ia_mosquitto

Example value 2: localhost

MQTT_PORT

Description: The MQTT broker port. Only unencrypted ports are allowed here (default: 1883)

Type: integer

Possible values: all

Example value: 1883

TRIGGER

Description: Defines the option of how the camera is triggered. Either via MQTT or via a continuous time trigger.
In production the camera should be triggered via MQTT. The continuous time trigger is just convenient for debugging.
If MQTT is selected, the camera will be triggered by any message which arrives via its subscribed MQTT topic.
However, if the arriving MQTT message contains a UNIX timestamp in milliseconds with the key “timestamp_ms”,
the camera will be triggered at that exact timestamp.

Type: string

Possible values: MQTT, Continuous

Example value: MQTT
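
For illustration, a hedged trigger sketch (assuming the mosquitto clients are installed; the topic is a placeholder for the trigger topic your cameraconnect instance subscribes to, and timestamp_ms should be a near-future UNIX timestamp in milliseconds):

mosquitto_pub -h localhost -p 1883 \
  -t "<trigger topic of your cameraconnect instance>" \
  -m '{"timestamp_ms": 1625000000000}'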

ACQUISITION_DELAY

Description: Time constant in seconds that delays the image acquisition after the camera has been triggered.
This is mostly used if the camera is triggered with a UNIX timestamp (see variable TRIGGER) to make sure that the
camera is triggered even if the UNIX timestamp lies in the past. This could be caused by network latencies.

Type: float

Possible values: all

Example value: 0.7

CYCLE_TIME

Description: Only relevant if the trigger is set to “Continuous”. The cycle time is the period that defines
the frequency at which the camera is triggered.
For example: a value of 0.5 results in a trigger frequency of 2 images per second.

Type: float

Possible values: all

Example value: 1.5

CAMERA_INTERFACE

Description: Defines which camera interface is used. Currently only cameras of the GenICam standard are supported.
However, for development or testing you can also use the DummyCam, which simulates a camera and sends a static image via MQTT
when triggered.

Type: String

Possible values: GenICam, DummyCam

Example value: GenICam

EXPOSURE_TIME

Description: Defines the exposure time for the selected camera. You should adjust this to your local environment to
achieve optimal images.

Type: int

Possible values: Depends on camera interface. Values between 1 and 80,000 are eligible for most cameras.

Example value: 1000

EXPOSURE_AUTO

Description: Determines whether the camera automatically adjusts the exposure time. The setting is only applied if
the camera supports it; you do not have to check whether the camera supports it.

Type: String

Possible values:
“Off” - No automatic adjustment
“Once” - Adjusted once
“Continuous” - Continuous adjustment (not recommended, Attention: This could have a big impact on the frame rate of your camera)

Example value: Off

PIXEL_FORMAT

Description: Sets the pixel format which will be used for image acquisition. This module allows you to acquire
images in monochrome pixel formats (use: “Mono8”) and RGB/BGR color pixel formats (use: “RGB8Packed” or “BGR8Packed”)

Type: String

Possible values: Mono8, RGB8Packed, BGR8Packed

Example value: Mono8

IMAGE_WIDTH

Description: Defines the horizontal width of the images acquired. If the width value surpasses the maximum
capability of the camera, the maximum value is set automatically.

Type: int

Possible values: all except 0

Example value: 1000

IMAGE_HEIGHT

Description: Defines the vertical height of the images acquired. If the height value surpasses the maximum
capability of the camera, the maximum value is set automatically.

Type: int

Possible values: all except 0

Example value: 1000

IMAGE_CHANNELS

Description: Number of channels (bytes per pixel) that are used in the array (third dimension of the image data
array). You do not have to set this value. If not set (None), the best number of channels for your set pixel format will be used.

Type: int

Possible values: 1 or 3

Example value: 1

MAC_ADDRESS

Description: Defines which camera is accessed by the container. One container can use only one camera.
The MAC address can be found on the backside of the camera.
The input is not case sensitive. Please follow the example format below.

Type: String

Possible values: all

Example value: 0030532B879C

LOGGING_LEVEL

Description: Defines which logging level is used. Mostly relevant for developers. Use WARNING or ERROR in production.

Type: String

Possible values: DEBUG, INFO, WARNING, ERROR, CRITICAL

Example value: DEBUG

Credits

Based on the Bachelor Thesis by Patrick Kunz.

5.2.3 - mqtt-bridge

This tool acts as an MQTT bridge to handle bad internet connections.

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

  1. Specify the environment variables, e.g. in a .env file in the main folder or directly in the docker-compose
  2. execute sudo docker-compose -f ./deployment/mqtt-bridge/docker-compose.yaml up -d --build

Environment variables

This chapter explains all used environment variables.

REMOTE_CERTIFICATE_NAME

Description: the certificate name / client id

Type: string

Possible values: all

Example value: 2021-0156

REMOTE_BROKER_URL

Description: the URL of the remote broker. It can be prefixed with either ssl:// or mqtt://, or it needs to specify the port explicitly, e.g., :1883

Type: string

Possible values: all

Example value: ssl://mqtt.app.industrial-analytics.net

REMOTE_SUB_TOPIC

Description: the topic on the remote broker that should be subscribed to. The bridge will automatically append /# to the string mentioned here

Type: string

Possible values: all

Example value: ia/ia

REMOTE_PUB_TOPIC

Description: the topic prefix on the remote broker to which messages from the local broker are sent.

Type: string

Possible values: all

Example value: ia/ia

REMOTE_BROKER_SSL_ENABLED

Description: should SSL be enabled and certificates be used for connection?

Type: bool

Possible values: true or false

Example value: true

LOCAL_CERTIFICATE_NAME

Description: the certificate name / client id

Type: string

Possible values: all

Example value: 2021-0156

LOCAL_BROKER_URL

Description: the URL of the local broker. It can be prefixed with either ssl:// or mqtt://, or it needs to specify the port explicitly, e.g., :1883

Type: string

Possible values: all

Example value: ssl://mqtt.app.industrial-analytics.net

LOCAL_SUB_TOPIC

Description: the topic on the local broker that should be subscribed to. The bridge will automatically append /# to the string mentioned here

Type: string

Possible values: all

Example value: ia/ia

LOCAL_PUB_TOPIC

Description: the topic prefix on the local broker to which messages from the remote broker are sent.

Type: string

Possible values: all

Example value: ia/ia

LOCAL_BROKER_SSL_ENABLED

Description: should SSL be enabled and certificates be used for connection?

Type: bool

Possible values: true or false

Example value: true

BRIDGE_ONE_WAY

Description: DO NOT SET TO FALSE, OR THIS MIGHT CAUSE AN ENDLESS LOOP! THIS NEEDS TO BE FIXED BY SWITCHING TO MQTTV5 AND USING THE NO_LOCAL OPTION WHILE SUBSCRIBING. If true, the bridge sends messages only from the local broker to the remote broker (not the other way around)

Type: bool

Possible values: true or false

Example value: true

Important note regarding topics

The bridge will append /# to LOCAL_SUB_TOPIC and subscribe to it. All messages will then be sent to the remote broker. The topic on the remote broker is defined by:

  1. First stripping LOCAL_SUB_TOPIC from the topic
  2. and then replacing it with REMOTE_PUB_TOPIC
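
For illustration (hypothetical values): with LOCAL_SUB_TOPIC=ia/factoryA and REMOTE_PUB_TOPIC=ia/remote, a message published locally on the first topic below is forwarded to the remote broker on the second:

ia/factoryA/line1/count   (local broker)
ia/remote/line1/count     (remote broker)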

5.2.4 - sensorconnect

This docker container automatically detects ifm gateways in the specified network and reads their sensor values at the highest possible data frequency. The MQTT output is specified in the MQTT documentation

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

  1. Specify the environment variables, e.g. in a .env file in the main folder or directly in the docker-compose
  2. execute sudo docker-compose -f ./deployment/sensorconnect/docker-compose.yaml up -d --build

Environment variables

This chapter explains all used environment variables.

TRANSMITTERID

Description: The unique transmitter id. This will be used for the creation of the MQTT topic. ia/raw/TRANSMITTERID/…

Type: string

Possible values: all

Example value: 2021-0156

BROKER_URL

Description: The MQTT broker URL

Type: string

Possible values: IP, DNS name

Example value: ia_mosquitto

Example value 2: localhost

BROKER_PORT

Description: The MQTT broker port. Only unencrypted ports are allowed here (default: 1883)

Type: integer

Possible values: all

Example value: 1883

IP_RANGE

Description: The IP range to search for ifm gateways

Type: string

Possible values: All subnets in CIDR notation

Example value: 172.16.0.0/24
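
Putting the variables together, a minimal environment file for the docker-compose stack above could look like the following sketch (all values are the example values from this page; replace them with your own):

TRANSMITTERID=2021-0156
BROKER_URL=ia_mosquitto
BROKER_PORT=1883
IP_RANGE=172.16.0.0/24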

5.3 - How to add new MQTT messages to mqtt-to-postgresql

For new developers, the internal structure of mqtt-to-postgresql might not be self-explanatory. Hence this tutorial.

In general, MQTT messages in mqtt-to-postgresql are first received (see entrypoint) and then stored in a message-specific queue. For each message type, all messages received within the last second are gathered and written into the database in one batch.

If one of the messages fails to get written into the database, the entire batch of messages is treated as failed. In the future we should add an additional buffer that tries each message separately to first separate the broken messages from the rest of the batch and then repeatedly retries the broken messages, e.g., 5 times over a period of 30 minutes, to prevent issues with delayed messages (e.g., the creation of a product is stuck somewhere on the edge, but the product itself is already used at a following station).

Entrypoint

The entrypoint is in mqtt.go:

https://github.com/united-manufacturing-hub/united-manufacturing-hub/blob/5177804744f0ff0294b822e85b4967655c2bc9fb/golang/cmd/mqtt-to-postgresql/mqtt.go#L83-L112

From there you can work your way through (copy and paste is fine) until you add the messages to the queue.

Tip: for parsing the MQTT JSON messages, we recommend using quicktype.io with the MQTT JSON message example from the documentation.

https://github.com/united-manufacturing-hub/united-manufacturing-hub/blob/5177804744f0ff0294b822e85b4967655c2bc9fb/golang/cmd/mqtt-to-postgresql/dataprocessing.go#L84

Queue

The next function is in database.go:

https://github.com/united-manufacturing-hub/united-manufacturing-hub/blob/5177804744f0ff0294b822e85b4967655c2bc9fb/golang/cmd/mqtt-to-postgresql/database.go#L1050-L1077

Testing

For testing we recommend spinning up a local instance of the entire factorycube-server Helm chart and then sending messages with Node-RED.
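
Alternatively to Node-RED, a hedged sketch for sending a test message from the command line (assuming the mosquitto clients are installed and the broker is reachable on localhost:1883, e.g., via port-forwarding; the customer/location/asset names are hypothetical and the addShift payload follows the MQTT datamodel documentation):

mosquitto_pub -h localhost -p 1883 \
  -t "ia/testcustomer/testlocation/testasset/addShift" \
  -m '{"timestamp_ms": 1625000000000, "timestamp_ms_end": 1625003600000}'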

5.4 - How to publish a new version

The UMH uses semantic versioning. This article explains how to increase the version number and which steps need to be taken.
  1. Create a release branch (e.g., v0.6.0) from staging. Docker will automatically build Docker containers with the tag VERSION-prerelease, e.g., v0.6.0-prerelease
  2. Create a PR from the release branch to main
  3. Update the helm charts factorycube-server and factorycube-edge by going into Charts.yaml and changing the version to the next version including a -prerelease
  4. Adjust the repo link https://repo.umh.app in docs/static/examples/development.yaml to the deploy-preview of helm-repo, e.g., https://deploy-preview-515--umh-helm-repo.netlify.app/ with 515 being the PR created in step 2. Additionally, add a --devel flag to both helm install commands, so that helm considers the prerelease a valid version.
  5. Go into the folder deployment/helm-repo and execute
helm package ../factorycube-server/
helm package ../factorycube-edge/
  6. Go into deployment/helm-repo and execute helm repo index --url https://deploy-preview-515--umh-helm-repo.netlify.app/ --merge index.yaml .
  7. Commit the changes. Wait for all containers and deploy previews to be created. Conduct a test on the K300 by specifying the new cloud-init file, e.g., https://deploy-preview-515--umh-docs.netlify.app/examples/development.yaml (you can create a bit.ly link for that)
  8. Test
  9. Conduct steps 3 - 6 with the changed version v0.6.0 (instead of v0.6.0-prerelease) and the changed repo index URL: https://repo.umh.app
  10. Execute npx semantic-release --branches "master" --branches "v0.6.0" --plugins "@semantic-release/commit-analyzer" --plugins "@semantic-release/release-notes-generator" --plugins "@semantic-release/changelog"
  11. Remove the old prerelease helm packages from the repo
  12. Merge the PR from staging to main
  13. Add a new release containing a changelog of all changes

6 - Publications

This page contains multiple interesting publications made around Industry 4.0, mainly thesis from the RWTH Aachen University.

6.1 - Development of a methodology for implementing Predictive Maintenance

Because of high costs and effort, PdM is only economically viable on machines and components with high revenue losses due to breakdowns and where failure is almost independent of uptime

Abstract

Objective of this thesis: The goal of this thesis is to develop a methodology to implement Predictive Maintenance (PdM) economically viable into a company. The methodology is then validated in the Digital Capability Center (DCC) Aachen.

Solution process: Maintenance strategies and machine learning algorithms are researched together with methods for optimizing production lines. This knowledge is then summarized and validated in the DCC Aachen.

Key results: Because of high costs and effort, PdM is only economically viable on machines and components with high revenue losses due to breakdowns and where failure is almost independent of uptime and wear. In the DCC Aachen, the wind-up bearing at the warping machine is identified as a component for a PdM implementation, but a combination of machine learning and existing sensors is not enough for an economically viable implementation.

Keywords: Predictive Maintenance, maintenance strategies, machine learning

Content

Bachelor Thesis Jeremy Theocharis

6.2 - Design and Evaluation of a Blue Ocean Strategy for an Open-Core IIoT Platform Provider in the Manufacturing Sector

Currently, the market is quite opaque, which makes it difficult to compare providers on the market and thus to compete. This thesis is written in cooperation with the Aachen-based startup developing the IIoT platform “United Manufacturing Hub”. Its objective is to set UMH apart from the existing red-ocean market with the development of a blue ocean strategy.

This publication was made by Nicolas Altenhofen as a Master Thesis for the “Institute for Technology and Innovation Management” in cooperation with Marc Van Dyck (TIM / RWTH Aachen ) and us.

Final blue ocean strategy and competitive positioning of UMH

Introduction

The ongoing trend of digitization does not exclude the manufacturing sector which poses challenges for both involved parties: machine manufacturers and producing companies. While larger corporations and certain industries, such as the automotive industry, may be already well advanced in digitization, small and medium-sized enterprises (SMEs) often still face difficulties. For many producing companies, the added value of digitization is unclear and they also have strong security concerns about making their data available for analyses. On the other hand, machine manufacturers face problems in their transformation from hardware manufacturers to service providers (Bitkom Research & Ernst & Young 2018, p. 21; VDMA & McKinsey 2020, p. 22; Vargo & Lusch 2008, p. 254ff). Due to these difficulties and the currently high implementation costs, IIoT (industrial internet of things) platforms have so far been deployed rather sporadically. However, they can offer great potential for optimization, for example of service processes or the overall equipment effectiveness (OEE) and will play an important role in maintaining competitiveness in the future (VDMA & McKinsey 2020, p. 27ff).

For this reason, more and more startups and third-party providers are currently establishing businesses that are trying to solve the challenges and problems of both sides with a wide variety of approaches. Currently, the market is quite opaque, which makes it difficult to compare providers on the market and thus to compete. This thesis is written in cooperation with the Aachen-based startup developing the IIoT platform “United Manufacturing Hub” (UMH; UMH Systems GmbH). Its objective is to set UMH apart from the existing red-ocean market with the development of a blue ocean strategy. By redistributing the development focus to attributes that are most relevant to customers in the market and reducing efforts in less relevant areas, the goal is to create a new, non-competitive market (Kim & Mauborgne 2015, p. 24ff). UMH has set itself the task of making the digital transformation as easy as possible for machine manufacturers and producing companies as their end customers. To do this, it is important to know the needs and problems of the customers and to obtain their assessment of the solution approaches. As a starting point for market analysis, this thesis focuses on machine manufacturers as customers of the platform.

The research question is divided into sub-questions, which together contribute to answering the primary question (Karmasin & Ribing 2017, p. 24f). While the topic is elaborated on the example of UMH, the underlying questions can be generalized and are not sufficiently addressed in the existing literature. The concepts further described in chapter 2 provide useful insights into IIoT, open-source platforms, as well as blue ocean strategies, but there is limited literature on the linkages between those topics (e.g., Frank et al. 2019; p. 341ff; Shafiq et al. 2018, p. 1076ff) and none describing a blue ocean strategy in an IIoT platform context. Therefore, the primary research question (PQ) is:

PQ: Which blue ocean strategy has the best potential to set industry standards and establish an IIoT platform in the manufacturing sector?

Currently, most machine manufacturers rely on in-house developed IIoT platforms (Bender et al. 2020, p. 10f), although using an external platform would reduce duplication costs and provide access to existing applications and customers (Evans & Schmalensee 2008, p. 673). This suggests that currently available external IIoT platforms do not sufficiently cover customer needs. To better understand machine manufacturers’ needs and their motivation, the first sub-question (SQ) is therefore:

SQ1: What functionalities do the manufacturers’ platforms include and how were they implemented? Why have machine manufacturers decided to develop their own platform?

The four actions framework in the blue ocean literature suggests that product attributes need to be raised or created to increase the customer value and create new demand while others are reduced or eliminated to achieve cost leadership (Kim & Mauborgne 2015, p. 51). To assess and extend UMH’s solution approaches, the second sub-question is:

SQ2: What functions or features are currently missing from existing platforms on the market? Which attributes must be raised to fulfill the desired customer benefits?

Finally, making the core of the software stack open source is a relevant part of UMH’s disruptive business model. Open source reduces costs and dependence and promotes, among other things, value creation. Dedrick and West (2004, p. 5f) found that the perceived reliability of Linux-operated servers was lower than that of servers with a proprietary operating system, which could also be the case for an open-source IIoT platform. To examine the effects of the open-source approach, the third sub-question is:

SQ3: How does an open-source approach affect the value curve and how is it perceived by machine manufacturers?

To answer these research questions, this thesis first reviews the state of research on digitization and IIoT, platforms, and technology adoption of a market. Next, UMH and its open-core concept are presented based on an implemented proof of concept at the Digital Capability Center (DCC) Aachen. UMH’s competitors are then clustered into infrastructure providers, proprietary IIoT platforms, and system integrators, for which value curves are generated that show the current focus of the providers on the market. Hypotheses are formulated about the requirements of the IIoT market based on the literature, a conversation with Bender and Lewandowski (2021; authors of the underlying paper Bender et al. 2020), and an existing market research by UMH (2020). The hypotheses facilitate the preparation of the three-part interview guideline, each dedicated to answering one sub-question. Finally, the interviews with eight development managers at machine manufacturing companies are evaluated. Based on the findings, the hypotheses are assessed and the blue ocean strategy for UMH open core and premium is derived, thus answering the primary research question.

Content

Master Thesis Nicolas Altenhofen

6.3 - Industrial image processing for quality control in production lines: Development of a decision logic for the application case specific selection of hardware and software

To select suitable hardware components, a five-stage decision logic is developed and implemented as a software application, which suggests suitable components to the user depending on the specified use case and prioritizes them according to list price. In a simulative evaluation, this achieves complexity reductions between 73 and 98% and cost savings between 46 and 93%. A decision between Deep Learning and conventional algorithms can be made based on the given development circumstances as well as the complexity of image features.

This publication was made by Michael Müller as a Master Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen ) and us.

Cognex camera connected with the United Manufacturing Hub open-source stack

Abstract

Objective of this thesis: The goal of the work is the development of a decision logic for the application-case-specific selection of hardware and software for image processing systems for quality control in industrial production. On the hardware side, the components camera, lens and illumination system are considered. On the software side, it is decided, depending on the application, whether conventional algorithms or methods of Deep Learning are more suitable.

Solution process: Within the scope of a literature search, relevant descriptive variables for the standardized characterization of technologies and use cases are identified. Furthermore, interdependencies between individual components and properties of the use case are identified. By means of market research, a database with concrete product information is built. Based on these steps, a set of rules for the selection of hardware and software technologies is derived and tested on an application case at the Digital Capability Center Aachen. The decision logic for selecting hardware components is finally implemented as a user-friendly computer application.

Key results: To select suitable hardware components, a five-stage decision logic is developed and implemented as a software application, which suggests suitable components to the user depending on the specified use case and prioritizes them according to list price. In a simulative evaluation, this achieves complexity reductions between 73 and 98% and cost savings between 46 and 93%. A decision between Deep Learning and conventional algorithms can be made based on the given development circumstances as well as the complexity of image features.

Keywords: Digital quality control, Technical textiles, Mobiltech, Industry 4.0, Technology selection

Content

Master Thesis Michael Müller

6.4 - Deep learning for industrial quality inspection: development of a plug-and-play image processing system

The central result is an overall process overview and a microservice architecture, with the help of which an industrial image processing system can be put into operation on the software side only by configuring the camera and entering the environment variables. Currently, cameras of the GenICam standard with GigE Vision interface and Cognex cameras are supported. The open architecture creates a basic platform for the development of further microservices and subsequent processes in the context of industrial image processing.

This publication was made by Patrick Kunz as a Bachelor Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen ) and us.

MQTT is used as a central element in the open-source architecture for image processing systems

Abstract

Objective of this thesis: The objective of the thesis is the development of a robust and user-friendly software for an industrial image processing system, which applies deep learning methods. The user of this software will be able to quickly and easily put an image processing system into operation due to its plug-and-play capability and standardized interfaces. The system software is based exclusively on royalty-free software products.

Solution process: For the development of the overall system, relevant standards, interfaces and software solutions are researched and presented. By dividing the system into sub-processes, functional requirements for the software are derived and implemented in the development with the general requirements in a system architecture. The implementation and subsequent validation is carried out in the model production for textile wristbands at the Digital Capability Center Aachen.

Key results: The central result is an overall process overview and a microservice architecture, with the help of which an industrial image processing system can be put into operation on the software side only by configuring the camera and entering the environment variables. Currently, cameras of the GenICam standard with GigE Vision interface and Cognex cameras are supported. The open architecture creates a basic platform for the development of further microservices and subsequent processes in the context of industrial image processing.

Keywords: Machine vision, quality control, deep learning, microservice architecture, MQTT

Content

Bachelor Thesis Patrick Kunz

6.5 - Development of a decision tool to select appropriate solutions for quality control depending on the defects occurring in the manufacturing process in the automobile branch of the technical-textiles industry

The results of this research provide an overview of the problems being faced regarding quality control during the manufacturing processes of technical textile in the automotive industry. In addition, information on the extent to which digital solutions for quality control are established in the industry is analyzed. Moreover, existing digital quality control solutions and measuring principles to tackle the identified problems in the industry are researched and identified.

This publication was made by Aditya Narayan Mishra as a Master Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen ) and us.

Source: https://www.lindenfarb.de/en/

Abstract

Objective of this thesis: The objective of this thesis is to develop a decision tool regarding the quality control in the manufacturing of technical textiles for the automotive industry. The tool shall enable access to information about the problems being faced and the consequent defects occurring during the manufacturing of technical textiles in the automotive industry. Subsequently, it shall provide an overview of the corresponding solutions and measuring principles for each of the identified problems.

Solution process: Firstly, a literature review is carried out to provide a profound understanding of the important quality parameters and defects in each of the manufacturing processes of technical textiles. Based on the literature review, a questionnaire is created to perform a market analysis in the form of expert interviews. With the help of the market analysis, industry insights into the current status and problems associated with the quality control of manufacturing technical textile fabrics in the automotive industry are addressed. Afterwards, based on the problems identified through the expert interviews, the solutions and measuring principles are identified, and subsequently a concept for the decision tool is designed.

Key results: The results of this research provide an overview of the problems being faced regarding quality control during the manufacturing processes of technical textile in the automotive industry. In addition, information on the extent to which digital solutions for quality control are established in the industry is analyzed. Moreover, existing digital quality control solutions and measuring principles to tackle the identified problems in the industry are researched and identified.

Keywords: Digital quality control, Technical textiles, Mobiltech, Industry 4.0, Technology selection

Content

Master Thesis Aditya Narayan Mishra

6.6 - Implementation of Time Series Data based Digital Time Studies for Manual Processes within the Context of a Learning Factory

This thesis is concerned with combining the subject areas Industry 4.0 and the implementation of manual time studies.

This publication was made by Tobias Tratner (Xing, LinkedIn) as a Master Thesis for the Graz University of Technology & Deggendorf Institute of Technology in cooperation with Maria Hulla (Institute of Innovation and Industrial Management at TU Graz) and us.

Finished setup of the time studies in the LEAD-Factory

Abstract

The steadily advancing globalization significantly shapes today’s business environment for companies. As a result, companies are increasingly under immense cost pressure and need to improve their production efficiency and product quality to remain competitive. Industry 4.0 applications that result from the advancing digitization offer great potential for long-term cost savings. Implementing time studies for mechanical activities can identify potential for improvement in the production process and enable problems to be rectified. This thesis is concerned with combining the subject areas of Industry 4.0 and the implementation of manual time studies. For this purpose, a digital time recording of manual activities was implemented in the LEAD Factory, the learning factory of the Institute of Innovation and Industrial Management at Graz University of Technology. To this end, a mobile sensor kit and an IoT platform, provided by the factorycube of Industrial Analytics, were used. Using sensors, existing data from an RFID system, and an energy monitoring system, all activities at a selected workstation in the LEAD Factory can be documented and analyzed. This automated time recording enables long-term measurements to analyze working times and possible anomalies. The collected data is stored in so-called time-series databases and processed using various methods. The data is displayed on a dashboard using a visualization program. One focus of the work was the design of the data processing architecture with two different time-series data models, as well as the conception and development of methods for data processing in the context of time studies. A relational and a NoSQL database system were used equally. The use of two very different approaches should show the possibilities of both systems and enable an assessment of the two systems. Based on a utility analysis, both approaches are evaluated and compared using selected criteria. Thus, a clear recommendation can be made for one of the two approaches. By making the results of the work available to an open-source community, they can be used as a basis for the implementation of similar applications. In addition, the work demonstrates, through digital time recording, the huge potential to improve productivity when existing data in a production environment is used.

Content

Master Thesis Tobias Tratner