Documentation

United Manufacturing Hub - the open source manufacturing system

1 - Getting Started

The guide is split into four parts

1.1 - 0. Understanding the technologies

This section gives you an introduction to the technologies used. A rough understanding of these technologies is fundamental for installing and working with the system.

Why your IT loves us

We built the United Manufacturing Hub with the needs of your IT in mind from day one and adhere to the highest standards of flexibility, scalability, security, and data protection.

Open

It is open source (AGPL). This means you can commercially use, modify, and distribute the project as long as you disclose the source code and all modifications. A more detailed overview can be found here. However, only the full license in LICENSE is legally binding.

Furthermore, it is based on well-documented standard interfaces (MQTT, REST, etc.). A more detailed explanation of those can be found further below.

Scalable

The system is built for horizontal scaling, including fault tolerance through Docker/Kubernetes/Helm. This means the system is built to run on multiple servers and heals itself when a server unexpectedly crashes. More information on the technology can be found further below.

(enterprise only) Edge devices can be set up and configured quickly in large quantities (somehow we must make our money :) )

Flexible

All components have flexible deployment options, from public cloud (Azure, AWS, etc.) to on-premise server installations to Raspberry Pis, everything is possible.

Additionally, you can freely choose your programming language, as systems are connected through a central message broker (MQTT). Almost all programming languages have a very good MQTT library available.

Tailored for manufacturing

The United Manufacturing Hub includes ready-made manufacturing apps, which deliver immediate business value and give a good foundation to build upon. It is not a generic IT solution and makes heavy use of established automation standards (OPC/UA, Modbus, etc.).

This allows production assets to be connected very quickly, either by retrofitting or by connecting to existing interfaces.

Well-documented

Builds exclusively on well-documented software components supported by a large developer community. No more waiting in random telephone hotlines; just google your question and get your solution within seconds. If you need additional enterprise support, you can always contact us and/or the vendors the system is built upon (Grafana, etc.).

Secure

Meets the highest data protection standards. Compliant with the three pillars of information security:

  • Confidentiality (through e.g. end-to-end encryption, flexible deployment options, and the principle of least privilege),
  • Integrity (through e.g. ACID databases and MQTT QoS 2 with TLS),
  • Availability (through e.g. the use of Kubernetes and, for SaaS, a CDN)

Technologies

Here you can find an overview of the technologies used. As there are many good tutorials out there, we will not explain everything and only link to some of these tutorials. If in doubt, just google it :)

Interfaces

The United Manufacturing Hub features various interfaces to push and extract data.

MQTT

Here is a video tutorial from Rui Santos on YouTube: https://www.youtube.com/watch?v=EIxdz-2rhLs

From the Wikipedia article about MQTT:

MQTT (originally an acronym for MQ Telemetry Transport) is a lightweight, publish-subscribe network protocol that transports messages between devices. The MQTT protocol defines two types of network entities: a message broker and a number of clients. An MQTT broker is a server that receives all messages from the clients and then routes the messages to the appropriate destination clients. An MQTT client is any device (from a microcontroller up to a fully-fledged server) that runs an MQTT library and connects to an MQTT broker over a network.

The MQTT protocol does not specify the actual content of the exchanged MQTT messages. In the UMH stack, we specify it further with the UMH datamodel.

REST / HTTP

From the Wikipedia article about REST:

Representational state transfer (REST) is a de-facto standard for a software architecture for interactive applications that typically use multiple Web services. In order to be used in a REST-based application, a Web Service needs to meet certain constraints; such a Web Service is called RESTful. A RESTful Web service is required to provide an application access to its Web resources in a textual representation and support reading and modification of them with a stateless protocol and a predefined set of operations. By being RESTful, Web Services provide interoperability between the computer systems on the internet that provide these services.

factoryinsight provides a REST / HTTP access to all data stored in the database. The entire API is well-documented.

Orchestration tools

Docker, Kubernetes and Helm are used for so-called orchestration. You can find further information on these technologies here:

Additional tools / tutorials

Next steps

If you want to learn more about the architecture of the system, we recommend that you take a look at our concepts. If not, you can go to the next step in the tutorial.

1.2 - 1. Quick start

This section explains how the system (edge and server) can be set up quickly on a single edge device. This is only recommended for development and testing environments.

Prerequisites

  • k3os (AMD64) installed on a bootable USB-stick (you can get it here: https://github.com/rancher/k3os/releases/ ). You can create a bootable USB-stick using balenaEtcher
  • a laptop with SSH / SFTP client (e.g. MobaXTerm) and Lens (for accessing the Kubernetes cluster) installed
  • an edge device (currently only x86 systems supported)
  • keyboard, monitor, cables
  • a GitHub account with a public key. If you do not know how to do this, check out this tutorial. You can download puttygen here
  • network setup and internet access according to the image below

Steps

k3OS

  1. Install k3OS on your edge device using the bootable USB-stick (press "Del" repeatedly to enter the BIOS of the Factorycube and then boot from the USB stick with k3OS)
  2. Choose the desired partition (in most cases 1)
  3. Do not use a cloud configuration file
  4. When asked, enter your GitHub username. In the future you will access the device via SSH with your private key. After the installation, the system will reboot and, after a successful startup, show the IP address of the device. If no IP is shown, please check your network setup (especially whether DHCP is activated).
  5. Configure K3OS as “server”
  6. Remove the USB stick after the message that the system will restart in 5 seconds.
  7. You can now disconnect monitor and keyboard, as you will do everything else via SSH.

General setup

  1. Connect via SSH e.g. with MobaXTerm (Username: rancher, Port: 22, remote host: ip of your edge device)
  2. Authenticate with the private key that belongs to the public key stored in GitHub. (For MobaXTerm: Advanced SSH Settings -> private key, then select your private key)
  3. Confirm the settings and connect
  4. Install helm on your edge device
export VERIFY_CHECKSUM=false 
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 
chmod 700 get_helm.sh && ./get_helm.sh
  1. Clone or copy the content of the united-manufacturing-hub repository on GitHub into the home folder (/home/rancher/united-manufacturing-hub).
  2. Execute cat /etc/rancher/k3s/k3s.yaml to retrieve the secrets to connect to your Kubernetes cluster
  3. Paste the file into Lens when adding a new cluster and adjust the IP 127.0.0.1 (only change the IP address. The port, the numbers after the colon, remain the same). You should now see the cluster in Lens.
  4. Create two namespaces in your Kubernetes cluster called factorycube-edge and factorycube-server by executing the following command:
kubectl create namespace factorycube-edge && kubectl create namespace factorycube-server

Install factorycube-edge

Warning: in production you should use your own certificates and not the ones provided by us, as this is highly insecure when exposed to the internet or any insecure network. A tutorial on setting up a PKI infrastructure for MQTT can be found in this guide.

  1. Go into the folder /home/rancher/united-manufacturing-hub/deployment/factorycube-edge
  2. Create a new file called development_values.yaml using touch development_values.yaml
  3. Copy the following content to that file or use the following example development_values.yaml.
  4. (Only if you did not use the example file) Copy the certificates ca.crt, issued/TESTING.crt and issued/private/TESTING.key from deployment/factorycube-server/developmentCertificates/pki/ into development_values.yaml. Additionally, use ssl://factorycube-server-vernemq-local-service.factorycube-server:8883 as mqttBridgeURL.
  5. Adjust iprange to your network IP range

Example for development_values.yaml:

mqttBridgeURL: "ssl://mqtt.umh.app:8883"
mqttBridgeTopic: "ia/factoryinsight"
sensorconnect:
  iprange: "172.16.1.0/24"
mqttBridgeCACert: |
    ENTER CERT HERE
mqttBridgeCert: |
    ENTER CERT HERE
mqttBridgePrivkey: |
    ENTER CERT HERE
  1. Execute helm install factorycube-edge /home/rancher/united-manufacturing-hub/deployment/factorycube-edge --values "/home/rancher/united-manufacturing-hub/deployment/factorycube-edge/development_values.yaml" --set serialNumber=$(hostname) --kubeconfig /etc/rancher/k3s/k3s.yaml -n factorycube-edge (change kubeconfig and serialNumber accordingly) (Please pay attention to the correct path. The path may be /home/rancher/united-manufacturing-hub-main … or similar)

Install factorycube-server

Warning: in production this should be installed on a separate device / in the cloud to ensure high availability and provide automated backups.

  1. Configure values.yaml according to your needs. For the development version you do not need to do anything. For help in configuring, you can take a look at the respective documentation of the subcharts (Grafana, redis, timescaleDB, verneMQ) or the documentation of the subcomponents (factoryinsight, mqtt-to-postgresql)
  2. Execute helm install factorycube-server /home/rancher/united-manufacturing-hub/deployment/factorycube-server --values "/home/rancher/united-manufacturing-hub/deployment/factorycube-server/values.yaml" --kubeconfig /etc/rancher/k3s/k3s.yaml -n factorycube-server and wait. Helm will automatically install the entire stack across multiple nodes. It can take up to several minutes until everything is set up. (Please pay attention to the correct path. The path may be /home/rancher/united-manufacturing-hub-main … or similar)

Everything should now be set up successfully, and you can connect your edge devices and start creating dashboards! Keep in mind: the default development_values.yaml should only be used for development environments and never for production. See also the notes below.

Using it

You can now access Grafana and nodered via HTTP / HTTPS (depending on your setup). Default user for Grafana is admin. You can find the password in the secret RELEASE-NAME-grafana. Grafana is available via port 8080, nodered via 1880.

1.3 - 2. Connecting machines and creating dashboards

This section explains how the United Manufacturing Hub is used practically

1. Extract data using factorycube-edge

The basic approach for data processing on the local hardware is to extract data from various data sources (OPC/UA, MQTT, REST), extract the important information, and then make it available to the United Manufacturing Hub via a predefined interface (MQTT). For this data processing on the local hardware we use node-red.

To extract and pre-process the data from different data sources we use the open source software node-red. node-red is a low-code programming tool for event-driven applications.

If you haven’t worked with node-red yet, here is a good documentation from node-red!

Here you can download the flow

General Configuration

Basically, three pieces of information must be communicated to the system. For more information, feel free to check this article. These three pieces of information must be set via the green configuration node in node-red, so that the data can be assigned exactly to an asset:

The customer ID to be assigned to the asset: customerID

The location where the asset is located: location

The name of the asset: AssetID

Furthermore, under the general settings you will find the state logic, which determines the machine state with the help of the activity and detectedAnomaly topics. For more information, feel free to check this article.

Inputs:

With the help of the inputs you can tap different data sources, for example:

Interaction with sensorconnect (plug-and-play connection of IO-Link sensors):

With the help of Sensorconnect, different sensors can be connected quickly and easily via an IFM gateway. The sensor values are automatically extracted from the software stack and made available via MQTT.

To get a quick and easy overview of the available MQTT messages and topics we recommend the MQTT Explorer. If you don't want to install any extra software, you can use the MQTT-In node to subscribe to all available topics by subscribing to # and then directing the messages of the MQTT-In nodes into a debug node. You can then display the messages in the node-red debug window and get information about the topics and available data points.
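Wildcard subscriptions like # work level by level. The matching rule can be sketched as follows (a simplified illustration of MQTT's `+` and `#` wildcards, not code from the stack):

```python
# Simplified sketch of MQTT topic matching: '+' matches exactly one
# topic level, '#' matches all remaining levels.
def topic_matches(pattern: str, topic: str) -> bool:
    p, t = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p):
        if seg == "#":
            return True          # '#' swallows everything from here on
        if i >= len(t):
            return False         # topic is shorter than the pattern
        if seg not in ("+", t[i]):
            return False         # literal level does not match
    return len(p) == len(t)      # no wildcard left: lengths must agree

print(topic_matches("#", "ia/raw/2020-0102/0000005898845/X01/210-156"))  # True
print(topic_matches("ia/+/aachen/demonstrator/count",
                    "ia/dccaachen/aachen/demonstrator/count"))           # True
```

This is why subscribing to # in the MQTT-In node shows every message passing through the broker.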

Topic structure: ia/raw/<transmitterID>/<gatewaySerialNumber>/<portNumber>/<IOLinkSensorID>

Example for ia/raw/

Topic: ia/raw/2020-0102/0000005898845/X01/210-156

This means that the transmitter with the serial number 2020-0102 has one ifm gateway connected to it with the serial number 0000005898845. This gateway has the sensor 210-156 connected to the first port X01.

{
    "timestamp_ms": 1588879689394,
    "distance": 16
}
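The topic structure can also be unpacked programmatically; a minimal sketch in Python (the helper name parse_raw_topic is ours, not part of the stack):

```python
# Hypothetical helper (not part of the stack) that splits an ia/raw
# topic into its four components.
def parse_raw_topic(topic: str) -> dict:
    parts = topic.split("/")
    if len(parts) != 6 or parts[:2] != ["ia", "raw"]:
        raise ValueError("not an ia/raw topic: " + topic)
    return {
        "transmitterID": parts[2],
        "gatewaySerialNumber": parts[3],
        "portNumber": parts[4],
        "IOLinkSensorID": parts[5],
    }

print(parse_raw_topic("ia/raw/2020-0102/0000005898845/X01/210-156"))
# {'transmitterID': '2020-0102', 'gatewaySerialNumber': '0000005898845',
#  'portNumber': 'X01', 'IOLinkSensorID': '210-156'}
```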

Extract information and make it available to the outputs:

In order for the data to be processed easily and quickly by the United Manufacturing Hub, the input data (OPC/UA, Siemens S7) must be prepared and converted into a standardized data format (MQTT topic). For an in-depth explanation of our MQTT data model check here and here.

The 4 most important data points:

  • Information whether the machine is running or not: /activity
  • Information about anomalies or concrete reasons for a machine standstill: /detectedAnomaly
  • The produced quantity: /count
  • An interface to communicate any process value to the system (e.g. temperature or energy consumption): /processValue

Using the information from the topics /activity and /detectedAnomaly, the statelogic node calculates the discrete machine state: it first checks whether the machine is running or not. If the machine is not running, the machine state is set to the last /detectedAnomaly, analogous to the state model. The discrete machine state is then made available again via the /state topic.
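The state logic described above can be sketched as follows (a simplified illustration; the actual statelogic node may use a more detailed state model):

```python
# Simplified sketch of the state logic: if the machine is active it is
# "running"; otherwise the last detected anomaly becomes the state.
def discrete_state(activity: bool, last_detected_anomaly: str) -> str:
    if activity:
        return "running"
    # Machine stopped: fall back to the last reported stop reason.
    return last_detected_anomaly if last_detected_anomaly else "unknown stop"

print(discrete_state(True, "maintenance"))   # running
print(discrete_state(False, "maintenance"))  # maintenance
```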

Implementation example: You would like to determine the output and machine condition of a filling machine.

Used Sensors:

  • Lightbarrier for counting the bottles
  • A button bar via which the machine operator can inform the system that he is on break, for example
  1. Extract via the MQTT-In node the information from the light barrier on whether a bottle was produced. If a bottle was produced, send a message to the output/count topic according to the MQTT datamodel.
  2. Use the output_to_activity node to turn the information "a bottle was produced" into the information "the machine is running". E.g. if a bottle is produced every X seconds, set the activity to true according to the MQTT datamodel.
  3. Use the information from the button bar to tell the system why the machine is not running, e.g. whenever button 3 is pressed, send pause to the detectedAnomaly node according to the MQTT datamodel.

Now the machine status is automatically determined and communicated to the United Manufacturing Hub for further analysis, for example of speed losses.
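The idea behind step 2 (deriving activity from counts) can be sketched as follows (hypothetical helper; the 10-second gap is an assumption for illustration, not a value from the stack):

```python
# Hypothetical helper: the machine counts as running if the last bottle
# was counted at most max_gap_s seconds ago. (The threshold is an
# assumed value, chosen here only for illustration.)
def activity_from_counts(count_timestamps_ms, now_ms, max_gap_s=10):
    if not count_timestamps_ms:
        return False
    return (now_ms - max(count_timestamps_ms)) <= max_gap_s * 1000

# Last bottle 4 seconds ago -> machine is considered running.
print(activity_from_counts([1588879689394], 1588879693394))  # True
```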

TODO: #63 add example Flow for data processing

Testing:

With the help of the testing flows you can test your entire system or simply simulate some sample data for visualization.

See also DCC Aachen example in our showcase.

2. Create dashboards using factorycube-server

TODO

1.4 - 3. Using it in production

This section explains how the system can be setup and run safely in production

This article is split up into two parts:

The first part will focus on factorycube-edge and the Industrial Automation world. The second part will focus on factorycube-server and the IT world.

factorycube-edge

The world of Industrial Automation is heavily regulated, as very often not only expensive machines are controlled, but also machines that can potentially injure a human being. Here is some information that will help you set it up in production (not legal advice!).

If you are unsure about how to setup something like this, you can contact us for help with implementation and/or certified devices, which will ease the setup process!

Hardware & Installation, Reliability

One key component in Industrial Automation is reliability. Hardware needs to be carefully selected according to your needs and standards in your country.

When changing things at the machine, you need to ensure that you are not voiding the warranty or the CE certification of the machine. Even just installing something in the electrical rack and/or connecting to the PLC can do that! And these are not just unnecessary regulations; they are actually important:

PLCs can be pretty old and usually do not have much capacity for IT applications. Therefore, it is essential when extracting data not to overload the PLC's capabilities by requesting too much data. We strongly recommend testing the performance and closely watching the CPU and RAM usage of the PLC.

This is the reason we sometimes install additional sensors instead of plugging into the existing ones. And sometimes this is enough to get the relevant KPIs out of the machine, e.g. the Overall Equipment Effectiveness (OEE).

Network setup

To ensure the safety of your network and PLC we recommend a network setup like the following:

Network setup with the machine network, the internal network and the PLC network separated from each other

The reason we recommend this setup is to ensure the security and reliability of the PLC and to follow industry best practices, e.g. the "Leitfaden Industrie 4.0 Security" from the VDMA (Verband Deutscher Maschinen- und Anlagenbau) or Rockwell.

Additionally, we are taking more advanced steps than are usually recommended (e.g., preventing almost all network traffic to the PLC), as we have seen very often that machine PLCs are not set up according to best practices and the manuals of the PLC manufacturer, whether by system integrators or even the machine manufacturer, due to a lack of knowledge. Default passwords are not changed and ports are not closed, which results in unnecessary attack surfaces.

Also, updates are almost never installed on a machine PLC, resulting in well-known security holes remaining in the machines for years.

Another argument is a pretty practical one: in Industry 4.0 we see more and more devices being installed on the shopfloor and requiring access to machine data. Our stack will not be the only one accessing and processing data from the production machine. There might be entirely different solutions out there which need real-time access to the PLC data. Unfortunately, a lot of these devices are proprietary and sometimes even come with hidden remote access features (very common in Industrial IoT startups, unfortunately…). We created the additional DMZ around each machine to prevent one solution / hostile device at one machine from being able to access the entire machine park. There is only one component (usually node-red) communicating with the PLC and sending the data to MQTT. If there is one hostile device somewhere, it will have very limited access by default unless specified otherwise, as it can get all required data directly from the MQTT data stream.

Our certified device "machineconnect" will have this network setup by default. Our certified device "factorycube" has a slightly different network setup, which you can take a look at here.

factorycube-server

We recommend adjusting the values.yaml to your needs. In particular, we recommend the following values for production:

General

  • In case of internet access: adding Cloudflare in front of the HTTP / HTTPS nodes (e.g. Grafana, factoryinsight) to provide an additional security layer
  • We also recommend using LetsEncrypt (e.g. with cert-manager)

TimescaleDB

  • Setting the timescaleDB replica count to 3 and tuning it, or disabling it altogether and using TimescaleDB Cloud (https://www.timescale.com/products)
  • Adjusting resources for factoryinsight, enabling pdb and hpa and pointing it to the read replica of the timescaleDB database. We recommend pointing it to the read replica to increase performance and to prevent too much database load on the primary database.

VerneMQ

  • We recommend setting up a PKI infrastructure for MQTT (see also prerequisites) and adding the certs to vernemq.CAcert and the related values in the Helm chart (by default, highly insecure certificates are included there)
  • You can adjust the ACL (access control list) by changing vernemq.AclConfig
  • We highly recommend not opening the unsecured port 1883 to the internet, as everyone can connect there anonymously (set vernemq.service.mqtt.enabled to false)
  • If you are using the VerneMQ binaries in production, you need to accept the VerneMQ EULA (which disallows using them in production without contacting them)
  • We recommend using 3 replicas on 3 different physical servers for high-availability setups

Redis

  • We recommend using 3 replicas
  • The password is generated once during setup and stored in the secret redis-secret

Nodered

  • We recommend disabling external access to nodered entirely and spawning a separate nodered instance for every project (to avoid one node crashing all flows)
  • You can change the configuration in nodered.settings
  • We recommend setting a password for accessing the web interface in nodered.settings

Grafana

  • We recommend two replicas on two separate physical servers
  • We also recommend changing the database password in grafana.grafana.ini.database.password (the database will automatically use this value during database setup)

mqtt-to-postgresql

  • We recommend at least two replicas on two separate physical servers
  • It uses the same database access as factoryinsight, so if you want to switch it, you can do so in factoryinsight

factoryinsight

  • We recommend at least two replicas on two separate physical servers
  • We strongly recommend that you change the passwords (they will automatically be used across the system, e.g. in the Grafana plugin)

2 - Concepts

The software of the United Manufacturing Hub is designed as a modular system. Our software serves as a basic building block for connecting and using various hardware and software components quickly and easily. This enables flexible use and thus the possibility to create comprehensive solutions for various challenges in the industry.

Architecture

Edge-device / IoT gateway

As a central hardware component we use an edge device which is connected to different data sources and to a server. The edge device is an industrial computer on which our software is installed. The customer can either use the United factorycube offered by us or his own IoT gateway.

More information about our certified devices can be found on our website

Examples:

  • Factorycube
  • Cubi

Data acquisition

The data sources connected to the edge device provide the foundation for automatic data collection. The data sources can be external sensors (e.g. light barriers, vibration sensors), input devices (e.g. button bars), Auto-ID technologies (e.g. barcode scanners), industrial cameras and other data sources such as machine PLCs. The wide range of data sources allows the connection of all machines, either directly via the machine PLC or via simple and fast retrofitting with external sensors.

More information can be found in the technical documentation of the edge helm chart factorycube-edge

Examples:

  • sensorconnect
  • barcodereader

Data processing

The software installed on the edge device receives the data from the individual data sources. Using various data processing services and “node-red”, the imported data is preprocessed and forwarded to the connected server via the MQTT broker.

More information can be found in the technical documentation of the edge and server helm chart factorycube-edge factorycube-server

Examples:

  • node-red

Data storage

The data forwarded by the edge device can either be stored on the customer’s servers or, in the SaaS version, in the United Cloud hosted by us. Relational data (e.g. data about orders and products) as well as time series data in high resolution (e.g. machine data like temperature) can be stored.

More information can be found in the technical documentation of the server helm chart factorycube-server

Examples:

  • TimescaleDB

Data usage

The stored data is automatically processed and provided to the user via a Grafana dashboard or other computer programs via a Rest interface. For each data request, the user can choose between raw data and various pre-processed data such as OEE, MTBF, etc., so that every user (even without programming knowledge) can quickly and easily compose personally tailored dashboards with the help of modular building blocks.

More information can be found in the technical documentation of the server helm chart factorycube-server

Examples:

  • Grafana
  • factoryinsight

Practical implications

Edge devices

Typically you have multiple data sources like sensorconnect or barcodereader, which are packaged in Docker containers. They all send their data to the MQTT broker. You can now process the data in node-red by subscribing to the data sources via MQTT, processing the data, and then writing it back.

Server

Database access

The database on the server side should never be accessed directly by a service except mqtt-to-postgresql and factoryinsight. Instead, these services should be modified to include the required functionalities.

2.1 - The UMH datamodel / MQTT

All events or subsequent changes in production are transmitted via MQTT in the following data model

Introduction

All events or subsequent changes in production are transmitted via MQTT in the following data model. This ensures that all participants are always informed about the latest status.

The data model in the MQTT Broker can be divided into four levels. In general, the higher the level, the lower the data frequency and the more the data is prepared.

If you do not know the idea of MQTT (important keywords: “broker”, “subscribe”, “publish”, “topic”), we recommend reading the wikipedia article first.

All MQTT messages consist of one JSON with at least two elements in it.

  1. timestamp_ms: the number of milliseconds since 1970-01-01 (also called the UNIX timestamp in milliseconds)
  2. <valueName>: a value

Some messages might deviate from it, but this will be noted explicitly. All topics are to be written in lower case only!
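As a sketch, such a message could be built and checked like this (the helper build_message is illustrative, not part of the stack):

```python
import json
import time

# Illustrative helper (not part of the stack): build a minimal UMH-style
# message consisting of timestamp_ms plus one named value.
def build_message(value_name, value):
    return json.dumps({"timestamp_ms": int(time.time() * 1000), value_name: value})

msg = json.loads(build_message("distance", 16))
print(sorted(msg.keys()))  # ['distance', 'timestamp_ms']
```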

1st level: Raw data

Here you will find all raw data that is not yet contextualized, i.e. assigned to a machine. This is in particular all data from sensorconnect.

Topic: ia/raw/

All raw data coming in via sensorconnect.

Topic structure: ia/raw/<transmitterID>/<gatewaySerialNumber>/<portNumber>/<IOLinkSensorID>

Example for ia/raw/

Topic: ia/raw/2020-0102/0000005898845/X01/210-156

This means that the transmitter with the serial number 2020-0102 has one ifm gateway connected to it with the serial number 0000005898845. This gateway has the sensor 210-156 connected to the first port X01.

{
    "timestamp_ms": 1588879689394,
    "distance": 16
}

2nd level: contextualized data

In this level the data is already assigned to a machine.

Topic structure: ia/<customerID>/<location>/<AssetID>/<Measurement> e.g. ia/dccaachen/aachen/demonstrator/count.

An asset can be a machine, plant or line (Explicitly not a single station of an assembly cell).

By definition all topic names should be lower case only!
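A minimal sketch of building such a topic while enforcing the lower-case convention (the helper name is ours, not part of the stack):

```python
# Illustrative helper enforcing the convention above: four fixed levels
# under "ia/", all lower case.
def contextualized_topic(customer_id, location, asset_id, measurement):
    return "/".join(["ia", customer_id, location, asset_id, measurement]).lower()

print(contextualized_topic("dccaachen", "aachen", "demonstrator", "count"))
# ia/dccaachen/aachen/demonstrator/count
```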

/count

Topic: ia/<customerID>/<location>/<AssetID>/count

Here a message is sent every time something has been counted. This can be, for example, a good product or scrap.

count in the JSON is an integer. scrap in the JSON is an optional integer; it means that scrap pieces of count are scrap. If not specified, it is 0 (all produced goods are good).

Example for /count

{
    "timestamp_ms": 1588879689394, 
    "count": 1
}

/scrapCount

Topic: ia/<customerID>/<location>/<AssetID>/scrapCount

Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting with the count directly before timestamp_ms, the existing counts are iterated over step by step, going back in time, and are set to scrap until a total of scrap products have been scrapped.

Important notes:

  • You can specify a maximum of 24h to be scrapped, to avoid accidents
  • (NOT IMPLEMENTED YET) If a count does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it should scrap exactly 2. Currently it would ignore these 2. See also #125
  • (NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap. //TODO

scrap in the JSON is an integer.

Example for /scrapCount

{
    "timestamp_ms": 1588879689394, 
    "scrap": 1
}
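The back-iteration described above can be sketched as follows (simplified; mirroring the currently implemented behavior, count rows larger than the remaining scrap amount are skipped):

```python
# Sketch of the /scrapCount back-iteration (simplified). Counts are
# walked backwards in time starting at timestamp_ms; a count row is
# marked as scrap only if it fits completely into the remaining scrap
# amount, mirroring the currently implemented behavior.
def apply_scrap(counts, scrap, timestamp_ms):
    remaining = scrap
    for row in sorted(counts, key=lambda r: r["timestamp_ms"], reverse=True):
        if row["timestamp_ms"] > timestamp_ms or remaining <= 0:
            continue
        if row["count"] <= remaining:
            row["scrap"] = row["count"]   # whole row becomes scrap
            remaining -= row["count"]
    return counts
```

For example, with count rows of 2 and 1 pieces and a scrap message of 3, both rows end up fully marked as scrap.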

/barcode

Topic: ia/<customerID>/<location>/<AssetID>/barcode

A message is sent here each time the barcode scanner connected to the transmitter via USB reads a barcode via barcodereader.

barcode in the JSON is a string.

Example for /barcode

{
    "timestamp_ms": 1588879689394, 
    "barcode": "16699"
}

/activity

Topic: ia/<customerID>/<location>/<AssetID>/activity

A message is sent here every time the machine runs or stops (independent of whether it runs slow or fast, or of the reason for the stop; this is covered in state).

activity in the JSON is a boolean.

Example for /activity

{
    "timestamp_ms": 1588879689394, 
    "activity": true
}

/detectedAnomaly

Topic: ia/<customerID>/<location>/<AssetID>/detectedAnomaly

A message is sent here each time a stop reason has been identified automatically or by input from the machine operator (independent of whether the machine runs slow or fast, or of the reason for the stop; this is covered in state).

detectedAnomaly in the JSON is a string.

Example for /detectedAnomaly

{
    "timestamp_ms": 1588879689394, 
    "detectedAnomaly": "maintenance"
}

/addShift

Topic: ia/<customerID>/<location>/<AssetID>/addShift

A message is sent here each time a new shift is started.

timestamp_ms_end in the JSON is an integer representing a UNIX timestamp in milliseconds.

Example for /addShift

{
    "timestamp_ms": 1588879689394, 
    "timestamp_ms_end": 1588879689395
}

/addOrder

Topic: ia/<customerID>/<location>/<AssetID>/addOrder

A message is sent here each time a new order is started.

product_id in the JSON is a string representing the current product name. order_id in the JSON is a string representing the current order name. target_units in the JSON is an integer and represents the number of target units to be produced (in the same unit as count).

Attention:

  1. the product needs to be added before adding the order. Otherwise, this message will be discarded
  2. one order is always specific to that asset and can, by definition, not be used across machines. For this case one would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100m batch gets split up into multiple pieces)

Example for /addOrder

{
    "product_id": "Beierlinger 30x15",
    "order_id": "HA16/4889",
    "target_units": 1
}

/addProduct

Topic: ia/<customerID>/<location>/<AssetID>/addProduct

A message is sent here each time a new product is added.

product_id in the JSON is a string representing the current product name. time_per_unit_in_seconds in the JSON is a float specifying the target time per unit in seconds.

Attention: See also notes regarding adding products and orders in /addOrder

Example for /addProduct

{
    "product_id": "Beierlinger 30x15",
    "time_per_unit_in_seconds": 0.2
}
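Because the product must exist before an order referencing it can be added, the two messages have to be published in sequence. A minimal sketch of that ordering (topic prefix and IDs are placeholders; the commented-out publish calls assume a connected paho-mqtt client, which is one common choice, not a requirement):

```python
import json

# Messages must be sent in this order: the product first, then the order.
product_msg = ("ia/factoryA/aachen/line1/addProduct",
               json.dumps({"product_id": "Beierlinger 30x15",
                           "time_per_unit_in_seconds": 0.2}))
order_msg = ("ia/factoryA/aachen/line1/addOrder",
             json.dumps({"product_id": "Beierlinger 30x15",
                         "order_id": "HA16/4889",
                         "target_units": 1}))

# With a connected MQTT client this would be:
#   client.publish(*product_msg)  # must come first, or addOrder is discarded
#   client.publish(*order_msg)
for topic, payload in (product_msg, order_msg):
    print(topic, payload)
```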

/startOrder

Topic: ia/<customerID>/<location>/<AssetID>/startOrder

A message is sent here each time a new order is started.

order_id in the JSON is a string representing the order name.

Attention:

  1. See also notes regarding adding products and orders in /addOrder
  2. When startOrder is executed multiple times for an order, the last used timestamp is used.

Example for /startOrder

{
    "timestamp_ms": 1588879689394,
    "order_id": "HA16/4889"
}

/endOrder

Topic: ia/<customerID>/<location>/<AssetID>/endOrder

A message is sent here each time an order is ended.

order_id in the JSON is a string representing the order name.

Attention:

  1. See also notes regarding adding products and orders in /addOrder
  2. When endOrder is executed multiple times for an order, the last used timestamp is used.

Example for /endOrder

{
    "timestamp_ms": 1588879689394,
    "order_id": "HA16/4889"
}

/processValue

Topic: ia/<customerID>/<location>/<AssetID>/processValue

A message is sent here every time a process value has been prepared. The key (<valueName>) must be named uniquely.

<valueName> in the JSON is an integer or float representing a process value, e.g. temperature.

Example for /processValue

{
    "timestamp_ms": 1588879689394, 
    "energyConsumption": 123456
}
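Since the value key is freely named, a process value payload simply pairs the timestamp with one such key. A minimal sketch (the value name energyConsumption follows the example above; the reading itself is made up):

```python
import json
import time

def process_value_payload(value_name: str, value) -> str:
    """Serialize a single process value for the /processValue topic."""
    return json.dumps({
        "timestamp_ms": int(time.time() * 1000),  # UNIX timestamp in ms
        value_name: value,
    })

payload = process_value_payload("energyConsumption", 123456)
```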

3rd level: production data

This level contains only highly aggregated production data.

/state

Topic: ia/<customerID>/<location>/<AssetID>/state

A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here

state in the JSON is an integer according to this datamodel

Example for /state

{
    "timestamp_ms": 1588879689394, 
    "state": 10000
}

/cycleTimeTrigger

Topic: ia/<customerID>/<location>/<AssetID>/cycleTimeTrigger

A message should be sent under this topic whenever an assembly cycle is started.

  • currentStation in the JSON is a string
  • lastStation in the JSON is a string
  • sanityTime_in_s in the JSON is an integer

Example for /cycleTimeTrigger

{
  "timestamp_ms": 1611170736684,
  "currentStation": "1a",
  "lastStation": "1b",
  "sanityTime_in_s": 100
}

/uniqueProduct

Topic: ia/<customerID>/<location>/<AssetID>/uniqueProduct

A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.

  • UID: unique ID of the current single product
  • isScrap: whether the current product is of poor quality and will be sorted out
  • productID: the product that is currently being produced
  • begin_timestamp_ms: start time
  • end_timestamp_ms: completion time
  • stationID: if the asset has several stations, the station at which the product was created (optional)

Example for /uniqueProduct

{
  "begin_timestamp_ms": 1611171012717,
  "end_timestamp_ms": 1611171016443,
  "productID": "test123",
  "UID": "161117101271788647991611171016443",
  "isScrap": false,
  "stationID": "1a"
}

/scrapUniqueProduct

Topic: ia/<customerID>/<location>/<AssetID>/scrapUniqueProduct

A message is sent here each time a unique product has been scrapped.

UID: Unique ID of the current single product.

Example for /scrapUniqueProduct

{
  "UID": "161117101271788647991611171016443"
}

4th level: Recommendations for action

/recommendations

Topic: ia/<customerID>/<location>/<AssetID>/recommendations

Shopfloor insights are recommendations for action that require concrete and rapid intervention in order to quickly eliminate efficiency losses on the shop floor.

  • recommendationUID: unique ID of the recommendation; used to subsequently deactivate a recommendation (e.g. if it has become obsolete)
  • recommendationType: the ID / category of the current recommendation; used to narrow down the group of people addressed
  • recommendationValues: values used to form the actual recommendation

Example for /recommendations

{
    "timestamp_ms": 1588879689394,
    "recommendationUID": 3556,
    "recommendationType": 8996,
    "enabled": true,
    "recommendationValues": 
    {
        "percentage1": 30, 
        "percentage2": 40
    }
}

in development

/qualityClass

A message is sent here each time a product is classified.

qualityClass 0 and 1 are defined by default:

qualityClass | Name | Description                                        | Color in the automatic traffic-light visualization
0            | Good | The product meets the quality requirements         | Green
1            | Bad  | The product does not meet the quality requirements | Red

qualityClass 2 and higher are freely selectable:

qualityClass | Name                       | Description                | Color in the automatic traffic-light visualization
2            | Cookie center broken       | Cookie center broken       | Freely selectable
3            | Cookie has a broken corner | Cookie has a broken corner | Freely selectable

Example for /qualityClass

{
    "timestamp_ms": 1588879689394, 
    "qualityClass": 1
}

/detectedObject

in progress

Under this topic, a detected object is published from the object detection. Each object is enclosed by a rectangular field in the image. The position and dimensions of this field are stored in rectangle. The type of detected object can be retrieved with the keyword object. Additionally, the prediction accuracy for this object class is given as confidence. The requestID is only used for traceability and assigns each recognized object to a request/query, i.e. to an image. All objects with the same requestID were detected in one image capture.

{
  "timestamp_ms": 1588879689394,
  "detectedObject": {
    "rectangle": {
      "x": 730,
      "y": 66,
      "w": 135,
      "h": 85
    },
    "object": "fork",
    "confidence": 0.501
  },
  "requestID": "a7fde8fd-cc18-4f5f-99d3-897dcd07b308"
}

/cycleTimeScrap

Under this topic a message should be sent whenever an assembly at a certain station should be aborted because the part has been marked as defective.

{
    "timestamp_ms": 1588879689394,
    "currentStation": "StationXY"
}

2.2 - Available states for assets

This data model maps various machine states to relevant OEE buckets.

Introduction

This data model is based on the following specifications:

  • Weihenstephaner Standards 09.01 (for filling)
  • Omron PackML (for packaging/filling)
  • EUROMAP 84.1 (for plastic)
  • OPC 30060 (for tobacco machines)
  • VDMA 40502 (for CNC machines)

Additionally, the following literature is respected:

  • Steigerung der Anlagenproduktivität durch OEE-Management (Focke, Steinbeck)

Abbreviations

  • WS –> “TAG NAME”: Valuename (number)
  • PackML –> Statename (number)
  • EUROMAP –> Statusname (number)
  • Tobacco –> ControlModeName (number)

ACTIVE (10000-29999)

The asset is actively producing.

10000: ProducingAtFullSpeedState

The asset is running on full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

The asset is NOT running on full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

NOT INCLUDED FOR NOW:

  • WS_Prog_Step: all
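The mappings listed above can be expressed directly in code. A sketch of such a lookup table (only a subset of the examples from this section; 30000 is the UnknownState fallback defined below):

```python
# Vendor-specific state names mapped to UMH state codes, taken from the
# examples in this section. Extend as needed for your own assets.
WS_CUR_STATE_TO_UMH = {
    "Operating": 10000,   # ProducingAtFullSpeedState
    "Stopping": 20000,    # ProducingAtLowerThanFullSpeedState
    "Aborting": 20000,
    "Holding": 20000,
}

PACKML_TO_UMH = {
    "Execute": 10000,
    "Stopping": 20000,
    "Aborting": 20000,
}

def to_umh_state(packml_state: str) -> int:
    # Fall back to 30000 (UnknownState) when no mapping exists
    return PACKML_TO_UMH.get(packml_state, 30000)
```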

UNKNOWN (30000-59999)

The asset is in an unspecified state.

30000: UnknownState

We do not have any data for that asset (e.g. connection to PLC aborted).

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but we do not know why (yet).

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around 5 minutes), but we do not know why (yet).

MATERIAL (60000-99999)

The asset has issues with materials.

60000: InletJamState

The machine does not perform its intended function due to a lack of material flow in its infeed, detected by the sensor system of the control system (machine stop). For machines with several inlets, the lack-in-inlet condition refers to the main flow, i.e. to the material (crate, bottle) fed in the direction of the filling machine (central machine). A defect in the infeed is an external fault, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good-flow discharge of the machine, detected by the sensor system of the control system (machine stop). For machines with several discharges, the jam-in-discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed towards or away from the filling machine (central machine). A jam in the outfeed is an external fault, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine detected by the sensor system of the control system (machine stop). This condition can only occur in machines that have two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but is recorded separately due to its importance for visualization and technical reporting.

Examples for CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information about which material is lacking)
  • PackML/Tobacco: Suspended

PROCESS (100000-139999)

The asset is in a stop that belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently being emptied, e.g. to prevent mold in food products over longer breaks such as the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

The machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

OPERATOR (140000-159999)

The asset is stopped because of the operator.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is on a break. Note: this is different from a planned shift break, as it can count towards performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

PLANNING (160000-179999)

The asset is stopped as it is planned to stop (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

TECHNICAL (180000-229999)

The asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE

2.3 - Open source in Industrial IoT: an open and robust infrastructure instead of reinventing the wheel.

How we are keeping up with the established players in Industrial IoT and why we believe the United Manufacturing Hub is changing the future of Industrial IoT and Industry 4.0 with the help of Open Source.

Image author: Christopher Burns from Unsplash

How do we keep up with the big players in the industry despite limited resources and small market share? The best way to do this is to break new ground and draw on the collective experience of organizations and their specialists instead of trying to reinvent the wheel.

The collaborative nature of open source enables companies and individuals alike to turn their visions into reality and keep up with established players such as Siemens, Microsoft, and Rockwell, even without a large number of programmers and engineers. This is the path we are taking at United Manufacturing Hub.

Open source software has long since outgrown its insider status and has become a veritable trend that is becoming the standard in more and more industries. Many applications that are common and heavily used in the IT world (e.g. Kubernetes, TensorFlow, f-prime by NASA 1) emerged through a collaborative approach and are available for free.

Open source on Mars: the Mars helicopter Ingenuity relies heavily on open-source components like f-prime. Image author: JPL/NASA

Typically, these applications are not yet ready for production or Industry 4.0 use. Some, such as Grafana, are intended for completely different industries (Observability & Monitoring).

However, the source code of these software projects is freely accessible to everyone and can be individually adapted to specific needs. Thus, the application in the Industrial IoT is also no problem. In part, those applications were programmed over decades 2 by several thousand developers and are continuously developed further 3.

The status quo

Today, it is common to develop proprietary Industrial IoT programs and software platforms - the opposite of open source.

One reason is that companies do not want foreign code in their applications and want to offer the customer a self-made, end-to-end solution.

It is common for a team of 20 or even 30 people to be assigned to develop a dashboard or IoT gateway, with the focus on a pretty-looking (usually self-branded) user interface (UI) and design. Existing open source solutions or automation standards are rarely built upon.

Self-developed, in-house architectures are often strongly influenced by company-specific know-how and therefore usually also favor the company’s own products and services in their interfaces.

The result: the wheel is often reinvented in both the software and hardware areas. The resulting architectures create a lock-in effect that leads to a dependency of the manufacturing companies on their software and hardware suppliers.

Reinventing the wheel: The software world

In our opinion, good examples in the category “reinvented the wheel” from the software world are:

  1. Self-developed visualizations such as visualizations from InfluxDB, PI Vision from OSIsoft or WAGO IoT Cloud Visualization (instead of Grafana).

  2. Flow-based low code / no code apps such as Wire Graph by Eurotech (instead of node-red)

  3. The bulk of Industrial IoT platforms that are claiming to be a “one-stop solution.” Such platforms are trying to cover every aspect from data acquisition, over processing, to visualization with in-house solutions (instead of relying on established technologies and just filling the gaps in the stack).

Both Grafana and node-red are highly professional solutions in their respective fields and have been used in various software projects for several years. Orchestrating such specialized applications means that proven and tested solutions can be put to good use.

Reinventing the wheel: The hardware world

There are numerous examples in the Industrial IoT hardware world where there is a conscious or unconscious deviation from established industry standards of the automation industry.

We have particularly noticed this with vendors in the field of Overall Equipment Effectiveness (OEE) and production overviews. Although they usually have very good dashboards, they still rely on self-developed microcontrollers combined with consumer tablets (instead of established automation standards such as a PLC or an industrial edge PC) for the hardware. In this case the microcontroller, usually called an IoT gateway, is considered a black box, and the end customer only gets access to the device in rare cases.

The advantages cannot be denied:

  1. the system is easy to use,
  2. usually very inexpensive,
  3. and requires little prior knowledge.

Unfortunately, these same advantages can also become disadvantages:

  1. the in-house system integrator and regular suppliers are unable to work with the system, as it has been greatly simplified.
  2. all software extensions and any problems that appear, such as integrating the system with the rest of the IT landscape (e.g. an ERP system), must be handled by the respective supplier. This creates a one-sided market power (see also Lock-In).

Another problem that arises when deviating from established automation standards: a lack of reliability.

Normally, the system always needs to work, because failures lead to production downtime (the operator must report the problem). The machine operator just wants to press a button to record a stop reason or to get the desired information. He does not want to deal with WLAN problems, browser updates, or updated privacy policies on a consumer tablet.

The strongest argument: Lock-In

In a newly emerging market, it is especially important for a manufacturing company not to make itself dependent on individual providers. Not only to be independent if a product/company is discontinued but also to be able to change providers at any time.

Particularly pure SaaS (Software-as-a-Service) providers should be handled with caution:

  • A SaaS offering typically uses a centralized cloud-based server infrastructure for multiple customers simultaneously. By its very nature, this makes it difficult to integrate into the IT landscape, e.g., to link with the MES system installed locally in the factory.
  • In addition, a change of provider is practically only possible with large-scale reconfiguration/redevelopment.
  • Lastly, there is a concern regarding the data ownership and security of closed systems and multiple SaaS offerings.

To exaggerate slightly in order to make the point: it is important to prevent highly sensitive production data with protected process parameters from reaching competitors.

One might think that the manufacturing company is initially entitled to all rights to the data - after all, it is the company that “produced” the data.

In fact, according to the current situation, there is no comprehensive legal protection of the data, at least in Germany, if this is not explicitly regulated by contract, as the Verband der deutschen Maschinenbauer (VDMA) (Association of German Mechanical Engineering Companies) admits 4.

Even when it comes to data security, some people feel queasy about handing over their data to someone else, possibly even a US startup. Absolutely rightly so, says the VDMA, because companies based in the USA are obliged to allow US government authorities access to the data at any time 5.

An open source project can give a very good and satisfactory answer here:

United Manufacturing Hub users can always develop the product further without the original developers, as the source code is fully open and documented.

All subcomponents are fully open and run on almost any infrastructure, from the cloud to a Raspberry Pi, always giving the manufacturing company control over all its data.

Interfaces with other systems are either included directly, greatly simplifying their development, or can be retrofitted by the user, without being tied to specific programming languages.

Unused potential

In the age of Industry 4.0, the top priority is for companies to operate as efficiently as possible by taking full advantage of their potential.

Open source software, unlike classic proprietary software, enables this potential to be fully exploited. Resources and hundreds of man-hours can be saved by using free solutions and standards from the automation industry.

Developing and offering a proprietary dashboard or IoT gateway that is reliable, stable, and free of bugs is wasting valuable time.

Another hundred, if not a thousand, man-hours are needed until all relevant features such as single sign-on, user management, or logging are implemented. Thus, it is not uncommon that even large companies, the market leaders in the industry, do not operate efficiently, and the resulting products are in the 6-to-7-digit price range.

But the efficiency goes even further:

Open source solutions also benefit from the fact that a community is available to help with questions. This service is rarely available with proprietary solutions. All questions and problems must be discussed with the multi-level support hotline instead of simply Googling the solution.

And so, unfortunately, most companies take a path that is anything but efficient. But isn’t there a better way?

United Manufacturing Hub’s open source approach.

Who says that you have to follow thought patterns or processes that everyone else is modeling? Sometimes it’s a matter of leaving established paths, following your own convictions, and initiating a paradigm shift. That is the approach we are taking.

We cannot compete with the size and resources of the big players. That is why we do not even try to develop in one or two years, with a team of 20 to 30 programmers, what large companies have developed in hundreds of thousands of hours.

But that is not necessary, because the resulting product is unlikely to keep up with the open source projects or established automation standards. The duplicated work is simply not worth the struggle.

The open source software code is freely accessible and thus allows maximum transparency and, at the same time, security. It offers a flexibility that is not reached by programs developed in the traditional way. By using open source software, the United Manufacturing Hub is taking an efficient way of developing. It allows us to offer a product of at least equal value but with considerably fewer development costs.

Example OEE dashboard created in Grafana

Simplicity and efficiency in the age of Industrial IoT.

At United Manufacturing Hub, we combine open source technologies with industry-specific requirements. To do this, we draw on established software such as Docker, Kubernetes or Helm 1 and create, for example, data models, algorithms, and KPIs (e.g. the UMH data model, the factoryinsight and mqtt-to-postgresql components) that are needed in the respective industries.

By extracting all data from machine controls (OPC/UA, etc.), we ensure the management and distribution of data on the shop floor. If additional data is needed, we offer individual solutions using industry-specific certified sensor retrofit kits, for example at a steel manufacturer. More on this in one of the later parts of this series.

Summary

Why should we reinvent the wheel when we can focus our expertise on the areas we can provide the most value to our customers?

Leveraging open source solutions allows us to provide a stable and robust infrastructure that enables our customers to meet the challenges of Industrial IoT.

Because, in fact, manufacturing and Industrial IoT is not about developing new software at the drop of a hat. It is more about solving individual problems and challenges. This is done by drawing on a global network of experts who have developed special applications in their respective fields. These applications allow all hardware and software components to be quickly and easily established in the overall architecture through a large number of interfaces.


  1. For the common technologies see also Understanding the technologies. ↩︎

  2. https://www.postgresql.org/docs/current/history.html ↩︎

  3. https://github.com/kubernetes/kubernetes ↩︎

  4. Leitfaden Datennutzung. Orientierungshilfe zur Vertragsgestaltung für den Mittelstand. Published by VDMA in 2019. ↩︎

  5. *Digitale Marktabschottung: Auswirkungen von Protektionismus auf Industrie 4.0 * Published by VDMA’s Impulse Foundation in 2019. ↩︎

2.4 - Why we chose timescaleDB over InfluxDB

TimescaleDB is better suited for the Industrial IoT than InfluxDB, because it is stable, mature, and failure-resistant, it uses SQL, a very widely known query language, and you need a relational database for manufacturing anyway.

Introduction

The introduction and implementation of an Industrial IoT strategy is already complicated and tedious. There is no need to put unnecessary obstacles in the way through lack of stability, new programming languages, or more databases than necessary. You need a piece of software that you can trust with your company’s most important data.

We are often asked why we chose timescaleDB instead of InfluxDB. Both are time-series databases suited for large amounts of machine and sensor data (e.g., vibration or temperature).

We started with InfluxDB (probably due to its strong presence in the home automation and Grafana communities) and then ended up with timescaleDB based on three arguments. In this article, we would like to explain our decision and provide background information on why timescaleDB makes the most sense for the United Manufacturing Hub.

Argument 1: Reliability & Scalability

A central requirement for a database: it cannot lose or corrupt your data. Furthermore, as a central element in an Industrial IoT stack, it must scale with growing requirements.

TimescaleDB

TimescaleDB is built on PostgreSQL, which has been continuously developed for over 25 years and has a central place in the architecture of many large companies like Uber, Netflix, Spotify or reddit. This has created a fault-tolerant database that can scale horizontally across multiple servers. In short: it is boring and works.

InfluxDB

In contrast, InfluxDB is a relatively young startup that has raised 119.9 M USD in funding (as of 2021-05-03) but still does not have 25+ years of expertise to fall back on.

On the contrary: Influx has completely rewritten the database twice in the last 5 years 1 2. Rewriting software can improve fundamental issues or add exciting new features. However, it is usually associated with breaking changes in the API and new unintended bugs. This results in additional migration projects, which take time and risk system downtime or data loss.

Due to its massive funding, we get the impression that they add quite a lot of exciting new features and functionalities (e.g., an own visualization tool). However, after testing, we noticed that the stability suffers under these new features.

In addition, Influx only offers the horizontally scalable version of the database in the paid edition, which will scare off companies wanting to use it on a larger scale, as they become fully dependent on the provider of that software (vendor lock-in).

Summary

With databases, the principle applies: Better boring and working than exciting and unreliable.

We can also strongly recommend an article by timescaleDB.

Argument 2: SQL is better known than flux

The second argument refers to the query language, i.e., the way information can be retrieved from the database.

SQL (timescaleDB)

TimescaleDB, like PostgreSQL, relies on SQL, the de facto standard language for relational databases. Advantages: A programming language established for over 45 years, which almost every programmer knows or has used at least once. Any problem? No problem, just Google it, and some smart person has already solved it on Stack Overflow. Integration with PowerBI? A standard interface that’s already integrated!

SELECT time, (memUsed / procTotal / 1000000) AS value
FROM measurements
WHERE time > NOW() - INTERVAL '1 hour';

Example SQL code to get the average memory usage for the last hour.

flux (InfluxDB)

InfluxDB, on the other hand, relies on the homegrown Flux, which is supposed to simplify time-series queries. It sees time-series data as a continuous stream upon which functions, calculations, and transformations are applied 3.

Problem: as a programmer, you have to rethink a lot because the language is flow-based and not based on relational algebra. It takes some time to get used to it, but it is still an unnecessary hurdle for those not-so-tech-savvy companies who already struggle with Industrial IoT.

From experience, we can also say that the language quickly reaches its limits. In the past, we worked with additional Python scripts that extract the data from InfluxDB via Flux, process it, and then write it back.

// Memory used (in bytes)
memUsed = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    r._measurement == "mem" and
    r._field == "used"
  )

// Total processes running
procTotal = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    r._measurement == "processes" and
    r._field == "total"
    )

// Join memory used with total processes and calculate
// the average memory (in MB) used for running processes.
join(
    tables: {mem:memUsed, proc:procTotal},
    on: ["_time", "_stop", "_start", "host"]
  )
  |> map(fn: (r) => ({
    _time: r._time,
    _value: (r._value_mem / r._value_proc) / 1000000
  })
)

Example Flux code for the same SQL code.

Summary

In summary, InfluxDB puts unnecessary obstacles in the way of not-so-tech-savvy companies with flux, while PostgreSQL relies on SQL, which just about every programmer knows.

We can also strongly recommend the blog post by timescaleDB on exactly this topic.

Argument 3: relational data

Finally, the argument that is particularly important for production: Production data is more relational than time-series based.

Relational data is, simply put, all table-based data that you can store in Excel in a meaningful way, for example, shift schedules, orders, component lists, or inventory.

Relational data. Author: AutumnSnow, License: CC BY-SA 3.0

TimescaleDB provides this by default through the PostgreSQL base, whereas with InfluxDB, you always have to run a second relational database like PostgreSQL in parallel.

If you have to run two databases anyway, you can reduce complexity and directly use PostgreSQL/timescaleDB.

Not an argument: Performance for time-series data

Often the duel between TimescaleDB and InfluxDB is fought on the performance level. Both databases are efficient, and 30% better or worse does not matter when both are 10x-100x faster4 than classical relational databases like PostgreSQL or MySQL.

Even if it is not decisive, there is strong evidence that TimescaleDB is actually more performant. Both databases regularly compare their performance against other databases, yet InfluxDB never compares itself to TimescaleDB. TimescaleDB, however, has published a detailed performance comparison with InfluxDB.

Summary

Who do you trust more: the nerdy, boring accountant, or the good-looking one with 25 exciting new tools?

The same goes for databases: Boring is awesome.


  1. https://www.influxdata.com/blog/new-storage-engine-time-structured-merge-tree/ ↩︎

  2. https://www.influxdata.com/blog/influxdb-2-0-open-source-beta-released/ ↩︎

  3. https://www.influxdata.com/blog/why-were-building-flux-a-new-data-scripting-and-query-language/ ↩︎

  4. https://docs.timescale.com/latest/introduction/timescaledb-vs-postgres ↩︎

3 - Examples

This section gives an overview of the various showcases that we have already done. It provides a quick summary, including a picture, for every showcase. More details can be found in the subsequent documents.

Metalworking industry

Flame cutting & blasting

Retrofitting of 11 flame cutting machines and blasting systems at two locations using sensors, barcode scanners and button bars to extract and analyze operating data.

See also the detailed documentation.

Identification of the optimization potential of two CNC milling machines

Two-month analysis of CNC milling machines and identification of optimization potentials. Automatic data acquisition coupled with interviews of machine operators and shift supervisors revealed various optimization potentials.

See also the detailed documentation.

Textile industry

Cycle time monitoring

See also the detailed documentation.

Retrofitting of weaving machines for OEE calculation

Retrofitting of weaving machines that do not provide data via the PLC to extract operating data. Subsequent determination of the OEE and a detailed breakdown of the individual key figures.

See also the detailed documentation.

Filling & packaging industry

Performance management in a brewery

Retrofit of a bottling line for different beer types. Focus on the identification of microstop causes and exact delimitation of the bottleneck machine.

See also the detailed documentation.

Retrofit of a Japanese pharmaceutical packaging line

Retrofit of a Japanese pharmaceutical packaging line for automatic analysis of microstop causes as well as to relieve the machine operators of data recording.

See also the detailed documentation.

Quality monitoring in a filling line

quality monitoring

TODO: #69 add short description for DCC quality check

See also the detailed documentation.

Semiconductor industry

Identification of optimization potential in the context of the COVID-19 crisis

Use of the factorycube for rapid analysis of bottleneck stations. The customer was thus able to increase the throughput of critical components for ventilators during the COVID-19 crisis.

See also the detailed documentation.

3.1 - Flame cutting & blasting

This document describes the flame cutting & blasting use case

Profile

Industry: Steel Industry
Employees: >1000
Number of retrofitted sites: 2
Project duration: 6 months
Number of retrofitted machines: 11
Types of machines retrofitted: Plasma cutting machines, oxyfuel cutting machines, shot blasting machines

Photos

Challenges

Lack of transparency about production processes

  • Lead times are unknown
  • No comparison between target and actual times
  • Duration and causes of machine downtimes are unclear

Heterogeneous machinery and machine controls from different manufacturers

  • Only minimal connection of the machines to the ERP system
  • Manual input of production data into the ERP system
  • Machine controls are partially locked by manufacturers
  • Machine controls use different protocols

Reliable production planning and quotation generation not possible

  • No data on past and current machine utilization available
  • Quotation preparation is done with theoretical target times, as no information about actual times is available

Solution

Integration

TODO: #68 add integration for flame cutting

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352
  • ifm AL1350

Light barriers

Light barriers are installed on cutting machines and are activated when the cutting head is lowered and in use. Used to measure machine conditions, cycle times and piece counts.

Models:

  • ifm O5D100 (Optical distance sensor)
  • ifm O1D108 (Optical distance sensor)
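Conceptually, deriving piece counts from such a binary light-barrier signal amounts to counting rising edges. The following is a simplified illustration of that idea only, not the actual sensorconnect implementation:

```python
def count_cycles(samples):
    """Count completed cycles from a binary sensor signal.

    One cycle corresponds to one activation of the light barrier,
    i.e. a rising edge (0 -> 1) in the sampled signal.
    """
    edges = 0
    previous = 0
    for value in samples:
        if value == 1 and previous == 0:
            edges += 1
        previous = value
    return edges

# Cutting head lowered three times -> three pieces counted
signal = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(count_cycles(signal))  # 3
```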

Vibration sensors

Vibration sensors are installed on the beam attachments and detect the machine condition via vibration. Used to measure machine conditions.

Model:

  • ifm VTV122 (vibration transmitter)

Button bar

Button bar is operated by the machine operator in case of a machine standstill. Each button is assigned a reason for standstill. Used to identify the causes of machine downtime.

Model:

Barcode scanner

Barcode scanners are used to scan production orders, which contain the target times. Used to scan target times for target/actual comparison.

Model:

  • Datalogic PowerScan PD9531
  • Datalogic USB Cable Straight 2m (CAB-438)

Implemented dashboards

The customer opted for our SaaS offering. We created the following dashboards for the client.

Default navigation options from Grafana, which we modified to allow custom menus.

  1. Customizable menu lets you quickly navigate between dashboards
  2. In the time selection you can adjust the times for the current dashboard

Plant-manager dashboard

  1. Dashboard for the plant manager / shift supervisor, which gives an overview of the production in the factory
  2. For each machine the current machine status
  3. For each machine, the overall equipment effectiveness / OEE for the selected time period
  4. For each machine, a timeline showing the machine statuses in color
  5. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order

Machine deep dive

  1. Dashboard for the machine operator / shift supervisor, which displays the details for a machine
  2. The current machine status with time stamp
  3. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  4. An overview of the accumulated duration of each stop reason
  5. A timeline where the machine states are color coded
  6. A timeline where the shifts become visible
  7. A timeline where the orders are displayed
  8. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order
  9. Overview of the number of individual stop reasons

Cross-factory dashboard

  1. Dashboard for the cross-factory manager, who can use this to obtain an overview of the sites
  2. The overall equipment effectiveness / OEE for the selected time period for all machines.
  3. The minimum overall equipment effectiveness / OEE for the selected time period for machine type A.
  4. The average overall equipment effectiveness / OEE for the selected time period for machine type A
  5. The maximum overall equipment effectiveness / OEE for the selected period for machine type A
  6. Overview of all orders, including target/actual deviation and which stop reasons, including setup times, occurred during the order
  7. Export function as .csv

3.2 - Brewery

This document describes the brewery use case

Profile

Industry: Brewery
Employees: ~150
Number of retrofitted sites: 1
Project duration: 3 months
Number of retrofitted machines: 8
Types of machines retrofitted: Entire filling line (filler, labeler, palletizer, etc.)

Photos

Challenges

Lack of transparency about production processes

  • Duration and causes of machine downtimes are unclear
  • High proportion of smaller microstops with unknown cause
  • Exclusively reactive maintenance, as data on the condition of the components is lacking

Moving bottleneck

  • Since the production process is highly interlinked, a stoppage of a single machine can lead to a stoppage of the entire line
  • The bottleneck machine is difficult to identify, as it can shift during a shift and is hard to spot with the naked eye

High effort to collect data as part of the introduction of a continuous improvement process

  • Changeover times must be recorded manually with a stopwatch and are still not very standardized
  • No data on past and current machine utilization available
  • Maintenance actions are recorded manually; there is no automatic system to log, store and visualize error codes from the machines

Solution

Integration

At the beginning, a “BDE entry program” (BDE: German for production data acquisition) was carried out together with a lean consultancy to identify optimization potentials and to present our solution. For this purpose, the [factorycube] was installed at the filler within a few hours, in combination with tapping electrical signals from the control system and button bars. Connecting the PLC interfaces was initially ruled out for time and cost reasons. After the customer decided on a permanent solution, the factorycube was dismounted.

All machines were equipped with the “Weihenstephaner Standards”, a protocol commonly used in the German brewing industry, and were already connected within a machine network. Therefore, the installation was straightforward using our enterprise plugin for that protocol and one central server.

Installed hardware

Server

Implemented dashboards

The customer opted for our SaaS offering. We created the following dashboards for the client.

Default navigation options from Grafana, which we modified to allow custom menus.

  1. Customizable menu lets you quickly navigate between dashboards
  2. In the time selection you can adjust the times for the current dashboard

Plant-manager dashboard

Dashboard for the plant manager / shift supervisor, which gives an overview of the production in the factory

  1. For each machine the current machine status
  2. For each machine, the overall equipment effectiveness / OEE for the selected time period
  3. For each machine, a timeline showing the machine statuses in color

Performance cockpit

Dashboard for the supervisor to get an overview of the machine

  1. The current machine status
  2. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  3. The average changeover time
  4. The average cleaning time
  5. A timeline where the machine states are color coded
  6. A timeline where the shifts become visible
  7. A timeline with the machine speed
  8. Overview of the number of individual stop reasons, excluding technical defects as they are not relevant for the shift

Maintenance cockpit

Dashboard for the head of maintenance to get an overview of the machine

  1. The current machine status
  2. The overall equipment effectiveness / OEE for the selected time period, including trend over time
  3. The MTTR (mean time to repair), an important key figure for maintenance
  4. The MTBF (mean time between failures), an important key figure for maintenance
  5. A timeline where the machine states are color coded
  6. A timeline where the process value “bottle lock open/close” is visualized. This helps the maintenance manager isolate the cause of a problem more precisely.
  7. A timeline with the machine speed
  8. An overview of the accumulated duration of each stop reason that is relevant for maintenance
  9. Overview of the number of individual stop reasons that are relevant for maintenance
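MTTR and MTBF follow the standard textbook definitions; whether the dashboard computes them in exactly this way is an implementation detail, but the idea can be sketched as:

```python
def mttr(repair_durations_h):
    """Mean time to repair: average downtime per failure (textbook definition)."""
    return sum(repair_durations_h) / len(repair_durations_h)

def mtbf(total_uptime_h, number_of_failures):
    """Mean time between failures: total uptime divided by the failure count."""
    return total_uptime_h / number_of_failures

repairs_h = [0.5, 2.0, 1.5]          # duration of each repair, in hours
print(mttr(repairs_h))               # the average repair takes ~1.33 h
print(mtbf(160.0, len(repairs_h)))   # ~53.3 h of uptime between failures
```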

3.3 - Semiconductor

This document describes the semiconductor use case

Profile

Industry: Semiconductor industry
Employees: >1000
Number of retrofitted sites: 1
Project duration: 2 months
Number of retrofitted machines: 1
Types of machines retrofitted: Dispensing robot

Photos

Challenges

Increasing demand could not be fulfilled

  • the demand for the product, which was required for ventilators, increased by over 1000% due to the COVID-19 crisis
  • production was struggling to keep up with the ramp-up

Production downtime needed to be avoided at all costs

  • production downtime would have meant not fulfilling the demand

A quick solution was needed

  • to meet the demand, the company needed a quick solution and could not accept months of project time

Solution

Integration

We were given a 2h time slot by the company to install the sensors, from the time we entered the factory until the time we left (including safety briefings and ESD compliance checks). With the help of videos, we got an overview beforehand and created a sensor plan. During this time slot, we used the machine operator’s break to install all the sensors and verified the data during the subsequent machine run. Through VPN we were able to access the device and fine-tune the configuration.

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Ultrasonic sensor

picture TODO

The ultrasonic sensor was used to measure whether the robot was currently moving and thus whether the machine was running.

Models:

  • TODO

Proximity sensor

Proximity sensors were used to detect if the product was ready for operator removal. Together with the ultrasonic sensors, we were able to measure whether the machine was standing because the machine operator had not removed the product and was therefore not available.

Models:

  • ifm KQ6005

Button bar

Button bar is operated by the machine operator in case of a machine standstill. Each button is assigned a reason for standstill. Used to identify the causes of machine downtime.

Model:

  • Self-made, based on Siemens TODO

Implemented dashboards

The customer opted for our SaaS offering and an additional analysis of the data.

Dashboard screenshot

The customer opted for the SaaS solution and required only a very simple dashboard, as most insights were gained from a detailed analysis. The dashboard includes the functionality to export data as .csv.

Additional analysis

The data was exported to .csv and then analyzed in Python and Excel. Together with interviews of the operators and supervisors, we were able to extract multiple insights, including optimization potential through better alignment of the work processes and improvement of changeovers through single-minute exchange of die (SMED).
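Such an offline analysis can be done with the Python standard library alone. A minimal sketch that totals downtime per stop reason — the column names here are made up for illustration; the real export format may differ:

```python
import csv
import io
from collections import Counter

# Stand-in for the exported .csv file (columns are illustrative)
export = io.StringIO(
    "timestamp,stop_reason,duration_s\n"
    "2020-05-04 08:00,operator missing,120\n"
    "2020-05-04 09:10,changeover,600\n"
    "2020-05-04 10:30,operator missing,90\n"
)

# Accumulate the total stop duration per reason
total_per_reason = Counter()
for row in csv.DictReader(export):
    total_per_reason[row["stop_reason"]] += int(row["duration_s"])

print(total_per_reason.most_common())
# [('changeover', 600), ('operator missing', 210)]
```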

3.4 - Cycle time monitoring in an assembly cell

This document describes the cycle time monitoring use case

Profile

An assembly cell was retrofitted to measure and optimize cycle times. Customizable textile wristbands are produced in the assembly cell.

Photos of the machines

Challenges

Lack of information about production performance

  • Cycle times are unknown
  • Bottleneck of the assembly cell cannot be identified
  • No information about productivity of individual employees
  • Piece counts are not documented
  • No comparison between target and actual performance

Lack of transparency about downtimes

  • Frequency and duration of downtimes of the assembly cell are not recorded
  • Causes of downtime are often unknown and not documented

Connection of assembly cell to conventional systems not possible

  • Sewing machines do not have machine controls that could be connected

Solution

Integration

TODO: #66 Add integration for assembly analytics

Installed hardware

factorycube

factorycube sends the collected production data to the server. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Light barriers

Light barriers are installed on the removal bins and are activated when the employee removes material. Used to measure cycle time and material consumption.

Models:

  • ifm O5D100 (Optical distance sensor).

Proximity sensor

Proximity sensors on the foot switches of sewing machines detect activity of the process. Used to measure cycle time.

Models:

  • ifm KQ6005

Barcode scanner

The barcode scanner is used to scan the wristband at the beginning of the assembly process, marking the process start and identifying the product.

Model:

  • Datalogic PowerScan PD9531
  • Datalogic USB Cable Straight 2m (CAB-438)

Implemented dashboards

The customer opted for a combination of our SaaS offering with the building kit (and thus an on-premise option). The customer decided to go for PowerBI as a dashboard and connected it via the REST API with factoryinsight.

Used node-red flows

With the help of the Assembly Analytics nodes, it is possible to measure the cycle time of assembly cells in order to measure and continuously improve their efficiency, similar to machines.

Here is an exemplary implementation of those nodes:

There are 2 stations with a total of 4 cycles under consideration

Station 1 (AssemblyCell1):

1a: Starts with scanned barcode and ends when 1b starts

1b: Starts with a trigger at the pick to light station and ends when station 1a starts

Station 2 (AssemblyCell2):

2a: Starts when the foot switch at the 2nd station is pressed and ends when 2b starts

2b: Starts when the quality check button is pressed and ends when 2a starts.

Assumptions:

  • Unrealistically long cycle times are filtered out (cycle times over 20 seconds).
  • There is a button bar between the stations to end the current cycle and mark the product as scrap. The upper two buttons terminate the cycle of AssemblyCell1 and the lower ones that of AssemblyCell2. An aborted cycle creates a product that is marked as scrap.

Nodes explained:

  • Assembly Analytics Trigger: Cycles can be started with the help of the “Assembly Analytics Trigger” software module.

  • Assembly Analytics Scrap: With the help of the software module “Assembly Analytics Scrap”, existing cycles can be aborted and the produced good can be marked as “scrap”.

  • With the help of the software module “Assembly Analytics Middleware”, the software modules described above are processed into “unique products”.

Here you can download the flow described above
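The cycle logic described above — a cycle ends when the next one starts, and implausibly long cycles are filtered out — can be sketched roughly as follows. This is an illustration of the idea only, not the actual node implementation; the 20-second threshold is taken from the assumptions above:

```python
def cycle_times(trigger_timestamps, max_duration_s=20.0):
    """Compute cycle durations from consecutive start triggers.

    Each cycle ends when the next one starts (as with the 1a/1b and
    2a/2b cycles above). Cycles longer than max_duration_s are
    considered implausible and filtered out.
    """
    durations = []
    for start, end in zip(trigger_timestamps, trigger_timestamps[1:]):
        duration = end - start
        if duration <= max_duration_s:
            durations.append(duration)
    return durations

# Trigger timestamps in seconds; the 300 s gap is a break, not a real cycle
print(cycle_times([0.0, 12.5, 19.0, 319.0, 330.0]))  # [12.5, 6.5, 11.0]
```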

3.5 - Quality monitoring in a bottling line

This document describes the quality monitoring use case

Profile

A bottling line for filling water bottles was retrofitted with an artificial intelligence quality inspection system. With the help of a camera connected to an ia: factorycube, the bottles are checked for quality defects and sorted out by a pneumatic device in the event of a defect.

Photos of the machines

Challenges

Manual visual inspection causes high costs

  • Each individual bottle is checked for quality defects by an employee
  • One employee is assigned to each shift exclusively for quality inspection

Customer complaints and claims due to undetected quality defects

  • Various quality defects are difficult to detect with the naked eye and are occasionally overlooked

No data on quality defects that occur for product and process improvement

  • Type and frequency of quality defects are not recorded and documented
  • No data exists that can be analyzed to derive improvement measures for product and process optimization

Solution

Integration

TODO: #67 Add integration for DCC quality check

Installed hardware

factorycube

A machine learning model runs on the factorycube, which evaluates and classifies the images. See also [factorycube].

Gateways

Gateways connect the sensors to the factorycube.

Models:

  • ifm AL1352

Light barriers

A light barrier identifies the bottle and sends a signal to the factorycube to trigger the camera.

Models:

  • ifm O5D100 (Optical distance sensor)

Camera

A camera takes a picture of the bottle and sends it to the factorycube.

Models:

  • Allied Vision (Mako G-223)

Detectable quality defects

Automated action

As soon as a quality defect is detected, the defective bottle is automatically ejected by the machine.

3.6 - Pharma packaging

This document describes the pharma packaging use case

Profile

Industry: pharma industry
Employees:
Number of retrofitted sites:
Project duration:
Number of retrofitted machines:
Types of machines retrofitted:

TODO: #70 add pharma packaging case

3.7 - Weaving

TODO

Profile

Industry:
Employees:
Number of retrofitted sites:
Project duration:
Number of retrofitted machines:
Types of machines retrofitted:

TODO: #71 add weaving case

3.8 - CNC Milling

This document describes the CNC milling use case

TODO #65

4 - Tutorials

This section has tutorials and other documents, that do not fit into the other categories.

4.1 - Certified devices

This section contains tutorials related to our commercial certified devices.

4.1.1 - How to use machineconnect

This document explains how to install and use machineconnect

Purpose

machineconnect is our certified device used for the connection of PLCs and installed in the switch cabinet of the machine.

The idea behind machineconnect is to protect the PLC and all components with an additional firewall. Therefore, the PLC is not accessible from outside of machineconnect unless explicitly allowed in the firewall configuration.

Features

  • Industrial Edge Computer
    • With DIN rail mounting and 24V
    • Vibration resistant according to IEC 60068-2-27, IEC 60068-2-64/ MIL-STD-810, UNECE Reg.10 E-Mark, EN50155
    • Increased temperature range (-25°C ~ 70°C)
  • Open source core installed and provisioned according to customer needs (e.g. MQTT certificates) in production mode (using k3OS)
  • Additional security layer for your PLC by using OPNsense (incl. Firewall, Intrusion Detection, VPN)
  • 10 years of remote VPN access via our servers included

Physical installation

  1. Attach wall mounting brackets to the chassis
  2. Attach DIN Rail mounting brackets to the chassis
  3. Clip system to the DIN Rail
  4. Connect with 24V power supply
  5. Connect Ethernet 1 with WAN / Internet
  6. Connect Ethernet 3 with local switch (if existing). This connection will be called from now on “LAN”.
  7. (optional, see connection to PLC. If skipped please connect the PLC to Ethernet 3) Connect Ethernet 2 with PLC. This connection will be called from now on “PLC network”.

Verify the installation by turning on the power supply and checking whether all Ethernet LEDs are blinking.

Connection to the PLC

There are two options to connect the PLC. We strongly recommend option 1, but in some cases (e.g., the PLC has a fixed IP and communicates with engine controllers or an HMI whose IP addresses you cannot change) you need to go with option 2.

Option 1: The PLC retrieves its IP via DHCP

  1. Configure the PLC to retrieve the IP via DHCP
  2. Configure OPNsense to give out the same IP for the MAC-address of the PLC for LAN. Go to Services –> DHCPv4 –> LAN and add the PLC under “DHCP static mappings for this device”

Option 2: The PLC has a static IP, which cannot be changed

  1. Adding a new interface for the PLC network, e.g. “S7”.
  2. Adding a new gateway (see screenshot. Assuming 192.168.1.150 is the IP of the PLC and the above created interface is called “S7”)
  3. Adding a new route (see screenshot and assumptions of step 2)
  4. Changing NAT to “Manual outbound NAT rule generation” (see screenshot and assumptions of step 2)
  5. Add firewall rule to the PLC interface (see screenshot and assumptions of step 2)
  6. Add firewall rule to LAN allowing interaction between LAN network and PLC network

If you are struggling with these steps and have bought a machineconnect from us, feel free to contact us!

Next steps

After doing the physical setup and connecting the PLC you can continue with part 3 of the getting started guide.

4.2 - Setting up the documentation

This document explains how to get started with the documentation locally
  1. Clone the repo
  2. Go to /docs and execute git submodule update --init --recursive to download all submodules
  3. Startup the development server by using sudo docker-compose up --build

4.3 - FAQ

This document gives answers to the frequently asked questions

I cannot login into ia: factoryinsight / Grafana although I’ve entered my credentials correctly several times

The username in Grafana is case-sensitive. That means if the user is, for example, Jeremy.T… and you enter jeremy.t…, you will get a failed login message.

4.4 - Setting up the PKI infrastructure

This document describes how to create and manage the certificates required for MQTT

Prerequisites

This tutorial assumes you are using Ubuntu and have installed easy-rsa using sudo apt-get install easy-rsa

Initially setting up the infrastructure

Create a new directory and go into it, e.g.

mkdir ~/mqtt.umh.app/
cd ~/mqtt.umh.app/

Setup basic PKI infrastructure with /usr/share/easy-rsa/easyrsa init-pki

Copy the default configuration file with cp /usr/share/easy-rsa/vars.example pki/vars and edit it to your liking (e.g. adjust EASYRSA_REQ_… and CA and cert validity)

Build the CA using /usr/share/easy-rsa/easyrsa build-ca nopass

Create the server certificate by using the following commands (exchange mqtt.umh.app with your domain!):

/usr/share/easy-rsa/easyrsa gen-req mqtt.umh.app nopass
/usr/share/easy-rsa/easyrsa sign-req server mqtt.umh.app 

Copy the private key pki/private/mqtt.umh.app.key and the public certificate pki/issued/mqtt.umh.app.crt together with the root CA pki/ca.crt to the configuration of the MQTT broker.

Adding new clients

Create new clients with the following commands (remember to change TESTING to the planned MQTT client ID):

./easyrsa gen-req TESTING nopass
./easyrsa sign-req client TESTING

4.5 - Edge networking

The UMH stack features a sophisticated system that can be integrated into any enterprise network. Additionally, it enforces multiple barriers against attacks by design. This document should clear up any confusion.

factorycube

The factorycube (featuring the RUT955) consists of two separate networks:

  1. internal
  2. external

The internal network connects all locally connected machines, sensors and miniPCs with each other. The external network is “the connection to the internet”. The internal network can access the external network, but not the other way around, unless firewall rules (“port forwarding”) are explicitly set.

Example components in internal network

  • Laptop for setting up
  • Router
  • miniPC
  • ifm Gateways
  • Ethernet Cameras

Example components in external network

  • Router (with its external IP)
  • the “Internet” / server

4.6 - Working with the system practically

Three step implementation plan for using the factorycube

Please ensure that you have read the safety information and manuals before proceeding! Failure to do so can result in damage to the product or serious injuries.

Create a sensor plan

Before installing anything, please create a sensor plan. Take the layout of your selected line and add:

  • Electricity sockets
  • Internet connections (bring your own)
  • Your planned sensors
  • The position of the factorycube
  • Your planned cabling

Discuss and install

Set up a meeting with your line engineers and discuss your plan. Then install everything according to plan. Ensure that all sensors and cables are mounted tightly and that they do not interfere with the production process.

Supply your factorycube with power and turn it on

Plug in the power cable to turn the factorycube on. After a few seconds the ia: factorycube should be lit up.

Connect your factorycube to the internet

If you want to use the cloud dashboard, you must first connect the factorycube to the internet.

You need:

  • your credentials
  • an Ethernet cable (provided)
  • a laptop which is not connected to any VPN

For a network overview of the Factorycube, click here

Instructions to login

Connect the factorycube with your computer via an Ethernet cable using the IO-Link port (not the “Internet” port) on the factorycube.

Open the following website in your browser: http://172.16.x.2 (x stands for the last number(s) of the serial number, e.g. 2019_0103 -> x=3 or 2019_0111 -> x=11)
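The serial-number-to-IP rule can be restated as a small helper function (purely illustrative; it just encodes the rule above):

```python
def factorycube_ip(serial: str) -> str:
    """Derive the configuration IP from the serial number.

    The x in 172.16.x.2 is the last number of the serial:
    2019_0103 -> 172.16.3.2, 2019_0111 -> 172.16.11.2
    """
    x = int(serial.split("_")[1][-2:])  # last two digits, leading zero dropped
    return f"172.16.{x}.2"

print(factorycube_ip("2019_0103"))  # 172.16.3.2
print(factorycube_ip("2019_0111"))  # 172.16.11.2
```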

Enter your credentials according to the information in the customer area. The username is always “admin”

3 ways to connect to the internet: WiFi, 3G/4G or Ethernet

Further information on how to connect the factorycube with the internet can be found in the official router manual

Instructions to setup WiFi

  • Select “Network” → “Wireless/Wlan”. If necessary remove the old Wireless station point
  • Click on “Add” next to wireless station mode
  • Click on “Start Scan” next to wireless station mode
  • Click on the network of your choice
  • “join network”
  • Afterwards enter your credentials and confirm

The computer should now be connected to the internet.

Instructions to setup 3G/4G

  • Insert the SIM-card (if a SIM-card is already provided in the ia: factorycube, skip this step)
  • For the installation of a SIM card please contact our experts
  • Select “Network” → “Mobile”
  • Adjust the settings under the “General” tab as follows:
  • Save your settings

The computer should now be connected to the internet.

Instructions to set up connection via Ethernet

  • Plug the Ethernet cable into the device’s “Internet” port and the other side into the network access port
  • Select “Network” –> “WAN”
  • Select “Wired” as your main WAN
  • Click save

The computer should now be connected to the internet. You can now use the entire United Manufacturing Hub edge stack. For more information, take a look at the getting started guide for edge devices.

Outro

Closely monitor the data and verify over the entire duration of the next days, whether the data is plausible. Things that can go wrong here:

  • Sensors not mounted properly and not calibrated anymore
  • The operators are using the line differently from what was discussed before (e.g., doing a changeover and removing the sensors)

4.7 - How to add additional SSH keys in k3OS

This article explains how to add an additional SSH key to k3OS, so that multiple people can access the device

Prerequisites

  • Edge device running k3OS
  • SSH access to that device
  • SSH / SFTP client
  • Public and private key suited for SSH access

Tutorial

  1. Access the edge device via SSH
  2. Go to the folder /home/rancher/.ssh and edit the file authorized_keys
  3. Add there your additional SSH key

4.8 - How to update the stack / helm chart

This article explains how to update the helm chart, so that you can apply changes to the configuration of the stack or to install newer versions

Prerequisites

none

Tutorial

  1. Go to the folder deployment/factorycube-server or deployment/factorycube-edge
  2. Execute helm upgrade factorycube-server . --values "YOUR VALUES FILE" --kubeconfig /etc/rancher/k3s/k3s.yaml -n YOUR_NAMESPACE

This is basically your installation command with install exchanged for upgrade. Change “YOUR VALUES FILE” to the path of your values.yaml, e.g. /home/rancher/united-manufacturing-hub/deployment/factorycube-server/values.yaml, and adjust YOUR_NAMESPACE to the correct namespace name. If you did not specify a namespace during the installation, you can use the namespace default. If you are using factorycube-edge instead of factorycube-server, you need to adjust that as well.

5 - Developers

This section has all technical documents and API specifications

This repository contains multiple folders and sub-projects:

  • /golang contains software developed in Go, especially factoryinsight and mqtt-to-postgresql and their corresponding tests (-environments)
  • /deployment contains all deployment related files for the server and the factorycube, e.g. based on Kubernetes or Docker, sorted in separate folders
  • /sensorconnect contains sensorconnect
  • /grafana-plugins/factoryinsight-datasource contains factoryinsight-datasource
  • /barcodereader contains barcodereader
  • /python-sdk contains a template and examples to analyze data in real-time on the edge devices using Python, Pandas and Docker. It is deprecated as we switched to Node-RED and only published for reference.
  • /docs contains the entire documentation and API specifications for all components including all information to buy, assemble and setup the hardware

5.1 - factorycube-server

The architecture of factorycube-server

factoryinsight

factoryinsight is an open source REST API written in Golang that fetches manufacturing data from a timescaleDB database and calculates various manufacturing KPIs before delivering them to a user visualization, e.g. Grafana or PowerBI.

Features:

  • OEE (Overall Equipment Effectiveness)
  • Various options to investigate OEE losses further, for example stop analysis over time, microstop analytics, paretos, changeover deep-dives or stop histograms
  • Scalable, microservice oriented approach for Plug-and-Play usage in Kubernetes or behind load balancers (including health checks and monitoring)
  • Compatible with important automation standards, e.g. Weihenstephaner Standards 09.01 (for filling), Omron PackML (for packaging/filling), EUROMAP 84.1 (for plastic), OPC 30060 (for tobacco machines) and VDMA 40502 (for CNC machines)

The openapi documentation can be found here

mqtt-to-postgresql

The tool to store incoming MQTT messages in the postgres / timescaleDB database

Technical information and usage can be found in the documentation for mqtt-to-postgresql

factoryinsight-datasource

This is a plugin for Grafana which acts as a datasource and creates a connection to factoryinsight.

5.1.1 - factoryinsight

This document provides an overview of the various showcases that we already did, including a quick summary and a picture for each. More details can be found in the subsequent documents.

5.1.2 - mqtt-to-postgresql

Documentation of mqtt-to-postgresql

TODO: #80 fill out standardized documentation for mqtt-to-postgresql

|NAME OF DOCKER CONTAINER|

|This is a short description of the docker container.|

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

|TUTORIAL|

docker-compose -f ./deployment/mqtt-to-postgresql/docker-compose-mqtt-to-postgresql-development.yml --env-file ./.env up -d --build

Environment variables

This chapter explains all used environment variables.

|EXAMPLE_VARIABLE|

Description: |DESCRIPTION|

Type: |BOOL, STRING, etc.|

Possible values: |if restricted|

Example value: |EXAMPLE|

5.2 - factorycube-edge

sensorconnect

This tool automatically finds connected ifm gateways (e.g. the AL1350 or the AL1352), extracts all relevant data and pushes it to an MQTT broker. Technical information and usage can be found in the documentation for sensorconnect

barcodereader

This tool automatically detects connected USB barcode scanners and sends the data to an MQTT broker. Technical information and usage can be found in the documentation for barcodereader

mqtt-bridge

This tool acts as an MQTT bridge to handle bad internet connections. Messages are stored in a persistent queue on disk. This allows using the factorycube-edge in remote environments with bad internet connections. It will even survive restarts (e.g. internet failure and then a power failure 1h later). We developed it after testing multiple MQTT brokers and their bridge functionalities (date of testing: 2021-03-15) and not finding a proper solution.

nodered

This tool is used to connect PLCs and to process data. See also Getting Started, or take a look at the official documentation.

emqx-edge

This tool is used as a central MQTT broker. See emqx-edge documentation for more information.

5.2.1 - barcodereader

This is the documentation for the container barcodereader.

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

Go to the root folder of the project and execute the following command:

sudo docker build -f deployment/barcodereader/Dockerfile -t barcodereader:latest . && sudo docker run --privileged -e "DEBUG_ENABLED=True" -v '/dev:/dev' barcodereader:latest 

All connected devices will be shown, the used device is marked with “Found xyz”. After every scan the MQTT message will be printed.

Environment variables

This chapter explains all used environment variables.

DEBUG_ENABLED

Description: Deactivates MQTT and only prints the barcodes to stdout

Type: bool

Possible values: true, false

Example value: true

CUSTOM_USB_NAME

Description: If your barcode reader is not in the list of supported devices, you must specify the name of the USB device here

Type: string

Possible values: all

Example value: Datalogic ADC, Inc. Handheld Barcode Scanner

MQTT_CLIENT_ID

Description: The MQTT client id to connect with the MQTT broker

Type: string

Possible values: all

Example value: weaving_barcodereader

BROKER_URL

Description: The MQTT broker URL

Type: string

Possible values: IP, DNS name

Example value: ia_mosquitto

Example value 2: localhost

BROKER_PORT

Description: The MQTT broker port. Only unencrypted ports are allowed here (default: 1883)

Type: integer

Possible values: all

Example value: 1883

CUSTOMER_ID

Description: The customer ID, which is used for the topic structure

Type: string

Possible values: all

Example value: dccaachen

LOCATION

Description: The location, which is used for the topic structure

Type: string

Possible values: all

Example value: aachen

MACHINE_ID

Description: The machine ID, which is used for the topic structure

Type: string

Possible values: all

Example value: weaving_machine_2
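Taken together, CUSTOMER_ID, LOCATION and MACHINE_ID determine the MQTT topic the barcode data is published to. A minimal sketch of how such a topic could be assembled, assuming the common ia/<customer>/<location>/<machine> prefix convention (the barcode suffix is an assumption for illustration; the MQTT documentation is authoritative):

```python
# Hypothetical illustration: assemble the topic from the three environment
# variables above. The "barcode" suffix is an assumption; see the MQTT
# documentation for the authoritative topic structure.
customer_id = "dccaachen"          # CUSTOMER_ID
location = "aachen"                # LOCATION
machine_id = "weaving_machine_2"   # MACHINE_ID

topic = f"ia/{customer_id}/{location}/{machine_id}/barcode"
print(topic)  # ia/dccaachen/aachen/weaving_machine_2/barcode
```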

5.2.2 - sensorconnect

This docker container automatically detects ifm gateways in the specified network and reads their sensor values at the highest possible data frequency. The MQTT output is specified in the MQTT documentation

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

  1. Specify the environment variables, e.g. in a .env file in the main folder or directly in the docker-compose
  2. execute sudo docker-compose -f ./deployment/sensorconnect/docker-compose.yaml up -d --build

Environment variables

This chapter explains all used environment variables.

TRANSMITTERID

Description: The unique transmitter id. This will be used for the creation of the MQTT topic ia/raw/TRANSMITTERID/…

Type: string

Possible values: all

Example value: 2021-0156

BROKER_URL

Description: The MQTT broker URL

Type: string

Possible values: IP, DNS name

Example value: ia_mosquitto

Example value 2: localhost

BROKER_PORT

Description: The MQTT broker port. Only unencrypted ports are allowed here (default: 1883)

Type: integer

Possible values: all

Example value: 1883

IP_RANGE

Description: The IP range to search for ifm gateways

Type: string

Possible values: All subnets in CIDR notation

Example value: 172.16.0.0/24
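To get a feeling for which addresses a value in CIDR notation covers, the range can be expanded with the Python standard library (sensorconnect itself is written in Go; this is only an illustration of the notation):

```python
# Expand the example IP_RANGE and show which host addresses a scan
# of 172.16.0.0/24 would cover.
import ipaddress

network = ipaddress.ip_network("172.16.0.0/24")
hosts = list(network.hosts())          # usable host addresses only

print(network.num_addresses)           # 256
print(hosts[0], "-", hosts[-1])        # 172.16.0.1 - 172.16.0.254
```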

5.2.3 - mqtt-bridge

This tool acts as an MQTT bridge to handle bad internet connections.

Getting started

Here is a quick tutorial on how to start up a basic configuration / a basic docker-compose stack, so that you can develop.

  1. Specify the environment variables, e.g. in a .env file in the main folder or directly in the docker-compose
  2. execute sudo docker-compose -f ./deployment/mqtt-bridge/docker-compose.yaml up -d --build

Environment variables

This chapter explains all used environment variables.

REMOTE_CERTIFICATE_NAME

Description: the certificate name / client id

Type: string

Possible values: all

Example value: 2021-0156

REMOTE_BROKER_URL

Description: the URL to the broker. It can either be prefixed with ssl:// or mqtt://, or the port needs to be specified afterwards with :1883

Type: string

Possible values: all

Example value: ssl://mqtt.app.industrial-analytics.net

REMOTE_SUB_TOPIC

Description: the remote topic to subscribe to. The bridge will automatically append /# to the string mentioned here

Type: string

Possible values: all

Example value: ia/ia

REMOTE_PUB_TOPIC

Description: the remote topic prefix to which messages from the local broker are sent

Type: string

Possible values: all

Example value: ia/ia

REMOTE_BROKER_SSL_ENABLED

Description: should SSL be enabled and certificates be used for connection?

Type: bool

Possible values: true or false

Example value: true

LOCAL_CERTIFICATE_NAME

Description: the certificate name / client id

Type: string

Possible values: all

Example value: 2021-0156

LOCAL_BROKER_URL

Description: the URL to the broker. It can either be prefixed with ssl:// or mqtt://, or the port needs to be specified afterwards with :1883

Type: string

Possible values: all

Example value: ssl://mqtt.app.industrial-analytics.net

LOCAL_SUB_TOPIC

Description: the local topic to subscribe to. The bridge will automatically append /# to the string mentioned here

Type: string

Possible values: all

Example value: ia/ia

LOCAL_PUB_TOPIC

Description: the local topic prefix to which messages from the remote broker are sent

Type: string

Possible values: all

Example value: ia/ia

LOCAL_BROKER_SSL_ENABLED

Description: should SSL be enabled and certificates be used for connection?

Type: bool

Possible values: true or false

Example value: true

BRIDGE_ONE_WAY

Description: DO NOT SET TO FALSE OR THIS MIGHT CAUSE AN ENDLESS LOOP! NEEDS TO BE FIXED BY SWITCHING TO MQTTV5 AND USING THE NO_LOCAL OPTION WHILE SUBSCRIBING. If true, messages are only sent from the local broker to the remote broker (not the other way around)

Type: bool

Possible values: true or false

Example value: true

Important note regarding topics

The bridge will append /# to LOCAL_SUB_TOPIC and subscribe to it. All messages will then be sent to the remote broker. The topic on the remote broker is defined by:

  1. First stripping LOCAL_SUB_TOPIC from the topic
  2. and then replacing it with REMOTE_PUB_TOPIC
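The two steps above can be sketched in a few lines of Python (the actual bridge is a separate container; the variable names and example values here are hypothetical and only mirror the environment variables):

```python
# Sketch of the topic remapping: strip LOCAL_SUB_TOPIC from the incoming
# topic, then prepend REMOTE_PUB_TOPIC. Values are for illustration only.
LOCAL_SUB_TOPIC = "ia/local"
REMOTE_PUB_TOPIC = "ia/remote"

def remap(topic: str) -> str:
    # Only topics below LOCAL_SUB_TOPIC are forwarded by the bridge.
    if not topic.startswith(LOCAL_SUB_TOPIC + "/"):
        raise ValueError("topic is not below LOCAL_SUB_TOPIC")
    return REMOTE_PUB_TOPIC + topic[len(LOCAL_SUB_TOPIC):]

print(remap("ia/local/dccaachen/aachen/machine2/count"))
# ia/remote/dccaachen/aachen/machine2/count
```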

6 - Publications

This page contains multiple interesting publications made around Industry 4.0, mainly theses from RWTH Aachen University.

6.1 - Development of a methodology for implementing Predictive Maintenance

Because of high costs and effort, PdM is only economically viable on machines and components with high revenue losses due to breakdown and where the failure is almost independent of uptime

Abstract

Objective of this thesis: The goal of this thesis is to develop a methodology to implement Predictive Maintenance (PdM) economically viable into a company. The methodology is then validated in the Digital Capability Center (DCC) Aachen.

Solution process: Maintenance strategies and machine learning algorithms are researched together with methods for optimizing production lines. This knowledge is then summarized and validated in the DCC Aachen.

Key results: Because of high costs and effort, PdM is only economically viable on machines and components with high revenue losses due to breakdown and where the failure is almost independent of uptime and wear. In the DCC Aachen the wind-up bearing at the warping machine is identified as a component for a PdM implementation, but a combination of machine learning and existing sensors is not enough for an economically viable implementation.

Keywords: Predictive Maintenance, maintenance strategies, machine learning

Content

Bachelor Thesis Jeremy Theocharis

6.2 - Industrial image processing for quality control in production lines: Development of a decision logic for the application case specific selection of hardware and software

To select suitable hardware components, a five-stage decision logic is developed and implemented as a software application, which suggests suitable components to the user depending on the specified use case and prioritizes them according to list price. In a simulative evaluation, this achieves complexity reductions between 73 and 98% and cost savings between 46 and 93%. A decision between Deep Learning and conventional algorithms can be made based on the given development circumstances as well as the complexity of image features.

This publication was made by Michael Müller as a Master Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen) and us.

Cognex camera connected with the United Manufacturing Hub open-source stack

Abstract

Objective of this thesis: The goal of the work is the development of a decision logic for the application-case-specific selection of hardware and software for image processing systems for quality control in industrial production. On the hardware side, the components camera, lens and illumination system are considered. On the software side, it is decided, depending on the application, whether conventional algorithms or methods of Deep Learning are more suitable.

Solution process: Within the scope of a literature search, relevant descriptive variables for the standardized characterization of technologies and use cases are identified. Furthermore, interdependencies between individual components and properties of the use case are identified. By means of a market research, a database with concrete product information is built. Based on these steps, a set of rules for the selection of hardware and software technologies is derived and tested on an application case at the Digital Capability Center Aachen. The decision-making logic for selecting hardware components is finally implemented as a user-friendly computer application.

Key results: To select suitable hardware components, a five-stage decision logic is developed and implemented as a software application, which suggests suitable components to the user depending on the specified use case and prioritizes them according to list price. In a simulative evaluation, this achieves complexity reductions between 73 and 98% and cost savings between 46 and 93%. A decision between Deep Learning and conventional algorithms can be made based on the given development circumstances as well as the complexity of image features.

Keywords: Digital quality control, Technical textiles, Mobiltech, Industry 4.0, Technology selection

Content

Master Thesis Michael Müller

6.3 - Deep learning for industrial quality inspection: development of a plug-and-play image processing system

The central result is an overall process overview and a microservice architecture, with the help of which an industrial image processing system can be put into operation on the software side only by configuring the camera and entering the environment variables. Currently, cameras of the GenICam standard with GigE Vision interface and Cognex cameras are supported. The open architecture creates a basic platform for the development of further microservices and subsequent processes in the context of industrial image processing.

This publication was made by Patrick Kunz as a Bachelor Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen) and us.

MQTT is used as a central element in the open-source architecture for image processing systems

Abstract

Objective of this thesis: The objective of the thesis is the development of a robust and user-friendly software for an industrial image processing system, which applies deep learning methods. The user of this software will be able to quickly and easily put an image processing system into operation due to its plug-and-play capability and standardized interfaces. The system software is based exclusively on royalty-free software products.

Solution process: For the development of the overall system, relevant standards, interfaces and software solutions are researched and presented. By dividing the sys- tem into sub-processes, functional requirements for the software are derived and implemented in the development with the general requirements in a system architecture. The implementation and subsequent validation is carried out in the model production for textile wristbands at the Digital Capability Center Aachen.

Key results: The central result is an overall process overview and a microservice architecture, with the help of which an industrial image processing system can be put into operation on the software side only by configuring the camera and entering the environment variables. Currently, cameras of the GenICam standard with GigE Vision interface and Cognex cameras are supported. The open architecture creates a basic platform for the development of further microservices and subsequent processes in the context of industrial image processing.

Keywords: Machine vision, quality control, deep learning, microservice architecture, MQTT

Content

Bachelor Thesis Patrick Kunz

6.4 - Development of a decision tool to select appropriate solutions for quality control depending on the defects occurring in the manufacturing process in the automobile branch of the technical-textiles industry

The results of this research provide an overview of the problems being faced regarding quality control during the manufacturing processes of technical textile in the automotive industry. In addition, information on the extent to which digital solutions for quality control are established in the industry is analyzed. Moreover, existing digital quality control solutions and measuring principles to tackle the identified problems in the industry are researched and identified.

This publication was made by Aditya Narayan Mishra as a Master Thesis for the “Institut für Textiltechnik der RWTH Aachen University” in cooperation with Kai Müller (ITA / RWTH Aachen) and us.

Source: https://www.lindenfarb.de/en/

Abstract

Objective of this thesis: The objective of this thesis is to develop a decision tool regarding quality control in the manufacturing of technical textiles for the automotive industry. The tool shall enable access to information about the problems being faced and the consequent defects occurring during the manufacturing of technical textiles in the automotive industry. Subsequently, it shall provide an overview of the corresponding solutions and measuring principles for each of the identified problems.

Solution process: Firstly, a literature review is carried out to provide a profound understanding of the important quality parameters and defects in each of the manufacturing processes of technical textiles. Based on the literature review, a questionnaire is created to perform a market analysis in the form of expert interviews. With the help of the market analysis, industry insights into the current status and problems associated with the quality control of manufacturing technical textile fabrics in the automotive industry are addressed. Afterwards, based on the problems acquired through the expert interviews, the solutions and measuring principles are identified and subsequently a concept for the decision tool is designed.

Key results: The results of this research provide an overview of the problems being faced regarding quality control during the manufacturing processes of technical textile in the automotive industry. In addition, information on the extent to which digital solutions for quality control are established in the industry is analyzed. Moreover, existing digital quality control solutions and measuring principles to tackle the identified problems in the industry are researched and identified.

Keywords: Digital quality control, Technical textiles, Mobiltech, Industry 4.0, Technology selection

Content

Master Thesis Aditya Narayan Mishra