#4 - Digital twins could protect manufacturers from cyberattacks

Digital twins are opening doors for better products across all industries, and cybersecurity may also fit neatly into the digital twin portfolio.

A new and improved strategy for detecting cyberattacks on manufacturing systems, such as 3D printers, uses artificial intelligence (AI) to monitor a digital twin that mimics and is fed real-time data from the physical system.
GRAPHIC: N. HANACEK / NIST

As more manufacturing equipment becomes remotely accessible, new entry points for cyberattacks are created. Researchers at the National Institute of Standards and Technology (NIST) and the University of Michigan devised a cybersecurity framework bringing digital twin technology together with machine learning (ML) and human expertise.

In a paper published in IEEE Transactions on Automation Science and Engineering, researchers demonstrated the feasibility of their strategy by detecting cyberattacks aimed at a 3D printer in their lab. The framework could be applied to a broad range of manufacturing technologies.

Cyberattacks can be subtle and difficult to detect or differentiate from other, sometimes more routine, system anomalies. Operational data describing what's occurring within machines – sensor data, error signals, and digital commands issued or executed – could support cyberattack detection. However, directly accessing this data in near real time from operational technology (OT) devices could put the performance and safety of the process on the factory floor at risk.

Yet without visibility into the hardware, cybersecurity professionals may leave room for malicious actors to operate undetected, which is where digital twins come in.

Digital twins give engineers access to operational data without impacting performance or safety, enabling tasks such as predicting when parts will fail and require maintenance. To seize the opportunity digital twins present for tighter cybersecurity, the researchers developed a new strategy, tested on an off-the-shelf 3D printer.

The team built a digital twin to emulate the 3D printing process and provided it with information from the real printer. As the printer built a part (a plastic hourglass), computer programs monitored and analyzed measured temperatures from the physical printing head and simulated temperatures computed in real time by the digital twin.
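The core of this comparison can be sketched as computing the residual between the measured and twin-simulated temperatures at each time step and flagging large disagreements. The function names, the sample data, and the 2 °C threshold below are illustrative assumptions, not values from the paper.

```python
# Sketch of the monitoring loop: compare measured printhead temperatures
# against values the digital twin computes in real time. The threshold
# and data are illustrative assumptions, not values from the paper.

def residuals(measured, simulated):
    """Signed difference between physical and twin temperatures (degrees C)."""
    return [m - s for m, s in zip(measured, simulated)]

def flag_irregular(measured, simulated, threshold=2.0):
    """Flag time steps where the residual magnitude exceeds the threshold."""
    return [abs(r) > threshold for r in residuals(measured, simulated)]

# Example: the twin tracks the printer closely except at one time step.
measured = [210.1, 210.3, 205.0, 210.2]
simulated = [210.0, 210.2, 210.1, 210.1]
flags = flag_irregular(measured, simulated)
```

A flagged residual alone says only that something is off; deciding *why* is the job of the later stages of the framework.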

The researchers then launched waves of disturbances at the printer. Some were as harmless as an external fan cooling the device, while others, which caused the printer to report its temperatures incorrectly, represented something more nefarious.

How did the team’s computer programs distinguish a cyberattack from something more routine? The framework’s answer uses a process of elimination.

The programs analyzing the real and digital printers were pattern-recognizing ML models trained on normal operating data. The models were adept at recognizing the printer operating under normal conditions and could tell when things were out of the ordinary.
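One minimal way to realize a model "trained on normal operating data" is a statistical baseline that flags readings falling far outside what it saw during normal runs. This z-score sketch is an illustrative stand-in for the pattern-recognizing ML models the paper actually uses; the training data and 3-sigma cutoff are assumptions.

```python
import statistics

class NormalBaseline:
    """Toy anomaly detector fit only on normal operating data, standing in
    for the trained ML models in the framework (illustrative assumption)."""

    def fit(self, normal_temps):
        self.mean = statistics.fmean(normal_temps)
        self.std = statistics.stdev(normal_temps)
        return self

    def is_anomalous(self, temp, z=3.0):
        """Flag readings more than z standard deviations from normal."""
        return abs(temp - self.mean) > z * self.std

# Fit on temperatures logged during a normal print (illustrative data).
model = NormalBaseline().fit([210.0, 210.2, 209.9, 210.1, 210.0, 209.8])
```

Because the model only ever sees normal behavior during training, anything it cannot explain, whether benign or malicious, shows up as an irregularity to be investigated downstream.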

If the models detected an irregularity, they passed it to other computer models that checked whether the strange signals were consistent with known issues, such as the printer’s fan cooling its printing head more than expected. Then the system categorized the irregularity as an expected anomaly or a potential cyber threat.
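The elimination step described above can be sketched as a small cascade: an irregular signal is checked against a library of known, benign anomaly signatures, and only falls through to "potential cyber threat" if nothing matches. The signature format, the fan-cooling range, and the function names are assumptions for illustration.

```python
# Sketch of the process of elimination: test an irregularity against known,
# benign anomaly signatures before escalating it. The signature library and
# matching rule are illustrative assumptions, not the paper's models.

KNOWN_ANOMALIES = {
    # name -> predicate over an observed temperature residual (degrees C)
    "external_fan_cooling": lambda residual: -8.0 <= residual < -1.0,
}

def categorize(residual):
    """Return ('expected anomaly', name) if a known issue explains the
    signal, otherwise flag it as a potential cyber threat."""
    for name, matches in KNOWN_ANOMALIES.items():
        if matches(residual):
            return ("expected anomaly", name)
    return ("potential cyber threat", None)
```

Anything the library cannot explain is escalated rather than silently discarded, which is what lets the framework separate routine disturbances from possible attacks.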

In the last step, a human expert interprets the system's findings and makes a decision. The expert either confirms the cybersecurity system's suspicions or teaches it a new anomaly to store in the database. Over time, the models learn more, and the human expert needs to teach less.
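The human-in-the-loop step could be sketched as a simple feedback routine: when the expert judges a flagged signal benign, its name joins the known-anomaly database so future occurrences are not escalated. Every name and interface here is an illustrative assumption, not the paper's implementation.

```python
# Sketch of the expert-feedback step: confirmed-benign irregularities are
# recorded so the system escalates less over time. Illustrative only.

known_anomalies = {"external_fan_cooling"}

def expert_review(signal_name, expert_says_benign):
    """Record the expert's decision; benign signals join the database."""
    if expert_says_benign:
        known_anomalies.add(signal_name)
        return "expected anomaly"
    return "confirmed cyber threat"

# The expert identifies a new benign disturbance and teaches the system.
verdict = expert_review("door_open_draft", expert_says_benign=True)
```

Each confirmed-benign entry shrinks the set of signals that reach the expert, which is how, in the source's words, the models learn more and the human teaches less.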

For the 3D printer, the team found its cybersecurity system could correctly sort cyberattacks from normal anomalies by analyzing physical and emulated data.

The researchers plan to study how the framework responds to more varied and aggressive attacks, ensuring the strategy is reliable and scalable.

NIST: https://www.nist.gov