Traditionally, robots have operated in structured factory environments or in workspaces segregated from human operators by barriers or fencing. Today, however, robots are moving into human-populated environments to cooperate with people in areas such as assembly and maintenance. This new field of use is referred to as Human-Robot Collaboration (HRC).

Beyond efficient task execution, collaborative robots must guarantee the physical safety of humans. Fortunately, multiple industrial standards address personnel safety, hardware requirements, power and force limiting, and injury severity scales. These standards, however, focus mainly on hazard prevention and lack an explicit specification of run-time behavior in the face of unforeseen events. Thus, in collaboration with ITIA-CNR, we aim to find a coherent solution to the problem of guaranteeing both the safety of operators and the efficiency of the robotic task, by devising exhaustive techniques to discover unexpected hazards due to human behavior (unintended use of the robot, such as errors and misuse), provide quantitative information about them (risk estimation), and identify possible reactions at run-time when safety violations occur.

We want to design and develop a tool that lets safety engineers run a semi-automated risk assessment consistent with current standards. We say semi-automated because we automate as much of the process as possible to reduce human error in safety supervision, while still leaving decisions that depend on the engineers' experience and competence, such as defining risk reduction measures, in their hands.

To this end, we aim to create a framework that automatically builds a logical model of the collaborating system and iteratively verifies that the desired safety properties are satisfied, using tools previously developed within the Deepse group for the formal verification of real-time safety-critical systems. In particular, we use Zot, a bounded satisfiability checker, and TRIO, a temporal logic language.
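As an illustration (not drawn from the project itself; the predicate names and the bound T are hypothetical), a run-time safety property such as "whenever the operator is inside the shared workspace, the robot must reach a stopped state within T time units" could be phrased as a TRIO formula along these lines, using the standard TRIO operators Alw (always) and WithinF (eventually within a future interval):

```latex
% Hypothetical TRIO-style safety property; predicate names are illustrative.
% Whenever the human is inside the shared workspace, the robot must
% reach a safe (stopped) state within T time units.
\mathrm{Alw}\bigl(\mathit{HumanInWorkspace} \rightarrow \mathrm{WithinF}(\mathit{RobotStopped},\, T)\bigr)
```

A bounded satisfiability checker such as Zot can then search, up to a given time bound, for a system execution that violates such a property, and report it as a counterexample trace.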

In short, the results of our automated analysis will be combined with the safety supervisor's experience to produce a risk evaluation that is precise, correct, and does not overlook unforeseen events.