The Importance of Prioritizing Safety for Advanced Robotic Systems

Just a few years ago, the idea that humans might build something far more complex and intelligent than themselves was considered nothing more than a dream or an overblown fantasy. That has changed relatively recently. Artificial intelligence and cybernetics are prime examples of this merging of human and machine. But is that merger safe?

First, we should look at what cybernetics actually is and why it is so important for robotics.

The power of cybernetics

Cybernetics is the interdisciplinary study of communication and control in systems governed by feedback mechanisms. It covers concepts such as cognition, adaptation, social control and communication, which makes it closely related to machine learning. Both cybernetics and AI rest on the same foundation, binary logic, which is why the two are often confused. They differ in outlook, however: artificial intelligence starts from the view that machines can work and behave like humans, while cybernetics takes a constructive view of the world. Applied to robotics, cybernetics gives systems the ability to sense their environment and to plan and act with so-called situational awareness.
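The feedback mechanism at the heart of cybernetics can be sketched in a few lines of Python. This is a minimal illustration, not production control code; the names (`setpoint`, `gain`) and values are arbitrary choices for the example. The system measures the error between where it is and where it wants to be, then feeds a fraction of that error back into its next action:

```python
def feedback_loop(setpoint, initial, gain=0.5, steps=20):
    """Proportional feedback: each step corrects a fraction of the error."""
    value = initial
    history = [value]
    for _ in range(steps):
        error = setpoint - value  # measure deviation from the goal
        value += gain * error     # act to reduce that deviation
        history.append(value)
    return history

trace = feedback_loop(setpoint=100.0, initial=0.0)
print(round(trace[-1], 2))  # -> 100.0: the loop converges on its goal
```

The key idea is that the output is continually compared against the goal and the difference drives the next correction. This closed loop, rather than any fixed sequence of commands, is what gives a cybernetic system its adaptive, self-regulating character.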

The mistake problem

When people think about deploying robots in the workplace, they should consider one thing: once robots are running in the real world, what happens when an error occurs? People make mistakes, whether or not the fault is their own. The world constantly changes, and people can adapt to those changes. Robots, by contrast, cannot.

Generally, in every software field, even the tiniest change can lead to a system failure. The bigger problem is that many systems are deeply interconnected and depend on one another: if one system fails, the others may fail too. Failures tend to cascade because digital systems are extremely sensitive. For example, if you erase a word from a famous book, you can still understand the sentence. But if you delete a single character from a compiled file, you get complete chaos.
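This contrast between human and machine tolerance for damage is easy to demonstrate. The sketch below (the quote and the byte position are arbitrary choices for the example) drops a letter from plain text, which a reader shrugs off, and then flips a single byte in the same text after zlib compression, which makes the data unrecoverable:

```python
import zlib

text = b"Call me Ishmael. Some years ago, never mind how long precisely."

# A human reader survives a missing letter:
damaged_text = text.replace(b"Ishmael", b"Ishmel", 1)
print(damaged_text.decode())  # still perfectly understandable

# A machine does not survive one corrupted byte:
compressed = zlib.compress(text)
corrupted = bytearray(compressed)
corrupted[10] ^= 0xFF  # flip every bit of one byte mid-stream
try:
    zlib.decompress(bytes(corrupted))
except zlib.error as exc:
    print("decompression failed:", exc)
```

The compressed stream fails either because the flipped byte produces an invalid bit sequence or because the checksum no longer matches; either way, the whole payload is lost. That is the brittleness the paragraph above describes: dense digital representations have no slack for error.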

And that is the difference between humans and machines. A person will not die because of a missing letter or word. We make mistakes often, but on a small scale. Machines fail less often, but their failures have far more serious consequences.

The safety controversy

Robotics and artificial intelligence are relatively young sciences, and it has not yet been precisely defined how a robot should behave in society. Laws of robotics exist, but compared with the laws that apply to the citizens of any democratic state, they are neither as precise nor as comprehensive. This causes a great deal of confusion and controversy. The greatest confusion arises when a robot violates a law without any awareness of having done so: today's robots have no feelings, and no sense of guilt.

For example, a robot could be misused as a tool to commit murder without being aware of the seriousness of the act. Robotic devices with artificial intelligence also do not yet know the threshold of pain at which they can hurt or kill a person. This is another reason to pay closer attention to safety in workplaces that use robots. Technology alone cannot achieve maximum safety in society.

We divide failures into three categories:

  • Hardware error – failure of mechanical parts or power systems…
  • Software error – compromised software, a bug in the code…
  • Human error – recklessness, carelessness, unauthorised access, insufficient knowledge…

Adhering to industry standards and educating workers can save lives and enable modern workplaces to make effective use of advanced robot capabilities. The manufacturer of robotic equipment must ensure that it meets all requirements for safety, health and environmental protection, and technical documentation and instructions for use must also be available. The people operating this equipment must be sufficiently familiar with it and treat it with due care.