
Robots: The Sorcerer's Apprentice Broom?

Published online by Cambridge University Press:  02 May 2012

Hendrik van Brussel
Affiliation:
Department of Mechanical Engineering, Katholieke Universiteit Leuven, Celestijnenlaan 300B, 3001 Heverlee (Leuven), Belgium. Email: Hendrik.VanBrussel@mech.kuleuven.be

Abstract

By virtue of its pronounced multidisciplinarity, robotics is an extremely popular research field, but it is not always pursued by researchers with the broad scientific view required to make breakthroughs. This leads to a very fragmented research landscape. Robots have their roots in fiction (the Sorcerer's Apprentice, Frankenstein, the Golem, RUR). Their first appearance in the real world came in the early 1960s, when Unimation sold its first industrial robots. Only recently, however, have robots begun invading people's daily lives, as service robots and, in health care, as surgical robots, intelligent wheelchairs and rehabilitation robots, for example. This migration from structured factory environments to cluttered homes is a tremendous step, requiring much more intelligent behaviour. In this paper, the major research questions to be answered are outlined and illustrated with partial solutions, mainly taken from the author's own research experience at KU Leuven.

Type
Brains and Robots
Copyright
Copyright © Academia Europaea 2012

Present-day Societal Paradigm Shifts

Paradigms are generally accepted truths. Societal paradigms shift continually, sometimes gradually, sometimes suddenly and drastically. It is important for industrial and societal decision makers to be aware of these shifts at an early stage, and to be able to act upon them in a proper and timely way.

Some important societal/industrial paradigm shifts, which can be addressed by appropriate technological developments, are listed hereunder. The potential positive role that robotics can play in this context is the subject of this paper. The title of this paper refers to the dialogue 'The liar and the sceptic' by Lucian of Samosata (AD 125–180), who inspired Walt Disney, Paul Dukas and even Goethe. In it, the inexperienced sorcerer's apprentice tries, in the absence of his master, to use the sorcerer's magical pestle – his private android or personal robot – to fill a bucket with Nile water effortlessly, in order to clean the room. But he does not know the magic formula to control the pestle and stop the water flow, and the room floods as a consequence. This story holds a warning that new technology should be used properly and carefully, so as not to bring harm to people but to help them.

  • Paradigm shift 1: Humans want to get rid of monotonous, dirty, dangerous, physically exhausting tasks. This calls for machines (robots) to replace or augment the human.

  • Paradigm shift 2: The customer wants high-quality, personalized products (mass customization) at low cost. This calls for manufacturing systems that are flexibly automated and easily reconfigurable.

  • Paradigm shift 3: The machine (computer) has become woven into the fabric of people's everyday life. This requires simple and natural ways of human/machine interaction (the disappearing machine/computer).

  • Paradigm shift 4: The ‘age quake’ necessitates a rethinking of (health) care. The machine (robot) replacing the human as caretaker and companion is a noble goal to be pursued.

Robots

Robots are excellent candidates to help in coping (faster) with the above-listed paradigm shifts.

For our purposes, robots are machines that (physically) interact with objects or humans. To cope with the above-mentioned paradigm shifts, robots must evolve from machines that merely execute pre-programmed jobs, such as spot welding robots in the automotive industry, into intelligent machines featuring autonomy, adaptivity and learning capacity, and provided with a natural human/robot-interaction capability, such as intelligent wheelchairs with shared autonomy. Stated differently, robots should exhibit humanlike traits before they can be called intelligent.

Robots can belong to one of two categories: industrial robots (e.g. spot welding robots) or service robots (e.g. museum guide robots, vacuum cleaners, robot pets), with medical robots as an important subcategory of service robots (e.g. surgical robots, rehabilitation robots). In this paper, the main emphasis is put on service robots. Figure 1 shows different robot types and their appearance.

Figure 1 Robot types and appearances.

Humans have always been trying to animate lifeless matter. The magical pestle of Lucian of Samosata is an early example. Hero of Alexandria built many moving garden ornaments, driven by air or steam. Al-Jazari, in twelfth-century Iraq, made ingenious mechanisms, automata and clocks. Leonardo da Vinci demonstrated a walking lion before King François I of France. Famous are the eighteenth-century automata of Jaquet-Droz, still on display in the museum of Neuchâtel, Switzerland and Vaucanson's ‘The Duck’, still to be seen in Grenoble, that could flap its wings and digest grain. And, of course, all science fiction stories such as Frankenstein, the Golem, Capek's RUR, and the like, are illustrations of man's desire to create artificial living creatures.

Robots are, contrary to the SF contraptions, not always – in fact almost never – anthropomorphic (android). They are mostly stationary, arm-like machines equipped with a rudimentary hand or gripper to manipulate objects. They are programmed like a computer, using a specially designed programming language. Robots can also be mobile, moving on wheels (wheelchairs), tracks (firefighting robots) or legs (Japanese android robots, mainly research vehicles), or they can fly (unmanned airplanes, also called 'drones') or swim (unmanned submarines for underwater repair or recovery, autonomous endoscopes 'swimming' in the intestinal tract). Figure 2 shows different locomotion systems for robots. Sensors are increasingly used to enhance the interaction of robots with their environment. Recently, there has been a renewed interest in android robots. Until recently, they were only developed in Japan, with the bizarre example of Professor Hiroshi Ishiguro, Osaka University, who made a copy (or 'avatar') of himself and of his daughter. Figure 3 shows him with his android avatar. Now NASA too is spending considerable research money to develop humanoid robots meant to work alongside humans as helpers on board the International Space Station.

Figure 2 Different robot locomotion systems.

Figure 3 Professor Hiroshi Ishiguro with his android avatar.

Intelligent Robots

The service robots that are to be discussed in this paper evolve towards machines that exhibit an increasing number of attributes of intelligence, such as: autonomous/autonomic behaviour, capacity of learning from experience, (natural) communication capabilities, behaviour-based control, (physical) human/robot-interaction, group behaviour, shared control, (self)-awareness/consciousness.

Autonomous Behaviour

Autonomous behaviour is the ability to cope with uncertainties and to recover from unexpected events. An example of an autonomous robot is the intelligent AGV (Automatic Guided Vehicle), called E'GV, developed by the author for the company EGEMIN. E'GV navigates freely in a factory environment between predefined start and stop positions, based only on natural landmarks (e.g. walls) detected by a laser scanner, and on a map of the factory floor stored in the computer. Obstacles occurring on the robot's trajectory are detected by the laser scanner and automatically avoided using on-board software algorithms.
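The on-board obstacle avoidance can be pictured as a purely reactive control step. The sketch below uses a simple artificial potential field – an assumed illustration, not EGEMIN's actual algorithm: the goal attracts the vehicle, while each obstacle point reported by the laser scanner repels it within a safety radius.

```python
import math

def avoid_obstacles(goal, obstacles, pos, repulse_radius=1.0,
                    k_att=1.0, k_rep=0.5):
    """One potential-field step (illustrative gains).

    goal, pos: (x, y) positions; obstacles: scanner points as (x, y).
    Returns the resulting steering force vector (fx, fy).
    """
    # Attractive component pulls straight towards the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < repulse_radius:
            # Repulsion grows sharply as the scanner reports a closer obstacle.
            mag = k_rep * (1.0 / d - 1.0 / repulse_radius) / d**2
            fx += mag * dx
            fy += mag * dy
    return fx, fy
```

With no obstacle in range the vehicle heads straight for the goal; a point obstacle near the path bends the force vector away from it.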

Autonomic Behaviour

Autonomic behaviour is a concept originally introduced by IBM, which aimed to reduce the need for software engineers by making computer software self-maintaining. It has been adapted for robots by the author. Autonomic robots are robots that keep working optimally under all circumstances. They exhibit so-called self-X properties, such as self-diagnostics, self-repair, self-optimization and graceful degradation. An autonomic robot would, for instance, move its arm in such a way as to consume a minimal amount of energy. Hence, autonomic behaviour has nothing to do with the task the robot has to execute, but only with keeping the robot in optimal shape while executing the task.
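The self-X idea can be illustrated with a toy monitoring loop (hypothetical variable names and thresholds): when a diagnostic signal such as motor temperature exceeds a limit, the robot trades performance for survival instead of stopping – graceful degradation.

```python
class AutonomicMonitor:
    """Toy self-X loop: self-diagnose, then self-optimize or degrade
    gracefully. The temperature threshold and factors are illustrative."""

    def __init__(self, max_temp=70.0):
        self.max_temp = max_temp
        self.speed_factor = 1.0  # 1.0 = full performance

    def step(self, motor_temp):
        if motor_temp > self.max_temp:
            # Overheating detected: halve the speed (never below 20%)
            # so the robot keeps working instead of shutting down.
            self.speed_factor = max(0.2, self.speed_factor * 0.5)
        else:
            # Conditions normal: recover performance gradually.
            self.speed_factor = min(1.0, self.speed_factor * 1.1)
        return self.speed_factor
```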

Learning Behaviour

A learning robot adapts (optimizes) its behaviour based on past experience, the latter acquired through training. Consider, for example, a person whose disability reduces his/her ability to turn right, learning to navigate an electric wheelchair. In a learning phase, he/she first navigates the wheelchair through a complex test track full of obstacles. The joystick signals, the wheelchair positions and the positions where collisions occur are continuously registered, and serve as inputs and outputs to train an artificial neural network (ANN). This ANN contains all the necessary information about the driver's navigation skills and disability. During actual driving, the ANN assists the driver where necessary in safely executing the intended trajectory, thereby avoiding collisions with obstacles. This behaviour is called 'shared control', as the robot control function is shared between the driver and the control computer. The driver is given as much autonomy as possible, so that he/she keeps the impression of having full control.
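A minimal sketch of the training idea, with a linear model standing in for the ANN (all names illustrative): samples pairing the raw joystick command with the safe command actually needed are recorded on the test track, and a model is fitted that later corrects the driver's systematic bias.

```python
class SharedControlModel:
    """Toy stand-in for the trained ANN: learns a linear correction
    from raw joystick turn commands to the turn actually needed."""

    def __init__(self, lr=0.1):
        self.w, self.b = 1.0, 0.0  # start as a pass-through
        self.lr = lr

    def train(self, samples, epochs=200):
        """samples: (joystick_turn, safe_turn_needed) pairs from the
        training drives; fitted by stochastic gradient descent."""
        for _ in range(epochs):
            for x, y in samples:
                err = (self.w * x + self.b) - y
                self.w -= self.lr * err * x
                self.b -= self.lr * err

    def assist(self, joystick_turn):
        """During actual driving: return the corrected turn command."""
        return self.w * joystick_turn + self.b
```

For a driver who consistently under-turns right by a factor of two, training on recorded pairs teaches the model to double the commanded turn.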

Behaviour-based Robot Control

In behaviour-based control of robots, the task is expressed as a concatenation of elementary robot behaviours, like moving forward, avoiding an obstacle, moving in contact with a plane surface with a constant force (e.g. for window cleaning), etc. As an example, opening a door consists of a sequence of elementary behaviours: grasping the door handle, turning the handle, pushing or pulling the door open, and navigating through the door opening. Sensors signal the transition from one elementary behaviour to the next. Behaviour-based robot control mimics human behaviour when executing similar tasks.
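The door-opening sequence can be sketched as sensor-triggered transitions between elementary behaviours (the method names below are illustrative, not an actual robot API):

```python
def open_door(robot):
    """Chain elementary behaviours; each one returns True when its
    sensor condition signals completion, triggering the transition."""
    behaviours = [
        robot.grasp_handle,   # done when the gripper force sensor closes
        robot.turn_handle,    # done when the torque sensor feels the latch
        robot.push_door,      # done when the door angle exceeds a threshold
        robot.drive_through,  # done when the rear scanner clears the frame
    ]
    for behaviour in behaviours:
        while not behaviour():
            pass  # sensor-driven loop: repeat until the transition fires
```

Any object providing these four methods can be driven through the sequence; the sensors, not a fixed timetable, decide when each behaviour hands over to the next.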

Human/Robot Interaction

As stated earlier, human/robot (H/R) interaction is very important to promote and facilitate the use of robots in different tasks and environments. H/R-interaction can be non-physical or physical.

Non-physical H/R interaction can be achieved by programming (unilateral communication) or by one- or two-way voice communication (e.g. robot butler, museum guide, callable mobile camera for video conferencing). Brain–computer interfaces (BCIs) are starting to become a valuable alternative for robot control, e.g. for wheelchair control by severely handicapped persons (for instance with locked-in syndrome) (Figure 4). Capturing stimulated brain activity with external electrodes positioned on the skull surface enables one to generate control signals to the robot without physical interaction with the wheelchair, e.g. via a joystick. In the framework of the European FP6 project MAIA, in which the author participated, it was possible to devise three thought processes that could generate three distinct move commands (move straight ahead, turn left, turn right), enabling navigation of a wheelchair steered merely by thought. Honesty requires the author to say that parallel-acting obstacle avoidance algorithms were still needed to enhance the system's reliability.

Figure 4 Intelligent wheelchair controlled by a brain–computer interface (BCI).

Physical H/R interaction is important for programming-by-demonstration applications (welding), for lifting aids (the 'iron nurse' for handling bed-ridden patients, power lifting assistants in industry enabling load sharing between human and robot for transporting and positioning heavy objects), or for rehabilitation robots, where motions are 'softly' imposed on the patient's limb through so-called impedance control.

An important H/R communication mode is 'shared control', already mentioned above for the wheelchair application. Issues in shared control are: estimating user intention and capturing user models from joystick signals, navigation training, and feedback (haptic, tactile, visual) to the user to enhance control performance, using methods and tools such as Bayesian estimation and ANNs. The ideal, now pursued by the author's research group in the framework of the new FP7 European project RADHAR, is the so-called 'single button' wheelchair. The patient buys a wheelchair and can immediately start using it in a safe starting mode by pushing one button. During its use, the wheelchair gradually gets smarter and adapts its behaviour to the particularities of the user.

Natural interfaces are needed to speed up the acceptance of robots that are used in the vicinity of humans. It is clear that people would accept robots in the same room if they were able to talk to them in a natural language and if the robot were to react in a humanlike way. Robots with genuine emotions are still far away and it is even doubtful that we will ever be capable of creating such robots. The Japanese pet robots we can see on YouTube only exhibit pseudo-emotions, being stereotypical, pre-programmed reactions to incoming stimuli.

A nice example of a robot with an easy-to-use natural interface is the robot for laser laparoscopy, named Vesalius, equipped with a writing interface, developed by the author and his team. The robot has four motion degrees of freedom and serves to move the laparoscope in such a way that it rotates around the so-called 'trocar point', the point where the laparoscope enters the body of the patient. By doing so, the skin at the trocar point is not mechanically loaded. The laparoscope tube contains the laser beam and the camera, mounted parallel to each other. By manipulating the position of the laser beam, tissue can be ablated on an organ located inside the body. In manual laser laparoscopy, the surgeon moves the laser beam manually along the desired trajectory. In robotized laser laparoscopy, a joystick must be used to control the laser beam position. Because this is not a natural way of control, in Vesalius the joystick has been replaced by a pen with which the surgeon 'writes' the desired trajectory on a digitizing tablet, or directly on the endoscope image rendered on a touch screen. This is a very natural way of controlling the robot position. It also has several secondary advantages: hand tremors are avoided, and high ablation trajectory accuracies can be obtained by introducing a scale factor between the motion of the pen and that of the laser beam.
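The pen-to-laser mapping idea can be sketched as follows (the scale factor and filter window are assumed values, not Vesalius parameters): pen strokes are low-pass filtered to suppress tremor and scaled down so that large, comfortable pen motions yield small, accurate laser motions.

```python
from collections import deque

class PenLaserMapper:
    """Sketch of the writing interface: a moving-average filter removes
    hand tremor, and a scale factor shrinks pen motion for accuracy.
    scale=0.1 means a 10 mm pen stroke becomes a 1 mm laser move."""

    def __init__(self, scale=0.1, window=5):
        self.scale = scale
        self.history = deque(maxlen=window)  # recent pen samples

    def map(self, pen_x, pen_y):
        """Map one pen sample on the tablet to a laser target position."""
        self.history.append((pen_x, pen_y))
        n = len(self.history)
        avg_x = sum(p[0] for p in self.history) / n
        avg_y = sum(p[1] for p in self.history) / n
        return avg_x * self.scale, avg_y * self.scale
```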

Using robots in 'normal' laparoscopic surgery brings an important drawback: the surgeon has no haptic or tactile feedback. Because he/she controls the surgical tools from a distance (teleoperation or master/slave operation) with a master joystick, he/she does not feel the interaction forces occurring between the tools and the tissue, for instance during suturing. He/she also cannot palpate the tissue with a finger to detect the presence of hard spots or the position of arteries, as he/she would do during open surgery. Therefore, haptic feedback methods are needed and are being developed by the author's team, in which the tool/tissue interaction forces are measured and reflected back to the master station from where the slave robot is controlled. To make tele-palpation possible, an artificial robot finger measures the tactile image and sends it to a tactile actuator (e.g. a ring with an array of pins around the surgeon's finger). The array of pins renders the tactile image onto the surgeon's finger, yielding the same perception as if he/she were palpating the organ during open surgery.
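The force-reflection principle can be sketched in a few lines (the scaling and safety limit are assumed values, not the team's implementation): the measured tool/tissue force is amplified, so that subtle contacts become perceptible, and clipped before being rendered on the master device.

```python
def master_feedback(tool_force_n, force_scale=2.0, max_force_n=5.0):
    """Force reflection sketch for teleoperated surgery.

    tool_force_n: tool/tissue force measured at the slave, in newtons.
    The force is scaled up so subtle contacts are felt at the master,
    then clipped to a safe maximum before being rendered on the joystick.
    """
    f = tool_force_n * force_scale
    return max(-max_force_n, min(max_force_n, f))
```

A 1 N suturing contact is rendered as 2 N at the master, while a 10 N collision is safely clipped to 5 N.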

Group/Swarm Behaviour

In many instances a group or even a swarm of robots has to work together to achieve a complex task. An industrial example is an assembly system consisting of several robots grouped around a common transport system to assemble a complex product, such as a mobile phone. Every robot can do part of the job, and some can do several jobs, but they all have to work together to achieve the global goal: assembling a certain number of mobile phones of sufficient quality while meeting a certain deadline. Visionaries envisage a future where swarms of thousands of robots work together to clean up garbage piles, where thousands of robots are launched into space on interplanetary missions to execute exploration and exploitation tasks on remote planets, or where clouds of 'intelligent dust' (nanocameras) are released for spying and intelligence missions.

The author's research group has developed several powerful methods to control groups of machines (robots). They are based on two concepts: holonic systems and stigmergy. A holonic system consists of a group/swarm of autonomous agents (holons) working together to achieve a common global goal. It is situated between the strictly hierarchical system (only fixed rules, no flexible strategies) and the heterarchical system (no fixed rules, only flexible strategies). A hierarchical system is very predictable as long as everything goes right, but it has no flexibility when something goes wrong, making the whole system stop; a heterarchical system is extremely flexible, but no predictions can be made about its performance. A holonic system sits in between: it may start as a hierarchical system, but when something goes wrong (e.g. a robot breaks down) the holons start interacting with each other, on a local basis, to find a solution. This solution will be suboptimal, but the system keeps functioning; it exhibits graceful degradation. A global solution is obtained from local interactions between holons. Holonic systems exhibit emergent, hence unpredictable, behaviour, and this is one of the major criticisms of the approach. A V-shaped flock of geese is a nice example of how local rules (keeping one's distance to one's immediate neighbours) give rise to global behaviour (the V-shaped flock).
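One way to picture the local renegotiation after a breakdown is a contract-net-style auction (an illustrative sketch, not the author's actual implementation): the surviving, healthy holons bid their estimated cost for the stranded task, and the cheapest bidder takes it over. The global assignment that emerges from these local interactions is suboptimal, but the system keeps functioning.

```python
def reallocate(task, holons):
    """Contract-net-style sketch of local task renegotiation.

    Each healthy holon bids its estimated completion cost; the lowest
    bidder accepts the task. Returns the winning holon, or None when
    no holon can take over (the task then stays queued)."""
    bids = [(h.estimate_cost(task), h) for h in holons if h.healthy()]
    if not bids:
        return None  # graceful degradation: task waits, system runs on
    cost, winner = min(bids, key=lambda b: b[0])
    winner.accept(task)
    return winner
```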

Inter-holon communication can take several forms. An interesting one is based on stigmergy, the way ant colonies forage. Ants do not communicate directly with each other; instead, they deposit information (pheromones) in the environment, which is picked up by the other ants in the neighbourhood.
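Stigmergic coordination can be sketched with a shared pheromone field (a minimal one-dimensional illustration): agents deposit into the environment, deposits evaporate over time, and other agents simply read the strongest nearby trail, with no direct agent-to-agent messages.

```python
class PheromoneField:
    """Minimal stigmergy medium: a 1-D row of cells holding pheromone.
    Agents deposit into cells; pheromone evaporates each tick; other
    agents read only their local neighbourhood to pick a direction."""

    def __init__(self, size, evaporation=0.1):
        self.size = size
        self.field = [0.0] * size
        self.evaporation = evaporation

    def deposit(self, cell, amount=1.0):
        self.field[cell] += amount

    def tick(self):
        # Evaporation makes stale information fade away automatically.
        self.field = [v * (1.0 - self.evaporation) for v in self.field]

    def best_neighbour(self, cell):
        """An agent at `cell` follows the stronger adjacent trail."""
        left = self.field[cell - 1] if cell > 0 else -1.0
        right = self.field[cell + 1] if cell < self.size - 1 else -1.0
        return cell - 1 if left >= right else cell + 1
```

The environment itself is the communication channel: an agent depositing at cell 3 steers a later agent at cell 2 towards it, without the two ever exchanging a message.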

These two concepts have been combined by the author's team into one powerful formalism, called the PROSA (Product, Resource, Order, Staff) architecture, and applied in several case studies, ranging from controlling a huge painting plant for car bodies, through controlling a fleet of crop harvesters and their unloading trucks and a robotic assembly plant with five robots around a pallet transportation system (shown in Figure 5), to a flexible manufacturing system for long weaving loom parts. Another application where local rules resulted in global behaviour was a system of two mobile robots pushing a large box to a far-away target position. Cheating the system by unexpectedly moving the target position (indicated by a lamp detected by light sensors on the robots), or by increasing the friction with the ground on one side of the box, did not make the system fail, proving its robustness against disturbances.

Figure 5 Holonic assembly cell consisting of 4 robots, 1 mobile manipulator connected by a pallet transport system, and working holonically together to assemble a variety of products.

Outlook

Robots are high-tech artefacts with an ever-increasing level of ‘intelligence’.

The well-known futurist Raymond Kurzweil foresees that in 2020 a $1000 desktop computer will have the processing capability of one human brain, and that by 2030 machines will claim to be self-conscious and these claims will be widely accepted. He further asserts that the increasing complexity and deployment of huge numbers of nanorobots, as described in the SF novel Prey by Michael Crichton, may give rise to dramatic, unforeseen emergent behaviour.

MIT robot guru Rodney Brooks asserts that ‘Robots of today are like the PCs of 1978’.

It is the humble opinion of the author that these statements are all grossly exaggerated. Robotics researchers will have a tremendous job to even start creating artefacts that exhibit primitive humanlike behaviour, like self-awareness, genuine emotions, etc. So long as these problems are not solved, robots will fail to act as valid companions to humans. Nevertheless, robots, in their present primitive shape, at least compared with humans, continue to serve humanity in a remarkably positive way. The negative aspects, such as creation of unemployment by the industrial use of robots, are, all in all, marginal.

More information can be found on http://www.mech.kuleuven.be/en/pma

Hendrik Van Brussel (born 1944) is Emeritus Professor in mechatronics and automation at the Faculty of Engineering, Katholieke Universiteit Leuven (K.U. Leuven), Belgium, and past chairman of the Division of Production Engineering, Machine Design and Automation, Department of Mechanical Engineering. From 1971 until 1973 he was active in Bandung, Indonesia, establishing a Metal Industries Development Centre, and was an associate professor at Institut Teknologi Bandung. He was a pioneer in robotics research in Europe and an active promoter of mechatronics as a new paradigm in machine design. He has published extensively on different aspects of robotics, mechatronics and flexible automation. He is a Fellow of SME and IEEE. He holds honorary doctorates from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Germany, the 'Politehnica' University of Bucharest, Romania, and the 'Transilvania' University of Braşov, Romania. He is a Member of the Royal Flemish Academy of Belgium for Sciences and Fine Arts, a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA) and a Foreign Associate of the National Academy of Engineering (US). He is an Active Member of CIRP (International Academy for Production Engineering), of which he served as President (2000–2001), and past President of Euspen (European Society for Precision Engineering and Nanotechnology).
