Engineering Faculty, Civil Engineering Department
Third Year Major Research Project

A CRITICAL EXAMINATION OF USING HUMANOID ROBOTS IN CONSTRUCTION: Exploring the Potential Risks through Artificially Intelligent Behaviour

Supervisor: John May

DECLARATION The work described in this project report is all my/our own unaided effort, as are all the text, tables, and diagrams except where clearly referenced to others.

Signed: Peter Neville          Date: 23/04/2008

Signed: Baiju Vaidya           Date: 23/04/2008


Acknowledgements

Firstly, we would like to thank John May for his supervision and assistance with our report. We would also like to thank Dr Clive Frankish of the University of Bristol Psychology Department, who guided us in a previously unfamiliar area. Thanks also go to Dr Zeyang Xia of the Robotics and Automation Department at Tsinghua University, Kayla Kim of the Robotis Corporation, and Junichira Maeda of the Shimizu Corporation, for their support and for information on current humanoid technologies. Finally, thank you to the Singularity Institute for Artificial Intelligence for providing an insight into AI.


Executive Summary

The main objectives of this report are to highlight the crucial discussion points for the future of Artificial Intelligence, and to show how these will determine the nature of the risks of using humanoid robots in construction. In this report, 'humanoid' refers to a humanoid robot: an autonomous robot in the form of a human, which can be advantageously deployed to perform tasks in a range of environments [i]. Construction is one of the largest industries in the world, and one which humanoids could greatly benefit.

Introduction
The limited history of robotics and the rate of progression in technology have emphasised how important it is to consider the risks of tomorrow's artificial agents today. Looking at the potential of humanoids at present has shown that they could be ideal for use on construction sites. The main needs for humanoids in construction were found to be the limits of human capabilities, more effective and efficient site operation, and safer working practices.

Artificially Intelligent Behaviour
By studying cognitive psychology and human internal processes, various limitations that humanoids in construction may face became apparent. These included inadequacies in sensation, perception, recognition and attention. Research into modern theories of AI led to two possible futures for intelligence in artificial agents: one in which there is a limit at basic intelligence, and one in which this limit does not exist. Following this, a functional analysis was carried out for each AI future, setting out what functions a humanoid would need for construction.

Risk
For basic intelligence, a risk assessment was carried out and showed that the majority of consequences were detrimental to the construction industry in terms of human resources and productivity. With highly intelligent humanoids, it was shown that the major risks lie in a lack of human control. Masked by the potential benefits of using humanoids, construction companies could underestimate the complexity of control needed, and so believe that their control is sufficient when it is not. If this is the case, the risks could be catastrophic.


Contents

Introduction
  A Brief History of Robots and Humanoid Robots
  Case Study: Honda – The development of Asimo
  Current Uses of Humanoid Robots
  What are the Needs for Humanoid Robots in Construction?
  Automation in Construction Now
  Inhibitors to Progress and Recommendations
  Where is Industry Heading?

Artificially Intelligent Behaviour
  Introduction to AI Behaviour
  Psychology of Artificial Intelligence: How a humanoid might behave
  Agency and Artificial Intelligence – Intelligent Agents
  Possible application and issues regarding humanoids for construction tasks
  Investigation by Functional Analysis Method

Risk Assessment – Humanoid Robot: Weak AI (Basic Intelligence)
  Risk Assessment table

Risk Assessment – Humanoid Robot: Strong AI (High Intelligence)
  Intelligence Levels
  Control
  Ethics

Final Discussion

References


Introduction

A Brief History of Robots and Humanoid Robots
The era of robots, in particular humanoid robots, is in its infancy. Much of the work in this field dates only to the last century, but this has laid vital foundations that can be built upon. Our efforts in understanding a humanoid robot's behaviour and estimating the potential risks associated with its use have to stem from the results previously established by inventors, scientists and theorists.

Humanoid robots are not yet in operational use, and the research currently under way is progressive but limited. In addition, it is probable that humanoids will emerge as specialists in certain tasks (similar to the automated robots used today) before ones with generic intelligence materialise. A study of the development of robotics through time will provide an insight into how the technology has progressed, and hence a better understanding of the direction in which robotics is heading.

In the past, little research has been carried out on humanoids, but more on robots in general. Only recently has the work on robots been advanced enough to create humanoids, so much of the history lies in the broader field of robotics, rather than specifically on humanoids. In reality, a humanoid is simply an autonomous robot but its overall form is based on that of a human. Advances in humanoid robots will mainly be through their AI capabilities, as will be shown in the following section. This means that the humanoid could one day have the ability to interact with people using natural language, self-learn, and think in order to fulfil its goal.

The word robot originates from the Czech word robota, meaning slave-like labour [1]. Its clear purpose, from its original conception, was to be a simple aid to the human; to perform tasks the human did not want to, or should not have to, perform. The history of robotics is very vague, partly due to false claims of robots having been built in the past, but mostly due to the definition of the word robot. An actual mechanical agent with the ability to make decisions, known as a robot, has only truly been around for the past century. However, the ideas and designs of an 'inanimate creation' that performs various tasks have been around for centuries. Various 'machines' have been constructed in the more distant past with the ability to perform one sole action, but can these be defined as robots? It is more logical to say that the technological age of the last century, which has created detailed, reliable electronics and a vast increase in computing power, has 'added brains to brawn', so to speak. These machines now have the ability to convey a sense that they have intent or agency of their own.

Agency is a philosophical concept of an agent being able to make choices and impose those choices on the world [2]. Robotic engineers consider only mental agency. This is judged not on the appearance of the machine, but on the way its actions are controlled. For example, a typical remote-controlled car is not considered to be a robot; however, self-controlled cars, like the 1990s driverless cars of Ernst Dickmanns, could be classified as robots [3]. On the other hand, for many laymen, a CNC (Computer Numerical Control) milling machine is not considered to be a robot, but a factory automation arm or a humanoid is. Interestingly, the CNC milling machine has a very similar control system to a robot arm; however, the perception of human-like features makes people feel that the machine is not simply a machine but is aware of its surroundings, and is thus a robot. Due to this uncertainty, different countries set various guidelines for what can be classified as a robot [4].

The first major design of a humanoid was by Leonardo da Vinci in around 1495, roughly 500 years before the first humanoid was ever constructed. The design is of a mechanical knight able to sit up, wave its arms and move its head, and is said to be based on the Vitruvian Man. Prior to the construction of robots, complex machines existed with the ability to perform certain tasks. It was from these that robots came into being, and in turn from robots that humanoids were created. It is claimed that the first electronic, autonomous robot was created by William Grey Walter in Bristol in the early 1950s. Capable of sensing light and detecting contact with objects, it was able to navigate easily [4]. Following this, the first modern robot, named the Unimate, was created in 1954 by George Devol [5]. Within the next six years it was sold to a General Motors plant to lift hot pieces of metal from a die-casting machine and stack them.

In 1971, Miomir Vukobratovic and his team at the Mihajlo Pupin Institute, Serbia, built the first active anthropomorphic exoskeleton, meaning it had uniquely human attributes but in a non-human form. This led to the construction of the first full-scale anthropomorphic humanoid robot. Wabot-1, built at Waseda University, Tokyo, had the ability to communicate with a person, measure distances and judge direction using external receptors, and could hold and transport objects using tactile sensors on its hands [6]. It consisted of limb-control, vision and communications systems. This breakthrough in technology was the first glimpse that man could appear to give life to an inanimate object. In quick succession, further similar humanoids were produced, each trying to outdo the other. Like the space race, Japan and the USA had a small technological race for humanoid creation. Laboratories and research centres were set up, and a great deal of money was given to fund R&D in this field.

Since Wabot-1, each new humanoid created has borne more similarities not only to the human form, but also to human behaviour. Primarily, the evolution of humanoids came through improved vision, increased speed and an increase in the degrees of freedom. Attention then stepped up to designing a humanoid able to walk on uneven surfaces, creating an artificial respiratory system, and manufacturing a silicone skin.

Case Study: Honda - The development of Asimo [7], [8], [9]

Before undertaking a study on humanoids, a general understanding of a current humanoid's components and formation is necessary. Understandably, this delves into the more technological aspects of the humanoid and could be greatly detailed. Furthermore, the variety of humanoid structures is extensive, and so for the purpose of this introduction, a brief and concise case study will be carried out on arguably the most advanced humanoid today: Asimo, created by Honda.

Figure 1

In 1986, Honda set themselves the task of constructing a two-legged humanoid robot that had some abilities similar to that of a human. Their goal was to ultimately create a partner for people, a new kind of robot that functions in society.

Prior to Asimo, a series of development robots were constructed. In the early experimental models (E0 – E3), from 1986 to 1991, the focus was on creating legs that could simulate the walking motion of a human. The second stage in the development of these models (E4 – E6) took place between 1991 and 1993, during which the motion of the legs was developed for stabilisation and stair climbing. Honda's first series of prototype humanoid robots (P1 – P3) was developed between 1993 and 1997; a head, body and arms were added to the robot to improve stability and add functionality. The evolution from P1 to P3 came through decreases in weight and height, and through its aesthetics; on the whole making it more similar to a human. In 2005, the latest version of Asimo was unveiled. Having surpassed all of its counterparts, Asimo has the ability to walk and run (both feet off the ground between paces) on uneven surfaces, register and interact with objects, recognise certain programmed faces, and comprehend and respond to voice commands.

Weighing 52 kg and standing 120 cm tall, Asimo was the smallest model produced so far. Through anthropometric research, it was found that this combination of height and weight made the humanoid more ergonomically optimal. Asimo has 34 degrees of freedom, meaning that its motion is more fluid and dynamic. A major development from the prototype models is Asimo's ability to change direction while running. This, along with Asimo's advanced communications technology and increased sensory perception, has assisted Asimo in realistic movement and actions.

Recent developments have meant Honda can now enable humanoid-humanoid interaction. This has been through developing information transfer systems between Asimos. Through this, a range of Asimos can interact, and deduce the most time-efficient way to complete a task. In addition, Asimo can now also detect a variety of surrounding movement and thus can identify oncoming people. It can then choose the most appropriate path so that it does not block the movement of others. Most recently, a new space-saving battery recharging station has been created, and through the latest technology, Asimo will automatically recharge at the nearest station when its battery falls below a critical point.

A concise technological understanding of what can be called a 'model' humanoid of today provides a solid foundation from which the report can progress. If an understanding of an as yet undeveloped future humanoid is to be gained, this technical case study of Asimo is necessary. All of the recent developments mentioned have made the idea of humanoid use on construction sites more of a reality, and so from this trend it is logical to suggest not only that humanoids will be present on a construction site in the very near future, but that they could have numerous uses.


At this stage in the development of Asimo, Honda's main aim is to put R&D funds into intelligence and AI capabilities, as this is the difference between a limited-use humanoid and a next-generation humanoid. This aspect of AI is what will be investigated in the following section, Artificially Intelligent Behaviour.

Current Uses of Humanoid Robots
Before looking at the uses a humanoid robot has in our society today, it is essential to look at why it has them. Only then can future uses be appropriately assessed. The main areas of humanoid R&D are motion, perception-based control, interaction, and upper-limb control [10]. The major flaw in motion is the use of a humanoid's legs: not only is speed very limited, but the ability to adapt to height changes in the ground also causes significant movement issues. Another major flaw is the humanoid's limited ability to interact with its environment and with other humans. These two flaws alone have limited the potential uses of humanoids for health and safety reasons. Therefore, the main uses at present are those in which the flaws are not exposed.

Humanoids in entertainment are a concept we are all aware of. Even before humanoids were in use, science fiction films such as Star Wars entertained us with humanoids like C-3PO. Now, Universal Studios has released Ursula [10], a humanoid solely for entertainment; she will walk, talk and entertain crowds. However, her abilities are very limited, owing to the two major flaws discussed above. Sony also released a set of 'dream robots'; however, because they were to be made available for children, their height had to remain at 0.6 m and their strength capabilities minimal.

Humanoid robots are only now being developed to work in close proximity with humans. In 2005, Wakamaru, created by Mitsubishi, was released to provide companionship to elderly and disabled people. Its cost is relatively low at $14,000 in comparison with other humanoids being developed; however, this is in keeping with its limited abilities. Its functions include reminders to take medicine and calls for help if it suspects something is wrong.

At this point in time, humanoid robots are still undergoing research and development. With the idea of humanoid use still being in its very early stages, all that have been made so far are, in principle, prototypes that are being tested. The uses mentioned above, however, are not the main ones forecast for the humanoid's future. More important are the potential impacts that a humanoid could make in industry, particularly the construction industry.

What are the Needs for Humanoid Robots in Construction?
In order to discover the needs for humanoids on construction sites, it is necessary to understand how a typical construction site operates. The main flaws in the construction industry will be identified, and the robots that are used in the industry today will be examined. Inhibitors to the progress of robotics in construction, and ways to improve development, will also be reviewed. Only then can it be understood where a humanoid could fit in, what tasks it could carry out, and who it would need to communicate with. It is believed that there are needs for humanoids in construction, and by understanding what they are, it will be seen how the construction industry of tomorrow could be a safer and more efficient place.

A crucial aspect is to identify where humanoids could fit into the industry. It is clear that when humanoids are first implemented on construction sites, they will take on the role of a construction worker, receiving orders. A higher-ranking humanoid is probably a very futuristic scenario, and might not be seen unless humanoids are commanding other humanoids. To begin with, humanoids will most likely be used to assist human construction workers in carrying out simple tasks such as lifting objects. Most importantly, they will need to communicate with other robots or human workers in some way. Perhaps certain workers on site will receive special training to allow them to give orders successfully to a humanoid. How this might occur will be examined later in the report.

By looking at flaws in the industry, and the challenges that face construction now and in the future, drivers for the acceptance of humanoids can be deduced. If humanoids are a realistic vision for the near future, these drivers for change must be established and understood. In Britain alone, there are over two million people working in the construction industry, making it the country's largest employer. In the past 25 years, nearly 3,000 people have died, and many more have been injured, as a result of construction work [11]. Nowadays, fatalities and serious injuries are much rarer; however, they still inevitably happen, which many people regard as unacceptable in this day and age.


There are various reasons for accidents occurring, the main one being human error, which will always be present. It is human nature, and cannot be helped, that people have idle tendencies, become tired, do not always stay vigilant, and overlook errors. As long as humans continue to work on construction sites, accidents will occur. The point being implied here is that humanoids might ease this problem, although it is not as simple as that. Humanoids may also be equally unsafe, if not more so than humans, especially in the early stages of use. To what extent this may be the case is unknown; assessing the risks to try to clarify the situation would appear to be the only responsible course of action.

The arguments are complex, and there is an important temporal element. An idea that should be considered is that humanoid use in the near future could result in far fewer accidents occurring in the more distant future. Risks concerning humanoids in their early stages may be plentiful, but it can be hoped that these safety issues will be eliminated with time. Surely, the prospect of a construction industry involving humanoids with no accidents associated with them is a driver for change.

A flaw of the industry today, which may not be immediately obvious, is human labour, and the shortages or unreliability of its supply; the Swiss construction sector, for example, is suffering from a declining number of workers [12]. The reason this is not focused upon a great deal is that there is no current viable alternative to the problem. Absence from work or sick leave costs the UK economy around £1.75bn a year, with back pain being a major cause of time off, so understandably much loss occurs within the construction industry [13]. Other factors that incur losses include paying for training and holidays. Understandably, if humanoids are used in construction, then far fewer losses would occur. Capital costs for the humanoid units would be high, especially to begin with, but the life-cycle costs might be lower.

In many places around the world, shortages of skilled labour are a big problem, and humanoids could be implemented to ease this shortage. Additionally, more and more young people are opting for university education rather than learning a trade, which is another explanation for the increasing shortage in some areas. It may be argued that humanoids will be used where there is still a good supply of labour, putting people out of work. But the dissemination of these robots is likely to be very slow, and their abilities are unlikely to match those of humans for some time after their debut. Therefore, any unemployment due to this is not seen as being of major significance, as it will be a natural progression of society and industry.


Labour is the resource most critical for progress in any project, and its efficient use must be harnessed in order to achieve project effectiveness [14]. Building sites are subject to many delays, such as waiting for certain work materials, and when work is rescheduled as a result, workers can be left with nothing to do whilst still being paid.

Although the limit of human capabilities is not seen as a cause of inefficiency, this is again because there is no current alternative. As well as some of the problems facing construction being potentially reduced by using humanoids, productivity on site could be bolstered. Humanoids could be designed with various attributes for certain tasks; there will no doubt be different 'breeds'. They could potentially be faster, stronger and more precise than human beings. With this arise issues of safety: for example, if a humanoid has greater strength than a human, this could be a huge risk factor and there may need to be a trade-off, which is focused on later in the report. If ways can be developed of confidently mitigating these risks, though, then the prospects are highly desirable.

In some ways, the use of humanoids could improve sustainability. For instance, many workers in the construction industry use cars, as work is always changing and often in remote parts. In the days of humanoids, there could be one delivery to and from site.

Certain areas of construction are high risk, such as mining, tunnelling, or any type of deep excavation, and humanoids could be employed to reduce the risk of harm to human life. That is not to say a humanoid robot will be dispensable, but the value of human life will always be greater than that of a machine. Mining in South Africa is more dangerous than in any other country, with over 200 deaths in 2007, an increase from the previous year [15]. Instances like this contribute to the drive for a change in attitudes and a move towards artificial intelligence.


Automation in Construction Now
Various automated robots used today have been examined in order to understand the current state of the technology and how we are heading towards a future where AI will be a significant feature. Automated robots in construction must control their own performance in carrying out sequences of operations. Due to the nature of a construction site, which is prone to variation, much unlike a well-ordered factory, they must be highly suited to their purpose. For this reason, the robots found in construction at present will only carry out very specific tasks.

The Shimizu Corporation in Japan have developed prototypes and working construction robots including a concrete power-floating machine and a wall climbing painting robot [16]. The concrete power-floating machine is a good example of how productivity of the human task it performs can be raised. Used regularly in Japan, it provides a means of acquiring a good finish to the concrete surface in a third of the time that a team of construction operatives would have taken.

The ‘RoadRobot’ is a fully automatic road paver developed by the German company Joseph Vogele AG [17]. It is a masterpiece of automation and was developed to pave roads automatically, improve quality and reduce costs. Not surprisingly, the RoadRobot is the most expensive among Vogele road pavers, but certainly the most impressive [18]. Regular pavers certainly have automation of individual functions but this will not satisfy future needs. By linking all of the operating functions to form an overall automatic system, the human operator can direct their skills to fewer jobs such as monitoring output to ensure highest quality is being maintained.

A surface preparation system nicknamed 'BIBER', developed by a partnership of companies, is an ingenious way of automatically removing, preparing or restoring surfaces, including wall facades, ships and tanks [19]. It was developed as a major labour-saving device and as a way of improving quality and lowering costs. The system comprises a toolhead, a telescoping lifting unit and a vacuum cleaner. The innovative toolhead allows the removal of rough or old coatings and the scrubbing of large areas. The telescopic lifting arm can reach up to 60 metres, saving the costs of scaffolding that would otherwise have been used, and the vacuum picks up the loosened particles, which are contained while the air is filtered.


Inhibitors to Progress and Recommendations
The development of robotics in construction is more difficult than in other industries for a number of reasons. The industry is extremely diverse and has to cope with a unique set of circumstances on each site. The seemingly unorganised nature of the work, temporary works, and weather issues are some of the main contributors to the barrier against robotics in construction. Much investment is also needed [20]. That said, progress has been made and, if it continues, civil engineering will see more autonomous machines replacing humans where safety and efficiency are paramount. These issues are likely to become even greater driving forces as time progresses.

In more general terms, one of the greatest inhibitors to progress is the lack of advancement in AI. It is a huge challenge to develop a humanoid that can see and hear, and that possesses all the other senses it needs in order to work with humans. An important concern will be checking how much information a humanoid is processing and what it will do. If it is unable to process certain information, or data is handled incorrectly, the humanoid could cease the task or carry out the wrong function; this may be a risk to itself and to others on the construction site. Giving the power of thought to a machine can be a very dangerous thing, so human control must always be established.

In order to reach a future where humanoids work in construction, the problems facing automation in general in the industry should be tackled. One of the greatest inhibitors to the introduction of robotics is the lack of integration between design and production. When these processes are separated, designs do not take account of constructability issues and therefore do not cater for the needs of automation. A more logical approach to these processes is required if the benefits of robotics are to be fully realised.

Construction companies can make a huge difference by developing strategies for embarking on R&D programmes. They should form alliances with other relevant players in the construction process. It could prove advantageous to focus on designers and engineering firms with the ability to innovate. Clients and customers are also able to contribute by encouraging innovative uses of technology.


Much can be learnt from Shimizu's SMART (Shimizu Manufacturing system by Advanced Robotics Technology) programme, which has looked at redefining the construction process [21]. Its aim is to recreate the ordered working environment of a factory on the construction site. This is achieved by using CIC (Computer Integrated Construction), which automates a number of processes including the erection and welding of steel frames and the placement of precast floors and walls. With this system in place, construction processes can become more standardised, and project durations and man-hours can be greatly reduced.

As we have seen, the robots used now can only function in very specific scenarios, and if construction processes do not change, this will cause setbacks for the robots of the future. Shimizu is taking the SMART approach so that the application of robotics can be less problematic. In turn, when humanoids are used, they will function much better in environments that are consistent and less prone to variation.

Where is Industry Heading?
In the coming years we can expect to see much more attention given to the navigation and sensing (especially vision) of robots. There will be more investment in R&D and more standardisation of components. There are likely to be changes in construction processes, tending towards simplified assembly and fixing, and offsite prefabrication will increase.

Just as in the 1980s when technology was progressing rapidly and the possibilities for automation in construction were realised, we are now reaching a stage where attitudes are changing. The vision of humanoid robots working in construction is becoming more widespread and companies are seeing them as a seriously viable option.


Artificially Intelligent Behaviour

Introduction to AI Behaviour
Prior to analysing any form of risk that a humanoid robot of the future could cause on a construction site, it is vital to have a fundamental understanding of cognitive psychology and the mental processes that determine its behaviour. These mental processes are based on those operating in the human brain, such as sensation, perception, problem solving, attention, communication and memory. By analysing these, important design issues and limitations will begin to surface. Focus will be given to the possible ways a humanoid could work on a construction site, and a functional analysis will be carried out in order to help define what tasks will be necessary. Specific, concrete scenarios will help to further identify possible limitations.

Due to the conceptual nature of this report, it is near impossible to carry out a straightforward risk assessment of humanoid robots, whose future is so ambiguous. As a result, one of the focal aims of this section of the report will be to look at the essence of a humanoid, or of any future system for that matter: Artificial Intelligence (AI). This will help to estimate the extent to which a humanoid will be able to act of its own accord. By exploring some of the compelling arguments concerning AI, estimations will be made as to what level of intelligence could actually develop in the future.

It should be noted that this section of the report will not focus on the technical details of artificial intelligence and what can and cannot be currently achieved. This study looks at a time in the future that cannot be designed for at present, and makes assumptions about what may be possible. The aim is not to design a prototype humanoid, but rather, the work aims to make the argument that it is essential to think critically about the future use of humanoids in construction. A side effect may be that it is also useful to a designer of humanoids for construction, helping to keep in mind certain design issues that will reduce the likelihood of risks occurring - a critical but often overlooked part of defining requirements for a humanoid.


Psychology of Artificial Intelligence: How a humanoid might behave

Cognitive Psychology
Cognitive psychology is the scientific study of the mental processes underlying behaviour. It is studied here to help understand the limitations of a humanoid's capabilities. This section will highlight the relevant principles of mental processes, how they relate to humanoids, and what potential limitations could arise as a result.

Before humanoids can be used in construction, we must ask ourselves: how is information about the external world transmitted to the humanoid’s processor; and what design problems must be solved by robotics engineers for sensory processing? It is difficult to investigate relevant mental processes as there are few humanoids at this time and technology is limited. Suitable assumptions and predictions will be made based on a number of key sources.

Information Processing [22]
Information processing is based on mental processes that acquire, interpret and transform mental representations, from perception to memory. Input is registered through the appropriate sensors and transferred to a short-term store, where decisions occur and a response is the outcome. As humans, we also transfer information from a short-term memory store to a long-term, or permanent, memory store, as shown in Figure 2. Humanoids will also need a memory in order to recognise objects and perform tasks. The question that arises is: will a humanoid be able to create its own memory by processing information in the same way as a human, or will it merely manipulate information? An attempt at tackling this question, along with the broader issue, can be found in the subsection Agency and Artificial Intelligence – Intelligent Agents.

Figure 2
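As an illustration only, the multi-store flow just described (sensory input into a short-term store, with selective transfer to a permanent store) could be sketched as a simple data structure. The class and method names below are hypothetical and do not correspond to any existing humanoid control software.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class MemorySystem:
    """Minimal sketch of the multi-store model in Figure 2 (hypothetical names)."""
    short_term: deque = field(default_factory=lambda: deque(maxlen=7))  # small, rapidly overwritten store
    long_term: dict = field(default_factory=dict)                       # permanent store, keyed by label

    def register(self, percept):
        """Sensed input enters the short-term store first."""
        self.short_term.append(percept)

    def consolidate(self, label):
        """Selectively transfer the most recent item into permanent memory."""
        if self.short_term:
            self.long_term[label] = self.short_term[-1]

    def recall(self, label):
        """Recognition later depends on matching new input against what was stored."""
        return self.long_term.get(label)
```

Whether a humanoid that merely maintains such stores is genuinely creating memories, or only manipulating information, is exactly the question deferred to the Intelligent Agents subsection.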


Sensation and Perception [22], [23]
Sensation is the detection of simple properties such as brightness, colour, hardness or loudness. Perception is the interpretation of these sensory signals, which facilitates object recognition and the identification of properties such as size and location. The distinction between the two may not be entirely clear; properties that can be directly sensed may be rather complex. This could have huge repercussions in the case of humanoids. The essential requirement for their sensory system is a network mechanism for translating stimulus energy into electrical signals and a means of differentiating between stimulus qualities.

The processes of perception are largely automatic for humans, but they are very complex for humanoids, which may require a large amount of processing to perceive a single object. The first stage of object recognition is to group the elements of the visual array that correspond to different objects. A humanoid will need sufficient means of distinguishing a figure or object from the ground behind it. Humans are able to use their vast knowledge and common sense in order to perceive the world around them, but visual images could prove ambiguous for a humanoid. For instance, how will it differentiate between size and distance; shape and rotation or distortion; colour and illumination?

Perceptual information makes contact with meaning, for example when recognising objects and faces, or reading and comprehending words. In order to identify sensory input, a humanoid will need to match it against its stored memory bank. For coherent cooperation with human workers, designers should aim to match the efficiency and response time of around 200 ms that humans are capable of. This is a difficult task due to the extreme variability that exists in the sensory input.

Recognition [24]
In the film The Terminator [51], the humanoid robot is fitted with a camera-like device which presents annotated output to an internal control post; this annotation relates to the concept of a 'homunculus', which is used to illustrate the functioning of a system and can be viewed as an entity or agent. An example of this is illustrated in Figure 3. If a similar approach is developed for a humanoid for construction, then a question is raised: how will it recognise the objects that are annotated in the visual output?


Figure 3

One possible solution to the question is template theory [Neisser, 1967]. Recognition is achieved by choosing the template stored in memory that provides the best match to the sensory input. It is a simple idea and is already used by non-human recognition systems such as bar code scanners. Problems arise due to the enormous number of templates that the robot's memory would need, and also because the similarities and dissimilarities that exist between stimuli and templates are not specified. Another theory that could form the basis of object recognition for humanoids is feature matching. Patterns are recognised in terms of features, which are fragments or constituents of a larger unit. It works well in that a finite number of features could potentially combine into an infinite number of objects; but feature matching only works in fairly simple domains, such as printed-letter recognition, and faces difficulty in the recognition of real objects. Both these theories are possibilities for future humanoids but face limitations.
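A minimal sketch of the template-matching idea is given below, assuming that sensed patches and stored templates arrive as equally sized 2-D intensity arrays; the correlation measure, the threshold, and the idea of a dictionary of labelled templates are illustrative assumptions rather than part of any proposed design.

```python
import numpy as np

def normalised_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Correlation between a sensed patch and a stored template of the same shape."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float((p * t).mean())

def recognise_by_template(patch: np.ndarray, templates: dict[str, np.ndarray],
                          threshold: float = 0.8) -> str | None:
    """Return the label of the best-matching template, or None if nothing matches well.
    The weakness noted above is visible here: every object needs its own stored template."""
    best_label, best_score = None, threshold
    for label, template in templates.items():
        score = normalised_correlation(patch, template)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```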

Possibly the most realistic solution is recognition by components [Biederman, 1987], or geon theory. Objects can be described in terms of small sets of geometrical parts named geons. The 24 established geons are simple 3D shapes like cubes, cones and wedges, each one having 15 sizes and builds. Representation of an object consists of an array of constituent geons, as well as a description of the spatial relations among them. There are 81 possible ways to join them and statistical analysis shows there are over 10 million possible objects that can be constructed from two geons. Some geons are illustrated in Figure 4a with examples of objects they can combine to make.


Figure 4a

Figure 4b

Many everyday objects can be built out of a small number of geons into instantly familiar shapes. The geon method of recognition could work very well for humanoids, as it would allow them to confidently recognise the objects on a building site as well as complex objects such as animals and humans. Figure 4b shows how this can be achieved by breaking the process down into several stages, from simple to complex, until the desired level of confidence is reached. Geon theory stands as a strong contender for recognition systems but still faces a key limitation: distinguishing between different faces would be problematic due to the largely generic geon construct of the face. Natural objects, such as a tree or a puddle, would also cause difficulties. Geon theory works well for artefacts but, due to the aforementioned limitations, it would not be sufficient alone as a basis for recognition.
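To make the idea concrete, the geon approach of describing an object as a small set of parts plus the spatial relations between them could be sketched roughly as follows. The geon names, relation labels, stored models and the crude overlap-counting match are all invented for illustration and are not Biederman's actual formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeonPart:
    geon: str      # one of the small fixed set of 3D primitives, e.g. "cylinder", "brick", "wedge"
    relation: str  # spatial relation to the object's main part, e.g. "main", "above", "side-attached"

# Hypothetical structural descriptions built from constituent geons and their relations.
OBJECT_MODELS = {
    "bucket":      {GeonPart("cylinder", "main"), GeonPart("curved_rod", "above")},
    "wheelbarrow": {GeonPart("brick", "main"), GeonPart("cylinder", "below"), GeonPart("rod", "side-attached")},
}

def recognise_by_components(observed_parts: set[GeonPart]) -> str | None:
    """Pick the stored model sharing the most geon/relation pairs with the observed parts."""
    best_label, best_overlap = None, 0
    for label, model in OBJECT_MODELS.items():
        overlap = len(model & observed_parts)
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label
```

The limitation discussed above shows up directly: two different faces, or a tree and a bush, would decompose into nearly identical part sets, so the overlap count could not tell them apart.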

The human brain can interpret images of very unfamiliar objects. It may not be able to recognise or name the object, but it can describe it in detail: shape, surface texture, orientation, size, position, and colour. Computer vision programs for humanoids in construction ought to be able to do the same sort of thing if they are to be truly useful in the future.

Attention [25]
A humanoid will be able to sense a large amount of information at any one time, but selectivity is required to keep this information to a manageable size. It would therefore need the attribute of attention in order to behave successfully on site, which would filter out most irrelevant information. Broadbent's filter model [1958], shown in Figure 5, represents sensorial information as identifiable balls, with attention symbolised by a Y-shaped tube. The tube can only accept one ball at a time, with a hinged flap acting as a filter.

Figure 5


Information is held in a temporary store before it is reported, and it undergoes rapid decay. A humanoid will need to be able to switch the filter from one channel to another, which could take longer than it does for a human. This process is automatic for humans, and it highlights another area of development that is required for success in humanoid AI. Name activation for channel entry could be used to grab a humanoid's attention, but as tasks become more complex and humanoids take on more responsibility, they should work of their own accord without constant prompting. A humanoid that exhibits a high level of intelligence will understand natural language and various gestures, and so will achieve a high level of attention. Whether or not this level of intelligence can be reached, there will be a long period during which robots can only understand limited subsets of language. Communication processes are likely to be slow and laborious, with possible consequences including misinterpretation of commands.
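A toy sketch of the single-channel filter idea follows: several input channels buffer information briefly, only the currently attended channel is reported, and unattended items simply decay. The channel names, buffer size and decay rule are invented purely to illustrate the behaviour described above.

```python
from collections import deque

class AttentionFilter:
    """Very rough analogue of Broadbent's filter: one channel is reported at a time."""

    def __init__(self, channels, buffer_size=5):
        self.buffers = {name: deque(maxlen=buffer_size) for name in channels}
        self.attended = channels[0]          # the 'hinged flap' starts over one channel

    def sense(self, channel, item):
        self.buffers[channel].append(item)   # everything is registered briefly...

    def switch_to(self, channel):
        self.attended = channel              # ...but moving the flap is an explicit, possibly slow step

    def report(self):
        """Only the attended channel passes on; unattended buffers decay via their maxlen."""
        buffer = self.buffers[self.attended]
        return buffer.popleft() if buffer else None

# e.g. a site humanoid attending to its supervisor's voice while machinery noise is filtered out
attention = AttentionFilter(["supervisor_voice", "machinery_noise"])
attention.sense("supervisor_voice", "stop")
attention.sense("machinery_noise", "loud bang")
print(attention.report())   # -> "stop"
```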

Agency and Artificial Intelligence – Intelligent Agents
Agency is the capacity of humans, animals, robots, or 'agents' to make choices and act upon them; it will prove to be vital in understanding the behaviour of humanoid robots. So how do agency and AI link together? AI is the study of intelligent agents: ones that perceive their surroundings and take certain actions in order to perform their tasks [26]. In terms of intelligence, the simplest agent would be a single-tasked computer programme solving a basic problem. On the other hand, the most complex agents are humans; the capability of the brain is, as yet, unmatched.

There is a format by which a humanoid, or any intelligent agent for that matter, relates to its environment. The humanoid perceives information from its environment through its sensors, and responds with the relevant actions through its actuators. Abstractly, an agent is a function from percept histories (P*) to actions (A), illustrated in Figure 6 [27]:

f: P* → A

Figure 6
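The abstract mapping f: P* → A can be written directly as an interface. The sketch below uses a purely reflexive, table-driven rule set; the percept strings and condition-action rules are placeholders for illustration, not a proposal for a real controller.

```python
from typing import Callable, Sequence

Percept = str
Action = str
AgentFunction = Callable[[Sequence[Percept]], Action]   # f : P* -> A

def simple_reflex_agent(percept_history: Sequence[Percept]) -> Action:
    """Maps the whole percept history to an action; this simple agent only uses the latest percept."""
    latest = percept_history[-1] if percept_history else "nothing"
    rules = {                                # hypothetical condition-action rules
        "obstacle_ahead": "stop",
        "voice:forward": "step_forward",
    }
    return rules.get(latest, "wait")

history: list[Percept] = ["voice:forward"]
print(simple_reflex_agent(history))          # -> "step_forward"
```

The gap between weak and strong AI discussed next can be read in these terms: a weak-AI humanoid is limited to fixed mappings of this kind, whereas a strong-AI humanoid would be able to revise the mapping itself in the light of what it learns.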


When analysing potential risks, what level of intelligence should be assumed for humanoid robots operating on a construction site? The main discussion within this section of the report is what level of artificial intelligence can physically be achieved. Theoretically, it is assumed that, as with any new technology, humanoid intelligence will continue to increase with time. However, within the philosophy of AI, compelling arguments exist both for and against 'strong AI' prevailing in the future. If strong AI does not prevail, there will be an eventual standstill in the progression of AI; this is what is known as 'weak AI'. In this report, when identifying risks further on, weak AI humanoids are those possessing the basic attribute of intelligence, but whose intelligence has reached a threshold from which it cannot increase. Strong AI, however, is the case where this barrier is not present, meaning that intelligence levels have the potential to increase to a level similar to that of a human, or even beyond.

Strong AI [28]
In essence, for strong artificial intelligence to prevail, theory states that a digital computer, which is ultimately what lies at the heart of a humanoid, can be programmed to be a mind. This means that it should be able to exhibit the cognitive states ascribed to human beings [29]. The theory of strong AI sets certain specifications which must be met in order for a machine to be deemed intelligent. With regards to humanoids, if these properties can be exhibited, different behavioural patterns will exist to those displayed by a humanoid solely capable of weak AI.

Specification – N.B. Specifications vary from source to source; however, the most important properties with regard to humanoid robots are shown below [29]:

  • Ability to reason, and make judgements under uncertainty.*
  • Ability to demonstrate common sense.*
  • Ability to learn and adapt information.*
  • Ability to plan a course of action.
  • Ability to perceive external stimuli.
  • Ability to communicate with humans (using Natural Language Processing, NLP).
  • Ability to recognise, feel and display emotion.*
  • Ability to manipulate its environment out of free will.
  • Ability to demonstrate a state of consciousness.*
  • Ability to be aware of its presence with respect to its environment.


It is clear that some of these stipulations have already been met: a humanoid has the ability to perceive stimuli, communicate rudimentarily with humans, and in some respects understand and manipulate its environment. Given these developments, it can almost be declared certain that humanoids of the future will have enough technology to perform some functions very well. However, the major points that would determine strong AI are marked with an asterisk above. If these are achievable, the humanoid will be a completely different entity.

Weak AI
The concept of weak AI tells us that, in the case of humanoids, they have the ability to act intelligently and simulate cognitive processes. However, unlike strong AI, the weak AI system is not itself a cognitive process [30]. As mentioned above, the weak AI system shares some properties with strong AI, and these are clear in today's humanoid research and development. However, if strong AI is impossible, the weak AI position implies that there will be a dead end to AI advancement. This would mean that the future of humanoids would be greatly restricted with regards to intelligence.

Complexities of AI [31], [32], [33], [34], [35]
The debate lies in whether strong AI is possible. Many AI academics believe that strong AI is achievable, perhaps not in the very near future, but within the next 20-30 years. Some, on the other hand, believe that it is physically unfeasible for strong AI to prevail, as for this to occur the computational system would have to mimic the exact processes of the brain, which is considered impossible.

On the other hand, the AAAS (American Association for the Advancement of Science) believes that by 2029, machines and humans will eventually merge through devices implanted in the body to boost intelligence and health [36]. IBM, together with a Swiss university team, is to create the first ever computer simulation of the entire human brain [37]. This is being done in order to allow scientists to discover more about exactly how the brain works; if projects like this can be successfully completed, the possibility of mimicking a brain-like system on a robot is very feasible. However, all of this is simply speculation, and two fundamental questions arise that create flaws in this research: how can a brain be mimicked if we as humans do not even fully understand how it works? And, even if prototypes are constructed, is there actually any way of developing them for use in real-life situations? Overall, the case for strong AI is based on much theoretical suggestion, whereas the arguments against strong AI are based more on logic and common sense. With regard to the scientific argument, it is becoming more and more possible to understand brain processes. If, hypothetically, the brain is fully understood, there is no reason to believe that in time a neuroanatomical map of the brain cannot be reproduced on a computer system [38].

However, there exist three fundamental differences between a brain and a computer which could potentially limit their similarities. Firstly, computers are digital systems, whereas the brain is largely an analogue device in which activity varies continuously. Secondly, digital computer systems are serial devices, executing one instruction at a time, whereas the brain is a parallel-processing device. Thirdly, software code and brain cells differ in that code can be programmed to undertake a variety of tasks, whereas many brain cells are dedicated to one purpose only. For example, cells in the visual cortex respond only to lines slanting in a particular direction [32].

The rest of the strong AI argument is based on the ability to generate a mimic, brain-like structure. With technological advancements, some consider it possible that in about 20 years it will be possible to imitate the brain's circuitry using man-made electronic components. Counter-arguments state that even if this is possible, it is logically impossible for computational systems to feel emotions, experience consciousness or understand the actions that they perform. Without these, can a humanoid have the properties (those highlighted previously) of strong AI?

The discussion above has raised some other important uncertainties. Can only an analogue, parallel-processing, dedicated device be intelligent? Can AI developers ignore human psychology and philosophy and get on with their job of building an artificial thinking machine that may be very different from the brain? It is clear that an understanding of human psychology and philosophy is needed. If AI is to progress, it needs a solid basis of reference, and what better basis of reference is there than the human brain, the most intelligent system known? However, perhaps it should only be used as a reference and not completely duplicated. If it seems that the route of recreating the human brain is impossible, maybe it is meant to be this way, and other routes to creating intelligent systems should be considered.


The Turing Test & Chinese Room Experiments [28], [29], [39]
The Turing test was an experiment proposed by Alan Turing in the mid-20th century. It is still regarded as one of the strongest bases for a machine to demonstrate intelligence. The test involves having a human judge engage in natural-language conversation with one human and one machine. Both have the aim of trying to appear human; after a series of questions, if the judge cannot decide which is which, the machine is considered to be intelligent. The strength of the test arises from the fact that any question can be asked of either participant in the hope that one set of answers will be distinguishable from the other, and as yet no machine has been able to pass the test. From this, it seems that if a humanoid in the future were able to pass the test, its role on the construction site would benefit, and human-humanoid interaction would be successful.

However, strong evidence also lies against the credibility of this test. One major flaw is that the test is explicitly behaviourist, meaning it only tests how the subject acts. A machine may pass the test by replicating human conversational behaviour, but this is only due to algorithms set to perform string substitution and canned responses [40]. Moreover, with regard to humanoid use, surely it is more necessary to have intelligence in cognitive processes such as reasoning and the ability to learn, rather than simply being able to replicate a human conversationally.

In this way, Searle uses the Chinese room argument as a thought experiment to persuade people of the uselessness of using AI concepts in psychology. Searle rejects the notion that if the Turing test were passed, a human-like robot would really understand and have real intent similar to or exceeding that of a human (strong AI). Rather, he holds that such robots are merely manipulators of information (weak AI) and insists that the conceptual content of AI ideas cannot help to describe or explain mental processes as such, since minds possess intentionality whereas computers do not. Searle declares that intentionality is a biological phenomenon, just as dependent on the underlying biochemistry as photosynthesis is, for example [32].


The Future of AI
All of these theories and concepts are not conclusive in suggesting what our future will be like in terms of artificial intelligence, due to the compelling opposing arguments. But they do help the report in showing that it would be wrong to consider only one definitive possible future. They have also shown that the implications of strong AI prevailing, or of weak AI being maintained, will be widely divergent. This means that the future of AI will determine the behaviour, and so inevitably the future, of humanoids. Therefore, when considering the risks involved with using humanoids in construction, two futures will be explored: one where only weak AI is possible, and the other where strong AI prevails.

Possible application and issues regarding humanoids for construction tasks [41], [42]

On outdoor construction sites, not every worker needs to be an expert in the job they are carrying out. It is possible for many tasks to be carried out by one expert and one novice. It is realistic for the applicable tasks to be carried out by a novice humanoid partnering an expert human, perhaps even by many humanoids for bigger tasks. This could be a viable option for countries where labour forces are shrinking and societies are ageing, such as China and Japan, where much of the associated research and testing is carried out.

Figure 7 shows the leg module of a humanoid being developed today in Japan. Each hip has 3 degrees of freedom, the knees have 1, and the ankles have 2, which equates to 6 DOF per leg. This is good progress by today’s standards, but when more complex tasks are asked of a humanoid, it will need many more DOF, for example, in order to ascend a ladder, or crawl into an excavation.

Figure 7

Horizontal wobble occurs during walking due to mechanical factors, and presents problems in stability. The torso should be kept relatively stable, as with human motion, in order to avoid toppling. This can be achieved by:

- Allowing the swinging leg to make as large a range as possible, while also being able to change the support stance rapidly in the event of loss of stability.
- Providing a smooth change in the centre of gravity during bipedal motion.

For level ground, flat-footed humanoids can gain maximum contact friction with the ground surface. But for uneven ground, this would result in an uneven distribution of loading on the various contact points, leading to instability. To overcome this, flexible foot bases would be required to match the ground surface. Also, each joint throughout the humanoid's system should not only be designed to simulate human movement, but also to match the range of the respective human joint.
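A rough sketch of the stability idea above is to keep the ground projection of the humanoid's centre of gravity inside the support polygon formed by the current foot contact points. The point-in-polygon test below is a standard ray-casting routine; the contact coordinates and the centre-of-gravity position are invented numbers for illustration only.

```python
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: is the point (x, y) inside the closed polygon of contact points?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical foot contact points (metres) forming the support polygon, and the projected CoG.
support_polygon = [(0.00, 0.00), (0.25, 0.00), (0.25, 0.35), (0.00, 0.35)]
cog_projection = (0.12, 0.18)

print("stable" if point_in_polygon(*cog_projection, support_polygon) else "risk of toppling")
```

A real controller would of course work dynamically rather than with a static check, but the sketch shows why a flexible foot base matters: on uneven ground the contact points, and hence the polygon, shrink or shift.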

There are two possible methods of information transfer that we can expect to see in the future: voice command and pressure-sense operation. Voice command would be more intuitive and reduce the need for training. Pressure-sense operation, on the other hand, is slightly more complex: the robot is guided by the pressure applied, and its action is then adjusted to match the forces applied by the human worker. This type of process would be applicable to carrying an object together, such as a panel or sections of formwork, for example.

Basic voice-command systems have been developed today in which a human worker wears a portable controller that mounts a voice recognition application with a microphone for input. Voice data is converted into minimum task commands by the software and transmitted to the humanoid's motion planner via a radio LAN link. Numerical commands such as "one metre forward" should be avoided in order to minimise voice recognition error in the system. Instead, more abstract commands could be used, like those demonstrated in Figure 8, which is adapted from work undertaken by Kawada Industries [41].

Voice Command      Motion Command (mm vectors)
Forward            MOVE (50,0,0)
Backward           MOVE (-50,0,0)
Right              MOVE (0,50,0)
Left               MOVE (0,-50,0)
More forward       MOVE (200,0,0)
More backward      MOVE (-200,0,0)
More right         MOVE (0,200,0)
More left          MOVE (0,-200,0)
Stop               MOVE (0,0,0)
Quit               QUIT

Figure 8
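The table in Figure 8 amounts to a straightforward lookup from abstract voice commands to pre-set motion vectors. A minimal sketch of that dispatch step is shown below; the transport over the radio LAN link is left out, and the command set simply mirrors the figure rather than any actual Kawada Industries software.

```python
# Voice commands mapped to motion vectors in millimetres (x, y, z), mirroring Figure 8.
VOICE_TO_MOTION = {
    "forward":       (50, 0, 0),
    "backward":      (-50, 0, 0),
    "right":         (0, 50, 0),
    "left":          (0, -50, 0),
    "more forward":  (200, 0, 0),
    "more backward": (-200, 0, 0),
    "more right":    (0, 200, 0),
    "more left":     (0, -200, 0),
    "stop":          (0, 0, 0),
}

def to_motion_command(voice_command: str) -> str:
    """Convert a recognised voice command into the string passed to the motion planner."""
    command = voice_command.strip().lower()
    if command == "quit":
        return "QUIT"
    if command not in VOICE_TO_MOTION:
        return "MOVE (0,0,0)"          # unrecognised input: stay put rather than guess
    x, y, z = VOICE_TO_MOTION[command]
    return f"MOVE ({x},{y},{z})"

print(to_motion_command("More forward"))   # -> "MOVE (200,0,0)"
```

Keeping the vocabulary this small and abstract is exactly what limits the impact of the voice recognition errors noted above.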


Further on in time, it is expected that the system will become more compact in size, perhaps solely contained within the humanoid’s hardware. Also, we can expect to see a system that integrates voice and pressure sensing.

Investigation by Functional Analysis Method [42], [43]

The performance and capability required of a humanoid for construction work have been investigated. Figure 9 shows the primary functions that a humanoid must possess. The secondary functions of individual performance must be realised for full satisfaction of the primary function to occur. They also represent what must be complied with in order for humanoid use to be worthwhile. The functions displayed are vital for actual construction work in an outdoor environment with human cooperation.

Primary Function: Secondary Functions

Ability to Move: travel over irregular terrain; vary speed of movement; change direction of movement; generate a movement course.
Ability to Work: carry sufficient weight; work at a suitable speed (aim to match a human); range of limb movement; flexible hands; accurate positioning (within millimetres); long operation hours.
Ability to Transfer Orders and Intention: communicate with humans by means of voice recognition or pressure sensing.
Ability to be Reliable: stable against falling over; sufficient protection from impacts; recognises the danger of a potentially reckless move; low frequency of failure.
Ability to have Knowledge: interpret construction site data; knowledge of materials; capable of some memory; sufficient knowledge of site regulations.
Ability to Recognise: image recognition, perhaps by geon theory; detection of states (solid, liquid, etc.).
Ability to Judge: small degree of common sense.

Figure 9

These functions represent what is required of a humanoid for construction work. They concern a humanoid with weak AI: one that is new or recently developed, or one whose intelligence has reached a limit or dead end.
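These requirements can be read as a specification checklist. A minimal sketch of that idea follows; the data structure mirrors part of Figure 9, while the candidate capability set is invented purely for illustration:

```python
# The functional analysis of Figure 9 treated as a checklist: a candidate
# humanoid satisfies a primary function only if all of its secondary
# functions are met. The capability set below is purely illustrative.

REQUIREMENTS = {
    "Ability to Move": ["travel over irregular terrain", "vary speed of movement",
                        "change direction of movement", "generate movement course"],
    "Ability to transfer Orders and Intention": [
        "communicate with humans by voice recognition or pressure sensing"],
    "Ability to Judge": ["small degree of common sense"],
    # ... remaining primary functions omitted for brevity
}

def unmet_functions(capabilities: set[str]) -> dict[str, list[str]]:
    """Return, per primary function, the secondary functions not yet satisfied."""
    return {primary: [s for s in secondary if s not in capabilities]
            for primary, secondary in REQUIREMENTS.items()
            if any(s not in capabilities for s in secondary)}

candidate = {"travel over irregular terrain", "vary speed of movement",
             "change direction of movement", "generate movement course",
             "communicate with humans by voice recognition or pressure sensing"}
print(unmet_functions(candidate))   # {'Ability to Judge': ['small degree of common sense']}
```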


A second functional analysis has been performed which considers a time when the tasks required of humanoids are far more complex, their perception systems are more advanced, and they have a real sense of intent; a time when strong AI exists, and in which a new set of risks will prevail. The functions stated here are additional to, or a more advanced form of, those stated in Figure 9.

Ability to Move: travel over a large variety of terrain, and ladders; degree of expressive skills.
Ability to Work: can operate a large number of tools/machinery.
Ability to transfer Orders and Intention: capable of smooth conversation using natural languages processes (NLP); understands fully and learns, not merely manipulating data.
Ability to be Reliable: high strength protection can resist many forms of damage; understands danger and acts to prevent it; near 0 frequency of failure; capable of homeostasis (self regulating); advanced vestibular system (balance).
Ability to have Knowledge: self-aware and can display sapience (wisdom); capable of short and long term memory, learning from it, and recounting perfectly.
Ability to Recognise: displays advanced level of perception: recognises multitude of objects and makes reliable guesses of unknown; motion processing; auditory feature extraction; face and eye detection; figure-ground segmentation; recognises self and other beings; gesture recognition; matches own behaviour to observation.
Ability to Judge: high level of commonsense (similar to human); can reason, strategise and solve problems.

Figure 10

By carrying out these functional analyses, areas where risks may develop begin to emerge. This gives a good starting point for delving into and imagining the sorts of risks that could arise on a construction site through using humanoid robots. From exploring the psychology of mental processes and investigating the future of AI, it has been found that weak AI will exhibit various limitations. The degree of object recognition and the level of common sense will be low, with a failure to cope with unknown situations, so much human assistance will be needed. On the other hand, even though the aforementioned limitations will not apply to a humanoid with strong AI (provided that strong AI triumphs), new forms of risk will transpire.


Risk Assessment – Humanoid Robot: Weak AI (Basic Intelligence)

Analysing the behaviour of humanoids and imagining their possible work applications has helped to identify the main categories where risks are most likely to arise. The risk assessment table shows these broken down into several sub-categories, and the next column briefly describes an array of risks. The likelihood, risk rating and cost of controlling each risk are estimated in terms of high, medium or low (H, M, L), and brief descriptions of consequences and suggested control strategies are given.

Risk likelihoods have been estimated and should not be taken too definitively, as it is difficult to predict the future. Risk is often defined as the product of likelihood and consequence, but here it is mainly a consideration of the risk to human life. Thus, high risk ratings are those likely to harm humans, and low ratings are those risks that merely disrupt the flow of work. Where no cost of control is stated, the risk is the result of a design issue, with no easy means of treatment on site other than warning human workers of the risks.
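As a small illustration of how such a rating scheme might be weighted (the function, scale values and thresholds are assumptions for illustration, not the exact procedure used in the table):

```python
# A minimal sketch of the qualitative rating logic: the potential for harm to
# people dominates the rating, while risks that merely disrupt the flow of
# work stay low. Scale values and thresholds are illustrative assumptions.

LIKELIHOOD_SCORE = {"L": 1, "M": 2, "H": 3}

def risk_rating(likelihood: str, harms_humans: bool) -> str:
    """Return an H/M/L rating biased towards risks to human life."""
    score = LIKELIHOOD_SCORE[likelihood]
    if harms_humans:
        return "H" if score >= 2 else "M"    # any realistic chance of injury rates at least M
    return "M" if score >= 3 else "L"        # disruption to work alone stays L unless very likely

print(risk_rating("M", harms_humans=True))    # H
print(risk_rating("M", harms_humans=False))   # L
```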

Because retrospective risks cannot be scrutinised, this risk identification process requires an imaginative and conceptual understanding of the circumstances. This is another reason why the topic is so important and must be considered now. In many cases it is easier to see the problem areas from which risks are likely to stem, which is why many of the consequences describe how a multitude of risks could arise; such ratings are high because the risk lies in the realms of the unknown. The most important risk is number 43, concerned with control. Its control strategy allows the human always to have the upper hand in terms of power, being able to shut down the humanoid system if it gets out of control.

Perhaps surprisingly, few risks actually pose a threat to humans. In fact, most of the risks are concerned with too much human involvement, a lack of productivity and hence an increase in costs. In reality, could this equate to a much bigger problem for construction companies than risks to human health?


No. / Category / Sub-category: risk description. Likelihood; consequence; risk rating; control strategy; cost of control.

1. Movement / walking on uneven terrain: trips and falls. Likelihood H; consequence: damage to self, others, or materials; risk rating H; control: take flatter route or avoid walking if possible (other transport?); cost of control L (H).
2. Movement / walking on uneven terrain: experiences difficulty. Likelihood M; consequence: inhibits progress to work; risk rating L; control: human assistance and/or above; cost of control L.
3. Movement / inadequate DOF/flexibility/speed: has trouble with performing all but the most simple tasks. Likelihood H; consequence: range of possible accidents, more hindrance than benefit; risk rating M; control: ensure humanoid is not working out of its depth; cost of control L.
4. Movement / inadequate DOF/flexibility/speed: causes obstruction to human worker. Likelihood M; consequence: disrupt work efficiency on site; risk rating L; control: allow designated work areas; cost of control M.
5. Movement / high strength: clumsy behaviour. Likelihood H; consequence: multitude of accident could form; risk rating H; control: ensure humanoid is not working out of its depth; cost of control L.
6. Movement / high strength: causes damage to materials and equipment. Likelihood M; consequence: loss of goods; risk rating M; control: only allow humanoid to handle what it is trained to; cost of control L.
7. Movement / high strength: harmful collision with human due to error in movement. Likelihood M; consequence: injury to human; risk rating H; control: ensure all workers are aware of humanoid risks; cost of control L.
8. Movement / high strength: trips over whilst carrying a heavy load. Likelihood L; consequence: same as 1, but heavy load makes worse; risk rating H; control: ensure heavy loads carried on stable ground; cost of control H.
9. Movement / vary movement: falls due to instability from changing speed or direction. Likelihood M; consequence: damage to self, others, or materials; risk rating M; control: keep speed and movement minimum for task; cost of control L.
10. Movement / generating movement course: falls or takes incorrect path from failure to identify ground materials/terrain. Likelihood L; consequence: same as 1 or delays work; risk rating M; control: disallow humanoid on complicated terrain; cost of control M.
11. Communication / via voice recognition: lack of clarity in human input. Likelihood L; consequence: range of misinterpretation errors; risk rating H; control: increase human-humanoid interaction training; cost of control L.
12. Communication / via voice recognition: laborious continual prompting and impatience on human part. Likelihood M; consequence: can lead to human negligence and errors forming; risk rating M; control: rotate shifts to work with humanoid; cost of control L.
13. Communication / via pressure sensing: human applies too high a pressure and humanoid loses balance. Likelihood L; consequence: same as 1, also more likely to involve human; risk rating H; control: increase human-humanoid interaction training (for human); cost of control M.
14. Communication / via pressure sensing: laborious continual prompting and impatience on human part. Likelihood M; consequence: can lead to human negligence and errors forming; risk rating M; control: rotate shifts to work with humanoid; cost of control L.
15. Communication / general: excessive human involvement. Likelihood M; consequence: losses in productivity/money; risk rating M; control: no solution, inevitable byproduct of humanoid use; cost of control -.
16. Sensing / large variation of sensory input: unable to distinguish between signals. Likelihood M; consequence: confused, takes incorrect action, range of risks emerge; risk rating H; control: only present simple scenarios to humanoid; cost of control L.
17. Sensing / large variation of sensory input: becomes inundated with large amount of input. Likelihood L; consequence: unable to function at all, hindrance on site; risk rating M; control: only present simple scenarios to humanoid; cost of control L.
18. Sensing / complex sensory signals: mistakes still murky water for ground surface. Likelihood L; consequence: becomes submerged and breaks down; risk rating M; control: ensure all water is cordoned off; cost of control L.
19. Sensing / complex sensory signals: confusion exists and data incorrectly processed. Likelihood L; consequence: inhibits perception process and unable to function; risk rating M; control: only present simple scenarios to humanoid; cost of control L.
20. Sensing / humanoid sensors: fails to detect unknown or unfamiliar sense. Likelihood L; consequence: multitude of risks could occur; risk rating H; control: do not employ humanoid in unfamiliar scenarios; cost of control L.
21. Sensing / humanoid sensors: fails to differentiate between stimulus qualities (i.e. colour vs. brightness). Likelihood L; consequence: incorrect detection and takes wrong action; risk rating M; control: clear and simple colour coding for humanoid tasks; cost of control M.


22. Perception / recognition: fails to recognise figure against ground (bad lighting). Likelihood L; consequence: collision and injury to human; risk rating H; control: always ensure good lighting; cost of control M.
23. Perception / recognition: mistakes human body part for material object. Likelihood L; consequence: injury to human; risk rating H; control: ensure human wearing PPE; cost of control L.
24. Perception / recognition: mistakes an object from failure to distinguish between size/distance or shape/distortion. Likelihood M; consequence: delays task, possible material/object damage; risk rating M; control: material/object clear presented to humanoid; cost of control M.
25. Perception / recognition: natural objects (trees, liquid etc) mistaken for other materials. Likelihood L; consequence: delays task, possible damage to humanoid; risk rating M; control: clear site appropriately for humanoid use; cost of control H.
26. Perception / processing: slow response time from difficulty in recognition process. Likelihood M; consequence: delays task; risk rating L; control: no solution, design issue; cost of control -.
27. Memory / stored knowledge: unable to deal with unknown situations which it has no memory or knowledge of. Likelihood L; consequence: halts task; risk rating L; control: only present known scenarios, inevitable with weak AI; cost of control L.
28. Memory / stored knowledge: fails to treat material with care due to limited materials knowledge. Likelihood L; consequence: damage to material; risk rating M; control: sensitive or unique materials prohibited from humanoid; cost of control L.
29. Memory / limited common sense: excessive human involvement due to poor initiative and independence. Likelihood M; consequence: reduction in human productivity; risk rating M; control: no solution, inevitable byproduct of humanoid use; cost of control -.
30. Memory / limited common sense: acts harmfully or unethically towards human. Likelihood L; consequence: injury or negative effect on human; risk rating H; control: largely design issue, warn humans of risks; cost of control L.
31. Memory / memory stored from experience (if possible): failure to remember the task being carried out. Likelihood L; consequence: need prompting and causes delay to human & humanoid; risk rating M; control: do not leave humanoid alone for long periods; cost of control L.
32. Memory / memory stored from experience (if possible): breaks rules due to failure in recalling new site regulations or ethical codes. Likelihood M; consequence: range of unsafe behaviour; risk rating H; control: ensure any new rule programmed into humanoid system; cost of control L.
33. Attention / channel selection: excess of information and unable to select correct channel. Likelihood L; consequence: incapable of functioning; risk rating L; control: keep commands short and simple, one human commander; cost of control L.
34. Attention / channel selection: humanoid out of control, unable to attract attention. Likelihood H; consequence: multiple risks could arise; risk rating H; control: reboot humanoid system; cost of control L.
35. Attention / channel selection: channels busy and information is lost from short term store. Likelihood L; consequence: delays task; risk rating M; control: repeat instructions; cost of control L.
36. Attention / process: excessive human involvement. Likelihood M; consequence: decrease in productivity; risk rating M; control: no solution, inevitable byproduct of humanoid use; cost of control -.
37. Reliability / preservation: fall or other accident, result of limited protection. Likelihood L; consequence: damage to humanoid; risk rating M; control: ensure sufficient protection, mainly design issue; cost of control M.
38. Reliability / preservation: fall/collision from regular bouts of instability. Likelihood M; consequence: damage to humanoid, and/or others; risk rating M; control: no solution, design issue; cost of control -.
39. Reliability / preservation: bad weather harms humanoid mechanisms. Likelihood L; consequence: temporary/permanent damage to humanoid; risk rating M; control: predominantly design issue, and avoid use in bad weather; cost of control M.
40. Reliability / health & safety: fails to recognise danger of reckless movement. Likelihood L; consequence: multitude of risk could occur; risk rating H; control: predominantly design issue, and avoid dangerous tasks; cost of control M.
41. Reliability / maintenance: machine fails/breaks down. Likelihood M; consequence: site obstruction, decrease productivity; risk rating M; control: ensure regular maintenance; cost of control H.
42. Reliability / maintenance: battery runs out of power. Likelihood L; consequence: as above; risk rating M; control: ensure battery charged every day; cost of control M.
43. Control: humanoid out of control, hazardous action towards human. Likelihood L; consequence: multitude of scenarios involving injury to human life; risk rating VH; control: one human command that will shut down humanoid system; cost of control L.


Risk Assessment – Humanoid Robot: Strong AI (High Intelligence)

Introduction

It was proposed in the previous section of the report that the risks involved with humanoid robots possessing strong AI and those with weak AI will be very divergent. Due to the uncertainties and complications that strong AI brings, a risk assessment cannot be performed in the same way as for weak AI. In fact, it would be infeasible and ineffective to try to determine the specific risks on a construction site; at present, there is simply not enough information or evidence about highly intelligent AI. Instead, different scenarios from which risks could evolve will be discussed and then linked with humanoid use in construction. "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it [44]."

Replicating the Brain

In the previous section, a strong basis for the prevalence of more highly intelligent systems was that they would mimic the human brain. If this is the case, the idea arises that surely the majority of human characteristics would be mimicked, including those that are undesired, such as human error. Human error is the single largest cause of accidents on any construction site, and a humanoid exhibiting this property would negate much of its original purpose. The likelihood of designers not resolving this issue is, however, very low, and it is almost certain that all strong AI humanoids designed for construction will eventually exhibit close to zero humanoid error. The question is how long this issue will take to resolve, and what will be deemed too long.

The human developmental process works by individuals, as children, possessing little or no stored cognition, but learning and adapting information in the early years of their lives. In this way, the future of higher intelligence humanoids could begin with them as an 'empty canvas', with the ability to learn what is needed for their purpose. With regard to construction, humanoids could be site specific: each site would involve different tasks, and these tasks would be taught to them on site rather than in their design and manufacture phase. If this were the case, the on-site supervisors training the humanoids would themselves need to be trained. These training schemes would no doubt be complicated, and unless the supervisors are successfully educated, poor training passed from them to the humanoids would result in wasted time and resources (both human and humanoid). In worst case scenarios, poor training could lead to humanoids malfunctioning or carelessly performing their tasks, and thus potentially causing major risk to human life. Perhaps the chances of this are minimal, as the training would be thorough and humanoids would be designed so that teaching them is simple and user friendly. But with numerous trainers and a wide variety of potential tasks asked of humanoids, the scope for failure due to poor training and task variation would be enormous.

Humanoid Consciousness

Consciousness is a difficult concept to delineate, but generally speaking it means awareness of one's existence, with full activity of mind and senses. A concept not fully understood with regard to humans is therefore extremely complex to associate with a man-made entity.

Given the height of intelligence that is expected if strong AI prevails, the ability of humanoids to emote and feel should be assessed when examining their potential risks. All evidence suggesting that highly intelligent systems may emote is theoretical and based on opinion, but if it proves correct, should a humanoid be permitted to possess this characteristic? In many ways, it can be argued that it is our ability to emote that provides us with the capability to reason, make judgements and adapt information, and it is for these capabilities that we wish to create strong AI humanoids. Conversely, it could be emotion that is detrimental to a humanoid's performance, as evidence from humans shows that emotion affects work rate and productivity.

Additionally, if this were the case, the applicability of humanoid rights would increase. With humanoids perceived as more of a life form than a machine, standards similar to those for humans would be set regarding their treatment. This could nullify the primary grounds for instating humanoids on construction sites in the first place. One of the major intentions for humanoids in construction is their prolonged use on site, and it is hoped that the site can potentially operate 24 hours a day in the future. But with the establishment of humanoid rights, laws and policies, we may almost reach the stage of equal treatment of humans and humanoids, in which case their benefits would dramatically decrease.

Another major factor of high intelligence humanoids on a construction site is human–humanoid relations. When dealing with weak AI, this does not apply as much; the limited features of a weak AI humanoid mean that the relations between a human and a humanoid will be more similar to those between a human and a computer. With higher intelligence, it is clear from human psychology that some humans may affiliate well with humanoids whereas others may find them threatening; with this, a new set of risks will emerge. Poor synergy between humanoids and other workers may lead to a reduced work rate, striking, and an overall negative impact on the construction industry. Perhaps human–humanoid relations will develop over time once humanoids have been suitably introduced into our society, but the question arises of how long it will take before humans accept humanoids and work together with them harmoniously. If we look at trends in discrimination today, this prospect may never be reached: if some humans cannot even accept other humans, how can our society ever fully accept humanoids?

Intelligence Levels

Within this report, strong AI has been classified as the potential of machine intelligence to have no limit. Assuming AI advances as current technology has, a machine's intelligence could theoretically increase without bound. The foundation of this statement lies in Moore's Law, which states that the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years [45]. With regard to intelligence, this means that a system's capacity for intelligence is increasing exponentially. Note that this does not negate the discussion in the behaviour section of whether strong AI will prevail, as the two points are separate issues. Various factors affect the credibility of Moore's Law in the future, but these delve into very detailed computational arguments [46] and so are not discussed within this report. What should be regarded, however, is that a major consequence of Moore's Law for artificial intelligence is technological singularity. This is a stage in AI development where a machine has enough intelligence to recursively self-improve; it will be able to rewrite its own cognitive functions and thus gain intelligence without human intervention [44]. The scope for intelligence is extensive, and whereas weak AI has a clear limit, if technological singularity occurs, machine intelligence could reach a level far beyond human comprehension.
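As a simple illustration of the doubling relationship described above (the starting count and time span are arbitrary assumptions, not figures from [45]):

```python
# Transistor count under Moore's Law: doubling roughly every two years.
def transistor_count(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward assuming exponential doubling."""
    return initial_count * 2 ** (years / doubling_period)

# Example: an illustrative chip with 100 million transistors today would be
# projected to exceed 3 billion transistors ten years from now.
print(transistor_count(100e6, 10))   # 3.2e9
```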


What is being dealt with here is a very open-ended discussion, and at present there is very little to draw ideas from, as there is no past frame of reference. With AI itself still in its initial stages, it seems almost absurd to delve into AI at a very high level. However, if technological singularity occurs, intelligence could increase exponentially of its own accord, and so this level of very high intelligence could be seen sooner than we think.

Human-Like Intelligence Level

A level of very high intelligence, almost equal to that of a human, could result in a complete shift of operations on a construction site. It is usually assumed that the role of a humanoid will remain at the bottom of the chain; from the introduction, it was found that robots in general were designed to undertake tasks the human did not want to, or should not have to, do. However, with this height of intelligence, construction companies may find it better to modify their resource management. If deemed suitable, should humanoids perform tasks with less or even no human supervision? This question can be rephrased as: how much testing and safety analysis has to be performed before allowing a humanoid to work unsupervised? Perhaps the amount of testing and safety analysis may alter depending on the frequency of failure from lack of human supervision. But is allowing this failure to occur acceptable?

Another scenario that could emerge due to this height of intelligence is role alteration. Initially, humanoids could have more of an input into their work rather than simply carrying out their tasks. Construction companies may use them in higher positions of responsibility, or even have certain humanoids operating a team of secondary humanoids on site. It is optimistic to believe that there will be a positive synergy on site between humans and humanoids of similar intelligence levels (as mentioned under human–humanoid relations), and whereas companies may believe that this will lead to a successful allocation of resources, a great deal of risk lies in this change of site hierarchy. The predominant question is that, if humanoid intelligence is high enough, could there ever be a time when a humanoid sits above a human in the chain of command?

Smarter-than-Human Intelligence

The most abstract of all the ideas involves deliberating over an entity that has more intelligence than a human. Just as our model of physics fails when we reach a black hole, our model of life will fail if entities smarter than humans are introduced [47]. The risks can be classified as non-anthropogenic existential risks [48], meaning that they are no longer at a local level (the construction site) but at a global level. No detailed outcomes can be deliberated, as the scenario is beyond comprehension. Two opposing views exist regarding smarter-than-human intelligence: one extreme suggests that it will be the end of human life as we know it, whereas the other holds that solutions will be provided to previously unsolvable problems.

With regard to construction, it could be said that this super-intelligence could take the construction industry from strength to strength. The major flaw in this argument is best represented by the Giant Cheesecake Fallacy: a super-intelligence could potentially construct colossal cheesecakes, but would it actually want to? This vision of impressive results jumps from machine capability to actuality without considering the issue of motive [44]. Additionally, if the construction industry evolves in magnificent ways due to super-intelligence, will this eliminate the need for any human involvement? The situation can be compared to playing chess against a superior opponent. It is unknown what moves they will play; if it were known, they would not be the superior player. All that is clear is that the game will not turn out as you wish, and it is almost certain that you will lose. In this sense, creating a smarter-than-human entity is a catastrophic risk for humans to take.

Control

Arguably the most important factor with regard to risk in this project on humanoid robots on construction sites is control. Control is the ability to have a dominating influence over something else; to regulate, manage and direct it. Control is vital on a variety of levels. In order to direct a command to a humanoid, substantial control is needed to make sure that the command is carried out. For more serious purposes, control is needed to restrain the humanoid from acting in a way in which it must not. Failures of control can cause wasted time and resources but, more seriously, can lead to human injury ranging from minor harm to loss of life.

With regard to weak AI, this concept of control is relatively straightforward. The humanoid's low intelligence means that it is easier to design it such that the potential risks are kept to a minimum, especially after analysing cognitive behaviour and carrying out a risk assessment. Due to the minimal intelligence, its actions can generally be pre-determined. However, an increase in intelligence levels results in an increase in uncertainty with regard to a humanoid's actions, and this poses a much greater risk. Furthermore, an increase in intelligence results in a changing context of AI behaviour: the level of risk does not simply increase linearly, as not only does the probability of risk increase, so does the level of catastrophe. When dealing with more intelligent humanoids, their actions are backed by intent, and if this intent is negative, it can be greatly detrimental to human safety.

As long as a great degree of control is maintained, a human's intention can be safely induced into an intelligent entity. When dealing with humanoids of similar intelligence to humans, control must be enforced to prevent intelligence levels from increasing further. If smarter-than-human intelligence exists in a machine, control has already been lost and would be impossible to regain.
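For the weak AI case, the kind of human override envisaged here corresponds to risk 43 in the assessment: a single human command that shuts the system down. A minimal sketch of that idea follows; all names are illustrative assumptions and no particular robot platform is assumed:

```python
# A minimal sketch of a supervisory override: one reserved human command
# halts all activity and is checked before any other command is interpreted.

SHUTDOWN_COMMAND = "halt"   # the one reserved command that always wins

class SupervisedHumanoid:
    def __init__(self):
        self.active = True

    def handle_command(self, phrase: str) -> str:
        if phrase.strip().lower() == SHUTDOWN_COMMAND:
            self.active = False
            return "system shut down by human supervisor"
        if not self.active:
            return "ignored: system is shut down"
        return f"executing: {phrase}"   # normal command path (not shown)

robot = SupervisedHumanoid()
print(robot.handle_command("more forward"))   # executing: more forward
print(robot.handle_command("halt"))           # system shut down by human supervisor
print(robot.handle_command("more forward"))   # ignored: system is shut down
```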

Ethics

Machine ethics is concerned with ensuring that the behaviour of machines towards human users, and perhaps towards other machines as well, is ethically acceptable. ERRN (the European Robotics Research Network) believes that robots will eventually be intelligent enough to be considered a species of their own [49]. If truly intelligent systems are present, society will be faced with an array of ethical problems.

Moor describes two types of ethical agent that could exist. The first is an implicit ethical agent, one that is programmed to behave ethically and whose behaviour is constrained by a designer who is following ethical principles. The second is an explicit ethical agent, one that is able to calculate the best action in ethical dilemmas using ethical principles; it can represent ethics explicitly and then operate effectively on the basis of this knowledge [49]. Broadly, explicit agents are associated with strong AI and implicit agents with weak AI. Developing an explicit ethical agent is the ultimate goal, as it would cope far better with unknown situations, but this will be a challenge because ethics has not been completely codified; it is a field that is still evolving.
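A toy sketch of Moor's distinction is given below; the rules, weights and scores are invented purely for illustration and do not come from any cited ethical framework:

```python
# An implicit ethical agent simply refuses actions its designer has forbidden;
# an explicit ethical agent scores candidate actions against stated principles.

FORBIDDEN = {"strike human", "ignore shutdown command"}      # designer's hard constraints

def implicit_agent_allows(action: str) -> bool:
    """Implicit ethical agent: ethics are baked in as fixed constraints."""
    return action not in FORBIDDEN

def explicit_agent_score(effects: dict) -> float:
    """Explicit ethical agent: weighs an action's predicted effects against
    ethical principles and returns a score (higher is better)."""
    weights = {"harm_to_humans": -10.0, "damage_to_property": -2.0, "work_done": 1.0}
    return sum(weights[k] * v for k, v in effects.items())

print(implicit_agent_allows("strike human"))                            # False
print(explicit_agent_score({"harm_to_humans": 0.0,
                            "damage_to_property": 0.1,
                            "work_done": 1.0}))                         # 0.8
```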

Machine ethics is important for a number of reasons. Intelligent humanoid entities will be capable of causing harm to human beings unless this is prevented by adding an ethical component to them. Additionally, human fear of intelligent machines stems from concern over whether they will behave ethically or not; popular culture is full of images of machines without any ethical code, such as The Matrix [50] and The Terminator [51]. Joy argues that the only antidote to such fates, and worse, is to relinquish dangerous technologies [52], but many people working in machine ethics research believe they can offer a more realistic solution. Perhaps if humanoids were designed so that they felt fear towards humans, control would be a far easier issue [53]. Finally, research in machine ethics could advance the study of ethical theory and help discover problems with current theories. Dennett believes that AI 'makes philosophy honest' [49].

One scenario that could emerge is that a humanoid begins by behaving ethically and then changes, deciding to behave unethically in order to secure advantages for itself. This is not as far-fetched a thought as it may first seem. After all, humans are far from perfect ethical agents: even though we are taught ethical principles, we tend to favour ourselves. So if humanoids have a human-like brain, they may act in their own interest; after all, we do, and most of us probably consider ourselves to be ethical. Dietrich contests this view, and argues that machines may actually have an advantage over human beings with respect to behaving ethically [54]. Humans may have evolved with a genetic predisposition towards unethical behaviour as a survival mechanism. We could now have the chance to create entities that inspire us to behave more ethically.

The discussion of humanoid consciousness above asked whether a humanoid should be able to feel emotion. If we reach an age where strong AI is possible and a humanoid has an artificial brain that works similarly to a human brain, is emotion inevitable anyway? This question cannot be answered at present, but we can continue to think about the prospect of humanoid emotion. It is a strange and disturbing thought to imagine a human being that possesses no emotion. If we want to create a humanoid to look like, and work with, humans, can it do this without emotion? Probably yes, but will it do so successfully? Maybe not; we simply do not know. Furthermore, emotions can equate to extreme forms of risk. Strong emotions such as hate and jealousy can cause humans to be overzealous; perhaps we should try to exclude this possibility from the robot domain [55]. An emotional robot may also ignore the laws it is programmed to follow, or its ethical principles.


At present, the only form of principles set out to govern the future of robots exists in the fictional stories of Isaac Asimov. His Three Laws of Robotics are as follows: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law [56]. Although popular in science fiction, these laws have been suggested to be flawed for actual implementation. If we imagine trying to set down rules for a machine that will operate in a world of concepts we do not yet know, the problem becomes apparent.

The Singularity Institute has recognised this and made suggestions as to what an intelligent agent should be able to understand in order to act ethically. The first is that AI agents need to understand evolutionary processes in order to display morality and ethics; humans display true altruism due to this involuntary understanding, and if humanoids were programmed to act solely for the benefit of others then we could feel much safer about their existence. Also, their source code should be made available for humans to view, essentially a way to see humanoids think; in this way, they will be easier to control and more trusted as entities. Additionally, they should be more economically and emotionally sentient: by understanding the value that humans place on things, it can be assumed that humanoids will not act in a way that is harmful to human society [57]. A great deal of responsibility lies with those who may eventually create an ethical machine. There is one thing that society should fear more than sharing an existence with intelligent machines, and that is sharing an existence with machines that do not behave ethically.


Final Discussion

This report has involved an understanding of a number of fields that were not originally anticipated, but were required in order to formulate a thorough risk assessment of humanoid robots in construction. The investigation of humanoid behaviour and artificial intelligence involved research into engineering, psychology, philosophy, sociology, ethics and IT. Preparing a risk assessment for the unknown future, of an entity not yet developed, meant that simply using an engineering approach was not enough.

By carrying out a risk assessment of humanoids which exhibit weak AI, one fundamental issue became apparent. An initial belief that the majority of risks would implicate harm to human life was disputed. It was uncovered that most of the risks were concerned with three main areas:

• Time – Many of the risks were concluded to result in a delay of the task being carried out. This was either through the humanoid not being able to perform the task, the task being carried out slowly, or the task being completed poorly.

• Human Resources – Due to the limited capability of the humanoid, many of the tasks require an excessive amount of human assistance.

• Material/Humanoid damage – Taking into account human commonsense and limited humanoid capability, if a malfunction occurred it was estimated more likely to cause damage to itself and/or materials it was working with.

The significance of these consequences is still high, because they result in losses for the construction industry with regard to productivity and cost. On the other hand, risks to humans are nonetheless present, and without significant humanoid development by designers, the introduction of weak AI humanoids to construction sites cannot be a realistic idea. If the harmful abilities of a humanoid are restricted in order to greatly reduce the risks to humans, then its benefits on site could also reduce. In order to trade off humanoid capability against human safety, designers need to devise ways of enforcing sufficient control for this level of intelligence whilst retaining the appropriate abilities.

Below is a flow diagram illustrating the possible paths that artificial intelligence could take and the effects of this on construction.


FLOW DIAGRAM SHOWING FUTURE OF AI

Present day: weak AI, with AI development continuing through time (NB: not to scale).

Is there a limit to AI development?
- Yes: dead end to AI development. Through further development, weak AI can be perfected and still provide benefits for construction.
- No: strong AI; development of higher intelligence continues.
  - Is there sufficient human control?
    - No: stop AI development.
    - Yes: has singularity been reached?
      - No: development continues under human control.
      - Yes: are all control measures in place, and will these measures be sufficient?
        - Yes: momentous benefits for the construction industry; intelligence can be controlled to the desired level.
        - No: intelligence levels will increase beyond human comprehension (smarter than human) and all control is lost.


It should firstly be noted that the flow diagram is based on theoretical ideas. It should not be taken as a definitive route, but it does bring to light many eventualities that must be considered in the present day. In the future there may be other variables that affect the outcome of AI which cannot be forecast.

From this diagram, three major outcomes have been exposed. The first is that AI development reaches a dead end. Even though a range of risks has surfaced concerning weak AI, it is expected that through meticulous refinement these risks could be notably reduced and humanoids would provide ample benefits for construction. There is a worry that construction companies may implement humanoids too soon, masked by their potential benefits. If this happens and hazards occur, it could give humanoids, as well as the industry, a very bad reputation and damage the prospect of future development.

The greatest area of difficulty lies in determining what control measures will need to be in place if singularity is reached, and whether these will be sufficient. As yet, these are not concrete but the suggestions are that humanoids should be explicit ethical agents and they should follow new rules, such as those described in the ethics section. Whatever control measures are in place, if they are sufficient, there will be momentous benefits for construction. With higher-than-human intelligence that can be controlled to human desire, the construction industry will flourish beyond belief. However, if control measures are not sufficient then humanoid intelligence could increase to an inconceivable level where all control is lost.

Perhaps it would be more beneficial in the long run to pursue a path towards perfecting weaker AI and to prohibit super-intelligence from developing. The largest risk posed by the flow chart is for one to believe that control measures are sufficient when they are in fact not. This is because there are so many conditions that have to be met in order to control super-intelligence, and the consequence of failure could be overwhelming. Not only could this potentially mean the downfall of the construction industry, but possibly even of mankind. It is ironic to think that we may one day be on the path to creating our successors.


References

[i] World Scientific, ‘Introduction to Robotics’, accessed 04/08
[1] Design Boom, ‘Robot Sapiens’, accessed 12/07
[2] Wikipedia, ‘Agency’, accessed 12/07
[3] Prof. Schmidhuber, ‘Highlights of robot car history’, accessed 01/08
[4] Wikipedia, ‘Robot’, accessed 11/07
[5] Joseph Engelberger, ‘Sounds like a robot to me’, accessed 01/08
[6] Android World, ‘Historical Android Projects’, accessed 12/07
[7] Honda, ‘Asimo homepage’, accessed 12/07
[8] Honda, ‘Technical Information: Asimo’, accessed 01/08
[9] HNICEM, ‘From science fiction to reality – Humanoid Robots’, accessed 01/08
[10] Illinois Institute of Technology, ‘Humanoid Robots’, accessed 02/08
[11] HSE, ‘Health and safety in the construction industry’, accessed 12/07
[12] Rihs, Sandra, ‘Robots improve construction sites’, accessed 12/07
[13] BBC News, ‘Sick leave costs UK £1.75bn’, accessed 12/07
[14] Constructing Excellence, ‘Integrated Construction’, accessed 12/07
[15] Fin24, ‘SA mines “most dangerous”’, accessed 01/08
[16] IAARC, ‘Quick guide to construction automation and robotics’, accessed 01/08
[17] IAARC, ‘RoadRobot’, accessed 01/08
[18] Vogel, ‘Products’, accessed 01/08
[19] IAARC, ‘BIBER’, accessed 01/08
[20] PATH, ‘Construction Robotics’, accessed 01/08
[21] Shimizu, ‘New SMART System’, accessed 01/08
[22] Clive Frankish, ‘Sensation: Acquiring information from the external world’ (PSYC 11012 – Cognitive Psychology)
[23] Clive Frankish, ‘Perception’ (PSYC 11012 – Cognitive Psychology)
[24] Markus Damian, ‘High-Level Perception’ (PSYC 11012 – Cognitive Psychology)
[25] Markus Damian, ‘Attention’ (PSYC 11012 – Cognitive Psychology)
[26] Wikipedia, ‘Artificial Intelligence’, accessed 03/08
[27] Uni. of Pennsylvania, Dept of Computer and IT, ‘Intelligent Agents’ <www.cis.upenn.edu/~cse391/cis391_2007_fall/agents-2007-fall.ppt>, accessed 02/08
[28] Philosophy Online, ‘Philosophy of Mind’, accessed 03/08
[29] Russell, Stuart J. & Norvig, Peter, ‘Artificial Intelligence: A Modern Approach’, 2nd edition (Oxford: Prentice Hall, 2003)
[30] Raymond J. Mooney, ‘Philosophical Arguments Against “Strong” AI’, accessed 03/08
[31] Wikipedia, ‘Weak AI’, accessed 03/08
[32] Boden, Margaret A., ‘Artificial Intelligence in Psychology’ (Cambridge: The MIT Press, 1989)
[33] David Bruemmer, ‘Humanoid Robotics – What Does The Future Hold?’, accessed 03/08
[34] Kevin Anderson, ‘Predicting AI’s future’, accessed 03/08
[35] Edmund Furse, ‘Arguments Against Strong AI’, accessed 03/08
[36] Helen Briggs, ‘Machine to reach man by 2029’, accessed 03/08
[37] Duncan Graham-Rowe, ‘Mission to build a simulated brain begins’, accessed 03/08
[38] Edmund Furse, ‘Arguments For Strong AI’, accessed 03/08
[39] Haugeland, J., ‘Mind Design II – Philosophy, Psychology, Artificial Intelligence’ (Cambridge: The MIT Press, 1997)
[40] Joseph Weizenbaum, ‘Eliza’, accessed 03/08
[41] Kazuhiho Yokoyama, Junichirou Maeda, Takakatsu Isozumi, ‘Application of Humanoid Robots for Cooperative Tasks in the Outdoors’, pp. 1-7
[42] Junichiro Maeda, Hiroo Takada, Yoshio Abe, ‘Applicable possibility studies on a humanoid robot to cooperative work on construction site with a human worker’, pp. 1-6
[43] Bryan Adams, ‘A New Kind of Tool’, accessed 02/08
[44] Eliezer Yudkowsky, ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’, p. 1
[45] Gordon E. Moore, ‘Cramming more components onto integrated circuits’, accessed 04/08
[46] Jon Stokes, ‘Understanding Moore’s Law – Part III, the future of Moore’s law’, accessed 04/08
[47] The Singularity Institute, ‘What is Singularity’, accessed 04/08
[48] Nick Bostrom, ‘Artificial Intelligence and Existential Risks’, Singularity Summit 2006
[49] AI Magazine, Volume 28, Number 4, Winter 2007 – articles: ‘AI in the news’, ‘Human Implications of Human-Robot Interaction’, ‘Machine Ethics: Creating an Ethical Intelligent Agent’
[50] Wachowski & Wachowski, ‘The Matrix’, 1999
[51] Cameron & Hurd, ‘The Terminator’, 1984
[52] Joy, B., ‘Why the Future Doesn’t Need Us’ (Cambridge: Granta Books, 2000)
[53] Norman, Donald A., ‘Why Machines Should Fear’, Scientific American magazine
[54] Dietrich, E., ‘After the Humans Are Gone’, 2006 North American Computing and Philosophy Conference
[55] Jason Nemeth, ‘Should Robots Feel?’, accessed 04/08
[56] Wikipedia, ‘Three Laws of Robotics’, accessed 04/08
[57] J. Storrs Hall, ‘Asimov’s Laws of Robotics – Revised’, Singularity Summit 2007
