Applied Intelligence 14, 65–76, 2001. © 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Interactive Case-Based Reasoning in Sequential Diagnosis

DAVID MCSHERRY
School of Information and Software Engineering, University of Ulster, Coleraine BT52 1SA, Northern Ireland, United Kingdom
[email protected]

Abstract. Interactive trouble-shooting and customer help-desk support, both activities that involve sequential diagnosis, represent the majority of applications of case-based reasoning (CBR). An analysis is presented of the user-interface requirements of intelligent systems for sequential diagnosis. We argue that mixed-initiative dialogue, explanation of reasoning, and sensitivity analysis are essential to meet the needs of experienced as well as novice users. Other issues to be addressed by system designers include relevance and consistency in dialogue, tolerance of missing data, and timely provision of feedback to users. Many of these issues have previously been addressed by the developers of expert systems and the lessons learned may have important implications for CBR. We present a prototype environment for interactive CBR in sequential diagnosis, called CBR Strategist, which is designed to meet the identified requirements.

Keywords: case-based reasoning, diagnosis, user interface, explanation

1. Introduction

In sequential diagnosis, tests are selected at each stage of the evidence-gathering process on the basis of their usefulness for discriminating between competing hypotheses that may account for a reported symptom or fault. The usefulness of a test, for example as measured by the expected reduction in the entropy of the distribution of disease or solution probabilities, depends on the evidence, if any, previously reported. Methods used to guide test selection in intelligent systems include information-theoretic measures [1], rule-based reasoning [2], and value of information [3]. Often the aim in sequential diagnosis is to minimize the number of tests required to reach a firm diagnosis and hence reduce the risks and costs associated with testing. Another important issue to be addressed by system designers is the system’s acceptability to users. Interactive trouble-shooting and customer help-desk support, both activities that involve sequential diagnosis, represent the majority of applications of case-based reasoning (CBR) [4, 5]. Many of the issues related to problem-solving efficiency and acceptability to users

have previously been addressed by the developers of expert systems, and the lessons learned may have important implications for interactive CBR. For example, an expert system for sequential diagnosis can explain the relevance of a selected test at any stage of the evidence-gathering process [6]. In interactive CBR, the details of a target problem are similarly elicited from the user, often with questions selected or ranked in order of usefulness by the system [4, 5, 7]. However, the relevance of a test selected by nearest-neighbour retrieval, or inductive retrieval based on information-theoretic measures, can be difficult to explain. An algorithm for decision-tree induction in which attribute selection can be explained in strategic terms [8] is the basis of an approach to interactive CBR proposed in this paper. The algorithm, called Strategist, is adapted from a model of the evidence-gathering strategies used by doctors [9–13]. A demand-driven (or lazy) version of Strategist has been implemented in an environment providing integrated support for machine learning, problem solving and explanation [8]. This paper presents an extended version of the environment, called CBR Strategist,


which is designed as a tool for interactive CBR in sequential diagnosis. Features of the extended environment include mixed-initiative dialogue, visual feedback on the impact of evidence reported, sensitivity analysis, and an approach to the maintenance of consistency in dialogue in which there is no requirement for explicit domain knowledge. An analysis of the user-interface requirements of intelligent systems for sequential diagnosis, from both expert systems and CBR perspectives, is presented in Section 2. The algorithm that provides the basis for inductive retrieval and explanation of reasoning in CBR Strategist is described in Section 3. The decision tree it induces is often identical to the tree induced by ID3 [14] when the number of attributes is small [8]. As shown in this paper, it is also possible for the decision trees to differ significantly in structure. An overview of CBR Strategist is presented in Section 4, and an example case library is used to illustrate its use as a tool for interactive CBR in the domain of computer fault diagnosis.

2. The User Interface in Sequential Diagnosis

Aspects of the user interface in intelligent systems for sequential diagnosis which may affect their problem-solving efficiency and acceptability to users include mixed-initiative dialogue, feedback on the impact of reported evidence, dialogue relevance and consistency, explanation of reasoning, tolerance of missing data, and sensitivity analysis.

2.1. Mixed-Initiative Dialogue

The importance of mixed-initiative dialogue is well recognized by the developers of expert systems and clinical decision-support systems [9, 11, 15–17]. Intelligent systems are unlikely to be accepted by users, and professional users in particular, if they insist on asking the questions, and force the user to play a passive role. Instead, the user should be able to volunteer data at any stage of the consultation dialogue. Some tools for interactive CBR support a form of mixed-initiative dialogue in which the user can select from a list of questions to be answered and not just the question considered most useful by the system [4, 5, 7]. Enabling the user to volunteer data may be important not only because it is more acceptable to users but also because it may increase efficiency in problem solving. It may be that the user, reminded of a similar previous case from her

own experience, knows which features of the problem are most relevant. Another aspect of mixed-initiative dialogue is whether the user can volunteer an opinion. A system which ignores the user's opinion is missing an opportunity to involve the user more closely in the problem-solving process and may also be missing an opportunity to solve the problem more efficiently. Often in fault diagnosis, for example, a professional user may have a good idea about what is causing the problem and is using the system as a means of confirming her opinion. Clinical decision-support systems which recognize that the user may wish to suggest a diagnosis to be considered include DXplain [15] and Hypothesist [11].

2.2. Feedback on the Impact of Reported Evidence

Intelligent systems for sequential diagnosis should provide users with timely feedback on the impact of reported evidence. As each piece of evidence is reported, some diagnoses become more likely and others less likely. In problem-solving dialogue with a human expert, it would be unusual not to receive some feedback as evidence is reported, such as a comment that a reported symptom rules out a particular diagnosis. Similarly, the user of an intelligent system should not have to wait until the end of a consultation, when only a single possibility remains, to obtain feedback. Continuous updating and display of similarity scores (or rankings) of retrieved cases as questions are answered by the user is one way to provide incremental feedback in interactive CBR [5, 7]. This approach is similar to the incremental updating and display of disease probabilities in clinical decision support [1, 9]. Hypothesist [11] provides visual feedback in the form of a graphical display, continually updated in the light of new evidence, of disease probabilities. An example of visual feedback in interactive CBR is the colour coding of user preferences in a knowledge-navigation system for exploring the domain of new cars [18]. Users can adjust their preferences interactively and the resulting changes in the degree of match of displayed alternatives are immediately reflected in their associated colour patterns.

2.3. Relevant Dialogue

The questions an intelligent system asks the user should be relevant in the context of the problem or fault


reported by the user. Inevitably, some questions will be relevant for certain problems and not for others. In computer fault diagnosis, if the problem reported is that there are strange characters on the screen, then asking the user if the power light is on is clearly an irrelevant question. One reason for the popularity of the backward-chaining (or goal-driven) inference mechanism widely used in expert systems is that the user is asked only questions that are relevant in the context of the system's current goal. There is no reason to suppose that relevant dialogue is any less important to users in interactive CBR. One approach to ensuring relevant dialogue in CBR relies on a heterogeneous case structure in which only attributes (or questions) which are relevant in the diagnosis of the fault explained by a case are associated with the case [5, 7]. This is the approach used in NaCoDAE [4, 7], a tool for conversational CBR [19]. In conversational CBR, the user enters a partial description of her problem in natural language and the system assists in further elaborating the problem through a conversation with the user. In NaCoDAE, the user's description of the problem is used to retrieve cases with similar descriptions provided by the authors of the case library. To provide further details of the problem, the user selects from a list of unanswered questions in the most similar retrieved cases, ranked in order of their frequency. Maximizing information gain is another method used to rank questions in conversational CBR [20, 21].

2.4. Consistent Dialogue

A problem related to dialogue relevance is how to avoid asking the user a question when the answer can be inferred from the user's initial description of the problem or answers to previous questions [4, 7]. In printer trouble-shooting, for example, there is no point in asking if the user is having print quality problems if she has already complained of black spots on the page. Asking such unnecessary questions not only reduces problem-solving efficiency; it also reveals a lack of common sense that is likely to reduce the user's confidence in the system [22]. Worse still, it leaves open the possibility of an inconsistent response, perhaps leading to the system being unable to reach a solution or even suggesting an incorrect solution. Inconsistent dialogue can be avoided in an expert system with a mixed inference strategy in which backward-chaining is combined with opportunistic forward chaining based on priority rules that fire as soon as


their conditions are satisfied [22–24]. A similar rule-based approach has been used in CBR [4, 5, 7]. In conversational CBR, the problem of inferring aspects of a user's problem from its partial description, thereby avoiding potentially inconsistent questions and increasing retrieval efficiency, is referred to as dialogue inferencing. Recently, a model-based approach to dialogue inferencing has been proposed as a means of avoiding the overheads associated with the acquisition and maintenance of rules [4, 7]. In Section 4, an approach to the maintenance of dialogue consistency in interactive CBR based on inductive retrieval is presented in which there is no requirement for explicit domain knowledge.

2.5. Explanation of Reasoning

There is general agreement that to be useful, intelligent systems must be capable of explaining their reasoning [6, 25, 26]. An advantage of CBR systems in comparison with expert systems is their ability to justify a solution in terms of actual experience, which may be more acceptable to users than an explanation based on rules. However, justification of a solution on the grounds that it worked before loses some of its appeal when there is no exact match for a target case. Solution justification must therefore take account of differences between the target case and the retrieved case as well as similarities [27]. Justification of a CBR solution is analogous to the explanation provided by an expert system, usually consisting of a reasoning trace, when asked at the end of a consultation how a conclusion was reached. In sequential diagnosis, an expert system can also explain the relevance of a selected test at any stage of the evidence-gathering process. The explanation provided by the expert system is usually a simple statement of its current goal and the rule it is trying to fire. Less commonly, an expert system may be able to explain its reasoning in strategic terms [6]. Explanation of question relevance is equally desirable in CBR, in which the details of a target problem are often elicited interactively from the user and questions are selected at the system's initiative. In decision-tree induction, a rule-like explanation of an attribute's relevance can be generated by looking ahead to find the shortest path from the current node of the decision tree to a leaf node [28]. A similar technique could be used to explain the relevance of questions in interactive CBR based on inductive retrieval. An


alternative approach described in this paper is based on an algorithm for decision-tree induction in which attribute relevance can be explained in strategic terms [8]. The strategic explanations thus provided during the evidence-gathering process are intended to complement, rather than replace, the precedent-based justification of solutions already provided, at least implicitly, by CBR systems.

2.6. Tolerance of Missing Data

Intelligent systems for sequential diagnosis must be able to reason effectively in the absence of complete data. Often there are questions that the user is unable to answer, for example because an observation was not recorded and cannot be repeated, or because the answer requires an expensive or complicated test that the user is reluctant or incompetent to perform. In interactive CBR, allowing the user to select from a list of questions rather than answering the one considered most useful by the system is one way to ensure that progress can be made when the user is unable to answer every question [4, 5, 7]. A retrieval strategy that focuses initially on features with zero cost is another approach which recognizes that certain questions may be easier (or less costly) to answer [29]. In systems that take the initiative by asking direct questions, the user should have the option to answer unknown to any question. This is a common feature in an expert system, which can continue in the absence of the requested information by trying another rule or attempting to prove a different goal. In CBR, nearest-neighbour retrieval is less sensitive to missing data than inductive retrieval [5]. Auriol et al. [30] propose the integration of inductive and nearest-neighbour retrieval as an approach to tolerating missing data in a decision tree. Approaches that rely purely on decision-tree technology include those implemented in CART [31] and C4.5 [32]. Quinlan [33] compared several approaches to tolerating missing values in decision-tree induction and concluded that while some were clearly inferior, none was uniformly superior to the rest. Any algorithm for top-down induction of decision trees can be modified to anticipate the problem of missing data by building a larger tree in which each node has an additional branch to be followed when the value of the most useful attribute is unknown [28]. In algorithms that adopt a demand-driven approach to decision-tree induction [8, 28, 29, 34], this is equivalent to selecting the next best attribute at the current node when an attribute's value is unknown at problem-solving time.

2.7. Sensitivity Analysis

Although sensitivity analysis is easily supported in an intelligent system that already supports mixed-initiative dialogue, its importance is often understated [12, 35]. Because one source of uncertainty in diagnosis is uncertain data, it is important that the user should be able to examine the effects of changing the value of an attribute about which she is uncertain, or the answer to a question that involves subjective judgement. Another source of uncertainty is incomplete data. For example, the user may wish to examine the potential effects of tests whose results are unknown, or which carry high risk or cost. A simple what-if analysis may be enough to show that the missing data can have little effect on the outcome, thus increasing the user's confidence in the solution.

3. Decision-Tree Induction in Strategist

Strategist is an algorithm for top-down induction of decision trees in which attribute relevance can be explained in strategic terms [8]. Its attribute-selection strategies, such as confirming a target outcome class or eliminating a competing outcome class, are based on the evidence-gathering strategies used by doctors in diagnosis [36, 37]. The target outcome class, which is continually revised as the data set is recursively partitioned, is initially the one that is most likely in the data set.

3.1. Attribute-Selection Strategies

In order of priority, Strategist's main attribute-selection strategies are:

CONFIRM: confirm the target outcome class
ELIMINATE: eliminate the likeliest alternative outcome class
VALIDATE: increase the probability of the target outcome class
OPPOSE: reduce the probability of the likeliest alternative outcome class

A fifth strategy called DISCRIMINATE is available but only occasionally needed in practice. In the current subset of the data set, an attribute will support the

CONFIRM strategy if it has a value which occurs only in the target outcome class, the ELIMINATE strategy if one of its values occurs in the target outcome class but not in the likeliest alternative outcome class, the VALIDATE strategy if one of its values is more likely in the target outcome class than in any other outcome class, and the OPPOSE strategy if one of its values is less likely in the likeliest alternative outcome class than in any other outcome class.

When more than one attribute is available to support the CONFIRM or ELIMINATE strategy, the attribute selected is the one whose expected eliminating power in favour of the target outcome class is greatest. The eliminating power of an attribute value E is the sum of the probabilities of the outcome classes surviving in the current subset which are eliminated by the attribute value, or zero if none is eliminated. In a given subset S of the data set, the expected eliminating power of an attribute A with values E_1, E_2, ..., E_n, in favour of an outcome class C, is:

    γ(A, C, ξ) = Σ_{i=1}^{n} p(E_i | C, ξ) el(E_i, ξ)

where el(E_i, ξ) is the eliminating power of E_i in S and ξ is the combination of attribute values on the path from the root node to S.

When the available strategy of highest priority is VALIDATE or OPPOSE, the attribute selected is the one for which the expected weight of evidence in favour of the target outcome class, relative to the likeliest alternative, is greatest. In a given subset S of the data set, the expected weight of evidence of an attribute A in favour of an outcome class C_1, relative to an alternative outcome class C_2, is:

    ψ(A, C_1, C_2, ξ) = Σ_{i=1}^{n} p(E_i | C_1, ξ) log [ p(E_i | C_1, ξ) / p(E_i | C_2, ξ) ]

where E_1, E_2, ..., E_n are the values of A and ξ is the combination of attribute values on the path from the root node to S.
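Both measures can be computed directly from class and value frequencies in the current subset. The following Python sketch is purely illustrative and is not the author's implementation (CBR Strategist itself is written in Prolog); the dictionary representation of cases, the function names, and the skipping of zero-probability values in ψ are assumptions made here.

    import math
    from collections import Counter

    def class_probs(cases):
        """Relative frequencies of the outcome classes surviving in the
        current subset S; each case is a dict with a 'class' key."""
        counts = Counter(c["class"] for c in cases)
        return {cls: n / len(cases) for cls, n in counts.items()}

    def eliminating_power(cases, attr, value):
        """el(E, xi): sum of the probabilities of the surviving outcome
        classes in which attr = value never occurs (zero if none)."""
        probs = class_probs(cases)
        compatible = {c["class"] for c in cases if c[attr] == value}
        return sum(p for cls, p in probs.items() if cls not in compatible)

    def expected_eliminating_power(cases, attr, target):
        """gamma(A, C, xi): eliminating power of each value of A, weighted
        by the probability of the value within the target class C."""
        in_target = [c for c in cases if c["class"] == target]
        return sum(
            (sum(1 for c in in_target if c[attr] == v) / len(in_target))
            * eliminating_power(cases, attr, v)
            for v in {c[attr] for c in cases}
        )

    def expected_weight_of_evidence(cases, attr, target, rival):
        """psi(A, C1, C2, xi): expected log weight of evidence in favour
        of the target class, relative to the likeliest alternative.
        Values of zero probability in either class are skipped here; in
        Strategist such values would support CONFIRM or ELIMINATE instead."""
        def p(value, cls):
            members = [c for c in cases if c["class"] == cls]
            return sum(1 for c in members if c[attr] == value) / len(members)
        total = 0.0
        for v in {c[attr] for c in cases}:
            p1, p2 = p(v, target), p(v, rival)
            if p1 > 0 and p2 > 0:
                total += p1 * math.log(p1 / p2)
        return total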

3.2. Example Data Set

Table 1 shows an artificial data set based on a highly simplified version of the real-world problem of fault diagnosis in a computer system.

Table 1. An artificial data set for diagnosis of the computer fault "screen is dark."

Power     Fan can    Computer     Computer    Monitor   Brightness      Monitor      Possible
light on  be heard   switched on  plugged in  light on  level adjusted  switched on  solution
false     false      true         true        false     false           false        FPC
false     false      true         true        false     false           true         FPC
false     false      true         true        false     true            false        FPC
false     false      true         true        false     true            true         FPC
false     false      false        true        false     false           false        CSO
false     false      false        true        false     false           true         CSO
false     false      false        true        false     true            false        CSO
false     false      false        true        false     true            true         CSO
false     false      true         false       false     false           false        CNPI
false     false      true         false       false     false           true         CNPI
false     false      true         false       false     true            false        CNPI
false     false      true         false       false     true            true         CNPI
true      true       true         true        false     false           false        MSO
true      true       true         true        false     true            false        MSO
true      true       true         true        false     false           true         MNPI
true      true       true         true        false     true            true         MNPI
true      true       true         true        true      false           true         BLTL
true      true       true         true        true      true            true         VCD

The fault reported by the user is assumed to be that the screen is dark. Outcome classes in the data set are faulty power cord (FPC), computer switched off (CSO), computer not plugged in (CNPI), monitor switched off (MSO), monitor not plugged in (MNPI), brightness level too low (BLTL), and video cable disconnected (VCD). Of course, there are other possible reasons why the screen may be dark, such as a screen-saver running, the monitor in power-saving mode, a blown fuse, or an interruption to the power supply.

3.3. Induced Decision Trees

Decision-tree induction in Strategist is illustrated in Fig. 1. The likeliest outcome class in each subset (or an arbitrarily selected outcome class where two or more are equally likely) is shown in bold. The likeliest alternative outcome class is also shown. Initially the likeliest outcome class in the example data set is FPC and the likeliest alternative is CSO. None of the available attributes has a value which occurs only in the target outcome class, so the CONFIRM strategy is not supported in the data set. However, the attribute computer-switched-on has a value which occurs in FPC but not in CSO and will therefore support the ELIMINATE strategy. As the only such attribute, computer-switched-on is selected to appear at the root node of the decision tree. As CSO is the only outcome class surviving in the subset of the data set for which computer-switched-on = false, the corresponding node of the decision tree is a leaf node. In the subset with computer-switched-on = true, the target outcome class remains unchanged but the likeliest alternative changes to CNPI. Again no attribute is available to support the CONFIRM strategy, but the attribute computer-plugged-in has a value that will eliminate CNPI. Following its elimination, the likeliest alternative changes to MSO. However, power-light-on and fan-can-be-heard now have values that occur only in FPC and will therefore support the CONFIRM strategy. In the current subset of the data set, the expected eliminating power of power-light-on in favour of FPC is:

    γ(power-light-on, FPC, ξ) = p(power-light-on = true | FPC, ξ) × p(FPC | ξ)
        + p(power-light-on = false | FPC, ξ) × (p(MSO | ξ) + p(MNPI | ξ) + p(BLTL | ξ) + p(VCD | ξ))
        = 0.60

Similarly, the expected eliminating power of fan-can-be-heard is 0.60. Since the two attributes are equally good supporters of the CONFIRM strategy, power-light-on is arbitrarily selected as the attribute to partition the current subset. In the partition corresponding to power-light-on = true, the target outcome class changes to MSO. Among the examples in this subset, monitor-switched-on has a value which occurs only in the target outcome class. As the only attribute available to support the CONFIRM strategy, it is used to partition the current subset. Similarly, a single attribute is available to support the CONFIRM strategy at each of the remaining non-leaf nodes. Only two of the available attribute-selection strategies, CONFIRM and ELIMINATE, are required to induce the entire tree and a quantitative measure of attribute usefulness is needed at only one of the nodes. The decision tree induced by ID3 from the example data set is shown in Fig. 2. It differs from the Strategist tree both in structure and in the attribute selected to appear at the root node. According to the information gain criterion [14], the most useful attribute is power-light-on. In Strategist, power-light-on would support the VALIDATE strategy, since power-light-on = false is more likely (or at least equally likely) in FPC than in any other outcome class and, if known, increases its probability from 0.22 to 0.33. However, as it always has the same value in FPC and CSO, power-light-on can support neither the CONFIRM nor the ELIMINATE strategy, which are given priority in Strategist. When the trees induced by the two algorithms differ, as in this example, the ID3 tree tends to be smaller [8]. In the Strategist tree, the average number of questions required to reach a leaf node is approximately 4 compared with an average of 3 for the ID3 tree. However, the advantage of the Strategist tree is that the relevance of the attribute selected at each node can be explained in strategic terms. For example, when power-light-on is selected by Strategist it is because the algorithm is attempting to confirm FPC, the target outcome class.
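As a quick check, the same figure can be reproduced with the hypothetical expected_eliminating_power function sketched in Section 3.1 (this fragment assumes that sketch is in scope). The current subset contains the ten cases of Table 1 with computer-switched-on = true and computer-plugged-in = true, reduced here to the one attribute of interest.

    # Four FPC cases (power light off) and six cases of other classes
    # (power light on), as in Table 1.
    subset = (
          [{"power-light-on": False, "class": "FPC"}] * 4
        + [{"power-light-on": True,  "class": "MSO"}] * 2
        + [{"power-light-on": True,  "class": "MNPI"}] * 2
        + [{"power-light-on": True,  "class": "BLTL"}]
        + [{"power-light-on": True,  "class": "VCD"}]
    )

    # el(false) = p(MSO) + p(MNPI) + p(BLTL) + p(VCD) = 0.6 and
    # el(true) = p(FPC) = 0.4; within FPC the value is always false,
    # so gamma = 0 x 0.4 + 1 x 0.6 = 0.60.
    print(expected_eliminating_power(subset, "power-light-on", "FPC"))
    # prints 0.6 (up to floating-point rounding)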

4. CBR Strategist

Written in Prolog for the Apple Macintosh, CBR Strategist is a prototype environment for interactive CBR in sequential diagnosis in which the user interface is

Figure 1. Decision tree induced by Strategist from the example data set.

designed to meet the requirements identified in Section 2. A case library based on the example data set from Section 3 is used here to illustrate the user-interface features provided in the environment and its use as a tool for interactive CBR in the domain of computer fault diagnosis. The case structure in CBR Strategist is homogeneous, with each case represented as a list of attribute-value pairs and an associated outcome class or diagnosis. Case retrieval is based on the attribute-selection strategies described in Section 3.

4.1. Mixed-Initiative Dialogue

At the start of a consultation, the user can volunteer as much information as she wishes by selecting from a question menu (Fig. 3), and answering the selected

question. There is no ranking of questions in order of usefulness in the question menu, to which the user can return at any stage to volunteer additional data or correct an answer to a previous question. The answers to previous questions are displayed in a separate window. As each question is answered, CBR Strategist uses the evidence provided to eliminate cases whose values for the attribute differ from the reported value. The user can continue selecting questions she wishes to answer until only a single outcome class remains, or pass the initiative to CBR Strategist at any stage by clicking on the "CBR" button. Before giving CBR Strategist the initiative, she can volunteer an opinion about the cause of the problem by clicking on the "Goal" button and choosing a target outcome class. Alternatively, she can leave the choice of target outcome class to CBR Strategist. If so, the target outcome class (initially the likeliest outcome class given any evidence already reported) is continually revised in the light of new evidence and the user is informed when it changes. A target outcome class selected by the user remains in force, however unlikely, until revised by the user or eliminated by the reported evidence. The user can recapture the initiative, or change the target outcome class, at any stage of the consultation.

Figure 2. Decision tree induced by ID3 from the example data set.

Figure 3. In CBR Strategist, the user can volunteer data at any stage.

4.2. Visual Feedback

CBR Strategist provides visual feedback in the form of a graphical display, continually updated in the light of new evidence, of the relative frequencies of surviving cases. Fig. 4 shows the relative frequencies of surviving cases in the example case library when the computer is known to be switched on and plugged in.

4.3. Relevant Dialogue

When given the initiative, CBR Strategist's selection of questions based on the CONFIRM, ELIMINATE, VALIDATE and OPPOSE strategies described in Section 3 ensures the relevance of questions the user is asked. Its strategic approach to inductive retrieval is based on a demand-driven version of Strategist [8]. Instead of partitioning the current subset of the case library into subsets corresponding to all values of a selected attribute, it asks the user for the attribute's value and uses the evidence provided to eliminate cases in the current subset whose values for the attribute differ from the reported value.
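A schematic sketch of this demand-driven loop is given below. It reuses the hypothetical class_probs and expected_eliminating_power functions from Section 3.1, and it simplifies CBR Strategist's behaviour in two ways flagged in the comments: question selection is scored by expected eliminating power alone, standing in for the full strategy priority order, and ask is an assumed caller-supplied callback rather than the actual interface.

    def consult(cases, ask):
        """Demand-driven retrieval: ask about the attribute that best
        supports confirmation of the current target class, then eliminate
        cases whose values differ from the reported value. ask(attr)
        returns the reported value, or None for "unknown"."""
        surviving = list(cases)
        unanswered = {a for c in cases for a in c if a != "class"}
        while len({c["class"] for c in surviving}) > 1 and unanswered:
            # Target outcome class: the likeliest among surviving cases
            # (in CBR Strategist the user may instead fix it via the
            # Goal button).
            probs = class_probs(surviving)
            target = max(probs, key=probs.get)
            # Simplification: questions are scored by expected eliminating
            # power only, not the full CONFIRM/ELIMINATE/VALIDATE/OPPOSE
            # priority order.
            attr = max(unanswered, key=lambda a:
                       expected_eliminating_power(surviving, a, target))
            unanswered.discard(attr)
            value = ask(attr)
            if value is None:
                continue  # unknown: fall back to the next best question
            matching = [c for c in surviving if c[attr] == value]
            if not matching:
                # No matching case: the reported data must be inconsistent
                # (or the target problem is not covered by the library).
                print(f"No matching cases; please check the answer to {attr}.")
                continue
            surviving = matching
        return surviving

For example, calling consult on the Table 1 cases with an ask callback that answers true for computer-switched-on and computer-plugged-in, and unknown for everything else, would leave the ten FPC, MSO, MNPI, BLTL and VCD cases surviving, matching the subset analysed in Section 3.3.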

4.4. Maintenance of Dialogue Consistency

Inconsistent dialogue can occur only if inconsistent data is reported by the user and accepted by the system. To maintain consistency in dialogue, it is therefore sufficient to ensure that inconsistent data, if reported by the user, is immediately rejected by the system. This is the approach used in CBR Strategist. Interestingly,

Figure 4. In CBR Strategist, the relative frequencies of surviving cases are continually updated as new evidence is reported.

there is no requirement for explicit domain knowledge in the approach. A potential source of inconsistency arises, for example, when the value of one case attribute logically depends on the value of another. One such logical dependency in the example case library is the commonsense rule:

    if power-light-on = true then computer-switched-on = true

Though not explicitly known by CBR Strategist, the rule is implicit in the case data and this is enough for the potential inconsistency to be avoided. For example, suppose that the user has previously reported that the power light is on. Following the elimination of cases with power-light-on = false, there can be no case with computer-switched-on = false among the surviving cases in the case library. It follows that if the user now reports that the computer is not switched on, CBR Strategist will be unable to find a matching case among the surviving cases. It therefore informs the user that there are no matching cases, and suggests that she should check the consistency of the reported data. For example, if the user has mistakenly reported that the power light is on, she can recover from the error by using the question menu to correct her previous answer. As this example illustrates, inconsistent data can be seen as an attempt to describe a point in the problem space that cannot exist in reality. Thus provided there is no inconsistency in the case library, the simple approach described ensures that inconsistent data is never accepted; for if any reported finding is inconsistent with evidence previously reported, there can be no matching case in the case library. If some of the case attributes are non-binary, it is possible that CBR Strategist may ask a

question to which one of the possible answers is inconsistent with evidence previously reported. However, if the inconsistent answer is selected by the user, CBR Strategist will once again inform the user that there are no matching cases.
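The mechanism can be demonstrated in a few lines, again under the assumptions of the earlier sketches, with table1 standing for the eighteen cases of Table 1 as dictionaries. No rule relating the two attributes appears anywhere in the code; the dependency is implicit in the surviving cases.

    # The user reports that the power light is on; all FPC, CSO and CNPI
    # cases are eliminated (every one of them has power-light-on = false).
    surviving = [c for c in table1 if c["power-light-on"]]

    # Every surviving case has computer-switched-on = true, so the rule
    # "if power-light-on then computer-switched-on" holds implicitly ...
    assert all(c["computer-switched-on"] for c in surviving)

    # ... and a report that the computer is NOT switched on matches
    # nothing. CBR Strategist responds by asking the user to check the
    # consistency of the reported data.
    matching = [c for c in surviving if not c["computer-switched-on"]]
    assert matching == []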

4.5. Explanation of Reasoning

As in an expert system, the user can query the relevance of any question the system asks. CBR Strategist's explanation, like that of its machine-learning predecessor [8], depends on its current strategy. For example, in the CONFIRM strategy, the user is shown the value of the selected attribute that will confirm the target outcome class. In the ELIMINATE strategy, the user is shown the value of the selected attribute that will eliminate the likeliest alternative outcome class.

4.6. Tolerance of Missing Data

The user can answer unknown to any question, in which case CBR Strategist simply selects the next most useful question; that is, the one that best supports the attribute-selection strategy of highest priority now available. Of course, a unique classification or diagnosis may not be possible if the user is unable to provide sufficient data. In that case, the user may decide to adopt the likeliest solution as suggested by the relative frequencies of partially matching cases.

4.7. Sensitivity Analysis

At any stage of a consultation, the user can return to the question menu (Fig. 3) to examine the effects of changing an uncertain answer, or the possible effects of an omitted test that carries high risk or cost. The relative frequencies of matching cases (Fig. 4) are dynamically updated to enable the user to visualize the effects of changes in the reported evidence.

4.8. Example Consultation

Before giving CBR Strategist the initiative, the user has reported that the computer is switched on and plugged in. On receiving the initiative, CBR Strategist informs the user that the target outcome class is FPC, and its first question, selected for its ability to support the CONFIRM strategy, is whether the power light is on (Fig. 5). The user is unable to answer this question, so CBR Strategist looks for the next best question. As noted in Section 3, power-light-on and fan-can-be-heard are equally good supporters of the CONFIRM strategy when the computer is switched on and plugged in and FPC is the target outcome class. When asked if the fan can be heard, the user queries the relevance of the question (Fig. 6). The explanation generated by CBR Strategist is shown in Fig. 7. CBR Strategist now repeats the question. The user replies that the fan cannot be heard, and is informed that FPC has been confirmed. Because of the limited coverage of the example case library, CBR Strategist is unaware of other possible causes of the reported symptoms, such as a blown fuse or an interruption to the power supply.

Figure 5. In CBR Strategist, the user can answer unknown to any question.

Figure 6. Querying the relevance of a question selected by CBR Strategist.

4.9. Discussion

Features of the user interface in CBR Strategist include mixed-initiative dialogue, visual feedback on the impact of reported evidence, tolerance of missing data, sensitivity analysis, and maintenance of dialogue consistency. Its strategic approach to inductive retrieval ensures the relevance of dialogue and enables the reasoning process to be explained in strategic terms. Although no account is taken of the relative costs of tests in its attribute-selection strategies, they could easily be modified to give priority to the least expensive tests. CBR Strategist's approach to the maintenance of dialogue consistency ensures that inconsistent data, if reported by the user, is immediately rejected. A limitation of the approach is that the system is unable to determine whether the absence of a matching case is due to lack of problem-space coverage or inconsistency in the evidence reported by the user. However, the overheads associated with the acquisition or discovery of the knowledge required for this distinction to be made are unlikely to be justified in practice. If the user is satisfied that the reported data is consistent, then the absence of a matching case must mean that the target case is not covered by the case library. Pending the addition of a new case to the case library, adopting the likeliest solution as suggested by the relative frequencies of partially matching cases remains an immediate option for the user. The most serious limitation of CBR Strategist, as currently implemented, arises from its assumption of a homogeneous case structure [5] in which the value of every attribute is recorded for every case. This essentially limits its scope to the diagnosis of a single reported fault, since the attributes which are relevant in the diagnosis of other faults are unlikely to be the same. A similar limitation is shared by many algorithms for machine learning which assume a homogeneous structure in the data sets to which they are applied. However, to be useful for interactive trouble-shooting, by far the most extensive use of CBR technology [4], a CBR tool needs to cover an acceptable range of reported faults. An approach to providing coverage for a range of faults in CBR Strategist is being investigated in which a homogeneous case structure is assumed only within

Figure 7. Explanation of reasoning in CBR Strategist.

partitions of the case library corresponding to different faults. While cases in different partitions may be indexed by different attributes, every case will also be indexed by the reported fault (e.g., screen is dark) explained by its solution (e.g., faulty power cord). At the start of a consultation, the user will select from the faults covered by the case library, triggering the retrieval of all cases in the corresponding partition of the case library. From that point, the consultation will continue as in the current version of CBR Strategist. Similar functionality is provided in HOMER, a help-desk application in which the user can start a consultation by selecting from a problem hierarchy [38]. Automating the refinement of case libraries to increase their conformity with design guidelines is a topic of increasing research interest [19]. While the restructuring of an existing case library to conform with the structure proposed here should not be difficult to automate, a more challenging problem will be how to deal with the residual heterogeneity that is likely to remain within resulting partitions of the case library. Modification of CBR Strategist's attribute-selection strategies to tolerate missing values in library cases is one of the possible solutions to be investigated by further research.

5. Summary

Following an analysis of the user-interface requirements of intelligent systems for sequential diagnosis, a prototype environment for interactive CBR in sequential diagnosis has been presented. Features of the user interface in CBR Strategist include mixed-initiative dialogue, visual feedback on the impact of reported evidence, tolerance of missing data, and sensitivity analysis. Inductive retrieval based on the evidence-gathering strategies used by doctors enables CBR Strategist to explain the relevance of questions it asks the user in strategic terms. A simple approach to the maintenance of consistency in dialogue has been presented in which there is no requirement for explicit domain knowledge. Instead,

logical dependencies implicit in the case data are sufficient to ensure the immediate rejection of a reported finding that is inconsistent with evidence previously reported. An approach to providing trouble-shooting coverage for a range of faults in CBR Strategist is being investigated in which a homogeneous case structure is assumed only within partitions of the case library corresponding to different faults.

Acknowledgments

The author is grateful to the reviewers for their insightful comments and suggestions.

References

1. G.A. Gorry, J.P. Kassirer, A. Essig, and W.B. Schwartz, "Decision analysis as the basis for computer-aided management of acute renal failure," American Journal of Medicine, vol. 55, pp. 473–484, 1973.
2. E.H. Shortliffe, "Clinical decision support systems," in Medical Informatics: Computer Applications in Health Care, edited by E.H. Shortliffe and L.E. Perreault, Addison-Wesley: Reading, Massachusetts, pp. 466–502, 1990.
3. D.E. Heckerman, Probabilistic Similarity Networks, MIT Press: Cambridge, Massachusetts, 1991.
4. D.W. Aha, "The omnipresence of case-based reasoning in science and application," Expert Update, vol. 1, pp. 29–45, 1998.
5. I. Watson, Applying Case-Based Reasoning: Techniques for Enterprise Systems, Morgan Kaufmann: San Francisco, California, 1997.
6. R.W. Southwick, "Explaining reasoning: an overview of explanation in knowledge-based systems," Knowledge Engineering Review, vol. 6, pp. 1–19, 1991.
7. D.W. Aha, T. Maney, and L.A. Breslow, "Supporting dialogue inferencing in conversational case-based reasoning," in Proceedings of the Fourth European Workshop on Case-Based Reasoning, Dublin, Ireland, 1998, pp. 262–273.
8. D. McSherry, "Strategic induction of decision trees," in Proceedings of ES98, Cambridge, England, 1998, pp. 15–26. Also to appear in Knowledge-Based Systems.
9. D. McSherry, "Intelligent dialogue based on statistical models of clinical decision making," Statistics in Medicine, vol. 5, pp. 497–502, 1986.
10. D. McSherry, "A domain-independent theory for testing fault hypotheses," in Colloquium on Intelligent Fault Diagnosis, Institution of Electrical Engineers: London, 1992.


11. D. McSherry, "HYPOTHESIST: A development environment for intelligent diagnostic systems," in Proceedings of the Sixth Conference on Artificial Intelligence in Medicine Europe, Grenoble, France, 1997, pp. 223–234.
12. D. McSherry, "Avoiding premature closure in sequential diagnosis," Artificial Intelligence in Medicine, vol. 10, pp. 269–283, 1997.
13. D. McSherry, "Dynamic and static approaches to clinical data mining," Artificial Intelligence in Medicine, vol. 16, pp. 97–115, 1999.
14. J.R. Quinlan, "Induction of decision trees," Machine Learning, vol. 1, pp. 81–106, 1986.
15. G.O. Barnett, J.J. Cimino, J.A. Hupp, and E.P. Hoffer, "DXplain: an evolving diagnostic decision-support system," Journal of the American Medical Association, vol. 258, pp. 67–74, 1987.
16. D.C. Berry and D.E. Broadbent, "Expert systems and the man-machine interface. Part Two: The user interface," Expert Systems, vol. 4, pp. 18–28, 1987.
17. R.S. Patil, P. Szolovits, and W.B. Schwartz, "Modeling knowledge of the patient in acid-base and electrolyte disorders," in Artificial Intelligence in Medicine, edited by P. Szolovits, Westview Press: Boulder, Colorado, pp. 191–226, 1982.
18. K.J. Hammond, R. Burke, and K. Schmitt, "A case-based approach to knowledge navigation," in Case-Based Reasoning: Experiences, Lessons & Future Directions, edited by D.B. Leake, AAAI Press/MIT Press: Menlo Park, California, pp. 125–136, 1996.
19. D.W. Aha and L.A. Breslow, "Refining conversational case libraries," in Proceedings of the Second International Conference on Case-Based Reasoning, Providence, Rhode Island, 1997, pp. 267–278.
20. H. Shimazu, A. Shibata, and K. Nihei, "ExpertGuide: a conversational case-based reasoning tool for developing mentors in knowledge spaces," Applied Intelligence, vol. 14, no. 1, pp. 33–48, 2001.
21. Q. Yang and J. Wu, "Enhancing the effectiveness of interactive case-based reasoning with clustering and decision forests," Applied Intelligence, vol. 14, no. 1, pp. 49–64, 2001.
22. P. Jackson, Introduction to Expert Systems, 1st edition, Addison-Wesley: Wokingham, UK, 1986.
23. J. Cendrowska and M. Bramer, "Inside an expert system: A rational reconstruction of the MYCIN consultation system," in Artificial Intelligence: Tools, Techniques and Applications, edited by T. O'Shea and M. Eisenstadt, Harper & Row: New York, pp. 453–497, 1984.
24. D. McSherry and K. Fullerton, "PRECEPTOR: a shell for medical expert systems and its application in a study of prognostic indices in stroke," Expert Systems, vol. 2, pp. 140–147, 1985.
25. A.K. Goel and J.W. Murdock, "Meta-cases: explaining case-based reasoning," in Proceedings of the Third European Workshop on Case-Based Reasoning, Lausanne, Switzerland, 1996, pp. 150–163.
26. D.B. Leake, "CBR in context: The present and future," in Case-Based Reasoning: Experiences, Lessons & Future Directions, edited by D.B. Leake, AAAI Press/MIT Press: Menlo Park, California, pp. 3–30, 1996.
27. J.L. Kolodner and D.B. Leake, "A tutorial introduction to case-based reasoning," in Case-Based Reasoning: Experiences, Lessons & Future Directions, edited by D.B. Leake, AAAI Press/MIT Press: Menlo Park, California, pp. 31–65, 1996.

28. D. McSherry, "Integrating machine learning, problem solving and explanation," in Proceedings of ES95, Cambridge, England, 1995, pp. 145–157.
29. B. Smyth and P. Cunningham, "A comparison of incremental case-based reasoning and inductive learning," in Proceedings of the Second European Workshop on Case-Based Reasoning, Chantilly, France, 1994, pp. 151–164.
30. E. Auriol, M. Manago, K.-D. Althoff, S. Wess, and S. Dittrich, "Integrating induction and case-based reasoning: Methodological approach and first evaluation," in Proceedings of the Second European Workshop on Case-Based Reasoning, Chantilly, France, 1994, pp. 18–32.
31. L. Breiman, J.H. Friedman, and C.J. Stone, Classification and Regression Trees, Wadsworth: Pacific Grove, California, 1984.
32. J.R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann: San Mateo, California, 1993.
33. J.R. Quinlan, "Unknown attribute values in induction," in Proceedings of the Sixth International Workshop on Machine Learning, Ithaca, New York, 1989, pp. 164–168.
34. J.H. Friedman, R. Kohavi, and Y. Yun, "Lazy decision trees," in Proceedings of the Thirteenth National Conference on Artificial Intelligence, Portland, Oregon, 1996, pp. 717–724.
35. R.M. O'Keefe, V. Belton, and T. Ball, "Experiences using expert systems in O.R.," Journal of the Operational Research Society, vol. 37, pp. 657–668, 1986.
36. A.S. Elstein, L.A. Schulman, and S.A. Sprafka, Medical Problem Solving: An Analysis of Clinical Reasoning, Harvard University Press: Cambridge, Massachusetts, 1978.
37. E.H. Shortliffe and G.O. Barnett, "Medical data: their acquisition, storage and use," in Medical Informatics: Computer Applications in Health Care, edited by E.H. Shortliffe and L.E. Perreault, Addison-Wesley: Reading, Massachusetts, pp. 37–69, 1990.
38. M. Göker, T. Roth-Berghofer, R. Bergmann, T. Pantleon, R. Traphöner, S. Wess, and W. Wilke, "The development of HOMER: A case-based CAD/CAM help-desk support tool," in Proceedings of the Fourth European Workshop on Case-Based Reasoning, Dublin, Ireland, 1998, pp. 346–357.

David McSherry graduated with first class honours in Mathematics at the Queen's University of Belfast in 1973, and was awarded the degrees of M.Sc. and Ph.D. by the same university in 1974 and 1976. He is currently a senior lecturer in Computing Science at the University of Ulster, and his research interests include case-based reasoning, machine learning, and artificial intelligence in medicine. He has published a large number of papers in international conference proceedings and journals, and won the award for best technical paper at the Eighteenth Annual International Conference of the British Computer Society's Specialist Group on Knowledge-Based Systems and Applied Artificial Intelligence.
