Unit 4: Association Rule Mining (Lecture 27, 23-09-09)
Association Rule Mining

• Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-Basket transactions:

TID   Items
1     Bread, Milk
2     Bread, Diaper, Beer, Eggs
3     Milk, Diaper, Beer, Coke
4     Bread, Milk, Diaper, Beer
5     Bread, Milk, Diaper, Coke

Examples of Association Rules:
{Diaper} → {Beer}, {Milk, Bread} → {Eggs, Coke}, {Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!
Definition: Frequent Itemset

• Itemset
  – A collection of one or more items
  – Example: {Milk, Bread, Diaper}
  – k-itemset: an itemset that contains k items

• Support count (σ)
  – Frequency of occurrence of an itemset
  – E.g. σ({Milk, Bread, Diaper}) = 2

• Support (s)
  – Fraction of transactions that contain an itemset
  – E.g. s({Milk, Bread, Diaper}) = 2/5

• Frequent Itemset
  – An itemset whose support is greater than or equal to a minsup threshold
Definition: Association Rule

• Association Rule
  – An implication expression of the form X → Y, where X and Y are itemsets
  – Example: {Milk, Diaper} → {Beer}

• Rule Evaluation Metrics
  – Support (s): fraction of transactions that contain both X and Y
  – Confidence (c): measures how often items in Y appear in transactions that contain X

Example: {Milk, Diaper} → {Beer}

  s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 = 0.67
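To make these two metrics concrete, here is a minimal Python sketch over the market-basket transactions above (the helper names `support_count`, `support` and `confidence` are ours, not from any library):

```python
# Market-basket transactions from the example above
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(itemset): fraction of transactions containing the itemset."""
    return support_count(itemset, transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """c(lhs -> rhs) = sigma(lhs ∪ rhs) / sigma(lhs)."""
    return support_count(lhs | rhs, transactions) / support_count(lhs, transactions)

print(support({"Milk", "Diaper", "Beer"}, transactions))       # 0.4
print(confidence({"Milk", "Diaper"}, {"Beer"}, transactions))  # 0.666...
```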
Association Rule Mining Task

• Given a set of transactions T, the goal of association rule mining is to find all rules having
  – support ≥ minsup threshold
  – confidence ≥ minconf threshold

• Brute-force approach:
  – List all possible association rules
  – Compute the support and confidence for each rule
  – Prune rules that fail the minsup and minconf thresholds
  ⇒ Computationally prohibitive!
Mining Association Rules

Example of rules (from the market-basket transactions above):
{Milk, Diaper} → {Beer}   (s=0.4, c=0.67)
{Milk, Beer} → {Diaper}   (s=0.4, c=1.0)
{Diaper, Beer} → {Milk}   (s=0.4, c=0.67)
{Beer} → {Milk, Diaper}   (s=0.4, c=0.67)
{Diaper} → {Milk, Beer}   (s=0.4, c=0.5)
{Milk} → {Diaper, Beer}   (s=0.4, c=0.5)

Observations:
• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but can have different confidence
• Thus, we may decouple the support and confidence requirements
Mining Association Rules

• Two-step approach:
  1. Frequent Itemset Generation
     – Generate all itemsets whose support ≥ minsup
  2. Rule Generation
     – Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

• Frequent itemset generation is still computationally expensive
Frequent Itemset Generation

[Figure: itemset lattice for items A–E, from the empty set (null) through all 1-, 2-, 3- and 4-itemsets up to ABCDE]

Given d items, there are 2^d possible candidate itemsets.
Frequent Itemset Generation

• Brute-force approach:
  – Each itemset in the lattice is a candidate frequent itemset
  – Count the support of each candidate by scanning the database
  – Each of the N transactions is matched against the list of M candidates; with maximum transaction width w, this costs roughly O(NMw)

TID   Items
1     Bread, Milk
2     Bread, Diaper, Beer, Eggs
3     Milk, Diaper, Beer, Coke
4     Bread, Milk, Diaper, Beer
5     Bread, Milk, Diaper, Coke
Computational Complexity

• Given d unique items:
  – Total number of itemsets = 2^d
  – Total number of possible association rules:

    R = ∑_{k=1}^{d−1} [ (d choose k) × ∑_{j=1}^{d−k} (d−k choose j) ] = 3^d − 2^(d+1) + 1

    If d = 6, R = 602 rules
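A quick, illustrative way to check the rule count formula in Python:

```python
from math import comb

def num_rules(d):
    """R = sum_{k=1}^{d-1} C(d,k) * sum_{j=1}^{d-k} C(d-k,j) = 3^d - 2^(d+1) + 1."""
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

print(num_rules(6))       # 602
print(3**6 - 2**7 + 1)    # 602, the closed form agrees
```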
Frequent Itemset Generation Strategies

• Reduce the number of candidates (M)
  – Complete search: M = 2^d
  – Use pruning techniques to reduce M

• Reduce the number of transactions (N)
  – Reduce the size of N as the size of the itemset increases
  – Used by DHP and vertical-based mining algorithms

• Reduce the number of comparisons (NM)
  – Use efficient data structures to store the candidates or transactions
  – No need to match every candidate against every transaction
Reducing Number of Candidates

• Apriori principle:
  – If an itemset is frequent, then all of its subsets must also be frequent (equivalently, if an itemset is infrequent, all of its supersets must be infrequent)

• The Apriori principle holds due to the following property of the support measure:

    ∀ X, Y : (X ⊆ Y) ⇒ s(X) ≥ s(Y)

  – The support of an itemset never exceeds the support of its subsets
  – This is known as the anti-monotone property of support
Illustrating Apriori Principle

[Figure: itemset lattice for items A–E. If an itemset (e.g., AB) is found to be infrequent, all of its supersets can be pruned from the search without counting their support.]
Illustrating Apriori Principle

Minimum support count = 3

Items (1-itemsets):
Item     Count
Bread    4
Coke     2
Milk     4
Beer     3
Diaper   4
Eggs     1

Pairs (2-itemsets) — no need to generate candidates involving Coke or Eggs:
Itemset           Count
{Bread, Milk}     3
{Bread, Beer}     2
{Bread, Diaper}   3
{Milk, Beer}      2
{Milk, Diaper}    3
{Beer, Diaper}    3

Triplets (3-itemsets):
Itemset                  Count
{Bread, Milk, Diaper}    3

If every itemset is considered: 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41 candidates
With support-based pruning: 6 + 6 + 1 = 13 candidates
Apriori Algorithm

• Method: Let k = 1
  – Generate frequent itemsets of length 1
  – Repeat until no new frequent itemsets are identified:
    • Generate length (k+1) candidate itemsets from length-k frequent itemsets
    • Prune candidate itemsets containing subsets of length k that are infrequent
    • Count the support of each candidate by scanning the DB
    • Eliminate candidates that are infrequent, leaving only those that are frequent
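Below is a minimal, illustrative Python sketch of this loop (candidate generation by joining frequent k-itemsets, pruning via the Apriori principle, then support counting). It is a teaching sketch, not an optimized implementation, and the function name `apriori` is ours:

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return {frozenset(itemset): support_count} for all frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    counts = {}
    for t in transactions:                     # frequent 1-itemsets
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent, k = dict(frequent), 1
    while frequent:
        itemsets = list(frequent)
        candidates = set()
        for i in range(len(itemsets)):         # candidate generation (join step)
            for j in range(i + 1, len(itemsets)):
                union = itemsets[i] | itemsets[j]
                if len(union) == k + 1 and all(  # prune step: every k-subset must be frequent
                        frozenset(sub) in frequent for sub in combinations(union, k)):
                    candidates.add(union)
        # support counting by scanning the database
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
# Minimum support count 3 reproduces the 4 + 4 + 1 frequent itemsets from the previous slide
for itemset, count in sorted(apriori(transactions, 3).items(),
                             key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(set(itemset), count)
```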
Reducing Number of Comparisons

• Candidate counting:
  – Scan the database of transactions to determine the support of each candidate itemset
  – To reduce the number of comparisons, store the candidates in a hash structure
  – Instead of matching each transaction against every candidate, match it only against candidates contained in the hashed buckets

[Figure: N transactions are matched against a hash structure whose buckets hold the candidate itemsets]
Generate Hash Tree

Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:
• A hash function
• A max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets in a node exceeds the max leaf size, split the node)

[Figure: hash tree built with the hash function items 1,4,7 → left branch, 2,5,8 → middle branch, 3,6,9 → right branch; the 15 candidates above are distributed over the leaves]
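As a small illustration of the hash function used in this example (items 1, 4, 7 / 2, 5, 8 / 3, 6, 9 map to the three branches), the sketch below shows which branches a sorted candidate itemset follows level by level; leaf splitting and the max-leaf-size logic are omitted, and the helper names are ours:

```python
def branch(item):
    """Hash function from the example: 1,4,7 -> 0, 2,5,8 -> 1, 3,6,9 -> 2."""
    return (item - 1) % 3

def path(itemset):
    """Branches followed when routing a sorted candidate itemset,
    hashing on the 1st item at level 1, 2nd item at level 2, ..."""
    return [branch(item) for item in sorted(itemset)]

print(path({1, 4, 5}))  # [0, 0, 1]  -> left, left, middle
print(path({2, 3, 4}))  # [1, 2, 0]
print(path({3, 6, 8}))  # [2, 2, 1]
```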
Association Rule Discovery: Hash Tree

[Figures (three slides): the candidate hash tree from the previous slide, highlighting in turn the subtree reached by hashing on items 1, 4 or 7 (left branch), on items 2, 5 or 8 (middle branch), and on items 3, 6 or 9 (right branch) at the root]
Subset Operation

Given a transaction t = {1 2 3 5 6}, what are the possible subsets of size 3?

[Figure: systematic enumeration — Level 1 fixes the first item (1, 2 or 3), Level 2 fixes the second item, and Level 3 completes the 3-subsets: 123, 125, 126, 135, 136, 156, 235, 236, 256, 356]
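The same enumeration can be produced directly with Python's itertools, for illustration:

```python
from itertools import combinations

t = [1, 2, 3, 5, 6]
subsets = list(combinations(t, 3))
print(len(subsets))   # 10 subsets of size 3
print(subsets)        # (1,2,3), (1,2,5), (1,2,6), (1,3,5), ... , (3,5,6)
```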
Subset Operation Using Hash Tree

[Figures (three slides): matching transaction {1 2 3 5 6} against the candidate hash tree. The transaction is split as 1+{2 3 5 6}, 2+{3 5 6}, 3+{5 6} at the root, then as 12+{3 5 6}, 13+{5 6}, 15+{6}, ... at the next level; each path is hashed down to the leaves, whose candidate itemsets are then compared with the transaction.]

Match the transaction against 11 out of 15 candidates.
Factors Affecting Complexity

• Choice of minimum support threshold
  – Lowering the support threshold results in more frequent itemsets
  – This may increase the number of candidates and the maximum length of frequent itemsets

• Dimensionality (number of items) of the data set
  – More space is needed to store the support count of each item
  – If the number of frequent items also increases, both computation and I/O costs may increase

• Size of database
  – Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions

• Average transaction width
  – Transaction width increases with denser data sets
  – This may increase the maximum length of frequent itemsets and the number of hash tree traversals (the number of subsets in a transaction increases with its width)
Compact Representation of Frequent Itemsets

• Some itemsets are redundant because they have identical support as their supersets

[Figure: example transaction database over items A1–A10, B1–B10, C1–C10 in which each group of ten items always appears together in a block of transactions]

• Number of frequent itemsets = 3 × ∑_{k=1}^{10} (10 choose k)

• Need a compact representation
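For reference, the count in this example can be evaluated directly (assuming the formula above):

```python
from math import comb

total = 3 * sum(comb(10, k) for k in range(1, 11))
print(total)   # 3 * (2**10 - 1) = 3069 frequent itemsets
```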
Maximal Frequent Itemset

An itemset is maximal frequent if none of its immediate supersets is frequent.

[Figure: itemset lattice for items A–E with the border separating frequent from infrequent itemsets; the maximal frequent itemsets are the frequent itemsets lying directly on that border]
Closed Itemset

• An itemset is closed if none of its immediate supersets has the same support as the itemset

TID   Items
1     {A,B}
2     {B,C,D}
3     {A,B,C,D}
4     {A,B,D}
5     {A,B,C,D}

Itemset    Support        Itemset      Support
{A}        4              {A,B,C}      2
{B}        5              {A,B,D}      3
{C}        3              {A,C,D}      2
{D}        4              {B,C,D}      3
{A,B}      4              {A,B,C,D}    2
{A,C}      2
{A,D}      3
{B,C}      3
{B,D}      4
{C,D}      3
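A brief sketch of how closed and maximal frequent itemsets could be identified once all frequent itemsets and their supports are known, reusing the `apriori` sketch shown earlier (the function name and structure are ours):

```python
def closed_and_maximal(frequent):
    """frequent: {frozenset: support_count} for ALL frequent itemsets.
    Closed  : no immediate superset has the same support.
    Maximal : no immediate superset is frequent."""
    items = set().union(*frequent) if frequent else set()
    closed, maximal = [], []
    for itemset, sup in frequent.items():
        supersets = [itemset | {i} for i in items - itemset]
        if all(s not in frequent or frequent[s] != sup for s in supersets):
            closed.append(itemset)
        if all(s not in frequent for s in supersets):
            maximal.append(itemset)
    return closed, maximal

# Example: the transactions from the table above, minimum support count 2
transactions = [{"A","B"}, {"B","C","D"}, {"A","B","C","D"}, {"A","B","D"}, {"A","B","C","D"}]
closed, maximal = closed_and_maximal(apriori(transactions, 2))  # apriori() from the earlier sketch
print([set(s) for s in closed])    # {B}, {A,B}, {B,D}, {A,B,D}, {B,C,D}, {A,B,C,D}
print([set(s) for s in maximal])   # {A,B,C,D}
```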
Maximal vs Closed Itemsets

TID   Items
1     {A,B,C}
2     {A,B,C,D}
3     {B,C,E}
4     {A,C,D,E}
5     {D,E}

[Figure: itemset lattice for items A–E annotated with the transaction ids supporting each itemset, e.g. A → 1,2,4; B → 1,2,3; C → 1,2,3,4; D → 2,4,5; E → 3,4,5; AB → 1,2; AC → 1,2,4; ... Itemsets such as ABCDE are not supported by any transaction.]
Maximal vs Closed Frequent Itemsets

Minimum support = 2

[Figure: the same lattice with the frequent itemsets marked; itemsets that are closed but not maximal and itemsets that are both closed and maximal are highlighted]

# Closed = 9
# Maximal = 4
Maximal vs Closed Itemsets

[Figure: nested sets — every maximal frequent itemset is closed, and every closed frequent itemset is frequent:
Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets]
Alternative Methods for Frequent Itemset Generation

• Traversal of Itemset Lattice
  – General-to-specific vs Specific-to-general vs Bidirectional

[Figure: (a) general-to-specific search moves from small itemsets toward {a1, a2, ..., an}; (b) specific-to-general search moves in the opposite direction; (c) bidirectional search approaches the frequent itemset border from both sides]
Alternative Methods for Frequent Itemset Generation

• Traversal of Itemset Lattice
  – Equivalence classes

[Figure: the itemset lattice over {A,B,C,D} partitioned into equivalence classes, (a) by prefix (prefix tree) and (b) by suffix (suffix tree)]
Alternative Methods for Frequent Itemset Generation

• Traversal of Itemset Lattice
  – Breadth-first vs Depth-first

[Figure: (a) breadth-first and (b) depth-first traversal of the itemset lattice]
Alternative Methods for Frequent Itemset Generation

• Representation of Database
  – Horizontal vs vertical data layout

Horizontal Data Layout          Vertical Data Layout
TID   Items                     A    B    C    D    E
1     A,B,E                     1    1    2    2    1
2     B,C,D                     4    2    3    4    3
3     C,E                       5    5    4    5    6
4     A,C,D                     6    7    8    9
5     A,B,C,D                   7    8    9
6     A,E                       8    10
7     A,B                       9
8     A,B,C
9     A,C,D
10    B
FP-growth Algorithm

• Uses a compressed representation of the database in the form of an FP-tree
• Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets
FP-tree Construction

TID   Items
1     {A,B}
2     {B,C,D}
3     {A,C,D,E}
4     {A,D,E}
5     {A,B,C}
6     {A,B,C,D}
7     {B,C}
8     {A,B,C}
9     {A,B,D}
10    {B,C,E}

[Figure: after reading TID=1 the tree is null → A:1 → B:1; after reading TID=2 a second branch null → B:1 → C:1 → D:1 is added]
FP-Tree Construction (continued)

[Figure: the complete FP-tree for the transaction database above. The root (null) has two children, A:7 and B:3; a header table with entries for items A, B, C, D, E points to the first occurrence of each item in the tree, and node links chain together all nodes carrying the same item.]

Pointers are used to assist frequent itemset generation.
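For reference, a compact sketch of FP-tree construction (shared prefixes accumulate counts; the header table links all nodes of an item). This is an illustrative implementation that assumes a fixed alphabetical item order, not the lecture's reference code:

```python
from collections import Counter

class FPNode:
    def __init__(self, item, parent=None):
        self.item, self.count, self.parent = item, 0, parent
        self.children = {}                     # item -> FPNode

def build_fp_tree(transactions, minsup_count=1):
    """Build an FP-tree; returns (root, header), header maps item -> list of its nodes."""
    support = Counter(item for t in transactions for item in t)
    frequent = {i for i, c in support.items() if c >= minsup_count}
    root, header = FPNode(None), {}
    for t in transactions:
        node = root
        # Items are inserted in a fixed global order (alphabetical here, matching the
        # figure); many implementations order items by descending support instead.
        for item in sorted(i for i in t if i in frequent):
            if item not in node.children:
                child = FPNode(item, parent=node)
                node.children[item] = child
                header.setdefault(item, []).append(child)
            node = node.children[item]
            node.count += 1
    return root, header

transactions = [
    {"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
    {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"},
]
root, header = build_fp_tree(transactions)
print({i: n.count for i, n in root.children.items()})             # {'A': 7, 'B': 3}
print({i: sum(n.count for n in ns) for i, ns in header.items()})  # A:7 B:8 C:7 D:5 E:3
```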
FP-growth

[Figure: the FP-tree with the paths ending in D highlighted]

Conditional pattern base for D:
P = {(A:1, B:1, C:1), (A:1, B:1), (A:1, C:1), (A:1), (B:1, C:1)}

Recursively apply FP-growth on P.

Frequent itemsets found (with support > 1): AD, BD, CD, ACD, BCD
Tree Projection

[Figure: set enumeration tree over items A–E. The possible extensions of node A are E(A) = {B, C, D, E}; the possible extensions of node ABC are E(ABC) = {D, E}.]
Tree Projection

• Items are listed in lexicographic order
• Each node P stores the following information:
  – Itemset for node P
  – List of possible lexicographic extensions of P: E(P)
  – Pointer to the projected database of its ancestor node
  – Bitvector containing information about which transactions in the projected database contain the itemset
Projected Database

Original Database:               Projected Database for node A:
TID   Items                      TID   Items
1     {A,B}                      1     {B}
2     {B,C,D}                    2     {}
3     {A,C,D,E}                  3     {C,D,E}
4     {A,D,E}                    4     {D,E}
5     {A,B,C}                    5     {B,C}
6     {A,B,C,D}                  6     {B,C,D}
7     {B,C}                      7     {}
8     {A,B,C}                    8     {B,C}
9     {A,B,D}                    9     {B,D}
10    {B,C,E}                    10    {}

For each transaction T, the projected transaction at node A is T ∩ E(A).
ECLAT

• For each item, store a list of transaction ids (tids), the TID-list

Horizontal Data Layout          Vertical Data Layout (TID-lists)
TID   Items                     A    B    C    D    E
1     A,B,E                     1    1    2    2    1
2     B,C,D                     4    2    3    4    3
3     C,E                       5    5    4    5    6
4     A,C,D                     6    7    8    9
5     A,B,C,D                   7    8    9
6     A,E                       8    10
7     A,B                       9
8     A,B,C
9     A,C,D
10    B
ECLAT

• Determine the support of any k-itemset by intersecting the tid-lists of two of its (k−1)-subsets:

  tid-list(A) = {1, 4, 5, 6, 7, 8, 9}
  tid-list(B) = {1, 2, 5, 7, 8, 10}
  tid-list(A) ∩ tid-list(B) = tid-list(AB) = {1, 5, 7, 8}  →  support count of {A,B} is 4

• Three traversal approaches: top-down, bottom-up and hybrid
• Advantage: very fast support counting
• Disadvantage: intermediate tid-lists may become too large for memory
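A minimal sketch of tid-list intersection for support counting (a sorted-list merge over the vertical layout above; the function name is ours):

```python
def intersect(tids_x, tids_y):
    """Merge-intersect two sorted tid-lists."""
    out, i, j = [], 0, 0
    while i < len(tids_x) and j < len(tids_y):
        if tids_x[i] == tids_y[j]:
            out.append(tids_x[i]); i += 1; j += 1
        elif tids_x[i] < tids_y[j]:
            i += 1
        else:
            j += 1
    return out

tid_A = [1, 4, 5, 6, 7, 8, 9]
tid_B = [1, 2, 5, 7, 8, 10]
tid_AB = intersect(tid_A, tid_B)
print(tid_AB, len(tid_AB))   # [1, 5, 7, 8] 4  -> support count of {A,B}
```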
Rule Generation

• Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement
  – If {A,B,C,D} is a frequent itemset, the candidate rules are:
    ABC → D,  ABD → C,  ACD → B,  BCD → A,
    AB → CD,  AC → BD,  AD → BC,  BC → AD,  BD → AC,  CD → AB,
    A → BCD,  B → ACD,  C → ABD,  D → ABC

• If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
Rule Generation

• How to efficiently generate rules from frequent itemsets?
  – In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D)
  – But the confidence of rules generated from the same itemset has an anti-monotone property
  – E.g., for L = {A,B,C,D}:  c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)

• Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule
Rule Generation for Apriori Algorithm

[Figure: lattice of rules derived from the frequent itemset {A,B,C,D}, from ABCD → {} down to A → BCD. If BCD → A is found to be a low-confidence rule, the rules below it (CD → AB, BD → AC, BC → AD, D → ABC, C → ABD, B → ACD) can be pruned.]
Rule Generation for Apriori Algorithm

• A candidate rule is generated by merging two rules that share the same prefix in the rule consequent
  – E.g., join(CD → AB, BD → AC) produces the candidate rule D → ABC
  – Prune rule D → ABC if its subset rule AD → BC does not have high confidence
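An illustrative sketch of level-wise rule generation from a single frequent itemset, growing consequents only from those that already produced high-confidence rules. It reuses the support counts returned by the earlier `apriori` sketch, and the names are ours:

```python
def gen_rules(itemset, supports, minconf):
    """Generate rules X -> itemset-X with confidence >= minconf.
    supports: {frozenset: support_count} covering itemset and all of its subsets."""
    itemset = frozenset(itemset)
    rules = []
    # Start with 1-item consequents and grow them level-wise, as in Apriori
    consequents = [frozenset([i]) for i in itemset]
    while consequents and len(next(iter(consequents))) < len(itemset):
        kept = []
        for rhs in consequents:
            lhs = itemset - rhs
            conf = supports[itemset] / supports[lhs]
            if conf >= minconf:
                rules.append((set(lhs), set(rhs), conf))
                kept.append(rhs)
        # Merge surviving consequents to form larger ones (confidence is anti-monotone
        # in the consequent size, so discarded consequents need not be extended)
        consequents = list({a | b for a in kept for b in kept
                            if len(a | b) == len(a) + 1})
    return rules

supports = apriori(transactions, 3)          # from the earlier Apriori sketch
for lhs, rhs, conf in gen_rules({"Bread", "Milk", "Diaper"}, supports, 0.7):
    print(lhs, "->", rhs, round(conf, 2))
```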
Effect of Support Distribution

• Many real data sets have a skewed support distribution

[Figure: support distribution of a retail data set]
Effect of Support Distribution

• How to set the appropriate minsup threshold?
  – If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
  – If minsup is set too low, mining becomes computationally expensive and the number of itemsets is very large

• Using a single minimum support threshold may not be effective
Multiple Minimum Support

• How to apply multiple minimum supports?
  – MS(i): minimum support for item i
  – E.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
  – MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%
  – Challenge: support is no longer anti-monotone
    • Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%
    • Then {Milk, Coke} is infrequent but {Milk, Coke, Broccoli} is frequent
Multiple Minimum Support

Item   MS(I)    Sup(I)
A      0.10%    0.25%
B      0.20%    0.26%
C      0.30%    0.29%
D      0.50%    0.05%
E      3%       4.20%

[Figure: itemset lattice over items A–E illustrating which items and itemsets satisfy their item-wise minimum support thresholds]
Multiple Minimum Support (continued)

[Figure: the same lattice and the same Item / MS(I) / Sup(I) table as the previous slide, highlighting the itemsets that must still be examined under the multiple minimum support thresholds]
Multiple Minimum Support (Liu 1999)

• Order the items according to their minimum support (in ascending order)
  – E.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
  – Ordering: Broccoli, Salmon, Coke, Milk

• Need to modify Apriori such that:
  – L1: set of frequent items
  – F1: set of items whose support is ≥ MS(1), where MS(1) is min_i(MS(i))
  – C2: candidate itemsets of size 2 are generated from F1 instead of L1
Multiple Minimum Support (Liu 1999)

• Modifications to Apriori:
  – In traditional Apriori:
    • A candidate (k+1)-itemset is generated by merging two frequent itemsets of size k
    • The candidate is pruned if it contains any infrequent subsets of size k
  – The pruning step has to be modified:
    • Prune only if the subset contains the first item
    • E.g.: Candidate = {Broccoli, Coke, Milk} (ordered according to minimum support)
    • {Broccoli, Coke} and {Broccoli, Milk} are frequent but {Coke, Milk} is infrequent
      – The candidate is not pruned because {Coke, Milk} does not contain the first item, i.e., Broccoli
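A small sketch of the two ingredients above — ascending-MS item ordering and the per-itemset threshold MS(X) = min over its items (illustrative only; the thresholds are the ones from the example):

```python
MS = {"Broccoli": 0.001, "Salmon": 0.005, "Coke": 0.03, "Milk": 0.05}

def ms_order(items):
    """Items sorted by ascending minimum support (Liu 1999 ordering)."""
    return sorted(items, key=lambda i: MS[i])

def itemset_minsup(itemset):
    """MS(X) = min of the minimum supports of the items in X."""
    return min(MS[i] for i in itemset)

print(ms_order(MS))                          # ['Broccoli', 'Salmon', 'Coke', 'Milk']
print(itemset_minsup({"Milk", "Broccoli"}))  # 0.001, i.e. 0.1%
```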
Pattern Evaluation

• Association rule algorithms tend to produce too many rules
  – Many of them are uninteresting or redundant
  – A rule is redundant if, e.g., {A,B,C} → {D} and {A,B} → {D} have the same support and confidence

• Interestingness measures can be used to prune/rank the derived patterns
• In the original formulation of association rules, support and confidence are the only measures used
Application of Interestingness Measures

[Figure: knowledge-discovery pipeline — Data → Selection → Selected Data → Preprocessing → Preprocessed Data → Mining → Patterns → Postprocessing (using an interestingness measure) → Knowledge]
Computing Interestingness Measure

• Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table

Contingency table for X → Y:

           Y      ¬Y
  X       f11    f10   | f1+
  ¬X      f01    f00   | f0+
          f+1    f+0   | |T|

  f11: support of X and Y         f10: support of X and ¬Y
  f01: support of ¬X and Y        f00: support of ¬X and ¬Y

• Used to define various measures: support, confidence, lift, Gini, J-measure, etc.
Drawback of Confidence

           Coffee   ¬Coffee
  Tea        15        5     |  20
  ¬Tea       75        5     |  80
             90       10     | 100

Association Rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 15/20 = 0.75
but P(Coffee) = 0.9, and P(Coffee | ¬Tea) = 75/80 = 0.9375

⇒ Although the confidence is high, the rule is misleading
Statistical Independence

• Population of 1000 students
  – 600 students know how to swim (S)
  – 700 students know how to bike (B)
  – 420 students know how to swim and bike (S,B)
  – P(S∧B) = 420/1000 = 0.42
  – P(S) × P(B) = 0.6 × 0.7 = 0.42
  – P(S∧B) = P(S) × P(B)  =>  statistical independence
  – P(S∧B) > P(S) × P(B)  =>  positively correlated
  – P(S∧B) < P(S) × P(B)  =>  negatively correlated
Statistical-based Measures

• Measures that take into account statistical dependence:

  Lift = P(Y|X) / P(Y)

  Interest = P(X,Y) / (P(X) P(Y))

  PS = P(X,Y) − P(X) P(Y)

  φ-coefficient = (P(X,Y) − P(X) P(Y)) / sqrt( P(X)(1 − P(X)) P(Y)(1 − P(Y)) )
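These measures can be computed directly from the f11, f10, f01, f00 entries of the contingency table defined earlier; a short illustrative sketch (the function name is ours):

```python
from math import sqrt

def measures(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    p_xy, p_x, p_y = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    lift     = (p_xy / p_x) / p_y          # = P(Y|X)/P(Y)
    interest = p_xy / (p_x * p_y)          # equals lift for a 2x2 table
    ps       = p_xy - p_x * p_y
    phi      = ps / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
    return lift, interest, ps, phi

# Tea -> Coffee table: f11=15, f10=5, f01=75, f00=5
print(measures(15, 5, 75, 5))   # lift ≈ 0.833 -> negatively associated
```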
Example: Lift/Interest

           Coffee   ¬Coffee
  Tea        15        5     |  20
  ¬Tea       75        5     |  80
             90       10     | 100

Association Rule: Tea → Coffee

Confidence = P(Coffee | Tea) = 0.75
but P(Coffee) = 0.9

⇒ Lift = 0.75 / 0.9 = 0.8333 (< 1, therefore negatively associated)
Drawback of Lift & Interest

        Y    ¬Y
  X    10     0   |  10
  ¬X    0    90   |  90
       10    90   | 100

  Lift = 0.1 / (0.1 × 0.1) = 10

        Y    ¬Y
  X    90     0   |  90
  ¬X    0    10   |  10
       90    10   | 100

  Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence: if P(X,Y) = P(X) P(Y), then Lift = 1.
Properties of a Good Measure

• Piatetsky-Shapiro: three properties a good measure M must satisfy:
  – M(A,B) = 0 if A and B are statistically independent
  – M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged
  – M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged
Comparing Different Measures

10 example contingency tables:

Example   f11    f10    f01    f00
E1        8123     83    424   1370
E2        8330      2    622   1046
E3        9481     94    127    298
E4        3954   3080      5   2961
E5        2886   1363   1320   4431
E6        1500   2000    500   6000
E7        4000   2000   1000   3000
E8        4000   2000   2000   2000
E9        1720   7121      5   1154
E10         61   2483      4   7452

[Figure: rankings of the 10 contingency tables under the various interestingness measures]
Property under Variable Permutation

        B    ¬B               A    ¬A
  A     p     q          B    p     r
  ¬A    r     s          ¬B   q     s

Does M(A,B) = M(B,A)?

Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.
Property under Row/Column Scaling

Grade-Gender example (Mosteller, 1968):

         Male   Female               Male   Female
  High     2       3    |  5   High    4      30    | 34
  Low      1       4    |  5   Low     2      40    | 42
           3       7    | 10           6      70    | 76

The second table is obtained from the first by scaling the Male column by 2 and the Female column by 10.

Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.
Property under Inversion Operation

[Figure: example transaction vectors A–F over N transactions illustrating the inversion operation, in which every 0 is flipped to 1 and every 1 to 0]
Example: φ-Coefficient

• The φ-coefficient is analogous to the correlation coefficient for continuous variables

        Y    ¬Y
  X    60    10  |  70
  ¬X   10    20  |  30
       70    30  | 100

  φ = (0.6 − 0.7 × 0.7) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238

        Y    ¬Y
  X    20    10  |  30
  ¬X   10    60  |  70
       30    70  | 100

  φ = (0.2 − 0.3 × 0.3) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238

The φ-coefficient is the same for both tables.
Property under Null Addition

        B    ¬B               B    ¬B
  A     p     q          A    p     q
  ¬A    r     s          ¬A   r     s + k

Invariant measures: support, cosine, Jaccard, etc.
Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.
Different Measures have Different Properties

Symbol  Measure               Range               P1    P2    P3    O1     O2    O3    O3'
Φ       Correlation           −1 … 0 … 1          Yes   Yes   Yes   Yes    No    Yes   Yes
λ       Lambda                0 … 1               Yes   No    No    Yes    No    No*   Yes
α       Odds ratio            0 … 1 … ∞           Yes*  Yes   Yes   Yes    Yes   Yes*  Yes
Q       Yule's Q              −1 … 0 … 1          Yes   Yes   Yes   Yes    Yes   Yes   Yes
Y       Yule's Y              −1 … 0 … 1          Yes   Yes   Yes   Yes    Yes   Yes   Yes
κ       Cohen's               −1 … 0 … 1          Yes   Yes   Yes   Yes    No    No    Yes
M       Mutual Information    0 … 1               Yes   Yes   Yes   Yes    No    No*   Yes
J       J-Measure             0 … 1               Yes   No    No    No     No    No    No
G       Gini Index            0 … 1               Yes   No    No    No     No    No*   Yes
s       Support               0 … 1               No    Yes   No    Yes    No    No    No
c       Confidence            0 … 1               No    Yes   No    Yes    No    No    No
L       Laplace               0 … 1               No    Yes   No    Yes    No    No    No
V       Conviction            0.5 … 1 … ∞         No    Yes   No    Yes**  No    No    Yes
I       Interest              0 … 1 … ∞           Yes*  Yes   Yes   Yes    No    No    No
IS      IS (cosine)           0 … 1               No    Yes   Yes   Yes    No    No    No
PS      Piatetsky-Shapiro's   −0.25 … 0 … 0.25    Yes   Yes   Yes   Yes    No    Yes   Yes
F       Certainty factor      −1 … 0 … 1          Yes   Yes   Yes   No     No    No    Yes
AV      Added value           0.5 … 1 … 1         Yes   Yes   Yes   No     No    No    Yes
S       Collective strength   0 … 1 … ∞           Yes   Yes   No    Yes    Yes   Yes   No
ζ       Jaccard               0 … 1               No    Yes   Yes   Yes    No    No    No
Support-based Pruning

• Most association rule mining algorithms use the support measure to prune rules and itemsets
• Study the effect of support pruning on the correlation of itemsets:
  – Generate 10000 random contingency tables
  – Compute support and pairwise correlation for each table
  – Apply support-based pruning and examine the tables that are removed
Effect of Support-based Pruning

[Figure: histogram of correlation values (from −1 to 1) for all item pairs in the 10000 random contingency tables]
Effect of Support-based Pruning

[Figure: correlation histograms of the item pairs removed when pruning at support < 0.01, support < 0.03, and support < 0.05]

Support-based pruning eliminates mostly negatively correlated itemsets.
Effect of Support-based Pruning

• Investigate how support-based pruning affects other measures
• Steps:
  – Generate 10000 contingency tables
  – Rank each table according to the different measures
  – Compute the pairwise correlation between the measures
Effect of Support-based Pruning

• Without support pruning (all pairs):

[Figure: correlation matrix between the measures (Conviction, Odds ratio, Collective strength, Correlation, Interest, PS, CF, Yule's Y, Reliability, Kappa, Klosgen, Yule's Q, Confidence, Laplace, IS, Support, Jaccard, Lambda, Gini, J-measure, Mutual Info), with red cells indicating a correlation between the pair of measures > 0.85, and a scatter plot between the Correlation and Jaccard measures]

• 40.14% of the pairs have correlation > 0.85
Effect of Support-based Pruning

• 0.5% ≤ support ≤ 50%:

[Figure: correlation matrix between the measures and scatter plot between the Correlation and Jaccard measures for the tables that survive this support range]

• 61.45% of the pairs have correlation > 0.85
Effect of Support-based Pruning

• 0.5% ≤ support ≤ 30%:

[Figure: correlation matrix between the measures and scatter plot between the Correlation and Jaccard measures for the tables that survive this support range]

• 76.42% of the pairs have correlation > 0.85
Subjective Interestingness Measure

• Objective measure:
  – Rank patterns based on statistics computed from data
  – E.g., the 21 measures of association (support, confidence, Laplace, Gini, mutual information, Jaccard, etc.)

• Subjective measure:
  – Rank patterns according to the user's interpretation
    • A pattern is subjectively interesting if it contradicts the expectation of a user (Silberschatz & Tuzhilin)
    • A pattern is subjectively interesting if it is actionable (Silberschatz & Tuzhilin)
Interestingness via Unexpectedness

• Need to model the expectations of users (domain knowledge)
  – + : pattern expected to be frequent
  – − : pattern expected to be infrequent
  – A pattern whose observed frequency agrees with the expectation is an expected pattern; a pattern found to be frequent but expected to be infrequent (or vice versa) is an unexpected pattern

• Need to combine the expectations of users with evidence from the data (i.e., the extracted patterns)
Interestingness via Unexpectedness

• Web Data (Cooley et al., 2001)
  – Domain knowledge in the form of site structure
  – Given an itemset F = {X1, X2, ..., Xk} (Xi: Web pages):
    • L: number of links connecting the pages
    • lfactor = L / (k × (k−1))
    • cfactor = 1 if the graph is connected, 0 if disconnected
  – Structure evidence = cfactor × lfactor
  – Usage evidence = P(X1 ∩ X2 ∩ ... ∩ Xk) / P(X1 ∪ X2 ∪ ... ∪ Xk)
  – Use Dempster-Shafer theory to combine domain knowledge and evidence from data