Apriori algorithm

In computer science and data mining, Apriori is a classic algorithm for learning association rules. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of website visits). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi) or having no timestamps (DNA sequencing).

As is common in association rule mining, given a set of itemsets (for instance, sets of retail transactions, each listing individual items purchased), the algorithm attempts to find subsets which are common to at least a minimum number C (the cutoff, or support threshold) of the itemsets. Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.

Apriori uses breadth-first search and a hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length $$k$$ from item sets of length $$k-1$$. Then it prunes the candidates which have an infrequent sub-pattern. According to the downward closure lemma, the candidate set contains all frequent $$k$$-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. For determining frequent items quickly, the algorithm uses a hash tree to store candidate itemsets. This hash tree has item sets at the leaves and hash tables at internal nodes (Zaki, 99). Note that this is not the same kind of hash tree as used in, for instance, p2p systems.
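The counting scan described above can be sketched as follows. This is a minimal illustration on made-up toy data, with a plain dictionary standing in for the hash tree that a real implementation would use to locate candidates quickly:

```python
from itertools import combinations

# Hypothetical toy database: each transaction is a set of items.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]

# Candidate 2-itemsets, as frozensets so they can serve as dict keys.
candidates = [frozenset(p) for p in combinations("abc", 2)]

# One scan of the database: count the support of each candidate.
# A plain dict replaces the hash tree for clarity, not efficiency.
count = {c: 0 for c in candidates}
for t in transactions:
    for c in candidates:
        if c <= t:  # the candidate is a subset of the transaction
            count[c] += 1

print(count)  # every 2-itemset over {a, b, c} appears in 2 transactions
```

With the support threshold C = 2, all three candidates would be kept as frequent 2-itemsets.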

Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation produces large numbers of subsets (the algorithm attempts to load up the candidate set with as many as possible before each scan). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all $$2^{|S|}-1$$ of its proper subsets have been found.

Example
This example illustrates the process of generating a list of candidate ordered serial item sets. The technique's goal is to construct ordered serial item sets of length $$k$$ from item sets of length $$k-1$$. For example, with $$k = 4$$, suppose there are two such sets of length $$k-1$$:
 * $$A \rightarrow B \rightarrow C$$,

and
 * $$A \rightarrow B \rightarrow D$$,

then two candidate item sets of length $$k$$ are generated, namely
 * $$A \rightarrow B \rightarrow C \rightarrow D$$

and
 * $$A \rightarrow B \rightarrow D \rightarrow C$$.
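The join shown above can be sketched in a few lines. This is an illustrative helper (the name `join_serial` is mine, not from the source), assuming ordered serial item sets are represented as tuples:

```python
def join_serial(s1, s2):
    """Join two ordered (k-1)-sequences sharing the same (k-2)-length
    prefix into the two candidate k-sequences they induce."""
    # Precondition: same prefix, different last items.
    assert s1[:-1] == s2[:-1] and s1[-1] != s2[-1]
    return [s1 + (s2[-1],), s2 + (s1[-1],)]

# A -> B -> C joined with A -> B -> D:
cands = join_serial(("A", "B", "C"), ("A", "B", "D"))
print(cands)  # [('A', 'B', 'C', 'D'), ('A', 'B', 'D', 'C')]
```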

Algorithm
Association rule mining seeks association rules that satisfy predefined minimum support and confidence thresholds in a given database. The problem is usually decomposed into two subproblems. The first is to find those itemsets whose occurrences exceed a predefined threshold in the database; those itemsets are called frequent or large itemsets. The second is to generate association rules from those large itemsets under the constraint of minimal confidence.

Suppose one of the large itemsets is Lk = {I1, I2, …, Ik}. Association rules with this itemset are generated in the following way: the first rule is {I1, I2, …, Ik-1} ⇒ {Ik}, and by checking its confidence this rule can be determined as interesting or not. Further rules are then generated by deleting the last item in the antecedent and inserting it into the consequent, and the confidences of the new rules are checked to determine their interestingness. This process iterates until the antecedent becomes empty. Since the second subproblem is quite straightforward, most research focuses on the first subproblem. The Apriori algorithm finds the frequent sets $$L$$ in database $$D$$.
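The rule-generation iteration just described can be sketched as below. The function name and the toy support counts are hypothetical; `support` is assumed to be a precomputed map from itemsets to their occurrence counts, and confidence is support(full itemset) divided by support(antecedent):

```python
def rules_from_itemset(items, support, min_conf):
    """Generate rules from a frequent itemset given as an ordered tuple
    (I1, ..., Ik): start with {I1..Ik-1} => {Ik}, then repeatedly move
    the last antecedent item into the consequent, keeping rules whose
    confidence meets min_conf, until the antecedent is empty."""
    rules = []
    antecedent, consequent = list(items[:-1]), [items[-1]]
    while antecedent:
        conf = support[frozenset(items)] / support[frozenset(antecedent)]
        if conf >= min_conf:
            rules.append((tuple(antecedent), tuple(consequent), conf))
        # Delete the last antecedent item and insert it into the consequent.
        consequent.insert(0, antecedent.pop())
    return rules

# Made-up support counts for illustration only.
support = {frozenset("ABC"): 2, frozenset("AB"): 3, frozenset("A"): 4}
rules = rules_from_itemset(("A", "B", "C"), support, min_conf=0.5)
print(rules)  # {A,B} => {C} with conf 2/3, then {A} => {B,C} with conf 0.5
```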


 * Find frequent set $$L_{k-1}$$.
 * Join step: $$C_k$$ is generated by joining $$L_{k-1}$$ with itself.
 * Prune step: any $$(k-1)$$-itemset that is not frequent cannot be a subset of a frequent $$k$$-itemset, hence such candidates are removed from $$C_k$$.

where
 * $$C_k$$: candidate itemsets of size $$k$$
 * $$L_k$$: frequent itemsets of size $$k$$

Apriori Pseudocode
Apriori $$(T,\varepsilon)$$
    $$L_1 \gets \{\,\text{large 1-itemsets}\,\}$$
    $$k \gets 2$$
    while $$L_{k-1} \neq \varnothing$$
        $$C_k \gets \text{Generate}(L_{k-1})$$
        for transactions $$t \in T$$
            $$C_t \gets \text{Subset}(C_k, t)$$
            for candidates $$c \in C_t$$
                $$\mathrm{count}[c] \gets \mathrm{count}[c] + 1$$
        $$L_k \gets \{ c \in C_k \mid \mathrm{count}[c] \geq \varepsilon \}$$
        $$k \gets k + 1$$
    return $$\bigcup_k L_k$$
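The pseudocode above can be transcribed fairly directly into Python. This is a sketch, not an efficient implementation: transactions are plain sets, `Generate` is inlined as a join with downward-closure pruning, and `Subset` becomes a simple subset test instead of a hash-tree lookup:

```python
from itertools import combinations

def apriori(T, eps):
    """T: list of transactions (sets of items); eps: minimum support count.
    Returns the union of all frequent itemsets, as in the pseudocode."""
    # L1: large 1-itemsets.
    items = {i for t in T for i in t}
    count = {frozenset([i]): sum(1 for t in T if i in t) for i in items}
    L = {1: {s for s, n in count.items() if n >= eps}}
    k = 2
    while L[k - 1]:
        # Generate(L_{k-1}): join, then prune by downward closure.
        Ck = {a | b for a in L[k - 1] for b in L[k - 1] if len(a | b) == k}
        Ck = {c for c in Ck
              if all(frozenset(s) in L[k - 1] for s in combinations(c, k - 1))}
        # Subset(C_k, t) and counting, via a direct subset test.
        count = {c: 0 for c in Ck}
        for t in T:
            for c in Ck:
                if c <= t:
                    count[c] += 1
        L[k] = {c for c, n in count.items() if n >= eps}
        k += 1
    return set().union(*L.values())

# Hypothetical toy database.
T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "d"}]
freq = apriori(T, eps=2)
print(freq)  # {a}, {b}, {c}, {a,b}, {a,c}
```

With `eps=2`, {d} is infrequent (support 1), so no candidate containing d is ever generated, and {b, c} (support 1) fails the count check at k = 2.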