Association Rule Mining

Challenge: Apriori Algorithm Implementation

Now we will implement the Apriori algorithm using the mlxtend library.
Let's go over the key implementation points:

  • We will use the mlxtend.frequent_patterns module to detect frequent itemsets with the Apriori algorithm and to generate association rules;
  • The Apriori algorithm is implemented by the apriori(data, min_support, use_colnames=True) function. The data argument is the transaction dataset in one-hot-encoded format, and min_support is a numerical value that sets the minimum support threshold;
  • To detect association rules, we can use the association_rules(frequent_itemsets, metric, min_threshold) function. The frequent_itemsets argument is the DataFrame of frequent itemsets returned by apriori, metric is the name (as a string) of the measure used to evaluate the strength of a rule, and min_threshold is the minimum value of that metric for a rule to be considered significant.

What is the one-hot-encoded format?

One-hot encoding is a technique used to convert categorical variables into a numerical format that can be used for machine learning algorithms. It involves representing each category in a categorical variable as a binary vector, where each vector has a length equal to the number of unique categories in the variable. The vector is all zeros except for the index corresponding to the category, which is set to 1.
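As a small illustration of this idea on a single categorical column (the column name and values are made up for the example), pandas' get_dummies performs exactly this conversion:

```python
import pandas as pd

# A categorical column with three unique values
colors = pd.Series(["red", "green", "blue", "red"], name="color")

# Each unique category becomes its own binary column;
# a row has a 1/True only in the column of its category
encoded = pd.get_dummies(colors)
print(encoded)
```

Each of the four rows contains exactly one "hot" entry, and the number of columns equals the number of unique categories.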

Suppose we have a transaction dataset for the Apriori algorithm in which each row of an "Items" column lists the items bought together in one transaction. We want to convert the "Items" column into a one-hot-encoded format, where each unique item becomes its own binary column and each transaction becomes a row of 0s and 1s.

Task


Your task is to find frequent itemsets and association rules in the given dataset. Use the apriori() function with the one-hot-encoded data and a minimum support of 0.2 to detect frequent itemsets. Then use the association_rules() function with the frequent itemsets, the "confidence" metric, and a minimum threshold of 0.7 to detect association rules.

Solution

# Import necessary functions from the `mlxtend` library
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder

# Example transaction dataset
dataset = [
["milk", "bread", "eggs", "cheese", "yogurt"],
["bread", "butter", "jam", "eggs", "cheese"],
["milk", "bread", "butter", "jam", "yogurt"],
["milk", "eggs", "cheese", "yogurt", "fruit"],
["bread", "eggs", "butter", "jam", "honey"],
["bread", "eggs", "jam", "yogurt", "fruit"],
["bread", "milk", "eggs", "butter", "jam", "cheese", "yogurt"],
["milk", "cheese", "jam", "honey", "fruit"],
["bread", "milk", "eggs", "butter", "jam", "honey"]
]


# Apply transaction encoding
encoder = TransactionEncoder()
encoded_transactions = encoder.fit_transform(dataset)

# Convert encoded transactions to DataFrame
df_encoded = pd.DataFrame(encoded_transactions, columns=encoder.columns_)

# Find frequent itemsets using the Apriori algorithm
frequent_itemsets = apriori(df_encoded, min_support=0.2, use_colnames=True)

# Print frequent itemsets
print("Frequent Itemsets:")
print(frequent_itemsets)

# Generate association rules with confidence as the strength metric
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)

# Print association rules
print("\nAssociation Rules:")
print(rules)


Section 2. Chapter 2
# Import necessary functions from the `mlxtend` library
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder

# Example transaction dataset
dataset = [
["milk", "bread", "eggs", "cheese", "yogurt"],
["bread", "butter", "jam", "eggs", "cheese"],
["milk", "bread", "butter", "jam", "yogurt"],
["milk", "eggs", "cheese", "yogurt", "fruit"],
["bread", "eggs", "butter", "jam", "honey"],
["bread", "eggs", "jam", "yogurt", "fruit"],
["bread", "milk", "eggs", "butter", "jam", "cheese", "yogurt"],
["milk", "cheese", "jam", "honey", "fruit"],
["bread", "milk", "eggs", "butter", "jam", "honey"]
]


# Apply transaction encoding
encoder = TransactionEncoder()
encoded_transactions = encoder.fit_transform(dataset)

# Convert encoded transactions to DataFrame
df_encoded = pd.DataFrame(encoded_transactions, columns=encoder.columns_)

# Find frequent itemsets using the Apriori algorithm
frequent_itemsets = apriori(___, min_support=___, use_colnames=True)

# Print frequent itemsets
print("Frequent Itemsets:")
print(frequent_itemsets)
