Challenge: Evaluating the Perceptron
To evaluate the previously created perceptron, you will use a dataset containing two input features and two distinct classes (0 and 1).
This dataset is balanced, with 500 samples from class 1 and 500 samples from class 0. Therefore, accuracy is a sufficient metric for evaluation in this case, which can be calculated using the accuracy_score() function:
accuracy_score(y_true, y_pred)
y_true represents the actual labels, while y_pred represents the predicted labels.
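For instance, here is a quick standalone check of how `accuracy_score()` behaves (the toy labels below are purely illustrative and unrelated to the perceptron dataset):

```python
from sklearn.metrics import accuracy_score

# 3 of the 4 predicted labels match the true labels
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

print(accuracy_score(y_true, y_pred))  # 0.75
```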
The dataset is stored in perceptron.py as two NumPy arrays: X (input features) and y (corresponding labels), so they can simply be imported. This file also contains model, an instance of the Perceptron class you created previously.
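Based on that description, the imports for this challenge could look roughly like this (a sketch; it assumes perceptron.py exposes exactly the names X, y, and model mentioned above):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X, y, and model are defined in perceptron.py, as described above
from perceptron import X, y, model
```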
Your goal is to evaluate how well the trained perceptron model performs on unseen data. Follow the steps below to split the dataset, train the model, generate predictions, and measure its accuracy.
- Split the dataset into training (80%) and testing (20%) sets using the `train_test_split()` function.
  - Use `test_size=0.2` and `random_state=10` to ensure reproducibility.
- Train the perceptron model for 10 epochs with a learning rate of `0.01` by calling the `fit()` method.
- Obtain predictions for all examples in the test set by calling the model's `forward()` method for each input example.
- Round the predictions using `np.round()` so that probabilities greater than or equal to `0.5` are treated as class `1`, and those below `0.5` as class `0`.
- Evaluate accuracy by comparing the predicted labels with the actual test labels using the `accuracy_score()` function from `sklearn.metrics`.
Solution
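One way to put the whole evaluation together is sketched below. It assumes the Perceptron class's `fit()` method accepts the training data together with `epochs` and `learning_rate` keyword arguments, and that `forward()` returns a single probability for one input example; if your class uses different parameter names, adapt the calls accordingly.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

from perceptron import X, y, model

# 1. Split the data: 80% for training, 20% for testing (reproducible split)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=10
)

# 2. Train the perceptron (keyword names are assumptions; match your class)
model.fit(X_train, y_train, epochs=10, learning_rate=0.01)

# 3. Get a probability for each test example, one example at a time;
#    ravel() guards against forward() returning 1-element arrays
y_pred = np.array([model.forward(x) for x in X_test]).ravel()

# 4. Round probabilities: >= 0.5 becomes class 1, < 0.5 becomes class 0
y_pred = np.round(y_pred)

# 5. Compare predicted labels with the actual test labels
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy}")
```

Because `random_state=10` fixes the split, the reported accuracy should be identical across runs.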