Challenge: Evaluating the Perceptron | Neural Network from Scratch
Introduction to Neural Networks

Challenge: Evaluating the Perceptron

To evaluate the previously created perceptron, you will use a dataset containing two input features and two distinct classes (0 and 1).

This dataset is balanced, with 500 samples from class 1 and 500 samples from class 0, so accuracy is a sufficient evaluation metric here. It can be computed with the accuracy_score() function:

accuracy_score(y_true, y_pred)

y_true represents the actual labels, while y_pred represents the predicted labels.
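For instance, if four of five predictions match the true labels, the accuracy is 0.8 (the labels below are made up purely for illustration):

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]  # actual labels
y_pred = [0, 1, 0, 0, 1]  # predicted labels (third one is wrong)

# Fraction of correct predictions: 4 / 5
print(accuracy_score(y_true, y_pred))  # 0.8
```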

The dataset is stored in perceptron.py as two NumPy arrays: X (input features) and y (corresponding labels), so you can simply import them. This file also contains model, an instance of the Perceptron class you previously created.

Task


Your goal is to evaluate how well the trained perceptron model performs on unseen data. Follow the steps below to split the dataset, train the model, generate predictions, and measure its accuracy.

  1. Split the dataset into training (80%) and testing (20%) sets using the train_test_split() function.
    • Use test_size=0.2 and random_state=10 to ensure reproducibility.
  2. Train the perceptron model for 10 epochs with a learning rate of 0.01 by calling the fit() method.
  3. Obtain predictions for the test set by calling the model’s forward() method on each input example.
  4. Round the predictions using np.round() so that probabilities greater than or equal to 0.5 are treated as class 1, and those below 0.5 as class 0.
  5. Evaluate accuracy by comparing the predicted labels with the actual test labels using the accuracy_score() function from sklearn.metrics.
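The five steps above can be sketched as follows. Since perceptron.py is not shown here, this sketch substitutes a hypothetical single-neuron Perceptron (sigmoid activation, plain SGD) and a synthetic balanced dataset for the imported X, y, and model; only the evaluation workflow itself mirrors the task.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for the Perceptron class from perceptron.py:
# a single sigmoid unit trained with stochastic gradient descent.
class Perceptron:
    def __init__(self, n_features=2):
        rng = np.random.default_rng(10)
        self.w = rng.normal(size=n_features)
        self.b = 0.0

    def forward(self, x):
        # Sigmoid activation -> probability of class 1
        return 1 / (1 + np.exp(-(np.dot(self.w, x) + self.b)))

    def fit(self, X, y, epochs=10, learning_rate=0.01):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                error = self.forward(xi) - yi
                self.w -= learning_rate * error * xi
                self.b -= learning_rate * error

# Synthetic balanced dataset standing in for X and y from perceptron.py
rng = np.random.default_rng(10)
X = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# 1. Split: 80% training / 20% testing, fixed seed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=10
)

# 2. Train for 10 epochs with a learning rate of 0.01
model = Perceptron()
model.fit(X_train, y_train, epochs=10, learning_rate=0.01)

# 3-4. Predict each test example and round probabilities to class labels
y_pred = np.round([model.forward(x) for x in X_test])

# 5. Compare predicted labels against the actual test labels
print(accuracy_score(y_test, y_pred))
```

Passing random_state=10 to train_test_split() fixes the shuffle, so repeated runs produce the same split and the same accuracy.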

Solution


Section 2. Chapter 12
