Challenge: Solving Task Using XGBoost

Task

The "Credit Scoring" dataset is commonly used for credit risk analysis and binary classification tasks. It contains information about customers and their credit applications, with the goal of predicting whether a customer's credit application will result in a good or bad credit outcome.

Your task is to solve a classification task on the "Credit Scoring" dataset (a sketch of the workflow is given after the note below):

  1. Create DMatrix objects from the training and test data. Specify the enable_categorical argument so that categorical features are handled natively.
  2. Train the XGBoost model using the training DMatrix object.
  3. Apply a 0.5 decision threshold to the predicted probabilities to obtain the class labels.

Note

The 'objective': 'binary:logistic' parameter means that logistic loss (also known as binary cross-entropy loss) is used as the objective function when training the XGBoost model.
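
Below is a minimal sketch of the expected workflow. It assumes the data has already been split into X_train, X_test, y_train, y_test (hypothetical names) and that the categorical columns are stored with the pandas 'category' dtype, which is what enable_categorical expects:

```python
import xgboost as xgb

# Assumption: X_train, X_test, y_train, y_test already exist and the
# categorical columns of the DataFrames use the pandas 'category' dtype.

# 1. Create DMatrix objects; enable_categorical=True lets XGBoost
#    handle categorical features natively.
dtrain = xgb.DMatrix(X_train, label=y_train, enable_categorical=True)
dtest = xgb.DMatrix(X_test, label=y_test, enable_categorical=True)

# 2. Train the model with the logistic-loss objective.
params = {'objective': 'binary:logistic'}
model = xgb.train(params, dtrain)

# 3. With binary:logistic, predict() returns probabilities,
#    so apply a 0.5 threshold to obtain the class labels.
y_prob = model.predict(dtest)
y_pred = (y_prob > 0.5).astype(int)
```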

Section 3. Chapter 6