BigQuery and XGBoost Integration: A Jupyter Notebook Tutorial for Binary Classification

Introduction

In selecting a binary classification model for tabular data, I decided to quickly try out a fast, non-deep-learning approach: gradient-boosted decision trees (GBDT). This article walks through building a Jupyter Notebook script that uses BigQuery as the data source and the XGBoost algorithm for modeling.

Complete Script

For those who prefer to jump straight into the script without the explanation, here it is. Please adjust the project_name, dataset_name, and table_name to fit your project.

import xgboost as xgb
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import precision_score, recall_score, f1_score, log_loss
from google.cloud import bigquery

# Function to load data from BigQuery
def load_data_from_bigquery(query):
    client = bigquery.Client()
    query_job = client.query(query)
    df = query_job.to_dataframe()
    return df

def compute_metrics(labels, predictions, prediction_probs):
    precision = precision_score(labels, predictions, average='macro')
    recall = recall_score(labels, predictions, average='macro')
    f1 = f1_score(labels, predictions, average='macro')
    loss = log_loss(labels, prediction_probs)
    return {
        'precision': precision,
        'recall': recall,
        'f1': f1,
        'loss': loss
    }

# Query in BigQuery
query = """ SELECT * FROM `<project_name>.<dataset_name>.<table_name>` """

# Loading data
df = load_data_from_bigquery(query)

# Target data
y = df["reaction"]

# Input data
X = df.drop(columns=["reaction"])

# Splitting data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# Defining the XGBoost model
model = xgb.XGBClassifier(eval_metric='logloss')

# Setting the parameter grid
param_grid = {
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300],
    'subsample': [0.8, 0.9, 1.0]
}

# Initializing GridSearchCV
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, scoring='accuracy', verbose=1, n_jobs=-1)

# Executing the grid search
grid_search.fit(X_train, y_train)

# Displaying the best parameters
print("Best parameters:", grid_search.best_params_)

# Model with the best parameters
best_model = grid_search.best_estimator_

# Predictions on validation data
val_predictions = best_model.predict(X_val)
val_prediction_probs = best_model.predict_proba(X_val)

# Predictions on training data
train_predictions = best_model.predict(X_train)
train_prediction_probs = best_model.predict_proba(X_train)

# Evaluating the model (validation data)
val_metrics = compute_metrics(y_val, val_predictions, val_prediction_probs)
print("Optimized Validation Metrics:", val_metrics)

# Evaluating the model (training data)
train_metrics = compute_metrics(y_train, train_predictions, train_prediction_probs)
print("Optimized Training Metrics:", train_metrics)


Explanation

Loading Data from BigQuery

Previously, the data was stored in Cloud Storage as CSV files, but slow loading was reducing the efficiency of our training runs, which prompted the shift to BigQuery for faster data handling.

Setting Up the BigQuery Client

from google.cloud import bigquery

client = bigquery.Client()


This code initializes a BigQuery client using Google Cloud credentials, which can be set up through environment variables or the Google Cloud SDK.
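If no default credentials are available (for example, when running locally rather than on a Google Cloud VM), one common approach is to point the client at a service-account key file through an environment variable. The path below is a placeholder, not a real file:

```python
import os

# Placeholder path -- replace with the location of your own service-account key.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"
```

The google-cloud libraries pick this variable up automatically when `bigquery.Client()` is constructed.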

Querying and Loading Data

def load_data_from_bigquery(query):
    query_job = client.query(query)
    df = query_job.to_dataframe()
    return df


This function executes a SQL query and returns the result as a pandas DataFrame, enabling efficient downstream processing.

Training the Model with XGBoost

XGBoost is a high-performance machine learning algorithm utilizing gradient boosting, widely used for classification and regression problems.

XGBoost: A Scalable Tree Boosting System (Chen & Guestrin, 2016): https://arxiv.org/pdf/1603.02754

Model Initialization

import xgboost as xgb

model = xgb.XGBClassifier(eval_metric='logloss')


Here, the XGBClassifier class is instantiated, using log loss as the evaluation metric.

Data Splitting

from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)


This function splits the data into training and validation sets, which is essential for measuring the model’s performance on unseen data and detecting overfitting.
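If the target is imbalanced (common for a reaction flag), it can also help to preserve the class ratio in both splits via the `stratify` argument. A small sketch with toy data, assuming scikit-learn is installed:

```python
from sklearn.model_selection import train_test_split

# Toy labels: four negatives and four positives.
y_toy = [0, 0, 0, 0, 1, 1, 1, 1]
X_toy = [[i] for i in range(8)]

# stratify=y_toy keeps the 50/50 class ratio in both resulting splits.
X_tr, X_va, y_tr, y_va = train_test_split(
    X_toy, y_toy, test_size=0.25, random_state=1, stratify=y_toy
)
print(sorted(y_va))  # [0, 1] -- one sample of each class in the validation set
```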

Parameter Optimization

from sklearn.model_selection import GridSearchCV

param_grid = {
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300],
    'subsample': [0.8, 0.9, 1.0]
}
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, scoring='accuracy', verbose=1, n_jobs=-1)
grid_search.fit(X_train, y_train)


GridSearchCV exhaustively evaluates every parameter combination in the grid with 3-fold cross-validation and keeps the best-scoring combination. Note that the search ranks candidates by accuracy (scoring='accuracy'), while the model’s own eval_metric of log loss is used internally during boosting.
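The grid above looks small, but its cost multiplies quickly. A back-of-the-envelope count of how many models this search trains:

```python
from math import prod

param_grid = {
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300],
    'subsample': [0.8, 0.9, 1.0],
}

# Number of parameter combinations: the product of the list lengths.
n_candidates = prod(len(v) for v in param_grid.values())
# Each candidate is trained once per cross-validation fold (cv=3).
n_fits = n_candidates * 3
print(n_candidates, n_fits)  # 81 243 (plus one final refit on the best candidate)
```

This is why `n_jobs=-1` matters: the 243 fits are independent and parallelize across all available cores.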

Model Evaluation

The performance of the model is evaluated using precision, recall, F1 score, and log loss on the validation dataset.

from sklearn.metrics import precision_score, recall_score, f1_score, log_loss

def compute_metrics(labels, predictions, prediction_probs):
    return {
        'precision': precision_score(labels, predictions, average='macro'),
        'recall': recall_score(labels, predictions, average='macro'),
        'f1': f1_score(labels, predictions, average='macro'),
        'loss': log_loss(labels, prediction_probs)
    }

val_metrics = compute_metrics(y_val, val_predictions, val_prediction_probs)
print("Optimized Validation Metrics:", val_metrics)
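`average='macro'` computes each metric per class and then takes the unweighted mean, so a minority class counts as much as the majority class. A hand-rolled sketch on toy labels (pure Python, independent of scikit-learn) makes the averaging explicit:

```python
def macro_precision_recall(labels, preds, classes=(0, 1)):
    precisions, recalls = [], []
    for c in classes:
        # Count true positives, false positives, false negatives for class c.
        tp = sum(1 for y, p in zip(labels, preds) if p == c and y == c)
        fp = sum(1 for y, p in zip(labels, preds) if p == c and y != c)
        fn = sum(1 for y, p in zip(labels, preds) if p != c and y == c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    # Macro average: unweighted mean over classes.
    return sum(precisions) / len(classes), sum(recalls) / len(classes)

labels = [1, 0, 1, 1, 0]
preds  = [1, 0, 0, 1, 1]
p, r = macro_precision_recall(labels, preds)
print(round(p, 4), round(r, 4))  # 0.5833 0.5833
```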


Output Results

When you run the notebook, you will see output like the following, showing the best parameters and the model evaluation metrics.

Best parameters: {'learning_rate': 0.2, 'max_depth': 5, 'n_estimators': 300, 'subsample': 0.9}
Optimized Validation Metrics: {'precision': 0.8919952583956949, 'recall': 0.753797304483842, 'f1': 0.8078981867164722, 'loss': 0.014006406471894417}
Optimized Training Metrics: {'precision': 0.8969556573175115, 'recall': 0.7681976753444204, 'f1': 0.8199353049298048, 'loss': 0.012475375680566196}


Additional Information

Using Google Cloud Storage as a Data Source

In some cases, it may be more appropriate to load data from Google Cloud Storage rather than BigQuery. The following function reads a CSV file from Cloud Storage into a pandas DataFrame and can be used interchangeably with the load_data_from_bigquery function.

import io

import pandas as pd
from google.cloud import storage

def load_data_from_gcs(bucket_name, file_path):
    # Download the CSV as text and parse it into a DataFrame
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    blob = bucket.blob(file_path)
    data = blob.download_as_text()
    df = pd.read_csv(io.StringIO(data), encoding='utf-8')
    return df

Example of use:

bucket_name = '<bucket-name>'
file_path = '<file-path>'

df = load_data_from_gcs(bucket_name, file_path)

Training a Model with LightGBM

If you want to use LightGBM instead of XGBoost, you can simply replace the XGBClassifier with LGBMClassifier in the same setup.

import lightgbm as lgb

model = lgb.LGBMClassifier()

Conclusion

This article walked through loading data from BigQuery, training a binary classifier with XGBoost, and tuning its hyperparameters with GridSearchCV. Future articles will cover the use of BigQuery ML (BQML) for training.
