Machine Learning: A Wine Quality Prediction Model Tutorial

This article shows how to use machine learning models to predict wine quality from a range of features. Download the dataset for this analysis here.

The wine dataset contains the following features:

Input variables (based on physicochemical tests):

fixed acidity, volatile acidity, citric acid, residual sugar,

chlorides, free sulfur dioxide, total sulfur dioxide, density,

pH, sulphates, alcohol

Output variable:

quality (score between 0 and 10)

First, load the two datasets by importing the required Python libraries and reading the white-wine and red-wine CSV files.

#import the libraries

import pandas as pd

import numpy as np

import seaborn as sns

import matplotlib.pyplot as plt

# load the files

df_red = pd.read_csv("winequality-red.csv", sep=";")

df_white = pd.read_csv("winequality-white.csv", sep=";")

Merge the two DataFrames for the analysis. The Python code is as follows:

df = pd.concat([df_red, df_white], axis=0)

Check whether any columns contain null values:

df.isnull().sum()

fixed acidity 0

volatile acidity 0

citric acid 0

residual sugar 0

chlorides 0

free sulfur dioxide 0

total sulfur dioxide 0

density 0

pH 0

sulphates 0

alcohol 0

quality 0

Find the correlation between the output (quality) variable and all of the input variables. The Python implementation is as follows:

# identify the correlation

plt.subplots(figsize=(20,15))

corr = df.corr()

sns.heatmap(corr,square=True, annot=True)

Some features such as alcohol, citric acid, free sulfur dioxide, and pH are positively correlated with quality, so higher values tend to go with better quality, while density, residual sugar, and acidity have a negative effect on quality.

Let's identify the top 6 most highly correlated features. The Python code is as follows:

# pick the top 6 highly correlating columns

cols = corr.nlargest(6, 'quality')['quality'].index

corrcoef = np.corrcoef(df[cols].values.T)

# correlation plotted against the top columns

plt.subplots(figsize=(20,15))

sns.heatmap(corrcoef,square=True, annot=True, xticklabels= cols.values, yticklabels=cols.values)

Analyze the distribution of the data by plotting histograms.
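
The original article does not include the plotting code; a minimal sketch using pandas' built-in hist method on the combined DataFrame df (the bin count of 20 is my choice) could look like this:

# plot a histogram for every numeric column to inspect its distribution
df.hist(bins=20, figsize=(20, 15))
plt.tight_layout()
plt.show()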

Using the sklearn machine learning library, split the dataset into training and test sets; I used 20% of the data as the test set. The Python code is as follows:

y = df[“quality”]

X = df.drop(“quality”, axis=1)

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

Because the columns have values on very different scales, you need to standardize them to get accurate predictions. I use the StandardScaler class here; you could also use MinMaxScaler (a sketch of that alternative follows the code below).

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()

X_train = scaler.fit_transform(X_train)

# reuse the statistics fitted on the training data; do not refit the scaler on the test set
X_test = scaler.transform(X_test)
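
The MinMaxScaler alternative mentioned above is not shown in the original; a minimal sketch (the variable names X_train_mm and X_test_mm are mine) would replace the StandardScaler step rather than run after it:

from sklearn.preprocessing import MinMaxScaler

# scale each feature to the [0, 1] range using the training-set minimum and maximum
minmax = MinMaxScaler()
X_train_mm = minmax.fit_transform(X_train)
X_test_mm = minmax.transform(X_test)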

Now I will fit the training data with several algorithms and measure the accuracy of their predictions on the test set. The Python implementation is as follows:

from sklearn.metrics import accuracy_score, confusion_matrix

from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()

logreg.fit(X_train, y_train)

pred_logreg = logreg.predict(X_test)

accuracy = accuracy_score(pred_logreg, y_test)

print("Logreg Accuracy Score %.2f" % accuracy)

cm = confusion_matrix(y_test, pred_logreg)
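
# Note: the confusion matrix above is computed but never displayed in the original.
# A minimal sketch (my addition) to visualize it with the seaborn and matplotlib imports from earlier:
labels = np.unique(np.concatenate([y_test.values, pred_logreg]))
plt.subplots(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt="d", xticklabels=labels, yticklabels=labels)
plt.xlabel("predicted quality")
plt.ylabel("actual quality")
plt.show()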

from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=1)

knn.fit(X_train, y_train)

pred_knn = knn.predict(X_test)

accuracy = accuracy_score(pred_knn, y_test)

print("Knn Accuracy Score %.2f" % accuracy)

from sklearn.svm import SVC

svc = SVC()

svc.fit(X_train, y_train)

pred_svc =svc.predict(X_test)

accuracy = accuracy_score(pred_svc, y_test)

print("SVC Accuracy Score %.2f" % accuracy)

from sklearn.tree import DecisionTreeClassifier

dtree = DecisionTreeClassifier()

dtree.fit(X_train, y_train)

pred_tree =dtree.predict(X_test)

accuracy = accuracy_score(pred_tree, y_test)

print("DTree Accuracy Score %.2f" % accuracy)

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier()

rf.fit(X_train, y_train)

pred_rf =rf.predict(X_test)

accuracy = accuracy_score(pred_rf, y_test)

print("Random Forest Accuracy Score %.2f" % accuracy)

I tried several algorithms, including logistic regression, decision tree, random forest, KNN, and SVC.

Random forest gave me the best accuracy (64%):

Logreg Accuracy Score 0.53

Knn Accuracy Score 0.62

SVC Accuracy Score 0.57

DTree Accuracy Score 0.55

Random Forest Accuracy Score 0.64

Comparing the actual and predicted quality for the first 10 test records shows that the prediction differs from the actual value for 2 of them.
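
The comparison code is not shown in the original; a minimal sketch using the random forest predictions pred_rf from above could look like this:

# put actual and predicted quality for the first 10 test records side by side
comparison = pd.DataFrame({"actual": y_test.values[:10], "predicted": pred_rf[:10]})
print(comparison)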

Tags: #python model selection