I. Introduction
A categorical feature that takes on a very large number of distinct values is called a high-cardinality categorical feature. In deep learning settings, categorical features are usually handled with Embeddings: category values are encoded into vectors, either through pretraining or by training them jointly with the model.
In classical machine learning settings, ordinal categorical features can be encoded with a LabelEncoder, and low-cardinality nominal features (in lightgbm, by default, categorical features with at most 4 distinct values) can be encoded with a OneHotEncoder. For high-cardinality nominal features, however, direct one-hot encoding produces extremely sparse features in the currently best-performing tree models (GBDT, Xgboost, lightgbm), leading to a curse of dimensionality. Clustering the category values into groups before one-hot encoding does reduce the dimensionality, but the grouping step requires strong domain knowledge. This article introduces a preprocessing method that is highly effective for high-cardinality nominal categorical features: Mean Encoding. Many participants in data mining competitions have achieved excellent results with it.
II. Principle
Mean encoding, sometimes also called target encoding, is a supervised encoding scheme based on target statistics. Built on a Bayesian idea, it uses a weighted average of a prior probability and a posterior probability as the encoded value of a category, and it applies to both classification and regression. The mean encoding formula is:

$$\text{encode}(k) = \lambda(n) \times \text{prior} + (1 - \lambda(n)) \times \text{posterior}$$

where $k$ is a category value and $n$ is the number of training samples taking that value.
where:

1. prior is the prior probability; in classification it is the probability that a sample belongs to class $y_i$:

$$\text{prior} = \frac{n_{y_i}}{n_y}$$

where $n_{y_i}$ is the number of samples with $y = y_i$ and $n_y$ is the total number of samples. In regression, the prior is the mean of the target variable:

$$\text{prior} = \bar{y}$$

2. posterior is the posterior probability; in classification it is the probability that a sample belongs to class $y_i$ given that the categorical feature equals $k$:

$$\text{posterior} = P(y = y_i \mid x = k)$$

In regression it is the mean of the target variable over the samples whose categorical feature equals $k$.
3. λ is the weight function. The form used in this article is a transformation of the one in the original paper; it is a monotonically decreasing function:

$$\lambda(n) = \frac{1}{1 + e^{(n - k)/f}}$$

Its input $n$ is the number of times the category value appears in the training set, and it has two parameters:
① k: the minimum threshold. When $n = k$, $\lambda = 0.5$ and the prior and posterior are weighted equally; when $n < k$, $\lambda > 0.5$ and the prior carries more weight.

② f: the smoothing factor, which controls the slope of the weight function at its inflection point; the larger $f$ is, the gentler the curve. Below is the effect of different values of $f$ on the weight function when $k = 1$:
As the figure shows, the larger $f$ is, the flatter the S-shaped weight curve and the stronger the regularization effect.
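To make this concrete, here is a small illustrative sketch (not from the original post) that evaluates the weight function directly:

import math

def weight(n, k=1, f=1.0):
    # lambda(n) = 1 / (1 + exp((n - k) / f)): decreasing in n, equal to 0.5 at n == k
    return 1.0 / (1.0 + math.exp((n - k) / f))

for f in (0.5, 1.0, 2.0):
    print(f, [round(weight(n, k=1, f=f), 3) for n in range(1, 6)])
# Larger f flattens the curve, so the encodings of rare categories
# are pulled toward the prior more gradually.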
For classification, the target variable has $C$ classes, so computing the posterior yields $C$ posterior probabilities, which satisfy

$$\sum_{i=1}^{C} P(y = y_i \mid x = k) = 1$$

The probability for any one $y_i$ is therefore linearly determined by the others, so to avoid multicollinearity, mean encoding adds $C - 1$ feature columns to the dataset (for a binary target, a single column suffices, since $P(y = 1 \mid x = k) = 1 - P(y = 0 \mid x = k)$). For regression, mean encoding adds 1 feature column.
III. Practice
Mean encoding can encode not only a single categorical feature but also categorical features with a hierarchical structure. Take region features: a country contains provinces, a province contains cities, and a city contains blocks. Each block corresponds to so few samples that the encoded value of every block ends up close to the prior and carries little information. Mean encoding solves this by blending in prior information from higher levels of the hierarchy. Both scenarios are walked through below for a classification problem:
1. Encoding a single categorical feature:
In practice, the category_encoders package can be used; the code is as follows:
import pandas as pd
from category_encoders import TargetEncoder

df = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'a', 'a', 'b', 'c', 'c', 'd'],
                   'target': [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]})
te = TargetEncoder(cols=["cat"], min_samples_leaf=2, smoothing=1)
df["cat_encode"] = te.fit_transform(df, df["target"])["cat"]
print(df)
# Output:
#   cat  target  cat_encode
# 0   a       1    0.279801
# 1   b       0    0.621843
# 2   a       0    0.279801
# 3   b       1    0.621843
# 4   a       0    0.279801
# 5   a       0    0.279801
# 6   b       1    0.621843
# 7   c       1    0.500000
# 8   c       0    0.500000
# 9   d       1    0.634471
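As a sanity check (a hand calculation, not part of the original post), the encoding of category 'a' can be reproduced from the formula in Section II: 'a' appears $n = 4$ times with posterior $1/4$, and the prior is $5/10 = 0.5$:

import math

n, k, f = 4, 2, 1             # k = min_samples_leaf, f = smoothing
prior, posterior = 0.5, 0.25  # 5/10 positives overall; 1/4 positives among 'a'
lam = 1 / (1 + math.exp((n - k) / f))
print(lam * prior + (1 - lam) * posterior)  # ~0.279801, matching cat_encode above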
2. Encoding hierarchical categorical features:
For the dataset below, the compass feature has the hierarchy {'N': ('N', 'NE'), 'S': ('S', 'SE'), 'W': 'W'}. Taking category NE in compass as an example, its encoded value for $y_i = 1$ with $k = 2$, $f = 2$ is computed as:

$$\text{encode}(NE) = \lambda(n_{NE}) \times p_1 + (1 - \lambda(n_{NE})) \times \text{posterior}$$

where $p_1$ is the encoded value of category N in HIER_compass_1, which takes the place of the prior and can be computed as in the single-feature case: $p_1 = 0.74527$. With $\text{posterior} = 3/3 = 1$ and $\lambda = 0.37754$, the encoded value of NE is $0.37754 \times 0.74527 + (1 - 0.37754) \times 1 = 0.90383$.
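A quick illustrative sketch (not from the original post) reproducing this arithmetic:

import math

n, k, f = 3, 2, 2               # NE appears 3 times in the training data
p1, posterior = 0.74527, 3 / 3  # parent encoding for N, and NE's posterior
lam = 1 / (1 + math.exp((n - k) / f))
print(round(lam, 5), round(lam * p1 + (1 - lam) * posterior, 5))  # 0.37754 0.90383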
The code is as follows:
from category_encoders import TargetEncoder
from category_encoders.datasets import load_compass

X, y = load_compass()
# The hierarchy parameter can be a dict or a DataFrame.
# dict form:
hierarchical_map = {'compass': {'N': ('N', 'NE'), 'S': ('S', 'SE'), 'W': 'W'}}
te = TargetEncoder(verbose=2, hierarchy=hierarchical_map, cols=['compass'],
                   smoothing=2, min_samples_leaf=2)
# DataFrame form; the HIER_cols are ordered from the top of the hierarchy down:
HIER_cols = ['HIER_compass_1']
te = TargetEncoder(verbose=2, hierarchy=X[HIER_cols], cols=['compass'],
                   smoothing=2, min_samples_leaf=2)
te.fit(X.loc[:, ['compass']], y)
X["compass_encode"] = te.transform(X.loc[:, ['compass']])
X["label"] = y
print(X)
# Output (compass_encode is the result column):
#     index compass HIER_compass_1  compass_encode  label
# 0       1       N              N        0.622636      1
# 1       2       N              N        0.622636      0
# 2       3      NE              N        0.903830      1
# 3       4      NE              N        0.903830      1
# 4       5      NE              N        0.903830      1
# 5       6      SE              S        0.176600      0
# 6       7      SE              S        0.176600      0
# 7       8       S              S        0.460520      1
# 8       9       S              S        0.460520      0
# 9      10       S              S        0.460520      1
# 10     11       S              S        0.460520      0
# 11     12       W              W        0.403328      1
# 12     13       W              W        0.403328      0
# 13     14       W              W        0.403328      0
# 14     15       W              W        0.403328      0
# 15     16       W              W        0.403328      1
Caveats:

Mean encoding is prone to overfitting. The following methods help prevent it:

1. Increase the regularization term f
2. Use k-fold cross-validation
Below is a self-implemented, k-fold cross-validation version of mean encoding. It can be applied in binary classification, multi-class classification, and regression settings, to encode either a single categorical feature or a hierarchical categorical feature. In this version, unknown categories and missing values are encoded with the prior.
import numpy as np
import pandas as pd
from itertools import product
from category_encoders import TargetEncoder
from sklearn.model_selection import StratifiedKFold, KFold


class MeanEncoder:
    def __init__(self, categorical_features, n_splits=5, target_type='classification',
                 min_samples_leaf=2, smoothing=1, hierarchy=None, verbose=0,
                 shuffle=False, random_state=None):
        """
        Parameters
        ----------
        categorical_features: list of str
            the name of the categorical columns to encode.
        n_splits: int
            the number of splits used in mean encoding.
        target_type: str
            'regression' or 'classification'.
        min_samples_leaf: int
            For regularization the weighted average between category mean and global mean
            is taken. The weight is an S-shaped curve between 0 and 1 with the number of
            samples for a category on the x-axis. The curve reaches 0.5 at min_samples_leaf.
            (parameter k in the original paper)
        smoothing: float
            smoothing effect to balance categorical average vs prior. Higher value means
            stronger regularization. The value must be strictly bigger than 0. Higher
            values mean a flatter S-curve (see min_samples_leaf).
        hierarchy: dict or dataframe
            A dictionary or a dataframe to define the hierarchy for mapping.
            If a dictionary, this contains a dict of columns to map into hierarchies.
            Dictionary key(s) should be the column name from X which requires mapping.
            For multiple hierarchical maps, this should be a dictionary of dictionaries.
            If dataframe: a dataframe defining columns to be used for the hierarchies.
            Column names must take the form:
            HIER_colA_1, ... HIER_colA_N, HIER_colB_1, ... HIER_colB_M, ...
            where [colA, colB, ...] are given columns in cols list. 1:N and 1:M define
            the hierarchy for each column where 1 is the highest hierarchy (top of the
            tree). A single column or multiple can be used, as relevant.
        verbose: int
            integer indicating verbosity of the output. 0 for none.
        shuffle : bool, default=False
        random_state : int or RandomState instance, default=None
            When `shuffle` is True, `random_state` affects the ordering of the indices,
            which controls the randomness of each fold for each class. Otherwise, leave
            `random_state` as `None`. Pass an int for reproducible output across multiple
            function calls.
        """
        self.categorical_features = categorical_features
        self.n_splits = n_splits
        self.learned_stats = {}
        self.min_samples_leaf = min_samples_leaf
        self.smoothing = smoothing
        self.hierarchy = hierarchy
        self.verbose = verbose
        self.shuffle = shuffle
        self.random_state = random_state
        if target_type == 'classification':
            self.target_type = target_type
            self.target_values = []
        else:
            self.target_type = 'regression'
            self.target_values = None

    def mean_encode_subroutine(self, X_train, y_train, X_test, variable, target):
        X_train = X_train[[variable]].copy()
        X_test = X_test[[variable]].copy()

        if target is not None:
            nf_name = '{}_pred_{}'.format(variable, target)
            X_train['pred_temp'] = (y_train == target).astype(int)  # classification
        else:
            nf_name = '{}_pred'.format(variable)
            X_train['pred_temp'] = y_train  # regression
        prior = X_train['pred_temp'].mean()

        te = TargetEncoder(verbose=self.verbose, hierarchy=self.hierarchy,
                           cols=[variable], smoothing=self.smoothing,
                           min_samples_leaf=self.min_samples_leaf)
        te.fit(X_train[[variable]], X_train['pred_temp'])
        tmp_l = te.ordinal_encoder.mapping[0]["mapping"].reset_index()
        tmp_l.rename(columns={"index": variable, 0: "encode"}, inplace=True)
        tmp_l.dropna(inplace=True)
        tmp_r = te.mapping[variable].reset_index()
        if self.hierarchy is None:
            tmp_r.rename(columns={variable: "encode", 0: nf_name}, inplace=True)
        else:
            tmp_r.rename(columns={"index": "encode", 0: nf_name}, inplace=True)
        col_avg_y = pd.merge(tmp_l, tmp_r, how="left", on=["encode"])
        col_avg_y.drop(columns=["encode"], inplace=True)
        col_avg_y.set_index(variable, inplace=True)

        nf_train = X_train.join(col_avg_y, on=variable)[nf_name].values
        nf_test = X_test.join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name].values

        return nf_train, nf_test, prior, col_avg_y

    def fit(self, X, y):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :param y: pandas Series or numpy array, n_samples
        :return X_new: the transformed pandas DataFrame containing mean-encoded
                       categorical features
        """
        X_new = X.copy()
        if self.target_type == 'classification':
            skf = StratifiedKFold(self.n_splits, shuffle=self.shuffle,
                                  random_state=self.random_state)
        else:
            skf = KFold(self.n_splits, shuffle=self.shuffle,
                        random_state=self.random_state)

        if self.target_type == 'classification':
            self.target_values = sorted(set(y))
            self.learned_stats = {'{}_pred_{}'.format(variable, target): []
                                  for variable, target in
                                  product(self.categorical_features, self.target_values)}
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new.loc[:, nf_name] = np.nan
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = self.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind],
                        X_new.iloc[small_ind], variable, target)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        else:
            self.learned_stats = {'{}_pred'.format(variable): []
                                  for variable in self.categorical_features}
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new.loc[:, nf_name] = np.nan
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = self.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind],
                        X_new.iloc[small_ind], variable, None)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        return X_new

    def transform(self, X):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :return X_new: the transformed pandas DataFrame containing mean-encoded
                       categorical features
        """
        X_new = X.copy()
        if self.target_type == 'classification':
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new[nf_name] = 0
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(
                        col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        else:
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new[nf_name] = 0
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(
                        col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        return X_new
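A minimal usage sketch of this class (the toy DataFrame and column names below are illustrative, not from the original post):

import pandas as pd

# Assumed toy data: fit learns out-of-fold encodings for the training set;
# transform averages the per-fold statistics for new data.
df_train = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'a', 'a', 'b', 'c', 'c', 'd'],
                         'target': [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]})
me = MeanEncoder(categorical_features=['cat'], n_splits=5, target_type='classification')
train_encoded = me.fit(df_train[['cat']], df_train['target'])   # adds cat_pred_0 / cat_pred_1
test_encoded = me.transform(pd.DataFrame({'cat': ['a', 'c', 'e']}))  # unseen 'e' falls back to the prior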
IV. Summary

This article introduced mean encoding, a highly effective encoding scheme for high-cardinality categorical features. It explained in detail how the encoding works, described methods that effectively prevent overfitting in real engineering applications, and provided a ready-to-use implementation.
Author: 赵风龙, 京东保险 (JD Insurance)
Source: 京东云开发者社区 (JD Cloud Developer Community). Please credit the source when reposting.