
[Financial Risk Control Series] [2] Fraud Detection

Reposted on 2025-07-22
Source: https://www.php.cn/faq/1421593.html

This post works through the IEEE-CIS Fraud Detection competition, whose goal is to identify fraudulent transactions. It describes the training and test data (transaction and identity fields), the key strategies — building a unique user identifier and UID-based aggregation features — as well as feature selection, encoding, the validation strategy, and model training. The final online score is 0.959221. The aim is to learn how to construct features.


IEEE-CIS Fraud Detection

This competition comes from Kaggle and is used here for learning and exchange only.

The main goal is to identify whether each transaction is fraudulent.

The training set has about 590k samples (3.5% fraud) and the test set about 500k samples.

The data falls into two tables: transaction data and identity data.

This post is mainly a collection and reorganization of the reference material listed below.


Field tables

Transaction table


Categorical features:

ProductCD, card1 - card6, addr1, addr2, P_emaildomain, R_emaildomain, M1 - M9

Identity table

The variables in this table are identity information — network connection information (IP, ISP, proxy, etc.) and digital signatures (UA/browser/OS/version, etc.) associated with the transactions.

They are collected by Vesta's fraud protection system and its digital security partners.

(Field names are masked and no pairwise dictionary is provided, for privacy protection and contractual reasons.)


Categorical features:

DeviceType, DeviceInfo, id_12 - id_38

References:

[1] https://zhuanlan.zhihu.com/p/85947569

[2] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111284

[3] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111308

[4] https://www.kaggle.com/c/ieee-fraud-detection/discussion/101203

Main strategy

- Build a unique identifier (UID) for each user (very important)
- Use the UID to build aggregation features
- Encode categorical features (mainly frequency encoding and label encoding)
- Horizontally: model ensembling; vertically: user-level post-processing (a sketch follows this list)
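A hedged sketch of the "vertical" user-level post-processing idea (a minimal illustration, not code from the original solution; the column names uid and pred are assumptions): replace each transaction's score with the mean score of all transactions that share the same UID, so one clearly fraudulent transaction raises the scores of the user's other transactions.

import pandas as pd

def postprocess_by_user(sub: pd.DataFrame) -> pd.Series:
    # average the scores of all transactions that belong to the same reconstructed user (assumed 'uid' column)
    return sub.groupby('uid')['pred'].transform('mean')

# toy usage
sub = pd.DataFrame({'uid': ['a', 'a', 'b'], 'pred': [0.9, 0.1, 0.2]})
sub['pred'] = postprocess_by_user(sub)   # user 'a' now scores 0.5 on both of its transactions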

Definition of fraud

The labeling logic is that a chargeback reported on a card defines that transaction as fraud (isFraud=1), and later transactions directly associated with the same user account, email address, or billing address are also labeled as fraud. If none of the above occurs within 120 days, the transaction is defined as legitimate (isFraud=0).

You might think that after 120 days a card would go back to isFraud=0. We rarely see this in the training data (perhaps fraudulent cards are terminated). The training set has 73,838 customers (cards) with 2 or more transactions. Of these, 71,575 (96.9%) are always isFraud=0, 2,134 (2.9%) are always isFraud=1, and only 129 (0.2%) have a mixture of isFraud=0 and isFraud=1.
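These counts can be reproduced with a simple group-by once a customer key exists. A minimal sketch, assuming a DataFrame train that still contains the isFraud label together with the reconstructed customer key uid described later in this post:

# customers (cards) with two or more transactions, and whether their labels are consistent
stats = train.groupby('uid')['isFraud'].agg(['count', 'min', 'max'])
stats = stats[stats['count'] >= 2]

print((stats['max'] == 0).sum())             # always isFraud=0  (~71,575 on the competition data)
print((stats['min'] == 1).sum())             # always isFraud=1  (~2,134)
print((stats['min'] != stats['max']).sum())  # mixed labels      (~129)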

From this we can derive the business logic of fraud: once a user has committed fraud, the probability that they commit fraud again is very high, and we need to pay attention to this.


Unique customer identifier

The raw data does not contain a unique UID, so we have to construct a unique customer identifier ourselves. The three key columns for identifying a customer are card1, addr1, and D1.

The D1 column is "days since the customer (card) started".

The card1 column is the leading digits of the bank card number.

The addr1 column is the user's address code.

Once the unique user identifier is determined, we cannot simply add it to the model as a feature, because analysis shows that 68.2% of the users in the test set are new users that never appear in the training set. Instead we use the `UID` indirectly, building aggregation features from it.
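A minimal sketch of the UID construction actually used in the notebook below (card1_addr1 is the card1 × addr1 cross feature built with encode_CB, day is TransactionDT converted to days, and day - D1 approximates the day the card was opened; numpy is assumed imported as np):

X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)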

Feature selection

- Forward feature selection (with a single feature or a group of features)
- Recursive feature elimination (with a single feature or a group of features)
- Permutation importance
- Adversarial validation
- Correlation analysis
- Time consistency
- Customer consistency
- Train/test distribution analysis

An interesting trick called "time consistency" is to train a model on the first month of the training data using a single feature (or a small group of features) and predict isFraud on the last month of the training data. This evaluates whether the feature itself stays consistent over time. 95% of the features did, but we found that 5% of the columns hurt our model: their training AUC was about 0.60 and their validation AUC was 0.40.
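A hedged sketch of this check (not the original code): train a small model on the first month with a single candidate feature, validate on the last month, and drop the feature if its validation AUC falls clearly below 0.5. It assumes a month index DT_M like the one built in the notebook below.

import xgboost as xgb
from sklearn.metrics import roc_auc_score

def time_consistency_auc(feature, train, target):
    # first and last month of the training period
    first = train['DT_M'] == train['DT_M'].min()
    last = train['DT_M'] == train['DT_M'].max()
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    clf.fit(train.loc[first, [feature]], target[first])
    auc_train = roc_auc_score(target[first], clf.predict_proba(train.loc[first, [feature]])[:, 1])
    auc_valid = roc_auc_score(target[last], clf.predict_proba(train.loc[last, [feature]])[:, 1])
    return auc_train, auc_valid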


Validation strategy

- Train two months / skip two months / predict two months
- Train four months / skip one month / predict one month
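A minimal sketch of the first scheme (train two months / skip two / predict two), assuming the month index DT_M built in the notebook below; the concrete month positions are only for illustration:

months = sorted(X_train['DT_M'].unique())        # e.g. six months of training data
train_mask = X_train['DT_M'].isin(months[:2])    # train on the first two months
valid_mask = X_train['DT_M'].isin(months[-2:])   # skip two months, validate on the last two

X_tr, y_tr = X_train.loc[train_mask, cols], y_train[train_mask]
X_va, y_va = X_train.loc[valid_mask, cols], y_train[valid_mask]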

Feature encoding

The following five kinds of feature encoding are mainly used:

Frequency encoding: the relative frequency with which each value occurs.

def encode_FE(df1, df2, cols):
    for col in cols:
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)
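On a toy column, encode_FE adds a card1FE column holding each value's relative frequency across train and test combined (a small usage sketch; the data is made up):

df1 = pd.DataFrame({'card1': [111, 111, 222, 333]})
df2 = pd.DataFrame({'card1': [111, 222]})
encode_FE(df1, df2, ['card1'])
print(df1['card1FE'].tolist())   # [0.5, 0.5, 0.333..., 0.166...] -- 111 appears in 3 of the 6 combined rows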

Label encoding: map the original values to a set of integer codes. It is related to one-hot encoding, but pd.factorize maps the categories to codes 0, 1, 2, ..., whereas pd.get_dummies() maps them to indicator vectors [1,0,0], [0,1,0], [0,0,1].

def encode_LE(col, train=X_train, test=X_test, verbose=True):
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    if df_comb.max() > 32000:
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)
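To make the difference concrete, a small standalone example of the two pandas calls mentioned above:

import pandas as pd

s = pd.Series(['visa', 'mastercard', 'visa', 'amex'])

codes, uniques = pd.factorize(s)
print(codes)               # [0 1 0 2] -- one integer code per category (label encoding)
print(pd.get_dummies(s))   # three 0/1 indicator columns, one per category (one-hot)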

Aggregation (statistical) features: group the data with pd.groupby, then use agg to compute per-group statistics.

def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # mean or std of main_column within each uid (col) group
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # set the uid as the index
                temp_df.index = list(temp_df[col])
                temp_df = temp_df[new_column].to_dict()
                # temp_df is now a mapping dict
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)

Cross features: combine two columns into a new feature, then label-encode it.

def encode_CB(col1, col2, df1=X_train, df2=X_test):
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')

Unique-count features: after grouping, return the number of unique values of the target column.

def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')

Reproduction code

Because the dataset file name contains a space, please first manually rename the dataset under /data104475 to IEEE_CIS_Fraud_Detection.zip.

In [2]
# Unzip the dataset (only needs to run the first time)
!unzip -q -o data/data104475/IEEE_CIS_Fraud_Detection.zip -d /home/aistudio/data
unzip:  cannot find or open data/data104475/IEEE_CIS_Fraud_Detection.zip, data/data104475/IEEE_CIS_Fraud_Detection.zip.zip or data/data104475/IEEE_CIS_Fraud_Detection.zip.ZIP.
In [3]
# Install dependencies
!pip install xgboost
In [6]
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import os, gc
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score
import xgboost as xgb
import datetime
In [4]
path_train_transaction = "./data/raw_data/train_transaction.csv"
path_train_id = "./data/raw_data/train_identity.csv"
path_test_transaction = "./data/raw_data/test_transaction.csv"
path_test_id = "./data/raw_data/test_identity.csv"
path_sample_submission = './data/raw_data/sample_submission.csv'
path_submission = 'sub_xgb_95.csv'
In [7]
BUILD95 = False
BUILD96 = True

# cols with strings
str_type = ['ProductCD', 'card4', 'card6', 'P_emaildomain', 'R_emaildomain', 'M1', 'M2', 'M3', 'M4', 'M5',
            'M6', 'M7', 'M8', 'M9', 'id_12', 'id_15', 'id_16', 'id_23', 'id_27', 'id_28', 'id_29', 'id_30',
            'id_31', 'id_33', 'id_34', 'id_35', 'id_36', 'id_37', 'id_38', 'DeviceType', 'DeviceInfo']

# first 53 columns
cols = ['TransactionID', 'TransactionDT', 'TransactionAmt',
        'ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6',
        'addr1', 'addr2', 'dist1', 'dist2', 'P_emaildomain', 'R_emaildomain',
        'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11',
        'C12', 'C13', 'C14', 'D1', 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8',
        'D9', 'D10', 'D11', 'D12', 'D13', 'D14', 'D15', 'M1', 'M2', 'M3', 'M4',
        'M5', 'M6', 'M7', 'M8', 'M9']

# V COLUMNS TO LOAD DECIDED BY CORRELATION EDA
# https://www.kaggle.com/cdeotte/eda-for-columns-v-and-id
v = [1, 3, 4, 6, 8, 11]
v += [13, 14, 17, 20, 23, 26, 27, 30]
v += [36, 37, 40, 41, 44, 47, 48]
v += [54, 56, 59, 62, 65, 67, 68, 70]
v += [76, 78, 80, 82, 86, 88, 89, 91]
# v += [96, 98, 99, 104]  # relates to groups, no NAN
v += [107, 108, 111, 115, 117, 120, 121, 123]  # maybe group, no NAN
v += [124, 127, 129, 130, 136]  # relates to groups, no NAN
# LOTS OF NAN BELOW
v += [138, 139, 142, 147, 156, 162]  # b1
v += [165, 160, 166]  # b1
v += [178, 176, 173, 182]  # b2
v += [187, 203, 205, 207, 215]  # b2
v += [169, 171, 175, 180, 185, 188, 198, 210, 209]  # b2
v += [218, 223, 224, 226, 228, 229, 235]  # b3
v += [240, 258, 257, 253, 252, 260, 261]  # b3
v += [264, 266, 267, 274, 277]  # b3
v += [220, 221, 234, 238, 250, 271]  # b3
v += [294, 284, 285, 286, 291, 297]  # relates to groups, no NAN
v += [303, 305, 307, 309, 310, 320]  # relates to groups, no NAN
v += [281, 283, 289, 296, 301, 314]  # relates to groups, no NAN
# v += [332, 325, 335, 338]  # b4 lots NAN

cols += ['V' + str(x) for x in v]

dtypes = {}
for c in cols + ['id_0' + str(x) for x in range(1, 10)] + ['id_' + str(x) for x in range(10, 34)]:
    dtypes[c] = 'float32'
for c in str_type:
    dtypes[c] = 'category'

# load data and merge
print("load data...")
X_train = pd.read_csv(path_train_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols + ["isFraud"])
train_id = pd.read_csv(path_train_id, index_col="TransactionID", dtype=dtypes)
X_train = X_train.merge(train_id, how="left", left_index=True, right_index=True)

X_test = pd.read_csv(path_test_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols)
test_id = pd.read_csv(path_test_id, index_col="TransactionID", dtype=dtypes)
X_test = X_test.merge(test_id, how="left", left_index=True, right_index=True)

# target
y_train = X_train["isFraud"]
del train_id, test_id, X_train["isFraud"]

print("X_train shape:{}, X_test shape:{}".format(X_train.shape, X_test.shape))
load data...
X_train shape:(590540, 213), X_test shape:(506691, 213)
In [21]
# transform D features from "time delta" to "time point"
for i in range(1, 16):
    if i in [1, 2, 3, 5, 9]:
        continue
    X_train["D" + str(i)] = X_train["D" + str(i)] - X_train["TransactionDT"] / np.float32(60 * 60 * 24)
    X_test["D" + str(i)] = X_test["D" + str(i)] - X_test["TransactionDT"] / np.float32(60 * 60 * 24)

# encoding functions
# frequency encode
def encode_FE(df1, df2, cols):
    for col in cols:
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)

# label encode
def encode_LE(col, train=X_train, test=X_test, verbose=True):
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    if df_comb.max() > 32000:
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)

# group aggregation (mean / std)
def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # mean or std of main_column within each uid (col) group
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # set the uid as the index
                temp_df.index = list(temp_df[col])
                temp_df = temp_df[new_column].to_dict()
                # temp_df is now a mapping dict
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)

# COMBINE FEATURES (cross features)
def encode_CB(col1, col2, df1=X_train, df2=X_test):
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')

# GROUP AGGREGATION NUNIQUE
def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')

print("encode cols...")

# TRANSACTION AMT CENTS
X_train['cents'] = (X_train['TransactionAmt'] - np.floor(X_train['TransactionAmt'])).astype('float32')
X_test['cents'] = (X_test['TransactionAmt'] - np.floor(X_test['TransactionAmt'])).astype('float32')
print('cents, ', end='')
encode cols...
cents,
In [19]
# FREQUENCY ENCODE: ADDR1, CARD1, CARD2, CARD3, P_EMAILDOMAIN
encode_FE(X_train, X_test, ['addr1', 'card1', 'card2', 'card3', 'P_emaildomain'])

# COMBINE COLUMNS CARD1+ADDR1, CARD1+ADDR1+P_EMAILDOMAIN
encode_CB('card1', 'addr1')
encode_CB('card1_addr1', 'P_emaildomain')

# FREQUENCY ENCODE
encode_FE(X_train, X_test, ['card1_addr1', 'card1_addr1_P_emaildomain'])

# GROUP AGGREGATE
encode_AG(['TransactionAmt', 'D9', 'D11'], ['card1', 'card1_addr1', 'card1_addr1_P_emaildomain'], ['mean', 'std'],
          usena=False)

for col in str_type:
    encode_LE(col, X_train, X_test)

"""
Feature Selection - Time Consistency

We added 28 new features above. We have already removed 219 V columns from the correlation analysis done here,
so we currently have 242 features. We will now check each of our 242 features for "time consistency".
We will build 242 models. Each model will be trained on the first month of the training data and will only use
one feature. We will then predict the last month of the training data. We want both training AUC and validation
AUC to be above AUC = 0.5. It turns out that 19 features fail this test, so we will remove them.
Additionally we will remove 7 D columns that are mostly NAN. More techniques for feature selection are listed here.
"""

cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    cols.remove(c)

# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')

# CHRIS - TRAIN 75% PREDICT 25%
idxT = X_train.index[:3 * len(X_train) // 4]
idxV = X_train.index[3 * len(X_train) // 4:]

print(X_train.info())
# X_train = X_train.convert_objects(convert_numeric=True)
# X_test = X_test.convert_objects(convert_numeric=True)
for col in str_type:
    print(col)
    X_train[col] = X_train[col].astype(int)
    X_test[col] = X_test[col].astype(int)
print("after transform:")
print(X_train.info())

# fillna
for col in cols:
    X_train[col].fillna(-1, inplace=True)
    X_test[col].fillna(-1, inplace=True)
In [22]
START_DATE = datetime.datetime.strptime('2017-11-30', '%Y-%m-%d')
X_train['DT_M'] = X_train['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_train['DT_M'] = (X_train['DT_M'].dt.year - 2017) * 12 + X_train['DT_M'].dt.month
X_test['DT_M'] = X_test['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_test['DT_M'] = (X_test['DT_M'].dt.year - 2017) * 12 + X_test['DT_M'].dt.month

print("training...")

if BUILD95:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))

    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)
        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB95 OOF CV=', roc_auc_score(y_train, oof))

if BUILD95:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)

X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)
X_test['day'] = X_test.TransactionDT / (24 * 60 * 60)
X_test['uid'] = X_test.card1_addr1.astype(str) + '_' + np.floor(X_test.day - X_test.D1).astype(str)

# FREQUENCY ENCODE UID
encode_FE(X_train, X_test, ['uid'])
# AGGREGATE
encode_AG(['TransactionAmt', 'D4', 'D9', 'D10', 'D15'], ['uid'], ['mean', 'std'], fillna=True, usena=True)
# AGGREGATE
encode_AG(['C' + str(x) for x in range(1, 15) if x != 3], ['uid'], ['mean'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG(['M' + str(x) for x in range(1, 10)], ['uid'], ['mean'], fillna=True, usena=True)
# AGGREGATE
encode_AG2(['P_emaildomain', 'dist1', 'DT_M', 'id_02', 'cents'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG(['C14'], ['uid'], ['std'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG2(['C13', 'V314'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG2(['V127', 'V136', 'V309', 'V307', 'V320'], ['uid'], train_df=X_train, test_df=X_test)

# NEW FEATURE
X_train['outsider15'] = (np.abs(X_train.D1 - X_train.D15) > 3).astype('int8')
X_test['outsider15'] = (np.abs(X_test.D1 - X_test.D15) > 3).astype('int8')
print('outsider15')

cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    if c in cols:
        cols.remove(c)
for c in ['oof', 'DT_M', 'day', 'uid']:
    if c in cols:
        cols.remove(c)

# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    if c in cols:
        cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    if c in cols:
        cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    if c in cols:
        cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')
print(np.array(cols))

if BUILD96:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))

    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)
        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB96 OOF CV=', roc_auc_score(y_train, oof))

if BUILD96:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)

Summary

This project mainly collects and organizes material on the IEEE-CIS Fraud Detection competition, with the goal of learning how to construct features.

The online submission score was 0.959221. (Submitting requires a VPN to reach Kaggle.)
