科大讯飞 (iFLYTEK) Face Keypoint Detection Challenge: A Baseline Approach (MAE 2.2)


This post presents a baseline for the face keypoint detection competition, which asks for 4 keypoints per face. The data consist of 5,000 annotated training images and about 2,000 test images, provided as image arrays plus coordinate labels. Two models are built, a fully connected network and a CNN; after data loading, preprocessing, training, and validation, the CNN performs better, reaching a validation score of roughly 0.061 under the notebook's own metric after 40 epochs. The trained model is then used to predict on the test set, and the results are visualized.


Competition Introduction

Face recognition is a biometric technology that identifies a person from facial feature information; finance and security are currently its two most widespread application areas. Facial keypoints are a core component of face recognition. Facial keypoint detection must locate specified positions on a face, such as the coordinates of the eyebrows, eyes, nose, mouth, and facial contour.


Task

Given a face image, locate 4 facial keypoints; the task can therefore be treated as a keypoint detection (coordinate regression) problem.

Training set: 5,000 face images with keypoint annotations.

Test set: approximately 2,000 face images (2,049 in the provided data) for which the keypoint positions must be predicted.

Data Description

The competition data consist of a training set and a test set: train.csv contains the training annotations, while train.npy and test.npy contain the training and test images and can be read with numpy.load.

train.csv contains the coordinates of the left eye center, right eye center, nose tip, and bottom-lip center of the mouth: 4 keypoints, i.e. 8 coordinate values per face.

left_eye_center_x,left_eye_center_y,right_eye_center_x,right_eye_center_y,nose_tip_x,nose_tip_y,mouth_center_bottom_lip_x,mouth_center_bottom_lip_y
66.3423640449,38.5236134831,28.9308404494,35.5777725843,49.256844943800004,68.2759550562,47.783946067399995,85.3615820245
68.9126037736,31.409116981100002,29.652226415100003,33.0280754717,51.913358490600004,48.408452830200005,50.6988679245,79.5740377358
68.7089943925,40.371149158899996,27.1308201869,40.9406803738,44.5025226168,69.9884859813,45.9264269159,86.2210093458
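Since the columns alternate x and y values, a row can be reshaped into four (x, y) pairs. A minimal sketch of reading the first row this way (not part of the original notebook):

import pandas as pd

# Columns alternate x, y for: left eye centre, right eye centre, nose tip, mouth bottom-lip centre
df = pd.read_csv('train.csv')
first_face = df.iloc[0].values.reshape(-1, 2)
print(first_face)  # 4 rows of (x, y), in pixel coordinates of the 96x96 image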

Evaluation

Submissions are evaluated with the mean absolute error (MAE) of the regressed coordinates; lower values indicate better performance, and the best possible score is 0. Reference evaluation code:

from sklearn.metrics import mean_absolute_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_absolute_error(y_true, y_pred)
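mean_absolute_error also accepts 2-D arrays of shape (n_samples, 8) and averages the absolute error over every coordinate of every sample, which is presumably how the eight predicted values per face are scored. A toy example:

import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([[66.3, 38.5, 28.9, 35.6, 49.3, 68.3, 47.8, 85.4]])  # one face, 8 coordinates
y_pred = y_true + 2.0                                                   # every coordinate off by 2 pixels
print(mean_absolute_error(y_true, y_pred))                              # -> 2.0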

Step 1: Extract the Dataset

In [1]
!echo y | unzip -O CP936 /home/aistudio/data/data117050/人脸关键点检测挑战赛_数据集.zip
!mv 人脸关键点检测挑战赛_数据集/* ./
!echo y | unzip test.npy.zip
!echo y | unzip train.npy.zip
Archive:  /home/aistudio/data/data117050/人脸关键点检测挑战赛_数据集.zip
  inflating: 人脸关键点检测挑战赛_数据集/sample_submit.csv
  inflating: 人脸关键点检测挑战赛_数据集/test.npy.zip
  inflating: 人脸关键点检测挑战赛_数据集/train.csv
  inflating: 人脸关键点检测挑战赛_数据集/train.npy.zip
Archive:  test.npy.zip
replace test.npy? [y]es, [n]o, [A]ll, [N]one, [r]ename:   inflating: test.npy
Archive:  train.npy.zip
replace train.npy? [y]es, [n]o, [A]ll, [N]one, [r]ename:   inflating: train.npy

Step 2: Load the Data

In [2]
import pandas as pd
import numpy as np
train.csv: the coordinate labels (8 values for the 4 keypoints of each face).
train.npy: the training images.
test.npy: the test images.
In [3]
# Load the annotations; missing coordinates are filled with 48 (the centre of the 96x96 image)
train_df = pd.read_csv('train.csv')
train_df = train_df.fillna(48)
train_df.head()
   left_eye_center_x  left_eye_center_y  right_eye_center_x  right_eye_center_y  nose_tip_x  nose_tip_y  mouth_center_bottom_lip_x  mouth_center_bottom_lip_y
0          66.342364          38.523613           28.930840           35.577773   49.256845   68.275955                  47.783946                  85.361582
1          68.912604          31.409117           29.652226           33.028075   51.913358   48.408453                  50.698868                  79.574038
2          68.708994          40.371149           27.130820           40.940680   44.502523   69.988486                  45.926427                  86.221009
3          65.334176          35.471878           29.366461           37.767684   50.411373   64.934767                  50.028780                  74.883241
4          68.634857          29.999486           31.094571           29.616429   50.247429   51.450857                  47.948571                  84.394286
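The fillna(48) call above replaces missing keypoint annotations with 48, the centre of the 96×96 image. As a quick check (not in the original notebook), the number of missing labels per column can be inspected before filling:

# Count missing keypoint labels per column before they are replaced with the image centre (48)
print(pd.read_csv('train.csv').isnull().sum())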
In [4]
# Load the image arrays and reshape them to (N, 1, 96, 96)
train_img = np.load('train.npy')
test_img = np.load('test.npy')
train_img = np.transpose(train_img, [2, 0, 1])
train_img = train_img.reshape(-1, 1, 96, 96)
test_img = np.transpose(test_img, [2, 0, 1])
test_img = test_img.reshape(-1, 1, 96, 96)
print(train_img.shape, test_img.shape)
(5000, 1, 96, 96) (2049, 1, 96, 96)

Step 3: Visualize the Data

In [5]
%pylab inline
idx = 409
xy = train_df.iloc[idx].values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(train_img[idx, 0, :, :], cmap='gray')
Populating the interactive namespace from numpy and matplotlib
In [6]
idx = 4090
xy = train_df.iloc[idx].values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(train_img[idx, 0, :, :], cmap='gray')
In [7]
# Mean keypoint locations over the whole training set (coordinates flipped as 96 - value for display)
xy = 96 - train_df.mean(0).values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')

Step 4: Build the Models and Datasets

In [8]
import paddle
paddle.__version__
'2.2.2'

Fully Connected Model

In [9]
from paddle.io import DataLoader, Dataset
from PIL import Image

# Custom dataset: images scaled to [0, 1], keypoints scaled by 1/96 so both lie in a similar range
class MyDataset(Dataset):
    def __init__(self, img, keypoint):
        super(MyDataset, self).__init__()
        self.img = img
        self.keypoint = keypoint

    def __getitem__(self, index):
        img = Image.fromarray(self.img[index, 0, :, :])
        return np.asarray(img).astype(np.float32) / 255, self.keypoint[index] / 96.0

    def __len__(self):
        return len(self.keypoint)

# Training set: all but the last 500 images
train_dataset = MyDataset(
    train_img[:-500, :, :, :],
    paddle.to_tensor(train_df.values[:-500].astype(np.float32)))
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

# Validation set: the last 500 images
val_dataset = MyDataset(
    train_img[-500:, :, :, :],
    paddle.to_tensor(train_df.values[-500:].astype(np.float32)))
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Test set: dummy zero labels, one row per test image
# (the original used test_img.shape[2], i.e. 96, which silently truncated the test set to 96 samples)
test_dataset = MyDataset(
    test_img,
    paddle.to_tensor(np.zeros((test_img.shape[0], 8))))
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
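As a sanity check (a minimal sketch, not part of the original notebook), one batch can be drawn from train_loader to confirm the shapes the fully connected model will receive:

# One batch: images of shape [64, 96, 96] scaled to [0, 1], keypoints of shape [64, 8] scaled by 1/96
xb, yb = next(iter(train_loader))
print(xb.shape, yb.shape)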
In [10]
# Fully connected model: flatten the 96x96 image, one hidden layer, predict 8 coordinates
model = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(96*96, 128),
    paddle.nn.LeakyReLU(),
    paddle.nn.Linear(128, 8))
paddle.summary(model, (64, 96, 96))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
  Flatten-1        [[64, 96, 96]]         [64, 9216]             0
  Linear-1          [[64, 9216]]          [64, 128]          1,179,776
  LeakyReLU-1       [[64, 128]]           [64, 128]              0
  Linear-2          [[64, 128]]            [64, 8]             1,032
===========================================================================
Total params: 1,180,808
Trainable params: 1,180,808
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 2.25
Forward/backward pass size (MB): 4.63
Params size (MB): 4.50
Estimated Total Size (MB): 11.38
---------------------------------------------------------------------------
{'total_params': 1180808, 'trainable_params': 1180808}
In [11]
# Loss function and optimizer
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.MSELoss()

from sklearn.metrics import mean_absolute_error

for epoch in range(0, 40):
    Train_Loss, Val_Loss = [], []
    Train_MAE, Val_MAE = [], []

    # Training
    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        # Note: mean_absolute_error already averages over the batch, so the extra division
        # by y.shape[0] makes this a batch-size-dependent indicator rather than a pixel-scale MAE.
        Train_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    # Validation
    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'MAE {np.mean(Train_MAE):3.5f}/{np.mean(Val_MAE):3.5f}')
(values are train/val)
Epoch 0:  Loss 0.05956/0.02340  MAE 0.25278/0.18601
Epoch 1:  Loss 0.02075/0.02269  MAE 0.17376/0.17984
Epoch 2:  Loss 0.01832/0.01881  MAE 0.16236/0.16371
Epoch 3:  Loss 0.01752/0.01729  MAE 0.15944/0.15727
Epoch 4:  Loss 0.01630/0.01783  MAE 0.15351/0.16075
Epoch 5:  Loss 0.01535/0.01593  MAE 0.14883/0.15059
Epoch 6:  Loss 0.01489/0.01655  MAE 0.14582/0.15519
Epoch 7:  Loss 0.01469/0.01596  MAE 0.14487/0.14971
Epoch 8:  Loss 0.01362/0.01582  MAE 0.13930/0.15087
Epoch 9:  Loss 0.01355/0.01506  MAE 0.13915/0.14637
Epoch 10: Loss 0.01293/0.01490  MAE 0.13586/0.14514
Epoch 11: Loss 0.01289/0.01367  MAE 0.13555/0.13847
Epoch 12: Loss 0.01187/0.01372  MAE 0.12944/0.13950
Epoch 13: Loss 0.01184/0.01281  MAE 0.12905/0.13358
Epoch 14: Loss 0.01181/0.01534  MAE 0.12995/0.14891
Epoch 15: Loss 0.01124/0.01334  MAE 0.12593/0.13727
Epoch 16: Loss 0.01083/0.01371  MAE 0.12342/0.14003
Epoch 17: Loss 0.01057/0.01181  MAE 0.12188/0.12769
Epoch 18: Loss 0.01041/0.01207  MAE 0.12105/0.12884
Epoch 19: Loss 0.01017/0.01149  MAE 0.11868/0.12613
Epoch 20: Loss 0.00965/0.01348  MAE 0.11610/0.13499
Epoch 21: Loss 0.00993/0.01133  MAE 0.11817/0.12543
Epoch 22: Loss 0.00906/0.01080  MAE 0.11226/0.12200
Epoch 23: Loss 0.00883/0.01117  MAE 0.11127/0.12394
Epoch 24: Loss 0.00865/0.01064  MAE 0.10986/0.12086
Epoch 25: Loss 0.00924/0.01023  MAE 0.11396/0.11844
Epoch 26: Loss 0.00850/0.01001  MAE 0.10874/0.11812
Epoch 27: Loss 0.00801/0.00998  MAE 0.10525/0.11665
Epoch 28: Loss 0.00809/0.00978  MAE 0.10666/0.11558
Epoch 29: Loss 0.00743/0.01073  MAE 0.10161/0.12184
Epoch 30: Loss 0.00752/0.00916  MAE 0.10146/0.11186
Epoch 31: Loss 0.00715/0.00982  MAE 0.09895/0.11673
Epoch 32: Loss 0.00717/0.00907  MAE 0.09980/0.11068
Epoch 33: Loss 0.00718/0.00967  MAE 0.09976/0.11560
Epoch 34: Loss 0.00677/0.01463  MAE 0.09663/0.14721
Epoch 35: Loss 0.00764/0.00852  MAE 0.10249/0.10766
Epoch 36: Loss 0.00650/0.00916  MAE 0.09434/0.11061
Epoch 37: Loss 0.00644/0.00840  MAE 0.09397/0.10676
Epoch 38: Loss 0.00642/0.00852  MAE 0.09410/0.10684
Epoch 39: Loss 0.00611/0.00798  MAE 0.09161/0.10284
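Because the MAE printed above is additionally divided by the batch size, it is not directly comparable to the leaderboard's pixel-scale MAE. A short sketch (not in the original notebook) of computing the actual pixel MAE of this model on the 500 validation images, reusing model and val_loader from above:

# Pixel-scale MAE on the validation split: predictions and labels are both normalized by 96,
# so multiply the averaged error back up to pixels.
val_preds, val_trues = [], []
model.eval()
for x, y in val_loader:
    val_preds.append(model(x).numpy())
    val_trues.append(y.numpy())
val_preds = np.vstack(val_preds)
val_trues = np.vstack(val_trues)
print(mean_absolute_error(val_trues, val_preds) * 96)  # MAE in pixels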
In [13]
# Prediction helper: run the model over a loader and stack the batch outputs
def make_predict(model, loader):
    model.eval()
    predict_list = []
    for i, (x, y) in enumerate(loader):
        pred = model(x)
        predict_list.append(pred.numpy())
    return np.vstack(predict_list)

# Scale the normalized predictions back to pixel coordinates
test_pred = make_predict(model, test_loader) * 96
In [14]
idx = 40
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
In [15]
idx = 42
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')

CNN Model

In [17]
from paddle.io import DataLoader, Dataset
from PIL import Image

# Same dataset as before, but the image keeps an explicit channel dimension (1, 96, 96) for Conv2D
class MyDataset(Dataset):
    def __init__(self, img, keypoint):
        super(MyDataset, self).__init__()
        self.img = img
        self.keypoint = keypoint

    def __getitem__(self, index):
        img = Image.fromarray(self.img[index, 0, :, :])
        return np.asarray(img).reshape(1, 96, 96).astype(np.float32) / 255, self.keypoint[index] / 96.0

    def __len__(self):
        return len(self.keypoint)

train_dataset = MyDataset(
    train_img[:-500, :, :, :],
    paddle.to_tensor(train_df.values[:-500].astype(np.float32)))
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

val_dataset = MyDataset(
    train_img[-500:, :, :, :],
    paddle.to_tensor(train_df.values[-500:].astype(np.float32)))
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# As above, use shape[0] (number of test images) rather than the original shape[2]
test_dataset = MyDataset(
    test_img,
    paddle.to_tensor(np.zeros((test_img.shape[0], 8))))
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
In [18]
# Convolutional model: three conv/pool stages followed by a linear head predicting 8 coordinates
model = paddle.nn.Sequential(
    paddle.nn.Conv2D(1, 10, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Conv2D(10, 20, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Conv2D(20, 40, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Flatten(),
    paddle.nn.Linear(2560, 8),
)
paddle.summary(model, (64, 1, 96, 96))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-4      [[64, 1, 96, 96]]     [64, 10, 92, 92]         260
    ReLU-4       [[64, 10, 92, 92]]    [64, 10, 92, 92]          0
  MaxPool2D-4    [[64, 10, 92, 92]]    [64, 10, 46, 46]          0
   Conv2D-5      [[64, 10, 46, 46]]    [64, 20, 42, 42]        5,020
    ReLU-5       [[64, 20, 42, 42]]    [64, 20, 42, 42]          0
  MaxPool2D-5    [[64, 20, 42, 42]]    [64, 20, 21, 21]          0
   Conv2D-6      [[64, 20, 21, 21]]    [64, 40, 17, 17]       20,040
    ReLU-6       [[64, 40, 17, 17]]    [64, 40, 17, 17]          0
  MaxPool2D-6    [[64, 40, 17, 17]]     [64, 40, 8, 8]           0
   Flatten-3      [[64, 40, 8, 8]]        [64, 2560]             0
   Linear-4         [[64, 2560]]           [64, 8]            20,488
===========================================================================
Total params: 45,808
Trainable params: 45,808
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 2.25
Forward/backward pass size (MB): 145.54
Params size (MB): 0.17
Estimated Total Size (MB): 147.97
---------------------------------------------------------------------------
{'total_params': 45808, 'trainable_params': 45808}
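The 2560 input size of the final Linear layer follows from the shapes above: each 5×5 convolution (no padding) shrinks the spatial size by 4 and each 2×2 max-pool halves it, so 96 → 46 → 21 → 8, and 40 channels × 8 × 8 = 2560. A tiny check of that arithmetic:

# Spatial size after each conv(5x5, no padding) + max-pool(2x2) stage, starting from 96
size = 96
for _ in range(3):
    size = (size - 4) // 2
print(size, 40 * size * size)  # expected: 8 and 2560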
In [19]
# Loss function and optimizer (same training loop as for the fully connected model)
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.MSELoss()

from sklearn.metrics import mean_absolute_error

for epoch in range(0, 40):
    Train_Loss, Val_Loss = [], []
    Train_MAE, Val_MAE = [], []

    # Training
    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        # As before, the division by y.shape[0] means this is not a pixel-scale MAE.
        Train_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    # Validation
    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'MAE {np.mean(Train_MAE):3.5f}/{np.mean(Val_MAE):3.5f}')
(values are train/val)
Epoch 0:  Loss 0.23343/0.03865  MAE 0.44735/0.23946
Epoch 1:  Loss 0.03499/0.03301  MAE 0.22689/0.22072
Epoch 2:  Loss 0.03006/0.02846  MAE 0.20913/0.20492
Epoch 3:  Loss 0.02614/0.02548  MAE 0.19541/0.19341
Epoch 4:  Loss 0.02270/0.02314  MAE 0.18112/0.18211
Epoch 5:  Loss 0.01965/0.01952  MAE 0.16927/0.16763
Epoch 6:  Loss 0.01704/0.01763  MAE 0.15715/0.15866
Epoch 7:  Loss 0.01492/0.01483  MAE 0.14711/0.14516
Epoch 8:  Loss 0.01260/0.01268  MAE 0.13498/0.13350
Epoch 9:  Loss 0.01034/0.00996  MAE 0.12187/0.11828
Epoch 10: Loss 0.00855/0.00836  MAE 0.11041/0.10738
Epoch 11: Loss 0.00751/0.00737  MAE 0.10320/0.10133
Epoch 12: Loss 0.00644/0.00657  MAE 0.09478/0.09471
Epoch 13: Loss 0.00592/0.00626  MAE 0.09048/0.09321
Epoch 14: Loss 0.00556/0.00568  MAE 0.08704/0.08790
Epoch 15: Loss 0.00518/0.00538  MAE 0.08444/0.08551
Epoch 16: Loss 0.00491/0.00524  MAE 0.08204/0.08433
Epoch 17: Loss 0.00474/0.00495  MAE 0.08087/0.08178
Epoch 18: Loss 0.00450/0.00476  MAE 0.07885/0.08041
Epoch 19: Loss 0.00431/0.00460  MAE 0.07685/0.07922
Epoch 20: Loss 0.00421/0.00458  MAE 0.07596/0.07887
Epoch 21: Loss 0.00393/0.00421  MAE 0.07302/0.07515
Epoch 22: Loss 0.00387/0.00419  MAE 0.07282/0.07502
Epoch 23: Loss 0.00373/0.00416  MAE 0.07131/0.07482
Epoch 24: Loss 0.00354/0.00385  MAE 0.06945/0.07177
Epoch 25: Loss 0.00347/0.00386  MAE 0.06882/0.07173
Epoch 26: Loss 0.00340/0.00368  MAE 0.06781/0.06999
Epoch 27: Loss 0.00323/0.00363  MAE 0.06601/0.06949
Epoch 28: Loss 0.00320/0.00349  MAE 0.06580/0.06794
Epoch 29: Loss 0.00307/0.00349  MAE 0.06427/0.06842
Epoch 30: Loss 0.00300/0.00336  MAE 0.06357/0.06692
Epoch 31: Loss 0.00291/0.00329  MAE 0.06240/0.06611
Epoch 32: Loss 0.00287/0.00326  MAE 0.06206/0.06594
Epoch 33: Loss 0.00280/0.00323  MAE 0.06119/0.06572
Epoch 34: Loss 0.00276/0.00312  MAE 0.06076/0.06427
Epoch 35: Loss 0.00268/0.00304  MAE 0.05994/0.06345
Epoch 36: Loss 0.00262/0.00301  MAE 0.05915/0.06306
Epoch 37: Loss 0.00256/0.00294  MAE 0.05834/0.06231
Epoch 38: Loss 0.00256/0.00288  MAE 0.05833/0.06166
Epoch 39: Loss 0.00246/0.00284  MAE 0.05717/0.06128
In [20]
def make_predict(model, loader):
    model.eval()
    predict_list = []
    for i, (x, y) in enumerate(loader):
        pred = model(x)
        predict_list.append(pred.numpy())
    return np.vstack(predict_list)

test_pred = make_predict(model, test_loader) * 96
In [21]
idx = 40
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
In [22]
idx = 42
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
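The notebook stops at visualizing the test predictions; to actually submit, the predictions still need to be written to a file. A hedged sketch, assuming sample_submit.csv expects the same eight coordinate columns as train.csv (check the actual file before submitting):

# Write the test predictions to a CSV reusing the train.csv column names.
# Assumption: the submission format matches these eight columns; verify against sample_submit.csv.
submit = pd.DataFrame(test_pred, columns=train_df.columns)
submit.to_csv('submit.csv', index=False)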
