Tensorflow.keras Notes - Convolutional Neural Networks
The cifar10 dataset
1. Load the cifar10 dataset
2. Preprocess the data: normalization
3. Build the model
4. Data augmentation
5. Checkpointing and resuming training
6. Image recognition
# Load packages
import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

1-2: Load the cifar10 dataset and normalize it
The cifar10 dataset contains 60,000 colour images of size 32*32, divided into 10 classes with 6,000 images per class. 50,000 of these images are used for training and the remaining 10,000 for testing.
# Load and preprocess the data
cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

3: Build the model
The core of a convolutional neural network is to use convolutional layers to extract features from the images, followed by fully connected layers for classification. A convolutional block has five parts: convolution (Conv2D), batch normalization (BN), activation (Activation), pooling (Pooling), and Dropout. Feature extraction stacks one or more of these "CBAPD" blocks.

# Build the model
class Baseline(Model):  # subclass Model
    def __init__(self):
        super(Baseline, self).__init__()
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5), padding='same')  # convolution
        self.b1 = BatchNormalization()  # batch normalization
        self.a1 = Activation('relu')  # activation function
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')  # max pooling
        self.d1 = Dropout(0.2)  # randomly drop 20% of the units
        self.flatten = Flatten()  # flatten the feature maps into a 1-D vector
        self.f1 = Dense(128, activation='relu')
        self.d2 = Dropout(0.2)
        self.f2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)
        x = self.d1(x)
        x = self.flatten(x)
        x = self.f1(x)
        x = self.d2(x)
        y = self.f2(x)
        return y

model = Baseline()

4-5: Set training parameters, data augmentation, checkpointing, and model training
Data augmentation uses the ImageDataGenerator class that ships with Keras; its main arguments are:

tf.keras.preprocessing.image.ImageDataGenerator(
    rescale = factor every pixel value is multiplied by,
    rotation_range = range of random rotation angles in degrees,
    width_shift_range = range of random horizontal shifts,
    height_shift_range = range of random vertical shifts,
    horizontal_flip = whether to apply random horizontal flips,
    zoom_range = range of random zoom, [1-n, 1+n])
image_gen_train.fit(x_train)
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), ...)
(A concrete instantiation sketch is given at the end of these notes.)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath='D:/Python/cifar10/check.ckpt',
    save_weights_only=True,
    save_best_only=True)
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1,
          callbacks=[cp_callback])
(To actually resume training from a checkpoint, the saved weights must be loaded before calling model.fit; see the sketch at the end of these notes.)

6. Image recognition: use model.predict() to classify images and print the results. Note that this example preprocesses 28x28 grayscale clothing images, so it corresponds to a model trained on a fashion dataset rather than the 32x32 colour cifar10 model built above.

from matplotlib import pyplot as plt
from PIL import Image

clothes = ['T-shirt', 'Trousers', 'Pullover', 'Dress', 'Coat',
           'Sandals', 'Shirt', 'Sneakers', 'Bag', 'Shoes']
for i in range(10):
    path = 'D:/1_數據/Python/Tensorflow學習/數據/exam_fashion/' + str(i) + '.jpg'
    img = Image.open(path)
    img = img.resize((28, 28), Image.ANTIALIAS)
    img = img.convert('L')  # convert to grayscale
    img = np.array(img)
    img = 255 - img  # invert so the object is bright on a dark background
    img = img / 255.0
    x_predict = img[tf.newaxis, ..., tf.newaxis]
    result = model.predict(x_predict)
    pred = tf.argmax(result, axis=1)
    num = int(np.array(pred))
    print('This item is: {0}_{1}'.format(num, clothes[num]))

Source: https://www.icourse163.org/course/PKU-1002536002
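Appendix A. The notes above only list the ImageDataGenerator arguments without values, so here is a minimal runnable sketch of the data-augmentation setup referenced in section 4-5. The concrete values (45-degree rotation, 15% shifts, 50% zoom) are illustrative choices, not values given in the original notes; cp_callback is the checkpoint callback defined above.

# Data augmentation: a minimal sketch with illustrative parameter values
image_gen_train = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0,              # pixel values were already scaled to [0, 1] above
    rotation_range=45,        # rotate randomly by up to 45 degrees
    width_shift_range=0.15,   # shift horizontally by up to 15% of the width
    height_shift_range=0.15,  # shift vertically by up to 15% of the height
    horizontal_flip=True,     # randomly flip images horizontally
    zoom_range=0.5)           # zoom randomly in the range [0.5, 1.5]
image_gen_train.fit(x_train)
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32),
          epochs=5, validation_data=(x_test, y_test), validation_freq=1,
          callbacks=[cp_callback])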
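Appendix B. Outline item 5 mentions resuming training from a checkpoint, but the code above only saves weights. This is a minimal sketch of the loading half, assuming the same checkpoint path as the ModelCheckpoint callback above; run it before model.fit so training continues from the saved weights.

# Resume from the checkpoint: load previously saved weights if they exist
checkpoint_save_path = 'D:/Python/cifar10/check.ckpt'
if os.path.exists(checkpoint_save_path + '.index'):  # TF writes an .index file next to the saved weights
    print('-------------- load the model --------------')
    model.load_weights(checkpoint_save_path)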