Reference
[딥러닝 | CNN] - VGG16 (tistory.com) — a summary of the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" up through its Model Architecture section. VGG16 is the CNN that placed second in the ILSVRC 2014 competition.
Exercises
1. Build the VGG16 network in TensorFlow
2. Create random data and test the network with it
import tensorflow as tf
# VGG16: thirteen 3x3 convolutions and three fully connected layers,
# all ReLU-activated, with a softmax output over the 1000 ImageNet classes
model=tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(224,224,3)),
tf.keras.layers.Conv2D(64,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(64,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(128,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(128,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same',activation='relu'),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4096,activation='relu'),
tf.keras.layers.Dense(4096,activation='relu'),
tf.keras.layers.Dense(1000,activation='softmax')  # softmax (not relu) for classification
])
model.summary()
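Exercise 2 can be checked by pushing a random batch through the network and inspecting the output shape. A minimal sketch (the batch size of 2 is arbitrary; the model is rebuilt in a loop here only so the snippet is self-contained — it is the same thirteen-conv stack as above):

```python
import numpy as np
import tensorflow as tf

# rebuild the VGG16 stack compactly: (number of convs, filters) per block
layers = [tf.keras.layers.InputLayer(input_shape=(224, 224, 3))]
for n_convs, filters in [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]:
    for _ in range(n_convs):
        layers.append(tf.keras.layers.Conv2D(filters, (3, 3),
                                             padding='same', activation='relu'))
    layers.append(tf.keras.layers.MaxPooling2D((2, 2), (2, 2)))
layers += [tf.keras.layers.Flatten(),
           tf.keras.layers.Dense(4096, activation='relu'),
           tf.keras.layers.Dense(4096, activation='relu'),
           tf.keras.layers.Dense(1000, activation='softmax')]
model = tf.keras.Sequential(layers)

# a random batch of two 224x224 RGB "images" in [0, 1)
x = np.random.rand(2, 224, 224, 3).astype('float32')
y = model(x)
print(y.shape)  # (2, 1000)
```

Each row of `y` is a softmax distribution, so it should sum to 1 — a quick sanity check that the output layer is wired correctly.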
Exercises
3. Train the network on the CIFAR-10 data
4. Apply BatchNormalization
5. Apply Dropout
import tensorflow as tf
# CIFAR-10: 50,000 training and 10,000 test images, 32x32 RGB, 10 classes
cifar10=tf.keras.datasets.cifar10
(X,YT),(x,yt)=cifar10.load_data()
X,x=X/255.0,x/255.0  # scale pixel values to [0, 1]
model=tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(32,32,3)),
tf.keras.layers.Conv2D(64,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(64,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(128,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(128,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(256,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(512,(3,3),(1,1),padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.MaxPooling2D((2,2),(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4096),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(4096),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10,activation='softmax')  # the run logged below used relu here, which stalls training
])
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(X,YT,epochs=5)
model.evaluate(x,yt)
Model: "sequential"
_________________________________________________________________
Layer (type)                                Output Shape         Param #
=================================================================
conv2d (Conv2D)                             (None, 32, 32, 64)   1792
batch_normalization (BatchNormalization)    (None, 32, 32, 64)   256
re_lu (ReLU)                                (None, 32, 32, 64)   0
dropout (Dropout)                           (None, 32, 32, 64)   0
conv2d_1 (Conv2D)                           (None, 32, 32, 64)   36928
batch_normalization_1 (BatchNormalization)  (None, 32, 32, 64)   256
re_lu_1 (ReLU)                              (None, 32, 32, 64)   0
dropout_1 (Dropout)                         (None, 32, 32, 64)   0
max_pooling2d (MaxPooling2D)                (None, 16, 16, 64)   0
conv2d_2 (Conv2D)                           (None, 16, 16, 128)  73856
batch_normalization_2 (BatchNormalization)  (None, 16, 16, 128)  512
re_lu_2 (ReLU)                              (None, 16, 16, 128)  0
dropout_2 (Dropout)                         (None, 16, 16, 128)  0
conv2d_3 (Conv2D)                           (None, 16, 16, 128)  147584
batch_normalization_3 (BatchNormalization)  (None, 16, 16, 128)  512
re_lu_3 (ReLU)                              (None, 16, 16, 128)  0
dropout_3 (Dropout)                         (None, 16, 16, 128)  0
max_pooling2d_1 (MaxPooling2D)              (None, 8, 8, 128)    0
conv2d_4 (Conv2D)                           (None, 8, 8, 256)    295168
batch_normalization_4 (BatchNormalization)  (None, 8, 8, 256)    1024
re_lu_4 (ReLU)                              (None, 8, 8, 256)    0
dropout_4 (Dropout)                         (None, 8, 8, 256)    0
conv2d_5 (Conv2D)                           (None, 8, 8, 256)    590080
batch_normalization_5 (BatchNormalization)  (None, 8, 8, 256)    1024
re_lu_5 (ReLU)                              (None, 8, 8, 256)    0
dropout_5 (Dropout)                         (None, 8, 8, 256)    0
conv2d_6 (Conv2D)                           (None, 8, 8, 256)    590080
batch_normalization_6 (BatchNormalization)  (None, 8, 8, 256)    1024
re_lu_6 (ReLU)                              (None, 8, 8, 256)    0
dropout_6 (Dropout)                         (None, 8, 8, 256)    0
max_pooling2d_2 (MaxPooling2D)              (None, 4, 4, 256)    0
conv2d_7 (Conv2D)                           (None, 4, 4, 512)    1180160
batch_normalization_7 (BatchNormalization)  (None, 4, 4, 512)    2048
re_lu_7 (ReLU)                              (None, 4, 4, 512)    0
dropout_7 (Dropout)                         (None, 4, 4, 512)    0
conv2d_8 (Conv2D)                           (None, 4, 4, 512)    2359808
batch_normalization_8 (BatchNormalization)  (None, 4, 4, 512)    2048
re_lu_8 (ReLU)                              (None, 4, 4, 512)    0
dropout_8 (Dropout)                         (None, 4, 4, 512)    0
conv2d_9 (Conv2D)                           (None, 4, 4, 512)    2359808
batch_normalization_9 (BatchNormalization)  (None, 4, 4, 512)    2048
re_lu_9 (ReLU)                              (None, 4, 4, 512)    0
dropout_9 (Dropout)                         (None, 4, 4, 512)    0
max_pooling2d_3 (MaxPooling2D)              (None, 2, 2, 512)    0
conv2d_10 (Conv2D)                          (None, 2, 2, 512)    2359808
batch_normalization_10 (BatchNormalization) (None, 2, 2, 512)    2048
re_lu_10 (ReLU)                             (None, 2, 2, 512)    0
dropout_10 (Dropout)                        (None, 2, 2, 512)    0
conv2d_11 (Conv2D)                          (None, 2, 2, 512)    2359808
batch_normalization_11 (BatchNormalization) (None, 2, 2, 512)    2048
re_lu_11 (ReLU)                             (None, 2, 2, 512)    0
dropout_11 (Dropout)                        (None, 2, 2, 512)    0
conv2d_12 (Conv2D)                          (None, 2, 2, 512)    2359808
batch_normalization_12 (BatchNormalization) (None, 2, 2, 512)    2048
re_lu_12 (ReLU)                             (None, 2, 2, 512)    0
dropout_12 (Dropout)                        (None, 2, 2, 512)    0
max_pooling2d_4 (MaxPooling2D)              (None, 1, 1, 512)    0
flatten (Flatten)                           (None, 512)           0
dense (Dense)                               (None, 4096)          2101248
batch_normalization_13 (BatchNormalization) (None, 4096)          16384
re_lu_13 (ReLU)                             (None, 4096)          0
dropout_13 (Dropout)                        (None, 4096)          0
dense_1 (Dense)                             (None, 4096)          16781312
batch_normalization_14 (BatchNormalization) (None, 4096)          16384
re_lu_14 (ReLU)                             (None, 4096)          0
dropout_14 (Dropout)                        (None, 4096)          0
dense_2 (Dense)                             (None, 10)            40970
=================================================================
Total params: 33,687,882
Trainable params: 33,663,050
Non-trainable params: 24,832
_________________________________________________________________
Epoch 1/5
1563/1563 [==============================] - 1765s 1s/step - loss: 2.4409 - accuracy: 0.1003
Epoch 2/5
1563/1563 [==============================] - 1831s 1s/step - loss: 2.3026 - accuracy: 0.1000
Epoch 3/5
1563/1563 [==============================] - 1800s 1s/step - loss: 2.3026 - accuracy: 0.1000
Epoch 4/5
1563/1563 [==============================] - 1753s 1s/step - loss: 2.3026 - accuracy: 0.1000
Epoch 5/5
1563/1563 [==============================] - 1644s 1s/step - loss: 2.3026 - accuracy: 0.1000
313/313 [==============================] - 38s 119ms/step - loss: 2.3026 - accuracy: 0.1000
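The run above is stuck at exactly 10% accuracy with loss 2.3026 — that loss is -ln(1/10), the cross-entropy of a uniform guess over the 10 CIFAR-10 classes, so the network is learning nothing. A likely cause is the `activation='relu'` on the original output layer: `sparse_categorical_crossentropy` expects probabilities, and ReLU outputs that clamp to zero kill the gradient; switching the last layer to softmax addresses this. The chance-level loss itself is easy to verify:

```python
import math

# cross-entropy when the model assigns probability 1/10 to every class
chance_loss = -math.log(1 / 10)
print(round(chance_loss, 4))  # 2.3026
```

Whenever a classifier's loss flatlines at ln(number of classes), it is worth checking the output activation and loss configuration first.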