
์ฌ์ธต ์ ๊ฒฝ๋ง์ ์ํ์  ๊ธฐ์ด 6๊ฐ (9์ 23์ผ) ์ ๊ธฐ๋ฐํฉ๋๋ค.

์ด ๊ธ์ SVM๊ณผ Logistic Regression ๋งํฌ, Softmax Regression ๋งํฌ ์ ์ด์ด์ง๋ ๋ด์ฉ์๋๋ค.

๋์ค์ ์ค๋ช์ ๋ณด๊ฐํด์ ๋ค์ ์์ฑ๋  ์์ ์๋๋ค.

As in logistic regression, the case $f_\theta(x) = a^T x + b$ can be viewed as a 1-layer (linear layer) neural network.

Softmax Regression ์ค๋ช ๋ง์ง๋ง์ ํ๋ ๊ฒ์ฒ๋ผ, ์ ์ ํ loss function $\ell$ ์ ๋์ํ ๋ค์, $\ell(f_\theta(x), y)$ ๋ฅผ ์ต์ ํํ๋ ๊ฒฝ์ฐ๋ฅผ ์๊ฐํ์. Logistic regression์ ์ฌ๊ธฐ์ $\ell$๋ก logistic loss๋ฅผ, $f_\theta$ ์๋ฆฌ์ linear model์ ๋ฃ์ ํน์ํ ์ผ์ด์ค์ด๋ค. ์ด๋ฅผ ์ข๋ ์๋ฐํ๊ฒ ์๊ฐํ๊ธฐ ์ํด, Linear layer๋ฅผ ์๊ฐํ์.

### Linear Layer

์๋ ฅ์ผ๋ก $X \in \R^{B \x n}$, where $B =$ batch size, $n =$ ์๋ ฅ ํฌ๊ธฐ๋ฅผ ๋ฐ์์, ์ถ๋ ฅ $Y \in \R^{B \x m}$ ํฌ๊ธฐ์ ํ์๋ฅผ ์ถ๋ ฅํ๋๋ฐ, $$Y_{k, i} = \sum_{j = 1}^{n} A_{i, j} X_{k, j} + b_i$$ ์ด์ ๊ฐ์ด ์๋ํ๋ layer ๋ฅผ, batch์ ๊ฐ ๋ฒกํฐ $x_k$ ์ ๋ํด $y_k = A x_k + b$ ํํ์ ์ ํ์ผ๋ก ๋ํ๋๋ค๋ ์๋ฏธ์์ linear layer๋ผ ํ๋ค. ์ด๋ $A$ ํ๋ ฌ์ weight, $b$ ๋ฒกํฐ๋ฅผ bias๋ผ ํ๋ค.

๋ฐ๋ผ์, Logistic Regression์ด๋, ํ๋์ Linear layer๋ฅผ ์ด์ฉํ๊ณ , loss function์ผ๋ก logistic loss (KL-divergence with logistic probability) ๋ฅผ ์ฌ์ฉํ๋ Shallow neural network ๋ผ๊ณ  ๋ค์ ์ ์ํ  ์ ์๋ค.

### Multi-Layer Perceptron

If we naively make the network multi-layer (deep), nothing is gained: a composition of linear functions, however deep, is still linear.

๊ทธ๋ฌ๋, ์ ๋นํ non-linear activation function $\sigma$ ๋ฅผ ๋์ํ์ฌ, ๋ค์๊ณผ ๊ฐ์ layer๋ฅผ ๊ตฌ์ถํ๋ฉด ์๋ฏธ๊ฐ ์๊ฒ ๋๋ค.

์ฆ, ์ด๋ฅผ ์์ผ๋ก ์ฐ๋ฉดโฆ \begin{align*} y_L &= W_L y_{L-1} + b_L \\ y_{L - 1} &= \sigma(W_{L-1} y_{L - 2} + b_{L - 1}) \\ \cdots & \cdots \\ y_2 &= \sigma (W_2 y_1 + b_2) \\ y_1 &= \sigma (W_1 x + b_1) \end{align*} where $x \in \R^{n_0}, W_l \in \R^{n_l \x n_{l-1}}, n_L = 1$. (Binary classification๋ง ์ ๊น ์๊ฐํ๊ธฐ๋ก ํ์)

• Common choices for $\sigma$ are ReLU $\sigma(z) = \max(z, 0)$, the sigmoid $\sigma(z) = \frac{1}{1 + e^{-z}}$, and the hyperbolic tangent $\sigma(z) = \frac{1 - e^{-2z}}{1 + e^{-2z}}$.
• By convention, the final layer usually has no $\sigma$.

์ด ๋ชจ๋ธ์ MultiLayer Perceptron (MLP) ๋๋ Fully connected neural network ๋ผ ํ๋ค.

### Weight Initialization

In SGD, $\theta^{k + 1} = \theta^k - \alpha g^k$, the choice of $\theta^0$ does not matter much in convex optimization, since any starting point converges to the global solution. In deep learning, however, choosing $\theta^0$ well becomes an important problem.

๋จ์ํ๊ฒ $\theta^0 = 0$ ์ ์ฐ๋ฉด, vanishing gradient ์ ๋ฌธ์ ๊ฐ ๋ฐ์ํ๋ค. Pytorch์์๋ ๋ฐ๋ก ์ด๋ฅผ ์ฒ๋ฆฌํ๋ ๋ฐฉ๋ฒ์ด ์์.

### Gradient Computation: Backpropagation

๋ค์ ์ ๊น logistic regression์ ์๊ฐํ๋ฉด, loss function์ ๋ค ์์ํ ๋ค์ ๊ฒฐ๊ตญ ๋ง์ง๋ง์๋ stochastic gradient descent ๊ฐ์ ๋ฐฉ๋ฒ์ ์ด์ฉํด์ ์ต์ ํํ  ๊ณํ์ผ๋ก ์งํํ๋ค. ๊ทธ๋ ๋ค๋ ๋ง์, ๊ฒฐ๊ตญ ์ด๋ป๊ฒ๋  ๋ญ๊ฐ ์  loss function์ gradient๋ฅผ ๊ณ์ฐํ  ๋ฐฉ๋ฒ์ด ์๊ธฐ๋ ํด์ผ ํ๋ค๋ ์๋ฏธ๊ฐ ๋๋ค. ์ฆ, ๊ฐ layer์ weight๋ค๊ณผ bias๋ค์ ๊ฐ ์์๋ค $A_{i, j, k}$์ ๋ํด, $\pdv{y_L}{A_{i, j, k}}$ ๋ฅผ ๊ณ์ฐํ  ์ ์์ด์ผ ํ๋ค.

MLP์์๋ ์ด gradient ๊ณ์ฐ์ด ์ง์  ์ํํ๊ธฐ์๋ ๋งค์ฐ ์ด๋ ต๊ธฐ ๋๋ฌธ์, ์ด๋ฅผ pytorch์์๋ autograd ํจ์๋ก ์ ๊ณตํ๋ค. ๋ค๋ง ๊ธฐ๋ณธ์ ์ธ ์๋ฆฌ๋ vector calculus์ chain rule์ ๊ธฐ๋ฐํ๋ค. ๋์ค์ ์ด๋ฅผ ๋ฐ๋ก ๋ค๋ฃฌ๋ค.