
์ฌ์ธต ์ ๊ฒฝ๋ง์ ์ํ์  ๊ธฐ์ด 3๊ฐ (9์ 9์ผ), 4๊ฐ (9์ 14์ผ) ์ ๊ธฐ๋ฐํฉ๋๋ค.

์ด ๋ฌธ์๋ $\LaTeX$๋ฅผ pandoc์ผ๋ก ๋ณํํ์ฌ ์์ฑํ์๊ธฐ ๋๋ฌธ์, ๋ ์ด์์ ๋ฑ์ด ๊น๋ํ์ง ์์ ์ ์์ต๋๋ค. ์ธ์  ๊ฐ pdf ๋ฒ์ ์ ๋ธํธ๋ฅผ ๊ณต๊ฐํ๋ค๋ฉด ๊ทธ์ชฝ์ ์ฐธ๊ณ ํ๋ฉด ์ข์ ๊ฒ ๊ฐ์ต๋๋ค.

## Binary Classification

์ ์ ์์์ ์ ์๋ฅผ ๋์๋ณด์.

๋ฐ์ดํฐ $X_1, \dots X_n \in \mathcal{X}$์ด ์๊ณ , ์ด์ ๋ํ ์ ๋ต ๋ผ๋ฒจ $Y_1, \dots Y_n \in \mathcal{Y}$์ด ์ฃผ์ด์ง ๊ฒฝ์ฐ๋ฅผ ์๊ฐํด ๋ณด์. ์ด๋, ์ด๋ค True Unknown Function $f_\star : \mathcal{X} \to \mathcal{Y}$ ๊ฐ ์๋ค๊ณ  ์๊ฐํ๋ฉด, $Y_i = f_\star(X_i)$ ๋ฅผ ๋ง์กฑํ๋ค.

์ฐ๋ฆฌ๋, $X_i, Y_i$๋ก๋ถํฐ, $f_\star$๊ณผ ๊ฐ๊น์ด ์ด๋ค ํจ์ $f$๋ฅผ ์ฐพ์๋ด๋ ์์์ ์ํํ๊ณ  ์ถ๋ค. $X_i$๋ค์ ๋ํด $Y_i$๋ ์ฌ๋์ด ์์งํ ๋ฐ์ดํฐ๋ฅผ ์ฐ๊ธฐ ๋๋ฌธ์, ์ด๋ฅผ Supervised Learning์ด๋ผ๊ณ  ๋ถ๋ฅธ๋ค.

For supervised learning, we will consider the following optimization problem. $$\underset{\theta \in \Theta}{\minimize}\ \frac{1}{N}\sum_{i = 1}^{N} \ell(f_\theta(x_i), f_\star(x_i))$$

ํนํ, ์ด๋ฒ์๋ $\mathcal{X} = \R^p$, $\mathcal{Y} = \Set{-1, +1}$ ์ธ ๋ฌธ์ ๋ฅผ ์๊ฐํ์. ์ฆ, ๋ฐ์ดํฐ๋ฅผ ๋ ํด๋์ค๋ก ๋ถ๋ฆฌํด๋ด๋ ๊ฒ์ด๋ค. ์ด๋, ํน๋ณํ ์ด ๋ฐ์ดํฐ๊ฐ linearly seperableํ์ง๋ฅผ ์๊ฐํ๋ค. ์ด๋ค ์ดํ๋ฉด $a^T x + b$ ๊ฐ ์กด์ฌํ์ฌ, $y$๊ฐ์ $a^T x + b$์ ๋ถํธ์ ๋ฐ๋ผ ์ฐ์ด๋ผ ์ ์์ผ๋ฉด linearly seperableํ๋ค๊ณ  ์ ์ํ๋ค.

## Linear Classification

To solve binary classification, and in particular linear classification, we consider the following affine model. $$f_{a, b}(x) = \sgn(a^T x + b)$$ As the loss function, counting the number of wrong labels is very natural, and it can be written compactly as $$\ell(y_1, y_2) = \frac{1}{2}\abs{1 - y_1 y_2}$$
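The compact formula really is the misclassification count: for labels in $\Set{-1, +1}$ it returns 0 when the labels agree and 1 when they differ, which a quick exhaustive check confirms (a throwaway sketch):

```python
import numpy as np

def zero_one_loss(y1, y2):
    """0-1 loss for labels in {-1, +1}: 0 if they agree, 1 if they differ."""
    return 0.5 * abs(1 - y1 * y2)

# The compact formula agrees with a direct mismatch check on all four cases.
for y1 in (-1, 1):
    for y2 in (-1, 1):
        assert zero_one_loss(y1, y2) == (0 if y1 == y2 else 1)
print("formula matches the misclassification count")
```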

์ด์ , ๋ค์์ ์ต์ ํ ๋ฌธ์ ๋ฅผ ํ๊ณ  ์ถ๋ค. $$\underset{a \in \R^p, b \in \R}{\minimize}\ \frac{1}{N}\sum_{i = 1}^{N} \ell(f_{a, b}(x_i), y_i)$$ ๊ทธ๋ฌ๋ฉด Linearly seperableํ์ง๋ ์ด ์ต์ ํ ๋ฌธ์ ์ ์ต์ ํด๊ฐ 0์ธ์ง์ ๋์น์ด๋ค. ๊ทธ๋ฐ๋ฐ, ์ด ํจ์๋ ์ฐ์ํจ์๊ฐ ์๋๊ธฐ ๋๋ฌธ์ (์ ํํ๋ ๋์ถฉ ๋ฏธ๋ถ๊ฐ๋ฅํ๋ค๋ ์กฐ๊ฑด์ ์๊ตฌํ๋ค) SGD๊ฐ์ ์๊ณ ๋ฆฌ์ฆ์ ๋๋ฆด์๊ฐ ์๋ค.

## Support Vector Machine

๋ฐ๋ผ์, ์ด ๋ฌธ์ ๋ฅผ continuousํ๊ฒ relaxationํ๊ณ ์ ํ๋ค. ๊ด์ ์ ๋ฐ๊พธ๋ฉด, ์ด ๋ผ๋ฒจ์ด 1์ผ / -1์ผ โConfidenceโ๋ฅผ ๋ฐํํ๋๋ก ๋ชจ๋ธ์ ์ข ์ ํ์ฅํ๊ณ ์ ํ๋ค. 0.5์ด๋ฉด โ์๋ง๋ 1์ผ ๊ฒ์ผ๋ก ๋ณด์ธ๋คโ ๊ฐ์ ๋๋์ผ๋ก.

์ด๋ฅผ ์ํด์๋ $y_i f_{a, b}(x_i) > 0$ ์ ๋ง์กฑํด์ผ ํ๋ค.

๊ทธ๋ฐ๋ฐ, ์ค์ ๋ก๋ ์ด๋ ๊ฒ ํ๋ฉด $f$๊ฐ์ด 0 ๊ทผ์ฒ์์๋ง ์๋ค๊ฐ๋คํ๋ ๋ฌธ์ ๊ฐ ์๊ณ , ์ด๋ numericalํ ๋ฉด์์๋ neural network์ confidence๋ผ๋ ํด์์ผ๋ก๋ ์ ์ ํ์ง ์์ผ๋ฏ๋ก ์ ๋นํ margin์ ์ฃผ๋ ๊ฒ์ด ๋ฐ๋์งํ๋ค.

์ ๋นํ margin์ 1๋งํผ ์ค์, $y_i f_{a, b}(x_i) \geq 1$ ์ ๋ง์กฑํ๋ฉด ์ข์ ๊ฒ ๊ฐ๋ค. ์ฌ๊ธฐ์ โ์ข์ ๊ฒ ๊ฐ๋คโ ๋ ๋ง์ ๋ฐ๋๋ก ์  ์ฑ์ง์ ๋ง์กฑํ์ง ์์ผ๋ฉด ํ๋ํฐ๋ฅผ ๋ถ๊ณผํ๊ฒ ๋ค๋ ๋ฐ์์ผ๋ก๋ ํด์๋  ์ ์๊ณ โฆ ์ด ํ๋ํฐ ํจ์๋ฅผ ์ต์ํํ๋ ๋ฌธ์ ๋ก ์ฐ๋ฉด, $$\underset{a \in \R^p, b \in \R}{\minimize}\ \frac{1}{N}\sum_{i = 1}^{N} \max(0, 1 - y_i f_{a, b}(x_i)) = \frac{1}{N}\sum_{i = 1}^{N} \max(0, 1 - y_i (a^T x_i + b))$$

๋ฐ์ดํฐ๊ฐ linearly seperableํ๋ฉด, ์ด ์๋ optimal value๊ฐ 0์์ ์ ์ ์๋ค. ์ด ๋ฐฉ๋ฒ์ Support Vector Machine ์ด๋ผ๊ณ  ๋ถ๋ฅด๋ฉฐ, ํํ regularizer๋ฅผ ์ถ๊ฐํ ์๋ ์์ผ๋ก ์ด๋ค.$$\underset{a \in \R^p, b \in \R}{\minimize}\ \frac{1}{N}\sum_{i = 1}^{N} \max(0, 1 - y_i (a^T x_i + b)) + \frac{\lambda}{2}\norm{a}^2$$

์ด ์ต์ ํ ๋ฌธ์  (Relaxation ๋ฃ๊ธฐ ์ !)๊ฐ ์๋ณธ ๋ฌธ์ ์ relaxation์ด๋ผ๋ ์ฌ์ค์ ๋ณด์ด๋ ๊ฒ์ ์ด๋ ต์ง ์๋ค. ์๋ ๋ฌธ์ ์ ์ต์ ํด๋ฅผ $p_1^\star$ ๋ผ ํ๊ณ , SVM์ ์ต์ ํด๋ฅผ $p_2^\star$ ๋ผ ํ๋ฉด, $p_1^\star = 0 \iff p_2^\star = 0$ ์์ ์ ์ ์๋ค.

In the end, relaxed supervised learning can be viewed as relaxing point prediction: instead of a label value, we predict the probability of that label. This is a much more realistic setup than a single hard prediction.

## Logistic Regression

Logistic regression is another approach to linear binary classification. We still look for a decision boundary of the form $a^T x + b$.

Binary classification์์, ์ฐ๋ฆฌ๊ฐ ํ์ธํ ๋ฐ์ดํฐ์ Label์ ํ๋ฅ ๋ฒกํฐ๋ก ๋ง๋ค์ด์ (๋ง์ฝ ์์ ํ label์ด ํ๋๋ผ๋ฉด, (1, 0) ๊ณผ (0, 1) ์ฒ๋ผ) ํํํ ๊ฒ์ empirical distribution $\mathcal{P}(y)$ ๋ผ๊ณ  ์ ์ํ๊ธฐ๋ก ํ๋ค.

๋ค์๊ณผ ๊ฐ์ ๋ชจ๋ธ์ ์ด์ฉํ์ฌ ์ต์ ํํ๋ supervised learning์ Logistic Regression์ด๋ผ ํ๋ค. $$f_{a, b}(x) = \begin{bmatrix} \frac{1}{1 + e^{a^T x + b}} \\ \frac{1}{1 + e^{-(a^Tx + b)}} \end{bmatrix}$$

์ด ๋ชจ๋ธ์ ์ด์ฉํ์ฌ, ๋ค์๊ณผ ๊ฐ์ ์ต์ ํ ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ๊ณ ์ ํ๋ค. $$\underset{a \in \R^p, b \in \R}{\minimize}\ \sum_{i = 1}^{N} \DKL{\mathcal{P}(Y_i)}{f_{a, b}(X_i)}$$ ์ฆ, ์ฐ๋ฆฌ๋ empirical distribution๊ณผ์ KL-Divergence๋ฅผ ์ต์ํํ๊ณ  ์ถ๋ค. ์ด ์์ ์ ๋ฆฌํ๋ฉด... $$\underset{a \in \R^p, b \in \R}{\minimize}\ \sum_{i = 1}^{N} H(\mathcal{P}(Y_i), f_{a, b}(X_i)) + \text{ Terms independent of } a, b$$ ์ ํํ Cross entropy $H$๋ฅผ ์ ๊ฐํ๊ณ , ์ค๋ฅธ์ชฝ term๋ค์ ๋ค ๋ฒ๋ฆฌ๋ฉด... $$\underset{a \in \R^p, b \in \R}{\minimize}\ - \frac{1}{N}\sum_{i = 1}^{N} \P(y_i = -1) \log\left(\frac{1}{1 + e^{a^Tx_i + b}}\right) + \P(y_i = 1)\log\left(\frac{1}{1 + e^{-a^Tx_i - b}}\right)$$ ์ด๋ ๋ค์, $\P(y_i = 1)$ ๊ณผ $\P(y_i = -1)$ ์ด one-hot์ด๋ฏ๋ก, ๋์ค์ ์ด๋์ชฝ์ด 1์ธ์ง๋ฅผ ๊น๋ํ๊ฒ ์ ๋ฆฌํ์ฌ, $$\underset{a \in \R^p, b \in \R}{\minimize}\ - \frac{1}{N}\sum_{i = 1}^{N} \log\left(\frac{1}{1 + e^{-y_i(a^Tx_i + b)}}\right)$$ ๋จ์กฐ๊ฐ์ํจ์์ธ Loss function $\ell(z) = \log(1 + e^{-z})$๋ฅผ ๋์ํ์ฌ ๋ถํธ๋ฅผ ๋ผ๊ณ  ๊น๋ํ๊ฒ ์ ๋ฆฌํ  ์ ์๋ค. $$\underset{a \in \R^p, b \in \R}{\minimize}\ \frac{1}{N}\sum_{i = 1}^{N}\ell(y_i(a^T x_i + b))$$ ์ด ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ ํ, $a^T x + b$ ์ ๋ถํธ์ ๋ฐ๋ผ predictionํ๋ค.

SVM๊ณผ ๋น๊ตํ๋ฉด, ์ถ๋ฐ์ ์ด ๋ฌ๋์ง๋ง ๊ฒฐ๊ตญ์ ๊ฐ์ ๋ฌธ์ ๊ฐ ๋๋๋ฐ, $\ell(z)$ ๋ฅผ ์ด๋ป๊ฒ ์ ์ํ๋๋์ ๋ฌธ์ ๊ฐ ๋๋ค. SVM์ $\max(0, 1-z)$์ด๊ณ , Logistic regression์ $\log(1 + e^{-z})$ ๋ฅผ ์ฐ๋ ๊ฒฝ์ฐ๋ก ์๊ฐํ  ์ ์๋ค. ์ขํ์ ๊ทธ๋ ค๋ณด๋ฉด ๋ ํจ์๊ฐ ์ฌ์ค ๊ต์ฅํ ๋น์ทํ๊ฒ ์๊ฒผ๋ค.

Both the SVM and LR are linear classifiers (in the sense that the decision boundary is a hyperplane), but LR extends more naturally to multiclass classification (softmax regression).
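For reference, a minimal sketch of the softmax map behind softmax regression: with $k$ classes, the model keeps one affine score per class, and softmax turns the score vector into class probabilities. The helper and the example scores are illustrative; with two classes and scores $(0, a^Tx + b)$, this reduces exactly to the two-coordinate logistic model above.

```python
import numpy as np

def softmax(logits):
    """Map a vector of class scores to a probability distribution."""
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Three illustrative class scores; softmax yields probabilities summing to 1.
p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())
```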