Chinese Word Segmentation, POS Tagging, and Sequence Labeling with MEMM (4): Optimization Algorithms - IIS

5. Optimization Algorithms

1. IIS (Improved Iterative Scaling)

The maximum entropy model:

\[ \begin{align} p(y|x) &= \frac{\exp \left[ \sum_{i=1}^{n} w_i f_i(x,y) \right] }{ Z_w(x) } \\
&= \frac{ \exp \left[ \sum_{i=1}^{n} w_i f_i(x,y) \right] }{ \sum_y \exp \left[ \sum_{i=1}^{n} w_i f_i(x,y) \right] }
\end{align} \]
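As a minimal sketch, the distribution can be evaluated directly from this definition; the tags, indicator features, and weights below are invented for illustration and are not from the original post:

```python
import math

def p_w(x, y, weights, features, labels):
    """p(y|x) for a maximum entropy model with binary features f_i."""
    def score(y_):
        return math.exp(sum(w * f(x, y_) for w, f in zip(weights, features)))
    z = sum(score(y_) for y_ in labels)   # the normalizer Z_w(x)
    return score(y) / z

# Toy setup: two tags, two indicator features (all values are made up).
labels = ["B", "I"]
features = [
    lambda x, y: 1 if x == "中" and y == "B" else 0,
    lambda x, y: 1 if x == "文" and y == "I" else 0,
]
weights = [1.2, 0.7]
print(p_w("中", "B", weights, features, labels))  # ~0.77, since feature 0 fires
```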

The log-likelihood under the empirical distribution \( \tilde{p}(x,y) \), which we want to maximize:

\[ \begin{align}
L(w) &= \sum_{x,y} \tilde{p}(x,y) \log p(y|x) \\
&= \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} w_i f_i(x,y) - \sum_{x} \tilde{p}(x) \log Z_w(x)
\end{align} \]
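On a finite sample, \( \tilde{p}(x,y) \) is just the empirical frequency, so \( L(w) \) can be estimated by averaging over the samples; a sketch, reusing the toy conventions above:

```python
import math

def log_likelihood(data, weights, features, labels):
    """L(w) = sum_{x,y} p~(x,y) log p_w(y|x), with p~ the empirical frequency.

    Summing log p_w(y|x) over the n samples and dividing by n is the same as
    weighting each distinct (x, y) pair by its frequency count/n.
    """
    total = 0.0
    for x, y in data:
        scores = {y_: math.exp(sum(w * f(x, y_) for w, f in zip(weights, features)))
                  for y_ in labels}
        total += math.log(scores[y] / sum(scores.values()))
    return total / len(data)
```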

The idea of the Improved Iterative Scaling algorithm: given the current parameter vector \( w=(w_1,w_2,\cdots,w_n)^T \), look for an increment \( \delta \) such that the updated vector \( w + \delta =(w_1 + \delta_1,w_2 + \delta_2,\cdots,w_n + \delta_n)^T \) increases the model's log-likelihood. If such an increment exists, repeat the update \( w \to w + \delta \) until the likelihood is maximized.
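The control flow is sketched below; `solve_delta` is a hypothetical placeholder for the per-coordinate update derived in the rest of this section:

```python
def iis(weights, solve_delta, max_iter=200, tol=1e-6):
    """Improved Iterative Scaling outer loop: repeat w <- w + delta.

    solve_delta(i, weights) must return the delta_i that zeroes
    dB/ddelta_i (derived below); only the control flow is fixed here.
    """
    for _ in range(max_iter):
        delta = [solve_delta(i, weights) for i in range(len(weights))]
        weights = [w + d for w, d in zip(weights, delta)]
        if max(abs(d) for d in delta) < tol:  # likelihood gain has flattened
            break
    return weights
```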

First, compute the change in log-likelihood caused by the parameter update:

\[ \begin{align}
L(w + \delta) - L(w) &= \sum_{x,y} \tilde{p}(x,y) \log p_{w + \delta}(y|x) - \sum_{x,y}\tilde{p}(x,y) \log p_{w}(y|x) \\
&= \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} (w + \delta)_i f_i(x,y) - \sum_{x} \tilde{p}(x) \log Z_{w + \delta}(x) \\
& \quad - \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} w_i f_i(x,y) + \sum_{x} \tilde{p}(x) \log Z_w(x) \\
&= \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) - \sum_{x} \tilde{p}(x) \log \frac{Z_{w+\delta}(x)}{Z_w(x)} \\
& \text{using } \sum_x \tilde{p}(x) = 1 \text{ and } - \log a \geq 1-a \text{ for } a > 0: \\
& \geq \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \frac{Z_{w+\delta}(x)}{Z_w(x)} \\
& = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \frac{ \sum_y \exp \sum_{i=1}^{n} ( w + \delta )_i f_i (x,y) }{ \sum_y \exp \sum_{i=1}^{n} w_i f_i (x,y) } \\
& = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \frac{ \sum_y \exp \left[ \sum_{i=1}^{n} w_i f_i (x,y) \right] \exp \left[ \sum_{i=1}^{n} \delta_i f_i(x,y) \right] }{ \sum_y \exp \sum_{i=1}^{n} w_i f_i (x,y) } \\
& = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \sum_y p_w(y|x) \exp \sum_{i=1}^{n} \delta_i f_i(x,y)
\end{align} \]

Denote the right-hand side by:
\[ A(\delta|w) = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \sum_y p_w(y|x) \exp \sum_{i=1}^{n} \delta_i f_i(x,y) \]
It measures the likelihood gain brought by the parameter increment, and we obtain the following lower bound on the change in log-likelihood:
\[ L(w + \delta) - L(w) \geq A(\delta|w) \]

From the above, \( A(\delta|w) \) is a concave function of \( \delta \). Setting its gradient to zero maximizes this lower bound on the likelihood gain (and since it is a lower bound, increasing it can never push the true likelihood backward). However, \( \delta \) is a vector whose components are coupled inside the exponential, so they cannot be optimized one at a time. The partial derivative is:
\[ \frac{\partial A}{\partial \delta_i } = \sum_{x,y} \tilde{p}(x,y) f_i(x,y) - \sum_{x,y} \tilde{p}(x) p_w(y|x) f_i(x,y) \exp \sum_{j=1}^{n} \delta_j f_j(x,y) \]
Every \( \delta_j \) appears in each equation \( \partial A / \partial \delta_i = 0 \), so the system does not decouple.

To obtain a bound whose variables can be optimized independently, define:
\[ f^\#(x,y) = \sum_{i=1}^{n} f_i(x,y) \]

Since each \( f_i \) is a binary (indicator) function, \( f^\#(x,y) \) is simply the number of features that fire on a given \( (x,y) \).
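In code this is a one-liner, reusing the binary-feature convention from the earlier sketches:

```python
def f_sharp(x, y, features):
    """f#(x,y): how many of the binary features f_i fire on (x, y)."""
    return sum(f(x, y) for f in features)
```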

Rewrite \( A(\delta|w) \) as:

\[ A(\delta|w) = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_{x} \tilde{p}(x) \sum_{y} p_w(y|x) \exp \sum_{i=1}^{n} \frac{ f_i(x,y) }{ f^\#(x,y) } \delta_i f^\#(x,y) \]

Since \( \exp \) is convex and \( \sum_{i=1}^{n} \frac{ f_i(x,y) } { f^\#(x,y) } = 1 \) (the ratios form a probability distribution over \( i \)), Jensen's inequality gives:

\[ \begin{align} A(\delta|w) \geq B(\delta|w) = \sum_{x,y} \tilde{p}(x,y) \sum_{i=1}^{n} \delta_i f_i(x,y) + 1 - \sum_x \tilde{p}(x) \sum_y p_w(y|x) \sum_{i=1}^{n} \frac{f_i(x,y)}{f^\#(x,y)} \exp(\delta_i f^\#(x,y)) \end{align} \]

Jensen's inequality: if \( f \) is convex on the interval \( (a,b) \), \( x_1,x_2,\cdots,x_n \in (a,b) \), and \( a_1, a_2, \cdots, a_n \geq 0 \) with \( a_1 + a_2 + \cdots + a_n = 1 \), then:
\[ f(a_1 x_1 + a_2 x_2 + \cdots + a_n x_n) \leq a_1 f(x_1) + a_2 f(x_2) + \cdots + a_n f(x_n) \]
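A quick numeric check of the inequality for the convex function \( \exp \), with arbitrarily chosen weights:

```python
import math

a, xs = [0.3, 0.7], [1.0, 2.0]                         # weights sum to 1
lhs = math.exp(sum(ai * xi for ai, xi in zip(a, xs)))  # f(sum a_i x_i) ~ 5.474
rhs = sum(ai * math.exp(xi) for ai, xi in zip(a, xs))  # sum a_i f(x_i) ~ 5.988
assert lhs <= rhs
```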

\( B(\delta|w) \) is a looser lower bound on the likelihood gain than \( A(\delta|w) \), but each \( \delta_i \) now appears only in its own term, so we can set each partial derivative to zero and optimize the parameters one at a time; each equation involves a single unknown.

\[ \begin{align}
\frac{\partial B}{\partial \delta_i} & = \sum_{x,y} \tilde{p}(x,y) f_i(x,y) - \sum_{x,y} \tilde{p}(x) p_w(y|x) f_i(x,y) \exp(\delta_i f^\#(x,y)) \\
& = E_{\tilde{p}}(f_i) - \sum_{x,y} \tilde{p}(x) p_w(y|x) f_i(x,y) \exp(\delta_i f^\#(x,y))
\end{align} \]
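As a sketch, this derivative can be evaluated on a sample, with empirical frequencies standing in for \( \tilde{p} \) (conventions as in the earlier toy code):

```python
import math

def dB_ddelta(i, delta_i, data, weights, features, labels):
    """g(delta_i) = E_p~(f_i) - sum_{x,y} p~(x) p_w(y|x) f_i(x,y) exp(delta_i f#(x,y))."""
    n = len(data)
    e_tilde = sum(features[i](x, y) for x, y in data) / n  # E_p~(f_i)
    model_term = 0.0
    for x, _ in data:  # each sample contributes weight 1/n to p~(x)
        scores = {y_: math.exp(sum(w * f(x, y_) for w, f in zip(weights, features)))
                  for y_ in labels}
        z = sum(scores.values())
        for y_ in labels:
            fs = sum(f(x, y_) for f in features)  # f#(x, y_)
            model_term += (scores[y_] / z) * features[i](x, y_) * math.exp(delta_i * fs) / n
    return e_tilde - model_term
```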

If \( f^\#(x,y) = M \) is a constant, i.e., the same number of features fires for every \( (x,y) \) (an assumption that does not hold in word segmentation and POS tagging), then setting \( \partial B / \partial \delta_i = 0 \) yields a closed-form solution:
\[ \delta_i = \frac{1}{M} \log \frac{E_{\tilde{p}}(f_i)}{E_p(f_i)} \]
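A sketch of the closed-form update, assuming the two expectations have been precomputed:

```python
import math

def delta_closed_form(e_tilde_fi, e_model_fi, M):
    """delta_i = (1/M) log(E_p~(f_i) / E_p(f_i)) when f#(x,y) == M for all (x,y)."""
    return math.log(e_tilde_fi / e_model_fi) / M

# e.g. empirical expectation 0.30 vs model expectation 0.20 with M = 4:
print(delta_closed_form(0.30, 0.20, 4))  # ~0.101: raises w_i, pushing E_p(f_i) up
```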

Otherwise the equation can be solved by a one-dimensional root search. Let \( g(\delta_i) = \frac{\partial B}{\partial \delta_i} \); Newton's method iterates:

\[ \delta^{k+1}_i = \delta^{k}_i – \frac{g(\delta^k_i)}{g^{\prime}(\delta^k_i)} \]
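A minimal Newton solver for \( g(\delta_i)=0 \); the example `g` below is a hypothetical stand-in with the same shape as \( \partial B / \partial \delta_i \) (a constant minus a growing exponential), chosen so the root is known:

```python
import math

def newton_1d(g, g_prime, x0=0.0, max_iter=50, tol=1e-10):
    """Solve g(x) = 0 by Newton's method: x <- x - g(x) / g'(x)."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / g_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Stand-in with the same shape as dB/ddelta_i: g(d) = c1 - c2 * exp(d * M),
# whose root log(c1/c2)/M matches the closed form above.
c1, c2, M = 0.30, 0.20, 4.0
root = newton_1d(lambda d: c1 - c2 * math.exp(d * M),
                 lambda d: -c2 * M * math.exp(d * M))
print(root)  # ~0.1014 == log(1.5)/4
```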

References
[1] https://en.wikipedia.org/wiki/Maximum-entropy_Markov_model
[2] McCallum, Andrew; Freitag, Dayne; Pereira, Fernando (2000). Maximum Entropy Markov Models for Information Extraction and Segmentation.
[3] Ethem Alpaydın. Introduction to Machine Learning, Third Edition, Chapter 15: Hidden Markov Models.
[4] Daphne Koller, Nir Friedman. Probabilistic Graphical Models: Principles and Techniques.
[5] 李航. 统计学习方法 (Statistical Learning Methods).
[6] peghoty. http://blog.csdn.net/itplus/article/details/26550369
[7] 琴生不等式 (Jensen's inequality). https://baike.baidu.com/item/琴生不等式/397409?fr=aladdin
