1. Preface
Diffusion models have been extremely popular in recent years. For example, the text-to-image model Stable Diffusion, which has been all over the internet lately, uses diffusion as its backbone. However, like VAEs, diffusion models involve a fair amount of probability theory, which can be intimidating. This post derives the mathematics of diffusion step by step; if you have never taken a probability course, it is worth studying the basics first.
Papers:
①Deep Unsupervised Learning using Nonequilibrium Thermodynamics (arxiv.org)
②Denoising Diffusion Probabilistic Models (arxiv.org)
③Understanding Diffusion Models: A Unified Perspective (arxiv.org)
Reference code: DL-Demos/dldemos at master · SingleZombie/DL-Demos · GitHub
Video: 【Diffusion扩散模型原理解析-哔哩哔哩】
Due to CSDN's article length limit, this post is split into two parts. Part 2: 【深度学习】Diffusion扩散模型原理解析2
2. The Diffusion Pipeline
2.1 Overall Flow
Diffusion consists of two stages: diffusion (forward) and reverse diffusion.

The diffusion stage adds Gaussian noise to an image (upper half of the figure):

Given an image, construct T time steps, one image per step; t = 0 in the figure corresponds to the given initial image. Add Gaussian noise to this image to obtain the image at t = 1; add noise to the t = 1 image to obtain the t = 2 image. Repeat, and by time T the image is pure noise (t = T).

The reverse diffusion stage removes the noise from the image, i.e., restores it (lower half of the figure):

Given a pure-noise image, denoise it repeatedly: at t = 2 we reach the image shown in the figure, denoising again gives the t = 1 image, and one more denoising step recovers the original image (Kunkun in the figure).
2.2 Why Do This?
Why go to all the trouble of adding so much noise only to recover the image afterwards?

The image at time T follows a normal distribution, and moreover approximately the standard normal distribution. So, to generate an image, can we not simply sample data from the standard normal and denoise it bit by bit back into an image? That is exactly the idea behind diffusion.
2.3 Forward Noising
Write the image at time $t$ as $x_t$: the image at time 1 is $x_1$, the image at time 2 is $x_2$, and $x_0$ denotes the given initial image.
A noise term is added at every time step, but how exactly? Suppose we are at time $t-1$; how do we obtain the image at time $t$? The formula is as follows:
$$x_t=\sqrt{\alpha_t}\,x_{t-1}+\sqrt{1-\alpha_t}\,\epsilon_t\tag{1}$$
Here $x_{t-1}$ and $x_t$ denote the images at times $t-1$ and $t$. $\alpha_t$ is generally a hyperparameter between 0 and 1 that decreases as $t$ increases, and $\epsilon_t\sim N(0,I)$ is standard normal noise. (PS: why a weighted average, and why the square roots? If interested, see reference ①.)
It is clear from this formula that we are not simply adding noise to $x_{t-1}$: the result is a weighted average.
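To make Eq. (1) concrete, here is a minimal NumPy sketch of a single noising step; the image array and the value of `alpha_t` are made-up placeholders, not values from the papers:

```python
import numpy as np

def forward_step(x_prev: np.ndarray, alpha_t: float) -> np.ndarray:
    """One forward noising step, Eq. (1): x_t = sqrt(alpha_t)*x_{t-1} + sqrt(1-alpha_t)*eps_t."""
    eps = np.random.randn(*x_prev.shape)             # epsilon_t ~ N(0, I)
    return np.sqrt(alpha_t) * x_prev + np.sqrt(1.0 - alpha_t) * eps

x0 = np.random.rand(64, 64)          # a stand-in "image" (hypothetical data)
x1 = forward_step(x0, alpha_t=0.99)  # alpha_t near 1 => only a little noise per step
```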
Eq. (1) takes us from $t-1$ to $t$. What about from $t-2$ to $t$? Do we really have to add noise one step at a time, or is there a more direct expression?
Let us first recall two basic properties of the normal distribution:
Theorem ①:
If $x_1\sim N(0,\sigma_1 I)$ and $x_2\sim N(0,\sigma_2 I)$ are independent,
then $(x_1+x_2)\sim N(0,(\sigma_1+\sigma_2)I)$.
Theorem ②:
If $x_1 \sim N(0,I)$,
then $a x_1 \sim N(0,a^2 I)$.
PS: these theorems are simple to prove, so the proofs are omitted here; if anything is unclear, you can look them up or ask me.
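Both theorems are also easy to check empirically. A minimal sampling sketch (the variances `s1`, `s2` and the scale `a` are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s1, s2, a = 2.0, 3.0, 0.5             # arbitrary variances and scale factor

# Theorem 1: the sum of independent zero-mean Gaussians has variance s1 + s2
x1 = rng.normal(0.0, np.sqrt(s1), n)  # N(0, s1): the scale argument is the std
x2 = rng.normal(0.0, np.sqrt(s2), n)
print(np.var(x1 + x2))                # ~ 5.0 = s1 + s2

# Theorem 2: scaling N(0, 1) by a gives variance a^2
z = rng.normal(0.0, 1.0, n)
print(np.var(a * z))                  # ~ 0.25 = a^2
```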
Now let us derive the step from $t-2$ to $t$ using Eq. (1):
$$\begin{aligned}x_t&=\sqrt{\alpha_t}\,x_{t-1}+\sqrt{1-\alpha_t}\,\epsilon_t\\&=\sqrt{\alpha_t}\left(\sqrt{\alpha_{t-1}}\,x_{t-2}+\sqrt{1-\alpha_{t-1}}\,\epsilon_{t-1}\right)+\sqrt{1-\alpha_t}\,\epsilon_t\\&=\sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2}+\sqrt{\alpha_t(1-\alpha_{t-1})}\,\epsilon_{t-1}+\sqrt{1-\alpha_t}\,\epsilon_t\end{aligned}\tag{2}$$
where each $\epsilon \sim N(0,I)$.
By Theorem ②: $\sqrt{\alpha_t(1-\alpha_{t-1})}\,\epsilon_{t-1} \sim N\big(0,\alpha_t(1-\alpha_{t-1})I\big)$
By Theorem ②: $\sqrt{1-\alpha_t}\,\epsilon_t \sim N\big(0,(1-\alpha_t)I\big)$
So by Theorem ①:

$$\sqrt{\alpha_t(1-\alpha_{t-1})}\,\epsilon_{t-1}+\sqrt{1-\alpha_t}\,\epsilon_t \sim N\big(0,(\alpha_t(1-\alpha_{t-1})+1-\alpha_t)I\big)= N\big(0,(1-\alpha_t\alpha_{t-1})I\big)\tag{3}$$
And by Theorem ②, a sample from this distribution can be written as

$$N\big(0,(1-\alpha_t\alpha_{t-1})I\big)\;\rightarrow\;\sqrt{1-\alpha_t\alpha_{t-1}}\,\bar\epsilon\tag{4}$$
where $\bar\epsilon\sim N(0,I)$.
Then

$$\sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2}+\sqrt{\alpha_t(1-\alpha_{t-1})}\,\epsilon_{t-1}+\sqrt{1-\alpha_t}\,\epsilon_t\sim N\big(\sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2},\,(1-\alpha_t\alpha_{t-1})I\big)$$
It is not hard to see that the distribution in Eq. (3) is exactly the one sampled in Eq. (4). So the last two terms of Eq. (2) can be replaced directly by the single noise term of Eq. (4), giving
$$x_t=\sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2}+\sqrt{1-\alpha_t\alpha_{t-1}}\,\bar\epsilon$$
We now have a direct relation between the image $x_{t-2}$ and the image $x_t$: there is no need to add noise step by step; it can be done in one shot.
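As a quick Monte Carlo sanity check, two sequential applications of Eq. (1) and the merged one-shot form should agree in mean and variance; the $\alpha$ values and the scalar "pixel" below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a_t, a_tm1 = 0.9, 0.95                 # arbitrary alpha_t and alpha_{t-1}
x_tm2 = 1.5                            # a fixed scalar "pixel" at time t-2

# Two sequential applications of Eq. (1)
x_tm1 = np.sqrt(a_tm1) * x_tm2 + np.sqrt(1 - a_tm1) * rng.normal(size=n)
x_t_seq = np.sqrt(a_t) * x_tm1 + np.sqrt(1 - a_t) * rng.normal(size=n)

# Merged one-shot form: sqrt(a_t a_{t-1}) x_{t-2} + sqrt(1 - a_t a_{t-1}) eps_bar
x_t_one = np.sqrt(a_t * a_tm1) * x_tm2 + np.sqrt(1 - a_t * a_tm1) * rng.normal(size=n)

print(x_t_seq.mean(), x_t_one.mean())  # both ~ sqrt(a_t * a_tm1) * x_tm2
print(x_t_seq.var(), x_t_one.var())    # both ~ 1 - a_t * a_tm1
```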
So, written as a probability distribution, one noising step is

$$q(x_t|x_{t-1})\sim N\big(x_t\,\big|\,\sqrt{\alpha_t}\,x_{t-1},\,(1-\alpha_t)I\big)$$
To express the distribution from time 0 to time $t$ more conveniently, define

$$\bar\alpha_t=\prod_{i=1}^t\alpha_i$$
The noising process can then be written as

$$x_t=\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon_t\tag{5}$$
with the distribution

$$q(x_t|x_0)\sim N\big(x_t\,\big|\,\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)I\big)\tag{6}$$
In addition, for notational convenience later on, the authors also define $\beta = 1-\alpha$ and $\bar\beta=1-\bar\alpha$.
Therefore:

$$x_t=\sqrt{1-\beta_t}\,x_{t-1}+\sqrt{\beta_t}\,\epsilon_t\\ q(x_t|x_{t-1})\sim N\big(x_t\,\big|\,\sqrt{1-\beta_t}\,x_{t-1},\,\beta_t I\big)\\ q(x_t|x_0)\sim N\big(x_t\,\big|\,\sqrt{1-\bar\beta_t}\,x_0,\,\bar\beta_t I\big)$$
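Putting Eq. (5) into code: a minimal sketch of one-shot sampling from $q(x_t|x_0)$ under a linear $\beta$ schedule. The endpoints 1e-4 and 0.02 follow the DDPM paper; the image and the 0-based time indexing are simplifications for illustration:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear beta_t schedule (DDPM); alpha_t decreases with t
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # \bar\alpha_t = product of alpha_i up to t

def q_sample(x0: np.ndarray, t: int, rng=np.random.default_rng()) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) in one shot, Eq. (5)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.random.rand(64, 64)          # stand-in image
x_mid = q_sample(x0, t=500)          # heavily noised
print(alpha_bar[-1])                 # ~ 4e-5, so q(x_T | x_0) is close to N(0, I)
```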
2.4 Reverse Diffusion
We now have the noising step $q(x_t|x_{t-1})$ and its distribution. As mentioned earlier, the end goal is to sample at time T and gradually denoise to obtain a generated image. So how do we denoise? In other words, how do we compute $q(x_{t-1}|x_t)$?
The paper argues that when the forward process's $\beta$ is small enough, the reverse operation is also normally distributed, i.e. $q(x_{t-1}|x_t)\sim N$.
2.5 First-Order Markov Assumption
During diffusion, the model always adds noise one time step at a time: the image at time $t$ comes from the image at time $t-1$. This process naturally motivates an assumption, the Markov assumption.

First-order Markov assumption: given the state at time $t-1$, the state at time $t$ depends only on time $t-1$ and is independent of all earlier states.
In probabilities:

$$q(x_t|x_{t-1},x_{t-2},\cdots,x_0)=q(x_t|x_{t-1})$$
Reverse diffusion works the same way (just follow the arrows in the figure):
$$P(x_{t-1}|x_t,x_{t+1},\cdots,x_T)=P(x_{t-1}|x_t)$$
From the Markov assumption, it is easy to derive
$$\begin{aligned}q(x_{1:T}|x_0)&=q(x_{2:T}|x_0,x_1)\,q(x_1|x_0)\\&=q(x_{3:T}|x_0,x_1,x_2)\,q(x_2|x_0,x_1)\,q(x_1|x_0)\\&=q(x_{3:T}|x_0,x_1,x_2)\,q(x_2|x_1)\,q(x_1|x_0)\end{aligned}$$
Splitting repeatedly in this way, we obtain
$$q(x_{1:T}|x_0)=\prod_{t=1}^T q(x_t|x_{t-1})$$
Likewise, for the reverse diffusion process,
$$P(x_{0:T-1}|x_T)=\prod_{t=1}^T P(x_{t-1}|x_t)$$
3. Diffusion Training
3.1 Objective Function
Following the conventional approach, we apply maximum likelihood directly to the training images. Define the set of all images as

$$X=\begin{pmatrix}x^{1},x^{2},x^{3},\dots,x^n\end{pmatrix}$$
where $x^i$ is the $i$-th sample, and the samples are independent and identically distributed.
Taking the log-likelihood of $X$:
$$\begin{aligned}\log P(X)&=\log P(x^1,x^2,\dots,x^n)\\&=\log\prod_{i=1}^n P(x^i)\\&=\sum_{i=1}^n\log P(x^i)\end{aligned}$$
Let us start from a single sample, which for brevity we write as $P(x)$. Since $x$ is the initial image of the forward diffusion, to distinguish images at different times we write $P(x)$ as $P(x_0)$, denoting the initial image:
$$\log P(x_0)=\log\frac{P(x_{0:T})}{P(x_{1:T}|x_0)}$$
$$P(x_{0:T})=P(x_0,x_1,\cdots,x_T)$$
Now introduce a distribution $q(x_{1:T}|x_0)$ and integrate both sides of the equation against $q(x_{1:T}|x_0)$.
Left-hand side:
$$\int\log P(x_0)\,q(x_{1:T}|x_0)\,dx_{1:T}=\log P(x_0)\int q(x_{1:T}|x_0)\,dx_{1:T}=\log P(x_0)$$
Right-hand side:
$$\int \log\frac{P(x_{0:T})}{P(x_{1:T}|x_0)}\,q(x_{1:T}|x_0)\,dx_{1:T}$$
Therefore:
$$\begin{aligned}\log P(x_0)&=\int \log\frac{P(x_{0:T})}{P(x_{1:T}|x_0)}q(x_{1:T}|x_0)dx_{1:T}\\&=\int\log \frac{\frac{P(x_{0:T})}{q(x_{1:T}|x_0)}}{\frac{P(x_{1:T}|x_0)}{q(x_{1:T}|x_0)}}q(x_{1:T}|x_0)dx_{1:T}\\&=\int\left(\log \frac{P(x_{0:T})}{q(x_{1:T}|x_0)}-\log\frac{P(x_{1:T}|x_0)}{q(x_{1:T}|x_0)} \right) q(x_{1:T}|x_0)dx_{1:T}\\&=\int\log \frac{P(x_{0:T})}{q(x_{1:T}|x_0)}q(x_{1:T}|x_0)dx_{1:T}-\int\log\frac{P(x_{1:T}|x_0)}{q(x_{1:T}|x_0)}q(x_{1:T}|x_0)dx_{1:T}\\&=\underbrace{\int\log \frac{P(x_{0:T})}{q(x_{1:T}|x_0)}q(x_{1:T}|x_0)dx_{1:T}}_{①}+\underbrace{KL\big(q(x_{1:T}|x_0)\,\|\,P(x_{1:T}|x_0)\big)}_{②}\end{aligned}$$
Term ② is a KL divergence, but $P(x_{1:T}|x_0)$ is something we have no way to compute. We can, however, compute $q(x_{1:T}|x_0)$. Since a KL divergence is $\ge 0$, and equals 0 exactly when the two distributions coincide, we only need to make ② as small as possible. This gives an optimization target: minimize
$$\min KL\big(q(x_{1:T}|x_0)\,\|\,P(x_{1:T}|x_0)\big)$$
We also know that once the training data $x_0$ and the likelihood's parameters are given, the value of $\log P(x_0)$ is uniquely determined. For a fixed value, minimizing ② is equivalent to maximizing ①: because $\log P(x_0)$ is fixed, whenever ② shrinks, ① must grow,
$$\max \int\log\frac{P(x_{0:T})}{q(x_{1:T}|x_0)}\,q(x_{1:T}|x_0)\,dx_{1:T} \;\leftrightarrow\; \min KL\big(q(x_{1:T}|x_0)\,\|\,P(x_{1:T}|x_0)\big)$$
Since term ② is always $\ge 0$, it follows that $\log P(x_0)\ge ①$, which is why ① is called the variational lower bound.
So what we optimize is, in fact, not the likelihood itself but a variational lower bound on the likelihood; this is a forced compromise.
Expanding this variational lower bound:
$$\begin{aligned}\log P(x_0)&\ge \int\log \frac{P(x_{0:T})}{q(x_{1:T}|x_0)}q(x_{1:T}|x_0)dx_{1:T}\\&=\mathbb{E}_q\left[\log \frac{P(x_{0:T})}{q(x_{1:T}|x_0)}\right]\\&=\mathbb{E}_q\left[\log \frac{P(x_T)P(x_{0:T-1}|x_T)}{q(x_{1:T}|x_0)}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\log\frac{P(x_{0:T-1}|x_T)}{q(x_{1:T}|x_0)}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\log\frac{\prod_{t=1}^{T}P(x_{t-1}|x_t)}{\prod_{t=1}^T q(x_t|x_{t-1})}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\sum_{t=1}^T\log\frac{P(x_{t-1}|x_t)}{q(x_t|x_{t-1})}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\log\frac{P(x_{t-1}|x_t)}{\color{red}{q(x_t|x_{t-1})}}+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\end{aligned}\tag{7}$$
For the part in red, the Markov assumption and the conditional probability formula give
$$\frac{1}{q(x_t|x_{t-1})}=\frac{1}{q(x_t|x_{t-1},x_0)}=\frac{q(x_{t-1}|x_0)}{q(x_t,x_{t-1}|x_0)}=\frac{q(x_{t-1}|x_0)}{q(x_{t-1}|x_t,x_0)\,q(x_t|x_0)}$$
Substituting this into Eq. (7) gives
$$\begin{aligned}\log P(x_0)&\ge\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\left[\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+\log \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}\right]+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+\sum_{t=2}^T\log \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\\&=\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+{\color{red}\log\prod_{t=2}^T \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}}+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\end{aligned}\tag{8}$$
Expanding the product in the red part, the terms telescope:
$$\log\prod_{t=2}^T \frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}=\log\frac{q(x_1|x_0)}{q(x_2|x_0)}\cdot\frac{q(x_2|x_0)}{q(x_3|x_0)}\cdots\frac{q(x_{T-2}|x_0)}{q(x_{T-1}|x_0)}\cdot\frac{q(x_{T-1}|x_0)}{q(x_T|x_0)}=\log \frac{q(x_1|x_0)}{q(x_T|x_0)}$$
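The telescoping cancellation is easy to confirm numerically with arbitrary positive numbers standing in for the $q(x_t|x_0)$ values:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10
q = rng.uniform(0.1, 1.0, size=T + 1)  # q[t] stands in for q(x_t | x_0); q[0] is unused

# prod_{t=2}^{T} q[t-1] / q[t] collapses to q[1] / q[T]
prod = np.prod([q[t - 1] / q[t] for t in range(2, T + 1)])
print(prod, q[1] / q[T])               # identical up to floating-point error
```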
Substituting this into Eq. (8) gives
$$\begin{aligned}\log P(x_0)&\ge\mathbb{E}_q\left[\log P(x_T)+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+\log \frac{q(x_1|x_0)}{q(x_T|x_0)}+\log\frac{P(x_0|x_1)}{q(x_1|x_0)}\right]\\&=\mathbb{E}_q\left[{\color{blue}\log P(x_T)}+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+{\color{red}\log q(x_1|x_0)}-{\color{blue}\log q(x_T|x_0)}+{\color{red}\log\frac{P(x_0|x_1)}{q(x_1|x_0)}}\right]\end{aligned}$$
Using the rules of logarithms, combine the two blue terms into one (and likewise the two red terms):
$$\begin{aligned}\log P(x_0)&\ge\mathbb{E}_q\left[\log \frac{P(x_T)}{q(x_T|x_0)}+\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}+\log P(x_0|x_1)\right]\\&=\underbrace{\mathbb{E}_q\left[\log \frac{P(x_T)}{q(x_T|x_0)}\right]}_{①}+\underbrace{\mathbb{E}_q\left[\sum_{t=2}^T\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\right]}_{②}+\underbrace{\mathbb{E}_q\left[\log P(x_0|x_1)\right]}_{③}\end{aligned}\tag{9}$$
Looking back at Eq. (7), the $q$ in these expectations is $q(x_{1:T}|x_0)$. For term ① we can obtain
$$\mathbb{E}_q\left[\log \frac{P(x_T)}{q(x_T|x_0)}\right]=\mathbb{E}_{q(x_T|x_0)}\left[\log\frac{P(x_T)}{q(x_T|x_0)}\right]$$
This is because $q$ is $q(x_{1:T}|x_0)$, while the expression inside $\mathbb{E}_q\left[\log \frac{P(x_T)}{q(x_T|x_0)}\right]$ contains only the random variable $x_T$ and none of the others. Taking the expectation over the remaining variables $q(x_{1:T-1}|x_0)$ is therefore just taking the expectation of a constant: nothing changes. If this is not obvious, here is the full derivation for ①:
$$\begin{aligned}\mathbb{E}_q\left[\log \frac{P(x_T)}{q(x_T|x_0)}\right]&=\int_{x_{1:T}} q(x_{1:T}|x_0)\log \frac{P(x_T)}{q(x_T|x_0)}\,dx_{1:T}\\&=\int_{x_T}\int_{x_{T-1}}\cdots \int_{x_2}\underbrace{\int_{x_1} q(x_{1:T}|x_0)\log\frac{P(x_T)}{q(x_T|x_0)}\,dx_1}\,dx_2\cdots dx_T\\&=\int_{x_T}\int_{x_{T-1}}\cdots \int_{x_2}\underbrace{\log\frac{P(x_T)}{q(x_T|x_0)}\int_{x_1} q(x_{1:T}|x_0)\,dx_1}\,dx_2\cdots dx_T\\&=\int_{x_T}\int_{x_{T-1}}\cdots \int_{x_2}\log\left[\frac{P(x_T)}{q(x_T|x_0)}\right] q(x_{2:T}|x_0)\,dx_2\cdots dx_T\\&\;\;\vdots\\&=\int_{x_T}q(x_T|x_0)\log \frac{P(x_T)}{q(x_T|x_0)}\,dx_T\\&={\color{red}\mathbb{E}_{q(x_T|x_0)}\left[\log\frac{P(x_T)}{q(x_T|x_0)}\right]}\\&=-KL\big(q(x_T|x_0)\,\|\,P(x_T)\big)\end{aligned}$$
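This marginalization trick can also be checked by Monte Carlo on a toy chain with $T=2$: averaging any function of $x_2$ under the joint $q(x_1,x_2|x_0)$ agrees with averaging under the marginal $q(x_2|x_0)$. The $\alpha$ values and the test function `f` below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a1, a2 = 0.9, 0.8                      # toy alpha_1, alpha_2
x0 = 1.0

# Joint sampling: x1 ~ q(x1 | x0), then x2 ~ q(x2 | x1)
x1 = np.sqrt(a1) * x0 + np.sqrt(1 - a1) * rng.normal(size=n)
x2_joint = np.sqrt(a2) * x1 + np.sqrt(1 - a2) * rng.normal(size=n)

# Marginal sampling: x2 ~ q(x2 | x0) directly, Eq. (5) with alpha_bar = a1 * a2
x2_marg = np.sqrt(a1 * a2) * x0 + np.sqrt(1 - a1 * a2) * rng.normal(size=n)

f = lambda x: np.log1p(x ** 2)         # any function of x_2 alone
print(f(x2_joint).mean(), f(x2_marg).mean())  # agree up to Monte Carlo error
```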
Terms ② and ③ follow the same reasoning, so Eq. (9) becomes
$$\begin{aligned}\log P(x_0)&\ge \mathbb{E}_{q(x_1|x_0)}\left[\log P(x_0|x_1)\right]+\sum_{t=2}^T\mathbb{E}_{q(x_{t-1},x_t|x_0)}\left[\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\right]+\mathbb{E}_{q(x_T|x_0)}\left[\log\frac{P(x_T)}{q(x_T|x_0)}\right]\\&=\mathbb{E}_{q(x_1|x_0)}\left[\log P(x_0|x_1)\right]+\sum_{t=2}^T\mathbb{E}_{q(x_{t-1}|x_0,x_t)q(x_t|x_0)}\left[\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\right]+\mathbb{E}_{q(x_T|x_0)}\left[\log\frac{P(x_T)}{q(x_T|x_0)}\right]\\&=\mathbb{E}_{q(x_1|x_0)}\left[\log P(x_0|x_1)\right]+\sum_{t=2}^T\mathbb{E}_{q(x_t|x_0)}\int q(x_{t-1}|x_0,x_t)\log \frac{P(x_{t-1}|x_t)}{q(x_{t-1}|x_t,x_0)}\,dx_{t-1}+\int q(x_T|x_0)\log\frac{P(x_T)}{q(x_T|x_0)}\,dx_T\\&=\mathbb{E}_{q(x_1|x_0)}\left[\log P(x_0|x_1)\right]-\sum_{t=2}^T\mathbb{E}_{q(x_t|x_0)}\left[KL\big(q(x_{t-1}|x_t,x_0)\,\|\,P(x_{t-1}|x_t)\big)\right]-KL\big(q(x_T|x_0)\,\|\,P(x_T)\big)\end{aligned}\tag{10}$$
Eq. (10) is therefore the objective function we need to optimize.
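As a closing sanity check on the last term of Eq. (10), note that it contains no learnable parameters: with $P(x_T)=N(0,I)$, the KL can be evaluated in closed form. For a one-dimensional $x_0$ under the hypothetical linear schedule sketched earlier, using the standard identity $KL\big(N(m,s^2)\,\|\,N(0,1)\big)=\tfrac{1}{2}(s^2+m^2-1-\ln s^2)$, this term comes out effectively zero, which is why it is usually dropped during training:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # same hypothetical linear schedule as before
alpha_bar_T = np.prod(1.0 - betas)     # \bar\alpha_T

x0 = 1.0                               # a toy scalar "image"
m = np.sqrt(alpha_bar_T) * x0          # mean of q(x_T | x_0), Eq. (6)
var = 1.0 - alpha_bar_T                # variance of q(x_T | x_0)

# KL( N(m, var) || N(0, 1) ) for univariate Gaussians
kl = 0.5 * (var + m ** 2 - 1.0 - np.log(var))
print(kl)                              # ~ 2e-5: this term is negligible
```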