Lecture notes: C1W1 Sentiment Analysis with Logistic Regression
Table of Contents
- Preliminaries
- Import packages
- Load the data
- Process the tweet text
- Part 1: Logistic regression
- Part 1.1: Sigmoid
- Implement the sigmoid function
- Logistic regression: regression and a sigmoid
- Part 1.2 Cost function and Gradient
- Update the weights
- Instructions: Implement gradient descent function
- Part 2: Extracting the features
- Instructions: Implement the extract_features function.
- Part 3: Training Your Model
- Part 4: Test your logistic regression
- Check performance using the test set
- Part 5: Error Analysis
- Part 6: Predict with your own tweet
Preliminaries
Import packages
# run this cell to import nltk
import nltk
from os import getcwd
import w1_unittest
import numpy as np
import pandas as pd
from nltk.corpus import twitter_samples
from utils import process_tweet, build_freqs
Load the data
Load the positive and negative tweets separately. The corpus ships with 5,000 positive tweets, 5,000 negative tweets, and a third file that is a subset containing all 10,000 tweets.
If we used all three files, we would introduce duplicate positive and negative tweets.
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
Use 80% of each set for training and the remaining 20% for testing.
# split the data into two pieces, one for training and one for testing (validation set)
test_pos = all_positive_tweets[4000:]
train_pos = all_positive_tweets[:4000]
test_neg = all_negative_tweets[4000:]
train_neg = all_negative_tweets[:4000]
train_x = train_pos + train_neg
test_x = test_pos + test_neg
Create numpy arrays for the positive and negative labels.
# combine positive and negative labels
train_y = np.append(np.ones((len(train_pos), 1)), np.zeros((len(train_neg), 1)), axis=0)
test_y = np.append(np.ones((len(test_pos), 1)), np.zeros((len(test_neg), 1)), axis=0)
Build the sentiment frequency dictionary
# create frequency dictionary
freqs = build_freqs(train_x, train_y)
# check the output
print("type(freqs) = " + str(type(freqs)))
print("len(freqs) = " + str(len(freqs.keys())))
Output:
type(freqs) = <class 'dict'>
len(freqs) = 11436
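Each key of the freqs dictionary is a (word, label) tuple and each value is how often that word appears in tweets carrying that label. A minimal lookup sketch (the stemmed word 'happi' is only an illustrative key and may or may not exist in your dictionary):
# look up how often a stemmed word occurs with each sentiment label
print(freqs.get(('happi', 1.0), 0))  # count of 'happi' in positive tweets
print(freqs.get(('happi', 0.0), 0))  # count of 'happi' in negative tweets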
Process the tweet text
Tokenization, stop-word removal, stemming, and so on:
# test the function below
print('This is an example of a positive tweet: \n', train_x[0])
print('\nThis is an example of the processed version of the tweet: \n', process_tweet(train_x[0]))
Output:
This is an example of a positive tweet:
#FollowFriday @France_Inte @PKuchly57 @Milipol_Paris for being top engaged members in my community this week 😃
This is an example of the processed version of the tweet:
['followfriday', 'top', 'engag', 'member', 'commun', 'week', '😃']
Part 1: Logistic regression
Part 1.1: Sigmoid
The sigmoid function is defined as:
h(z) = \frac{1}{1+\exp^{-z}} \tag{1}
Its graph is the familiar S-shaped curve: it maps the input z to a value between 0 and 1, so the output can be interpreted as a probability.
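The original plot is not reproduced here; the short matplotlib sketch below (matplotlib is an extra dependency, not imported above) draws the same curve:
# plot the sigmoid curve over a range of z values
import matplotlib.pyplot as plt
z_vals = np.linspace(-8, 8, 200)
plt.plot(z_vals, 1 / (1 + np.exp(-z_vals)))  # sigmoid squashes z into (0, 1)
plt.xlabel('z')
plt.ylabel('h(z)')
plt.title('Sigmoid')
plt.show()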
Implement the sigmoid function
The function should work whether z is a scalar or an array.
def sigmoid(z):
    '''
    Input:
        z: is the input (can be a scalar or an array)
    Output:
        h: the sigmoid of z
    '''
    ### START CODE HERE ###
    # calculate the sigmoid of z
    h = 1 / (1 + np.exp(-z))
    ### END CODE HERE ###
    return h
# Testing your function
if (sigmoid(0) == 0.5):
    print('SUCCESS!')
else:
    print('Oops!')

if (sigmoid(4.92) == 0.9927537604041685):
    print('CORRECT!')
else:
    print('Oops again!')
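Because the implementation relies on NumPy broadcasting, the same function also accepts arrays; a quick extra check (not part of the original cells):
# sigmoid applied element-wise to an array input
print(sigmoid(np.array([-1.0, 0.0, 1.0])))  # roughly [0.2689 0.5 0.7311]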
Logistic regression: regression and a sigmoid
Logistic regression first applies a linear transformation (the regression part) and then passes the result through a sigmoid.
The linear part:
z = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_N x_N
Here the θ values are the weights. In deep learning, the weight vector is often written as w instead.
The logistic regression prediction:
h(z) = \frac{1}{1+\exp^{-z}}
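Putting the two parts together in code, a minimal sketch (theta_demo and the single feature row x_demo below are made-up values, purely for illustration):
# hypothetical weights and one feature row [bias, positive count, negative count]
theta_demo = np.array([[0.0], [0.001], [-0.001]])  # shape (3, 1)
x_demo = np.array([[1.0, 300.0, 50.0]])            # shape (1, 3)
z_demo = np.dot(x_demo, theta_demo)  # linear part (regression)
h_demo = sigmoid(z_demo)             # sigmoid turns it into a probability
print(h_demo)                        # a value in (0, 1)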
Part 1.2 Cost function and Gradient
The cost function used for logistic regression is the average of the log loss over all training examples:
J(\theta) = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)}\log (h(z(\theta)^{(i)})) + (1-y^{(i)})\log (1-h(z(\theta)^{(i)})) \right] \tag{5}
- m is the number of training examples
- y^(i) is the actual label of training example i.
- h(z^(i)) is the model's prediction for training example i.
The loss for a single training example is:
Loss = -1 \times \left( y^{(i)}\log (h(z(\theta)^{(i)})) + (1-y^{(i)})\log (1-h(z(\theta)^{(i)})) \right)
All the h values lie between 0 and 1, so the logarithms are negative. That is why the sum of the two loss terms is multiplied by -1.
When the model predicts 1 (h(z(θ)) = 1) and the label y is also 1, the loss for that training example is 0.
Likewise, when the model predicts 0 (h(z(θ)) = 0) and the actual label is also 0, the loss for that training example is 0.
However, when the model's prediction is close to 1 (h(z(θ)) = 0.9999) but the label is 0, the second term of the log loss becomes a large negative number, which the overall factor of -1 then turns into a large positive loss value.
-1 \times (1 - 0) \times \log(1 - 0.9999) \approx 9.2
The closer the model's prediction gets to 1 while the label is 0, the larger the loss.
# verify that when the model predicts close to 1, but the actual label is 0, the loss is a large positive value
-1 * (1 - 0) * np.log(1 - 0.9999) # loss is about 9.2
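The symmetric case behaves the same way: if the model predicts close to 0 but the actual label is 1, the first loss term blows up instead. A quick extra check (not in the original cell):
# when the prediction is close to 0 but the actual label is 1, the loss is also large
-1 * 1 * np.log(0.0001)  # also about 9.2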
Update the weights
To update the weight vector θ, apply gradient descent to iteratively improve the model's predictions.
The gradient of the cost function J with respect to a single weight θ_j is:
\nabla_{\theta_j}J(\theta) = \frac{1}{m} \sum_{i=1}^m(h^{(i)}-y^{(i)})x^{(i)}_j \tag{6}
- i indexes the m training examples.
- j indexes the weight θ_j, so x_j^(i) is the feature associated with weight θ_j.
To update the weight θ_j, use the formula:
\theta_j = \theta_j - \alpha \times \nabla_{\theta_j}J(\theta)
where α is the learning rate.
Instructions: Implement gradient descent function
The number of iterations num_iters is the number of passes over the entire training set.
On each iteration, use all m training examples and compute the cost function over all features.
Update all the weights in the column vector θ at once, rather than one weight θ_i at a time:
\mathbf{\theta} = \begin{pmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{pmatrix}
θ has dimensions (n+1, 1), where n is the number of features and the extra element is the bias term θ_0 (note that the corresponding feature value x_0 is 1).
The linear part of logistic regression is obtained directly by multiplying the feature matrix by the weight vector:
z = \mathbf{x}\mathbf{\theta}
- x has dimensions (m, n+1)
- θ has dimensions (n+1, 1)
- z has dimensions (m, 1)
The prediction h is computed by applying the sigmoid to each element of z:
h(z) = sigmoid(z), which has dimensions (m, 1).
The cost function J is computed from the dot products of the vectors y and log(h). Since y and h are both column vectors of shape (m, 1), transpose the vector on the left so that multiplying a row vector by a column vector yields the dot product.
J = \frac{-1}{m} \times \left(\mathbf{y}^T \cdot log(\mathbf{h}) + \mathbf{(1-y)}^T \cdot log(\mathbf{1-h}) \right)
The update of theta is also vectorized. Because x has dimensions (m, n+1) while h and y are both (m, 1), we transpose x and place it on the left so that the matrix multiplication yields the (n+1, 1) result we need:
\mathbf{\theta} = \mathbf{\theta} - \frac{\alpha}{m} \times \left( \mathbf{x}^T \cdot \left( \mathbf{h-y} \right) \right)
Hints:
Use numpy.dot for matrix multiplication.
To make sure the fraction -1/m is a decimal value, cast the numerator or denominator (or both) to float, e.g. float(1), or write it as 1. (the float version of 1).
The implementation:
# UNQ_C2 GRADED FUNCTION: gradientDescent
def gradientDescent(x, y, theta, alpha, num_iters):
    '''
    Input:
        x: matrix of features which is (m,n+1)
        y: corresponding labels of the input matrix x, dimensions (m,1)
        theta: weight vector of dimension (n+1,1)
        alpha: learning rate
        num_iters: number of iterations you want to train your model for
    Output:
        J: the final cost
        theta: your final weight vector
    Hint: you might want to print the cost to make sure that it is going down.
    '''
    ### START CODE HERE ###
    # get 'm', the number of rows in matrix x
    m = x.shape[0]

    for i in range(0, num_iters):
        # get z, the dot product of x and theta
        z = np.dot(x, theta)
        # get the sigmoid of z
        h = sigmoid(z)
        # calculate the cost function
        J = (-1/m) * (np.matmul(np.transpose(y), np.log(h)) + np.matmul(np.transpose(1 - y), np.log(1 - h)))
        # update the weights theta
        theta = theta - (alpha/m) * np.dot(np.transpose(x), (h - y))
    ### END CODE HERE ###
    J = float(J)
    return J, theta
# Check the function
# Construct a synthetic test case using numpy PRNG functions
np.random.seed(1)
# X input is 10 x 3 with ones for the bias terms
tmp_X = np.append(np.ones((10, 1)), np.random.rand(10, 2) * 2000, axis=1)
# Y Labels are 10 x 1
tmp_Y = (np.random.rand(10, 1) > 0.35).astype(float)
# Apply gradient descent
tmp_J, tmp_theta = gradientDescent(tmp_X, tmp_Y, np.zeros((3, 1)), 1e-8, 700)
print(f"The cost after training is {tmp_J:.8f}.")
print(f"The resulting vector of weights is {[round(t, 8) for t in np.squeeze(tmp_theta)]}")
Part 2: Extracting the features
- Given a list of tweets, extract the features and store them in a matrix. Two features are extracted.
- The first feature is the number of positive words in the tweet.
- The second feature is the number of negative words in the tweet.
- Then train a logistic regression classifier on these features.
- Finally, test the classifier on a validation set.
Instructions: Implement the extract_features function.
- The function takes in a single tweet.
- Process the tweet with the imported process_tweet function and save the list of tweet words.
- Loop through each word in the processed word list.
- For each word, look up its count in the freqs dictionary when it carries the positive label 1 (check the key (word, 1.0)).
- Do the same for the count when the word is associated with the negative label 0 (check the key (word, 0.0)).
# UNQ_C3 GRADED FUNCTION: extract_features
def extract_features(tweet, freqs, process_tweet=process_tweet):
    '''
    Input:
        tweet: a string containing one tweet
        freqs: a dictionary corresponding to the frequencies of each tuple (word, label)
    Output:
        x: a feature vector of dimension (1,3)
    '''
    # process_tweet tokenizes, stems, and removes stopwords
    word_l = process_tweet(tweet)

    # 3 elements in the form of a 1 x 3 vector
    x = np.zeros((1, 3))

    # bias term is set to 1
    x[0, 0] = 1

    ### START CODE HERE ###
    # loop through each word in the list of words
    for word in word_l:
        if (word, 1.0) in freqs.keys():
            # increment the word count for the positive label 1
            x[0, 1] += freqs[(word, 1.0)]
        if (word, 0.0) in freqs.keys():
            # increment the word count for the negative label 0
            x[0, 2] += freqs[(word, 0.0)]
    ### END CODE HERE ###

    assert(x.shape == (1, 3))
    return x
# test on training data
tmp1 = extract_features(train_x[0], freqs)
print(tmp1)
Output:
[[1.000e+00 3.133e+03 6.100e+01]]
Question: why does the output contain decimals? (Hint: x is created with np.zeros, whose default dtype is float64, so the integer counts are stored and printed as floats.)
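As an extra sanity check one can add, words that never appear in the freqs dictionary should leave both counts at zero (the nonsense words below are made up):
# words not found in the freqs dictionary contribute nothing to the counts
tmp2 = extract_features('blorb bleeeeb bloooob', freqs)
print(tmp2)  # expected: [[1. 0. 0.]]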
Part 3: Training Your Model
- Stack the features of all training examples into a matrix X.
- Call the gradientDescent function implemented above.
# collect the features 'x' and stack them into a matrix 'X'
X = np.zeros((len(train_x), 3))
for i in range(len(train_x)):
    X[i, :] = extract_features(train_x[i], freqs)

# training labels corresponding to X
Y = train_y

# Apply gradient descent
J, theta = gradientDescent(X, Y, np.zeros((3, 1)), 1e-9, 1500)
print(f"The cost after training is {J:.8f}.")
print(f"The resulting vector of weights is {[round(t, 8) for t in np.squeeze(theta)]}")
Part 4: Test your logistic regression
Write a predict_tweet function that predicts whether a tweet is positive or negative.
- Given a tweet, process it and extract the features.
- Apply the model's learned weights to the features to get the logits.
- Apply the sigmoid to the logits to get the prediction (a value between 0 and 1).
# UNQ_C4 GRADED FUNCTION: predict_tweet
def predict_tweet(tweet, freqs, theta):
    '''
    Input:
        tweet: a string
        freqs: a dictionary corresponding to the frequencies of each tuple (word, label)
        theta: (3,1) vector of weights
    Output:
        y_pred: the probability of a tweet being positive or negative
    '''
    ### START CODE HERE ###
    # extract the features of the tweet and store it into x
    x = extract_features(tweet, freqs)

    # make the prediction using x and theta
    y_pred = sigmoid(np.dot(x, theta))
    ### END CODE HERE ###

    return y_pred
Test the function:
# Run this cell to test your function
for tweet in ['I am happy', 'I am bad', 'this movie should have been great.', 'great', 'great great', 'great great great', 'great great great great']:
    print('%s -> %f' % (tweet, predict_tweet(tweet, freqs, theta)))
Output:
I am happy -> 0.519275
I am bad -> 0.494347
this movie should have been great. -> 0.515979
great -> 0.516065
great great -> 0.532096
great great great -> 0.548062
great great great great -> 0.563929
Check performance using the test set
Implement the function test_logistic_regression:
- Given the test data and the weights of the trained model, compute the accuracy of the logistic regression model.
- Use the predict_tweet function to make a prediction for every tweet in the test set.
- If the prediction is greater than 0.5, set the model's classification y_hat to 1, otherwise set it to 0.
- A prediction is correct when y_hat equals test_y. Sum up all the matching instances and divide by m.
# UNQ_C5 GRADED FUNCTION: test_logistic_regression
def test_logistic_regression(test_x, test_y, freqs, theta, predict_tweet=predict_tweet):
    """
    Input:
        test_x: a list of tweets
        test_y: (m, 1) vector with the corresponding labels for the list of tweets
        freqs: a dictionary with the frequency of each pair (or tuple)
        theta: weight vector of dimension (3, 1)
    Output:
        accuracy: (# of tweets classified correctly) / (total # of tweets)
    """
    ### START CODE HERE ###
    # the list for storing predictions
    y_hat = []

    for tweet in test_x:
        # get the label prediction for the tweet
        y_pred = predict_tweet(tweet, freqs, theta)

        if y_pred > 0.5:
            # append 1.0 to the list
            y_hat.append(1.0)
        else:
            # append 0 to the list
            y_hat.append(0.0)

    # y_hat is a Python list while test_y is an (m,1) numpy array,
    # so simply count the matching predictions and divide by the number of examples
    accuracy = 0
    for i in range(test_y.shape[0]):
        if test_y[i] == y_hat[i]:
            accuracy += 1
    accuracy = np.float64(accuracy / test_y.shape[0])
    ### END CODE HERE ###
    return accuracy
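A usage sketch (the exact accuracy depends on your trained theta, so no number is quoted here):
# compute accuracy on the held-out test set
tmp_accuracy = test_logistic_regression(test_x, test_y, freqs, theta)
print(f"Logistic regression model's accuracy = {tmp_accuracy:.4f}")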
Part 5: Error Analysis
Questions to think about:
Why do classification errors occur? Which kinds of tweets does the model misclassify?
# Some error analysis done for you
print('Label Predicted Tweet')
for x, y in zip(test_x, test_y):
    # iterate over the test set; test_x is the list of tweets, test_y the corresponding labels,
    # and zip pairs them up so each iteration yields one (tweet, label) pair
    y_hat = predict_tweet(x, freqs, theta)
    # predict the sentiment of tweet x using the frequency dictionary freqs and the trained weights theta
    # (predict_tweet was implemented in Part 4 and returns a score between 0 and 1)

    if np.abs(y - (y_hat > 0.5)) > 0:
        # compare the actual label y with the predicted label (1 if y_hat > 0.5, otherwise 0);
        # if they differ, the tweet was misclassified and we print it below
        print('THE TWEET IS:', x)
        # the original tweet
        print('THE PROCESSED TWEET IS:', process_tweet(x))
        # the tweet after preprocessing (tokenization, stop-word removal, stemming, etc.)
        print('%d\t%0.8f\t%s' % (y, y_hat, ' '.join(process_tweet(x)).encode('ascii', 'ignore')))
        # print the actual label (%d), the predicted score with 8 decimal places (%0.8f),
        # and the processed tweet joined into a single string and encoded as ASCII,
        # dropping any characters (such as emoji) that cannot be encoded
Part 6: Predict with your own tweet
# Feel free to change the tweet below
my_tweet = 'I won!'
print(process_tweet(my_tweet))
y_hat = predict_tweet(my_tweet, freqs, theta)
print(y_hat)
if y_hat > 0.5:
    print('Positive sentiment')
else:
    print('Negative sentiment')