Note: this article is sourced from the professional AI community Venus AI.
For more AI content, see the original site ([www.aideeplearning.cn])
Background
- The importance of transmission lines: Transmission lines are critical to the power system, carrying electricity from generation sources to the distribution network. Modern society's demand for reliable power has grown exponentially, making the efficiency of transmission lines ever more important.
- The challenge: Power systems are made up of complex, dynamic, and interacting elements, and are prone to disturbances and electrical faults. Detecting and classifying these faults quickly and accurately is essential for maintaining system stability and preventing outages.
- The role of artificial neural networks (ANNs): ANNs are recognized for their potential to identify fault patterns and classify faults through pattern recognition. Their properties, including normalization, generalization, noise immunity, robustness, and fault tolerance, make them well suited to fault detection in power systems.
Project Objectives
- Fault detection and classification: Develop an ANN-based algorithm capable of detecting and classifying faults in a transmission system.
- Improved system reliability: Ensure the fault-detection method remains effective under varying system conditions and grid parameters.
- Performance evaluation: Test the algorithm by simulating different fault types and analyzing the validity of the results.
Project Applications
- Power system protection: Enhance the operation of protective equipment by reducing the time needed to detect and clear faults.
- Pattern recognition in power systems: Apply pattern-recognition techniques to distinguish faulted from healthy states, and to identify which phases of a faulted three-phase power system are affected.
Dataset
- Simulation environment: The power system is modeled in MATLAB for fault analysis. The model includes four generators (11,000 V each), transformers, and transmission lines.
- Data collection: Line voltages and line currents were recorded under normal operation and under various fault conditions.
- Data points: Approximately 12,000 data points were collected and labeled.
The dataset contains the following columns, each representing a specific measurement or parameter from the simulated power system:
- G (generator status): Indicates the operating state of each of the four generators, e.g. whether a generator was active or inactive during a given simulation scenario.
- C (condition): Represents the overall condition of the power system at the time the data was recorded. This may include labels such as "normal", "fault", or a specific fault type, distinguishing normal-operation data from data collected under fault conditions.
- B (circuit breaker status): Reflects the state of the circuit breakers in the system, i.e. whether each breaker is open or closed, which is essential for understanding how the system responds to different fault scenarios.
- A (area): Identifies where in the power system the data was collected, such as different points along a transmission line or at individual transformers. This is essential for spatial analysis of faults.
- Ia, Ib, Ic (line currents of phases A, B, C): Line-current measurements for each of the three phases. Changes in these currents can indicate fault conditions, and they are key parameters for fault detection and analysis.
- Va, Vb, Vc (line voltages of phases A, B, C): Voltage measurements for each phase, analogous to the current columns. Voltage data is essential for identifying and classifying fault types and for assessing the overall health of the power system.
Code Implementation
# Import dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import models
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn import metrics
from sklearn.metrics import accuracy_score, classification_report
# Load the datasets
detection = pd.read_csv('detect_dataset.csv')
df_class = pd.read_csv('classData.csv')
Power Fault Detection -- Binary Classification
Data Preprocessing
detection.head()
Output (S) Ia Ib Ic Va Vb Vc Unnamed: 7 Unnamed: 8
0 0 -170.472196 9.219613 161.252583 0.054490 -0.659921 0.605431 NaN NaN
1 0 -122.235754 6.168667 116.067087 0.102000 -0.628612 0.526202 NaN NaN
2 0 -90.161474 3.813632 86.347841 0.141026 -0.605277 0.464251 NaN NaN
3 0 -79.904916 2.398803 77.506112 0.156272 -0.602235 0.445963 NaN NaN
4 0 -63.885255 0.590667 63.294587 0.180451 -0.591501 0.411050 NaN NaN
# Drop the two all-NaN 'Unnamed' columns
detection.drop(detection.iloc[:,[7,8]], axis=1, inplace=True)
detection.shape
(12001, 7)
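Dropping columns by integer position works for this file, but it silently breaks if the column order ever changes. A position-independent alternative (a sketch on a toy frame that mimics the CSV layout above) is to drop any column that is entirely NaN:

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the detect_dataset.csv layout: two trailing all-NaN columns
df = pd.DataFrame({
    'Output (S)': [0, 1],
    'Ia': [-170.5, 120.3],
    'Unnamed: 7': [np.nan, np.nan],
    'Unnamed: 8': [np.nan, np.nan],
})

# Drop every column that contains only NaN values, regardless of position
df = df.dropna(axis=1, how='all')
print(list(df.columns))  # → ['Output (S)', 'Ia']
```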
Data Visualization
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].scatter(detection['Ia'], detection['Va'])
axs[0].set_xlabel("Ia")
axs[0].set_ylabel("Va")
axs[1].scatter(detection['Ib'], detection['Vb'], color = "red")
axs[1].set_xlabel("Ib")
axs[1].set_ylabel("Vb")
axs[2].scatter(detection['Ic'], detection['Vc'], color = "green")
axs[2].set_xlabel("Ic")
axs[2].set_ylabel("Vc")
plt.tight_layout()
plt.show()
# Plot the fault and no-fault data separately
fault = detection[detection['Output (S)'] == 1]
no_fault = detection[detection['Output (S)'] == 0]
fig, axs = plt.subplots(3, 2, figsize=(10, 10))
axs[0, 0].scatter(no_fault['Ia'], no_fault['Va'])
axs[0, 0].set_xlabel("Ia")
axs[0, 0].set_ylabel("Va")
axs[0, 0].set_title("No Fault")
axs[0, 1].scatter(fault['Ia'], fault['Va'])
axs[0, 1].set_xlabel("Ia")
axs[0, 1].set_ylabel("Va")
axs[0, 1].set_title("Fault")
axs[1, 0].scatter(no_fault['Ib'], no_fault['Vb'], color = "red")
axs[1, 0].set_xlabel("Ib")
axs[1, 0].set_ylabel("Vb")
axs[1, 0].set_title("No Fault")
axs[1, 1].scatter(fault['Ib'], fault['Vb'], color = "red")
axs[1, 1].set_xlabel("Ib")
axs[1, 1].set_ylabel("Vb")
axs[1, 1].set_title("Fault")
axs[2, 0].scatter(no_fault['Ic'], no_fault['Vc'], color = "green")
axs[2, 0].set_xlabel("Ic")
axs[2, 0].set_ylabel("Vc")
axs[2, 0].set_title("No Fault")
axs[2, 1].scatter(fault['Ic'], fault['Vc'], color = "green")
axs[2, 1].set_xlabel("Ic")
axs[2, 1].set_ylabel("Vc")
axs[2, 1].set_title("Fault")
plt.tight_layout()
plt.show()
# Check for null values
detection.isna().sum()
Output (S) 0
Ia 0
Ib 0
Ic 0
Va 0
Vb 0
Vc 0
dtype: int64
Building the Model
# Separate the features and the target
y = detection.iloc[:,0]
X = detection.iloc[:,1:7]
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
# Build the neural network model
detection_model = models.Sequential()
detection_model.add(keras.layers.Dense(6,
                                       input_shape=(6,),
                                       name='Input_layer',
                                       activation='relu'))
detection_model.add(keras.layers.Dense(16,
                                       name='Hidden_layer1',
                                       activation='relu'))
detection_model.add(keras.layers.Dense(1,
                                       name='Output_layer',
                                       activation='sigmoid'))
detection_model.compile(optimizer=keras.optimizers.Adam(),
                        loss=keras.losses.binary_crossentropy,
                        metrics=[keras.metrics.binary_accuracy])
# Train the NN model
detection_model.fit(X_train, y_train, epochs=15)
Epoch 1/15
300/300 [==============================] - 1s 977us/step - loss: 2.0507 - binary_accuracy: 0.6274
Epoch 2/15
300/300 [==============================] - 0s 945us/step - loss: 0.2647 - binary_accuracy: 0.9384
Epoch 3/15
300/300 [==============================] - 0s 918us/step - loss: 0.1595 - binary_accuracy: 0.9759
Epoch 4/15
300/300 [==============================] - 0s 927us/step - loss: 0.1106 - binary_accuracy: 0.9831
Epoch 5/15
300/300 [==============================] - 0s 980us/step - loss: 0.0885 - binary_accuracy: 0.9845
Epoch 6/15
300/300 [==============================] - 0s 878us/step - loss: 0.0759 - binary_accuracy: 0.9848
Epoch 7/15
300/300 [==============================] - 0s 930us/step - loss: 0.0688 - binary_accuracy: 0.9850
Epoch 8/15
300/300 [==============================] - 0s 884us/step - loss: 0.0638 - binary_accuracy: 0.9861
Epoch 9/15
300/300 [==============================] - 0s 964us/step - loss: 0.0601 - binary_accuracy: 0.9864
Epoch 10/15
300/300 [==============================] - 0s 928us/step - loss: 0.0577 - binary_accuracy: 0.9869
Epoch 11/15
300/300 [==============================] - 0s 969us/step - loss: 0.0558 - binary_accuracy: 0.9873
Epoch 12/15
300/300 [==============================] - 0s 920us/step - loss: 0.0538 - binary_accuracy: 0.9875
Epoch 13/15
300/300 [==============================] - 0s 1ms/step - loss: 0.0515 - binary_accuracy: 0.9882
Epoch 14/15
300/300 [==============================] - 0s 959us/step - loss: 0.0511 - binary_accuracy: 0.9883
Epoch 15/15
300/300 [==============================] - 0s 913us/step - loss: 0.0493 - binary_accuracy: 0.9887
<keras.src.callbacks.History at 0x1f18102bcd0>
# Predict on the test set and threshold the sigmoid outputs at 0.5
y_pred = detection_model.predict(X_test)
y_pred = np.where(y_pred>0.5, 1, 0)
76/76 [==============================] - 0s 734us/step
Model Performance
print(classification_report(y_test, y_pred))
precision recall f1-score support
0 0.98 1.00 0.99 1297
1 1.00 0.98 0.99 1104
accuracy 0.99 2401
macro avg 0.99 0.99 0.99 2401
weighted avg 0.99 0.99 0.99 2401
print(f'Accuracy Score : {accuracy_score(y_test, y_pred)*100:.02f}%')
Accuracy Score : 99.04%
# Confusion matrix
fig, ax = plt.subplots()
class_names=[0,1]
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
cnf_matrix = metrics.confusion_matrix(y_test, y_pred)
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
Text(0.5, 427.9555555555555, 'Predicted label')
Power Fault Detection -- Multiclass Classification
Data Preprocessing
df_class.head()
G C B A Ia Ib Ic Va Vb Vc
0 1 0 0 1 -151.291812 -9.677452 85.800162 0.400750 -0.132935 -0.267815
1 1 0 0 1 -336.186183 -76.283262 18.328897 0.312732 -0.123633 -0.189099
2 1 0 0 1 -502.891583 -174.648023 -80.924663 0.265728 -0.114301 -0.151428
3 1 0 0 1 -593.941905 -217.703359 -124.891924 0.235511 -0.104940 -0.130570
4 1 0 0 1 -643.663617 -224.159427 -132.282815 0.209537 -0.095554 -0.113983
df_class.shape
(7861, 10)
Output: [G C B A]
[0 0 0 0] - No fault
[1 0 0 1] - LG fault (between phase A and ground)
[0 0 1 1] - LL fault (between phases A and B)
[1 0 1 1] - LLG fault (between phases A, B, and ground)
[0 1 1 1] - LLL fault (between all three phases)
[1 1 1 1] - LLLG fault (symmetrical three-phase fault)
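The legend above can be turned into a small lookup table. This is an illustrative sketch, not part of the original notebook: `FAULT_NAMES` and `decode_fault` are hypothetical helpers, and any code not in the legend falls back to the raw four-character string:

```python
# Fault-code legend from the table above (the G, C, B, A flags concatenated)
FAULT_NAMES = {
    '0000': 'No Fault',
    '1001': 'LG Fault',
    '0011': 'LL Fault',
    '1011': 'LLG Fault',
    '0111': 'LLL Fault',
    '1111': 'LLLG Fault',
}

def decode_fault(g, c, b, a):
    """Concatenate the G/C/B/A flags and look up a readable fault name."""
    code = f'{g}{c}{b}{a}'
    return FAULT_NAMES.get(code, code)  # unknown codes fall back to the raw code

print(decode_fault(1, 0, 0, 1))  # → 'LG Fault'
print(decode_fault(0, 0, 0, 0))  # → 'No Fault'
```

Note that the dataset itself contains the code '0110' (see the `unique()` output below), which is not listed in the legend; the fallback keeps such codes visible rather than mislabeling them.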
# Create a new column encoding the fault type as a four-character string
df_class['faultType'] = (df_class['G'].astype(str) +
                         df_class['C'].astype(str) +
                         df_class['B'].astype(str) +
                         df_class['A'].astype(str))
df_class.head()
G C B A Ia Ib Ic Va Vb Vc faultType
0 1 0 0 1 -151.291812 -9.677452 85.800162 0.400750 -0.132935 -0.267815 1001
1 1 0 0 1 -336.186183 -76.283262 18.328897 0.312732 -0.123633 -0.189099 1001
2 1 0 0 1 -502.891583 -174.648023 -80.924663 0.265728 -0.114301 -0.151428 1001
3 1 0 0 1 -593.941905 -217.703359 -124.891924 0.235511 -0.104940 -0.130570 1001
4 1 0 0 1 -643.663617 -224.159427 -132.282815 0.209537 -0.095554 -0.113983 1001
df_class["faultType"].unique()
array(['1001', '1011', '0110', '0111', '1111', '0000'], dtype=object)
Data Visualization
# Class frequencies; note the label list must follow the ordering of value_counts()
values = list(df_class["faultType"].value_counts())
faults = ['No Fault', 'LLG Fault', 'LLLG Fault', 'LG Fault', 'LLL Fault', 'LL Fault']
plt.bar(faults, values, width=0.6)
<BarContainer object of 6 artists>
plt.pie(df_class['faultType'].value_counts(), autopct='%1.1f%%', labels=faults)
plt.show()
Building the Model
# Features
x = df_class.iloc[:,4:10]
# Target: encode the fault-type strings as integers, then as one-hot vectors
y = df_class['faultType']
enc = LabelEncoder()
y = enc.fit_transform(y)
y = keras.utils.to_categorical(y, 6)
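It helps to see what `LabelEncoder` and `to_categorical` actually produce for the six code strings. A minimal NumPy-only sketch: `np.unique` with `return_inverse=True` mirrors `LabelEncoder` (classes are sorted lexicographically, and the inverse indices are the integer labels), and indexing an identity matrix mirrors `to_categorical`:

```python
import numpy as np

# The six fault-code strings seen in df_class['faultType']
codes = np.array(['1001', '1011', '0110', '0111', '1111', '0000'])

# np.unique mirrors LabelEncoder: classes come out sorted, ids index into them
classes, ids = np.unique(codes, return_inverse=True)
print(classes.tolist())  # → ['0000', '0110', '0111', '1001', '1011', '1111']
print(ids.tolist())      # → [3, 4, 1, 2, 5, 0]

# Equivalent of keras.utils.to_categorical: one-hot rows from an identity matrix
one_hot = np.eye(len(classes))[ids]
print(one_hot[0].tolist())  # → [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```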
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20)
class_model = keras.models.Sequential()
class_model.add(keras.layers.Dense(128,
                                   input_shape=(6,),
                                   name='Input_layer',
                                   activation='relu'))
class_model.add(keras.layers.Dense(240,
                                   name='Hidden_layer1',
                                   activation='relu'))
class_model.add(keras.layers.Dense(240,
                                   name='Hidden_layer2',
                                   activation='tanh'))
class_model.add(keras.layers.Dense(6,
                                   name='output_layer',
                                   activation='softmax'))
class_model.compile(
    loss='categorical_crossentropy',
    metrics=['accuracy']  # no optimizer given, so Keras uses its default ('rmsprop')
)
class_model.fit(x_train, y_train, epochs=50, batch_size=64, validation_split=0.2)
Epoch 1/50
79/79 [==============================] - 1s 4ms/step - loss: 1.1702 - accuracy: 0.5807 - val_loss: 1.0718 - val_accuracy: 0.5890
Epoch 2/50
79/79 [==============================] - 0s 2ms/step - loss: 0.9922 - accuracy: 0.6412 - val_loss: 0.9843 - val_accuracy: 0.6455
Epoch 3/50
79/79 [==============================] - 0s 2ms/step - loss: 0.9151 - accuracy: 0.6598 - val_loss: 0.9624 - val_accuracy: 0.6216
Epoch 4/50
79/79 [==============================] - 0s 2ms/step - loss: 0.8811 - accuracy: 0.6610 - val_loss: 0.8986 - val_accuracy: 0.5946
Epoch 5/50
79/79 [==============================] - 0s 2ms/step - loss: 0.8151 - accuracy: 0.6843 - val_loss: 0.8542 - val_accuracy: 0.6550
Epoch 6/50
79/79 [==============================] - 0s 2ms/step - loss: 0.7482 - accuracy: 0.7000 - val_loss: 1.0069 - val_accuracy: 0.5485
Epoch 7/50
79/79 [==============================] - 0s 2ms/step - loss: 0.6781 - accuracy: 0.7252 - val_loss: 0.6106 - val_accuracy: 0.7361
Epoch 8/50
79/79 [==============================] - 0s 2ms/step - loss: 0.6297 - accuracy: 0.7378 - val_loss: 0.5969 - val_accuracy: 0.7591
Epoch 9/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5815 - accuracy: 0.7567 - val_loss: 0.5684 - val_accuracy: 0.7695
Epoch 10/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5764 - accuracy: 0.7553 - val_loss: 0.5504 - val_accuracy: 0.7377
Epoch 11/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5567 - accuracy: 0.7604 - val_loss: 0.5000 - val_accuracy: 0.7830
Epoch 12/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5384 - accuracy: 0.7672 - val_loss: 0.5533 - val_accuracy: 0.7496
Epoch 13/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5350 - accuracy: 0.7728 - val_loss: 0.4889 - val_accuracy: 0.7846
Epoch 14/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5124 - accuracy: 0.7660 - val_loss: 0.5275 - val_accuracy: 0.7599
Epoch 15/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5131 - accuracy: 0.7708 - val_loss: 0.5131 - val_accuracy: 0.7456
Epoch 16/50
79/79 [==============================] - 0s 2ms/step - loss: 0.5079 - accuracy: 0.7724 - val_loss: 0.5055 - val_accuracy: 0.7623
Epoch 17/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4781 - accuracy: 0.7815 - val_loss: 0.5120 - val_accuracy: 0.7719
Epoch 18/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4848 - accuracy: 0.7732 - val_loss: 0.4581 - val_accuracy: 0.7957
Epoch 19/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4703 - accuracy: 0.7867 - val_loss: 0.5137 - val_accuracy: 0.7424
Epoch 20/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4674 - accuracy: 0.7869 - val_loss: 0.5217 - val_accuracy: 0.7512
Epoch 21/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4635 - accuracy: 0.7849 - val_loss: 0.4628 - val_accuracy: 0.7933
Epoch 22/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4618 - accuracy: 0.7873 - val_loss: 0.4528 - val_accuracy: 0.8045
Epoch 23/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4564 - accuracy: 0.7903 - val_loss: 0.4700 - val_accuracy: 0.7814
Epoch 24/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4549 - accuracy: 0.7922 - val_loss: 0.4851 - val_accuracy: 0.7893
Epoch 25/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4494 - accuracy: 0.7905 - val_loss: 0.4289 - val_accuracy: 0.7822
Epoch 26/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4493 - accuracy: 0.7913 - val_loss: 0.4805 - val_accuracy: 0.7838
Epoch 27/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4351 - accuracy: 0.7970 - val_loss: 0.5136 - val_accuracy: 0.7822
Epoch 28/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4352 - accuracy: 0.7950 - val_loss: 0.4145 - val_accuracy: 0.7878
Epoch 29/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4265 - accuracy: 0.7994 - val_loss: 0.4031 - val_accuracy: 0.8108
Epoch 30/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4187 - accuracy: 0.8036 - val_loss: 0.4390 - val_accuracy: 0.7790
Epoch 31/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4201 - accuracy: 0.7913 - val_loss: 0.4498 - val_accuracy: 0.8029
Epoch 32/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4107 - accuracy: 0.8028 - val_loss: 0.4140 - val_accuracy: 0.7981
Epoch 33/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4176 - accuracy: 0.8046 - val_loss: 0.4122 - val_accuracy: 0.8029
Epoch 34/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4027 - accuracy: 0.8034 - val_loss: 0.3649 - val_accuracy: 0.8172
Epoch 35/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4072 - accuracy: 0.8016 - val_loss: 0.5376 - val_accuracy: 0.7361
Epoch 36/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4193 - accuracy: 0.7930 - val_loss: 0.3750 - val_accuracy: 0.8092
Epoch 37/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3911 - accuracy: 0.8113 - val_loss: 0.3858 - val_accuracy: 0.8100
Epoch 38/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3897 - accuracy: 0.8129 - val_loss: 0.3601 - val_accuracy: 0.8140
Epoch 39/50
79/79 [==============================] - 0s 2ms/step - loss: 0.4003 - accuracy: 0.8054 - val_loss: 0.3973 - val_accuracy: 0.8116
Epoch 40/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3877 - accuracy: 0.8141 - val_loss: 0.4003 - val_accuracy: 0.8140
Epoch 41/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3887 - accuracy: 0.8135 - val_loss: 0.3980 - val_accuracy: 0.7989
Epoch 42/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3887 - accuracy: 0.8105 - val_loss: 0.3622 - val_accuracy: 0.8172
Epoch 43/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3902 - accuracy: 0.8087 - val_loss: 0.3714 - val_accuracy: 0.8140
Epoch 44/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3800 - accuracy: 0.8054 - val_loss: 0.4569 - val_accuracy: 0.7989
Epoch 45/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3788 - accuracy: 0.8121 - val_loss: 0.3890 - val_accuracy: 0.8037
Epoch 46/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3760 - accuracy: 0.8175 - val_loss: 0.3601 - val_accuracy: 0.8116
Epoch 47/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3663 - accuracy: 0.8177 - val_loss: 0.3666 - val_accuracy: 0.7941
Epoch 48/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3760 - accuracy: 0.8119 - val_loss: 0.3720 - val_accuracy: 0.8021
Epoch 49/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3749 - accuracy: 0.8219 - val_loss: 0.3885 - val_accuracy: 0.8164
Epoch 50/50
79/79 [==============================] - 0s 2ms/step - loss: 0.3639 - accuracy: 0.8262 - val_loss: 0.4709 - val_accuracy: 0.7734
<keras.src.callbacks.History at 0x1f1838f8be0>
y_pred_prob = class_model.predict(x_test)
y_pred = np.argmax(y_pred_prob, axis=1)
y_test = np.argmax(y_test,axis=1)
50/50 [==============================] - 0s 963us/step
Model Performance
print(classification_report(y_test, y_pred, zero_division=0))
precision recall f1-score support
0 0.99 1.00 0.99 465
1 0.79 0.62 0.70 217
2 0.43 0.89 0.58 209
3 0.93 0.98 0.95 243
4 0.85 0.93 0.89 212
5 0.00 0.00 0.00 227
accuracy 0.78 1573
macro avg 0.66 0.74 0.68 1573
weighted avg 0.71 0.78 0.73 1573
print(f'Accuracy Score : {accuracy_score(y_test, y_pred)*100:.02f}%')
Accuracy Score : 77.81%
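The zero precision and recall on class 5 are easiest to diagnose with a confusion matrix, as was done for the binary model. Since the real `y_test`/`y_pred` come from the cells above, here is a self-contained sketch on toy labels showing how a class that is never predicted shows up as an empty diagonal entry (`confusion_matrix` here is a hand-rolled stand-in for `metrics.confusion_matrix`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are actual classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels: class 2 is always misclassified as class 1
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 1, 1]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)  # row 2 has nothing on the diagonal, so recall for class 2 is 0
```

Plotting this matrix with `sns.heatmap`, as in the binary section, would show which fault types the multiclass model confuses with each other.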