End-to-End Point Cloud Machine Learning (Part 2): Training and Evaluating PointNet with PaddlePaddle

Preface

This work came out of a high-formwork project: the traditional algorithms used to segment the horizontal and vertical bars involve a complicated, time-consuming pipeline, so the idea was to see whether machine learning could take over that work, which is how I ended up on this task.

Building on the dataset splitting from the previous article, we can now run proper training on the prepared data.

The backbone network code used in this series is mainly adapted from "点云处理:实现PointNet点云分割" (Point Cloud Processing: Implementing PointNet Segmentation).

The original code had a few problems, so it has been modified slightly here; the result uses PaddlePaddle to run PointNet on a segmentation task.

Workflow

Because the PointNet used here ends in fully connected layers, each input cloud must first be downsampled to a fixed number of points. The workflow is therefore as follows (a minimal sampling sketch appears after the list):

  1. Read the raw point cloud and labels
  2. Randomly sample the raw point cloud and labels
  3. Split the dataset
  4. Build the model
  5. Train the model
  6. Save the model
  7. Evaluate on sample objects
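As a quick illustration of the fixed-size constraint behind step 2, here is a minimal sketch; the 50,000-point cloud is hypothetical, standing in for a real scan:

import numpy as np

# Hypothetical raw scan with an arbitrary point count
raw = np.random.rand(50000, 3)
# The fully connected head only accepts a fixed point count, so every
# cloud is reduced to exactly 1024 points before entering the network
idx = np.random.choice(len(raw), 1024, replace=False)
fixed = raw[idx]
print(fixed.shape)  # (1024, 3)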

Details

1. Dependencies

import os
import tqdm
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", module="matplotlib")
from mpl_toolkits.mplot3d import Axes3D
# Paddle-related libraries
import paddle
from paddle.io import Dataset
import paddle.nn.functional as F
from paddle.nn import Conv2D, MaxPool2D, Linear, BatchNorm, Dropout, ReLU, Softmax, Sequential

2. Point Cloud Visualization

def visualize_data(point_cloud, label, title):
    COLORS = ['b', 'y', 'r', 'g', 'pink']
    label_map = ['none', 'Support']  # class names shown in the legend
    df = pd.DataFrame(
        data={
            "x": point_cloud[:, 0],
            "y": point_cloud[:, 1],
            "z": point_cloud[:, 2],
            "label": label,
        }
    )
    fig = plt.figure(figsize=(15, 10))
    ax = plt.axes(projection="3d")
    ax.scatter(df["x"], df["y"], df["z"])  # plot every point once as a backdrop
    for i in range(label.min(), label.max() + 1):
        c_df = df[df['label'] == i]
        ax.scatter(c_df["x"], c_df["y"], c_df["z"], label=label_map[i], alpha=0.5, c=COLORS[i])
    ax.legend()
    plt.title(title)
    plt.show()
    input()  # keep the console (and figure) open until Enter is pressed
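With the clouds loaded (section 3 below fills `point_clouds` and `point_clouds_labels`), a typical call looks like:

# Visualize the first sample with its ground-truth labels
visualize_data(point_clouds[0], point_clouds_labels[0], 'label')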

3. Point Cloud Downsampling and the Dataset

data_path = 'J:\\output\\Data'
label_path = 'J:\\output\\Label'
# Number of points to sample per cloud
NUM_SAMPLE_POINTS = 1024
# Containers for the point clouds and their labels
point_clouds = []
point_clouds_labels = []

file_list = os.listdir(data_path)
for file_name in tqdm.tqdm(file_list):
    # Build the data and label file paths
    label_name = file_name.replace('.pts', '.seg')
    point_cloud_file_path = os.path.join(data_path, file_name)
    label_file_path = os.path.join(label_path, label_name)
    # Load the data and labels
    point_cloud = np.loadtxt(point_cloud_file_path)
    label = np.loadtxt(label_file_path).astype('int')
    # Skip clouds that have fewer points than the sample size
    if len(point_cloud) < NUM_SAMPLE_POINTS:
        continue
    # Sampling
    num_points = len(point_cloud)
    # Pick the random sampling indices
    sampled_indices = random.sample(list(range(num_points)), NUM_SAMPLE_POINTS)
    # Sample the points
    sampled_point_cloud = point_cloud[sampled_indices]
    # Sample the matching labels
    sampled_label_cloud = label[sampled_indices]
    # Normalize: center at the origin and scale into the unit sphere
    norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0)
    norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1))
    # Store
    point_clouds.append(norm_point_cloud)
    point_clouds_labels.append(sampled_label_cloud)
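The normalization above centers each sampled cloud at the origin and scales it into the unit sphere; a quick check on the first stored cloud confirms this:

# Each normalized cloud is zero-centered with a maximum radius of 1.0
pc = point_clouds[0]
print(np.mean(pc, axis=0))                 # approximately [0, 0, 0]
print(np.max(np.linalg.norm(pc, axis=1)))  # 1.0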


class MyDataset(Dataset):
    def __init__(self, data, label):
        super(MyDataset, self).__init__()
        self.data = data
        self.label = label

    def __getitem__(self, index):
        data = self.data[index]
        label = self.label[index]
        # Add a channel dimension: (1, num_points, 3), matching the Conv2D input
        data = np.reshape(data, (1, 1024, 3))
        return data, label

    def __len__(self):
        return len(self.data)

4. Dataset Split

# Train/validation split
VAL_SPLIT = 0.2
split_index = int(len(point_clouds) * (1 - VAL_SPLIT))
train_point_clouds = point_clouds[:split_index]
train_label_cloud = point_clouds_labels[:split_index]
total_training_examples = len(train_point_clouds)
val_point_clouds = point_clouds[split_index:]
val_label_cloud = point_clouds_labels[split_index:]
print("Num train point clouds:", len(train_point_clouds))
print("Num train point cloud labels:", len(train_label_cloud))
print("Num val point clouds:", len(val_point_clouds))
print("Num val point cloud labels:", len(val_label_cloud))

# Sanity-check the custom dataset
train_dataset = MyDataset(train_point_clouds, train_label_cloud)
val_dataset = MyDataset(val_point_clouds, val_label_cloud)

print('=============custom dataset test=============')
for data, label in train_dataset:
    print('data shape:{} \nlabel shape:{}'.format(data.shape, label.shape))
    break

# Batch size
BATCH_SIZE = 64
# Data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
val_loader = paddle.io.DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False)
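One caveat: the split above is purely positional, so if os.listdir returns files in a meaningful order the validation set may not be representative. A hedged sketch of shuffling the pairs before the split (reusing the variables above) would be:

# Shuffle clouds and labels together before splitting so the
# validation set is not biased by file ordering
pairs = list(zip(point_clouds, point_clouds_labels))
random.shuffle(pairs)
point_clouds, point_clouds_labels = map(list, zip(*pairs))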

5. Building the PointNet Network

class PointNet(paddle.nn.Layer):
    def __init__(self, name_scope='PointNet_', num_classes=4, num_point=1024):
        super(PointNet, self).__init__()
        self.num_point = num_point
        self.input_transform_net = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.input_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 9, 
                weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
                bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
            )
        )
        self.mlp_1 = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
        )
        self.feature_transform_net = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),

            MaxPool2D((num_point, 1))
        )
        self.feature_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 64*64)
        )
        self.mlp_2 = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
        )
        self.seg_net = Sequential(
            Conv2D(1088, 512, (1, 1)),
            BatchNorm(512),
            ReLU(),
            Conv2D(512, 256, (1, 1)),
            BatchNorm(256),
            ReLU(),
            Conv2D(256, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, num_classes, (1, 1)),
            Softmax(axis=1)
        )
    def forward(self, inputs):
        batchsize = inputs.shape[0]

        t_net = self.input_transform_net(inputs)
        t_net = paddle.squeeze(t_net)
        t_net = self.input_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 3, 3])
      
        x = paddle.reshape(inputs, shape=(batchsize, 1024, 3))
        x = paddle.matmul(x, t_net)
        x = paddle.unsqueeze(x, axis=1)
        x = self.mlp_1(x)

        t_net = self.feature_transform_net(x)
        t_net = paddle.squeeze(t_net)
        t_net = self.feature_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 64, 64])

        x = paddle.reshape(x, shape=(batchsize, 64, 1024))
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.matmul(x, t_net)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.unsqueeze(x, axis=-1)
        point_feat = x
        x = self.mlp_2(x)
        x = paddle.max(x, axis=2)

        global_feat_expand = paddle.tile(paddle.unsqueeze(x, axis=1), [1, self.num_point, 1, 1])        
        x = paddle.concat([point_feat, global_feat_expand], axis=1)
        x = self.seg_net(x)
        x = paddle.squeeze(x, axis=-1)
        x = paddle.transpose(x, (0, 2, 1))

        return x

# Create the model
model = PointNet()
model.train()
# Optimizer
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
# Loss function
loss_fn = paddle.nn.CrossEntropyLoss()
# Evaluation metric
m = paddle.metric.Accuracy()
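Before launching training, a quick forward pass with a dummy batch is a cheap sanity check of the wiring. A sketch, assuming the defaults num_point=1024 and num_classes=4:

# Dummy batch shaped (batch, channel, num_points, xyz)
dummy = paddle.randn([2, 1, 1024, 3])
out = model(dummy)
print(out.shape)  # [2, 1024, 4]: per-point class probabilities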

6. Training the Model

# Number of training epochs
epoch_num = 50
# Save a checkpoint every N epochs
save_interval = 2
# Validate every N epochs
val_interval = 2
best_acc = 0
# Model output directory
output_dir = './output'
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
# Training loop
plot_acc = []
plot_loss = []
for epoch in range(epoch_num):
    total_loss = 0
    for batch_id, data in enumerate(train_loader()):
        inputs = paddle.to_tensor(data[0], dtype='float32')
        labels = paddle.to_tensor(data[1], dtype='int64')
        predicts = model(inputs)

        # Compute the loss and backpropagate
        loss = loss_fn(predicts, labels)
        if loss.ndim == 0:
            total_loss += loss.numpy()  # 0-d array: take the value directly
        else:
            total_loss += loss.numpy()[0]  # otherwise take the first element
        loss.backward()
        # Compute the accuracy
        predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
        labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
        correct = m.compute(predicts, labels)
        m.update(correct)
        # Optimizer step
        optim.step()
        optim.clear_grad()
    avg_loss = total_loss / (batch_id + 1)
    plot_loss.append(avg_loss)
    print("epoch: {}/{}, loss is: {}, acc is:{}".format(epoch, epoch_num, avg_loss, m.accumulate()))
    m.reset()
    # Save a checkpoint
    if epoch % save_interval == 0:
        model_name = str(epoch)
        paddle.save(model.state_dict(), './output/PointNet_{}.pdparams'.format(model_name))
        paddle.save(optim.state_dict(), './output/PointNet_{}.pdopt'.format(model_name))
    # Mid-training validation
    if epoch % val_interval == 0:
        model.eval()
        for batch_id, data in enumerate(val_loader()):
            inputs = paddle.to_tensor(data[0], dtype='float32')
            labels = paddle.to_tensor(data[1], dtype='int64')
            predicts = model(inputs)
            predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
            labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
            correct = m.compute(predicts, labels)
            m.update(correct)
        val_acc = m.accumulate()
        plot_acc.append(val_acc)
        if val_acc > best_acc:
            best_acc = val_acc
            print("===================================val===========================================")
            print('val best epoch in:{}, best acc:{}'.format(epoch, best_acc))
            print("===================================train===========================================")
            paddle.save(model.state_dict(), './output/best_model.pdparams')
            paddle.save(optim.state_dict(), './output/best_model.pdopt')
        m.reset()
        model.train()
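plot_loss and plot_acc are collected during training but never displayed; a minimal plotting sketch (matplotlib is already imported above) is:

# Plot the recorded training loss and validation accuracy
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(plot_loss)
ax1.set_xlabel('epoch')
ax1.set_ylabel('train loss')
ax2.plot(plot_acc)
ax2.set_xlabel('validation round (every val_interval epochs)')
ax2.set_ylabel('val accuracy')
plt.show()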

7. Running Prediction on a Point Cloud

ckpt_path = 'output/best_model.pdparams'
para_state_dict = paddle.load(ckpt_path)
# Build the network and load the trained weights
model = PointNet()
model.set_state_dict(para_state_dict)
model.eval()
# Take one sample from the dataset
point_cloud = point_clouds[0]
show_point_cloud = point_cloud
point_cloud = paddle.to_tensor(np.reshape(point_cloud, (1, 1, 1024, 3)), dtype='float32')
label = point_clouds_labels[0]
# Forward inference; +1 shifts the 0-based class indices back to the 1-based .seg label values
preds = model(point_cloud)
show_pred = paddle.argmax(preds, axis=-1).numpy() + 1

visualize_data(show_point_cloud, show_pred[0], 'pred')
visualize_data(show_point_cloud, label, 'label')
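Alongside the visual comparison, per-point accuracy on this sample gives a rough quantitative read. This sketch assumes the .seg labels are 1-based, matching the +1 shift above:

# Fraction of points whose predicted class matches the ground truth
point_acc = np.mean(show_pred[0] == label)
print('per-point accuracy: {:.4f}'.format(point_acc))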

Full Pipeline Code

import os
import tqdm
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", module="matplotlib")
from mpl_toolkits.mplot3d import Axes3D

# Paddle-related libraries
import paddle
from paddle.io import Dataset
import paddle.nn.functional as F
from paddle.nn import Conv2D, MaxPool2D, Linear, BatchNorm, Dropout, ReLU, Softmax, Sequential

# Colors and the corresponding label names used for visualization


def visualize_data(point_cloud, label, title):
    COLORS = ['b', 'y', 'r', 'g', 'pink']
    label_map = ['none', 'Support']  # class names shown in the legend
    df = pd.DataFrame(
        data={
            "x": point_cloud[:, 0],
            "y": point_cloud[:, 1],
            "z": point_cloud[:, 2],
            "label": label,
        }
    )
    fig = plt.figure(figsize=(15, 10))
    ax = plt.axes(projection="3d")
    ax.scatter(df["x"], df["y"], df["z"])  # plot every point once as a backdrop
    for i in range(label.min(), label.max() + 1):
        c_df = df[df['label'] == i]
        ax.scatter(c_df["x"], c_df["y"], c_df["z"], label=label_map[i], alpha=0.5, c=COLORS[i])
    ax.legend()
    plt.title(title)
    plt.show()
    input()  # keep the console (and figure) open until Enter is pressed

data_path = 'J:\\output\\Data'
label_path = 'J:\\output\\Label'
# Number of points to sample per cloud
NUM_SAMPLE_POINTS = 1024
# Containers for the point clouds and their labels
point_clouds = []
point_clouds_labels = []

file_list = os.listdir(data_path)
for file_name in tqdm.tqdm(file_list):
    # Build the data and label file paths
    label_name = file_name.replace('.pts', '.seg')
    point_cloud_file_path = os.path.join(data_path, file_name)
    label_file_path = os.path.join(label_path, label_name)
    # Load the data and labels
    point_cloud = np.loadtxt(point_cloud_file_path)
    label = np.loadtxt(label_file_path).astype('int')
    # Skip clouds that have fewer points than the sample size
    if len(point_cloud) < NUM_SAMPLE_POINTS:
        continue
    # Sampling
    num_points = len(point_cloud)
    # Pick the random sampling indices
    sampled_indices = random.sample(list(range(num_points)), NUM_SAMPLE_POINTS)
    # Sample the points
    sampled_point_cloud = point_cloud[sampled_indices]
    # Sample the matching labels
    sampled_label_cloud = label[sampled_indices]
    # Normalize: center at the origin and scale into the unit sphere
    norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0)
    norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1))
    # Store
    point_clouds.append(norm_point_cloud)
    point_clouds_labels.append(sampled_label_cloud)
        
#visualize_data(point_clouds[0], point_clouds_labels[0], 'label')

class MyDataset(Dataset):
    def __init__(self, data, label):
        super(MyDataset, self).__init__()
        self.data = data
        self.label = label

    def __getitem__(self, index):
        data = self.data[index]
        label = self.label[index]
        # Add a channel dimension: (1, num_points, 3), matching the Conv2D input
        data = np.reshape(data, (1, 1024, 3))
        return data, label

    def __len__(self):
        return len(self.data)

# Train/validation split
VAL_SPLIT = 0.2
split_index = int(len(point_clouds) * (1 - VAL_SPLIT))
train_point_clouds = point_clouds[:split_index]
train_label_cloud = point_clouds_labels[:split_index]
total_training_examples = len(train_point_clouds)
val_point_clouds = point_clouds[split_index:]
val_label_cloud = point_clouds_labels[split_index:]
print("Num train point clouds:", len(train_point_clouds))
print("Num train point cloud labels:", len(train_label_cloud))
print("Num val point clouds:", len(val_point_clouds))
print("Num val point cloud labels:", len(val_label_cloud))

# Sanity-check the custom dataset
train_dataset = MyDataset(train_point_clouds, train_label_cloud)
val_dataset = MyDataset(val_point_clouds, val_label_cloud)

print('=============custom dataset test=============')
for data, label in train_dataset:
    print('data shape:{} \nlabel shape:{}'.format(data.shape, label.shape))
    break

# Batch size
BATCH_SIZE = 64
# Data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
val_loader = paddle.io.DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False)



class PointNet(paddle.nn.Layer):
    def __init__(self, name_scope='PointNet_', num_classes=4, num_point=1024):
        super(PointNet, self).__init__()
        self.num_point = num_point
        self.input_transform_net = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.input_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 9, 
                weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
                bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
            )
        )
        self.mlp_1 = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
        )
        self.feature_transform_net = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),

            MaxPool2D((num_point, 1))
        )
        self.feature_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 64*64)
        )
        self.mlp_2 = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
        )
        self.seg_net = Sequential(
            Conv2D(1088, 512, (1, 1)),
            BatchNorm(512),
            ReLU(),
            Conv2D(512, 256, (1, 1)),
            BatchNorm(256),
            ReLU(),
            Conv2D(256, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, num_classes, (1, 1)),
            Softmax(axis=1)
        )
    def forward(self, inputs):
        batchsize = inputs.shape[0]

        t_net = self.input_transform_net(inputs)
        t_net = paddle.squeeze(t_net)
        t_net = self.input_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 3, 3])
      
        x = paddle.reshape(inputs, shape=(batchsize, 1024, 3))
        x = paddle.matmul(x, t_net)
        x = paddle.unsqueeze(x, axis=1)
        x = self.mlp_1(x)

        t_net = self.feature_transform_net(x)
        t_net = paddle.squeeze(t_net)
        t_net = self.feature_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 64, 64])

        x = paddle.reshape(x, shape=(batchsize, 64, 1024))
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.matmul(x, t_net)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.unsqueeze(x, axis=-1)
        point_feat = x
        x = self.mlp_2(x)
        x = paddle.max(x, axis=2)

        global_feat_expand = paddle.tile(paddle.unsqueeze(x, axis=1), [1, self.num_point, 1, 1])        
        x = paddle.concat([point_feat, global_feat_expand], axis=1)
        x = self.seg_net(x)
        x = paddle.squeeze(x, axis=-1)
        x = paddle.transpose(x, (0, 2, 1))

        return x

# Create the model
model = PointNet()
model.train()
# Optimizer
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
# Loss function
loss_fn = paddle.nn.CrossEntropyLoss()
# Evaluation metric
m = paddle.metric.Accuracy()
# Number of training epochs
epoch_num = 50
# Save a checkpoint every N epochs
save_interval = 2
# Validate every N epochs
val_interval = 2
best_acc = 0
# Model output directory
output_dir = './output'
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
# Training loop
plot_acc = []
plot_loss = []
for epoch in range(epoch_num):
    total_loss = 0
    for batch_id, data in enumerate(train_loader()):
        inputs = paddle.to_tensor(data[0], dtype='float32')
        labels = paddle.to_tensor(data[1], dtype='int64')
        predicts = model(inputs)

        # Compute the loss and backpropagate
        loss = loss_fn(predicts, labels)
        if loss.ndim == 0:
            total_loss += loss.numpy()  # 0-d array: take the value directly
        else:
            total_loss += loss.numpy()[0]  # otherwise take the first element
        loss.backward()
        # Compute the accuracy
        predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
        labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
        correct = m.compute(predicts, labels)
        m.update(correct)
        # Optimizer step
        optim.step()
        optim.clear_grad()
    avg_loss = total_loss / (batch_id + 1)
    plot_loss.append(avg_loss)
    print("epoch: {}/{}, loss is: {}, acc is:{}".format(epoch, epoch_num, avg_loss, m.accumulate()))
    m.reset()
    # Save a checkpoint
    if epoch % save_interval == 0:
        model_name = str(epoch)
        paddle.save(model.state_dict(), './output/PointNet_{}.pdparams'.format(model_name))
        paddle.save(optim.state_dict(), './output/PointNet_{}.pdopt'.format(model_name))
    # Mid-training validation
    if epoch % val_interval == 0:
        model.eval()
        for batch_id, data in enumerate(val_loader()):
            inputs = paddle.to_tensor(data[0], dtype='float32')
            labels = paddle.to_tensor(data[1], dtype='int64')
            predicts = model(inputs)
            predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
            labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
            correct = m.compute(predicts, labels)
            m.update(correct)
        val_acc = m.accumulate()
        plot_acc.append(val_acc)
        if val_acc > best_acc:
            best_acc = val_acc
            print("===================================val===========================================")
            print('val best epoch in:{}, best acc:{}'.format(epoch, best_acc))
            print("===================================train===========================================")
            paddle.save(model.state_dict(), './output/best_model.pdparams')
            paddle.save(optim.state_dict(), './output/best_model.pdopt')
        m.reset()
        model.train()

ckpt_path = 'output/best_model.pdparams'
para_state_dict = paddle.load(ckpt_path)
# Build the network and load the trained weights
model = PointNet()
model.set_state_dict(para_state_dict)
model.eval()
# Take one sample from the dataset
point_cloud = point_clouds[0]
show_point_cloud = point_cloud
point_cloud = paddle.to_tensor(np.reshape(point_cloud, (1, 1, 1024, 3)), dtype='float32')
label = point_clouds_labels[0]
# Forward inference; +1 shifts the 0-based class indices back to the 1-based .seg label values
preds = model(point_cloud)
show_pred = paddle.argmax(preds, axis=-1).numpy() + 1

visualize_data(show_point_cloud, show_pred[0], 'pred')
visualize_data(show_point_cloud, label, 'label')

Looking at the results of a quick test on the point cloud data:

[Figure: predicted segmentation result on a test cloud]
The detection target here is the horizontal bars. The training set is small, so the results are only mediocre; adding more data later should yield better results.

