PyTorch from Beginner to Expert: Part 2, Dataset and DataLoader

Data is the foundation of deep learning: generally, the more data you have, the stronger the trained model. So if you already have some data, how do you feed it into a model? PyTorch provides Dataset and DataLoader for exactly this. Let's learn them together; I will illustrate Dataset and DataLoader with several examples. Thanks for your support!

Table of Contents

  • 1. Dataset
  • 2. Inspecting Dataset
  • 3. Reading the objects in a folder with os
  • 4. Dataset
    • Dataset practice 1
    • Dataset practice 2
    • Dataset practice 3
  • 5. DataLoader
    • Defining a custom Dataset and loading it with DataLoader
  • 6. Some os operations

1. Dataset

A Dataset provides a way to access the data and its labels:
● how to fetch each individual sample and its label
● how many samples there are in total
A minimal skeleton is sketched right after this list.
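As a minimal sketch (the class and attribute names here are illustrative, not taken from the examples later in this post), a map-style Dataset only has to implement `__getitem__` and `__len__`:

```python
from torch.utils.data import Dataset

class MinimalDataset(Dataset):
    def __init__(self, samples, labels):
        self.samples = samples  # e.g. a list of tensors or file paths
        self.labels = labels    # e.g. a list of integer class ids

    def __getitem__(self, idx):
        # return one sample and its label for the given index
        return self.samples[idx], self.labels[idx]

    def __len__(self):
        # tells samplers and DataLoader how many samples there are
        return len(self.samples)
```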
Check whether CUDA is available:

import torch
print(torch.cuda.is_available())  # check whether CUDA is currently available
True

2. Inspecting Dataset

from torch.utils.data import Dataset
help(Dataset)  # inspect Dataset via its built-in documentation

Help on class Dataset in module torch.utils.data.dataset:

class Dataset(typing.Generic)
 |  Dataset(*args, **kwds)
 |
 |  An abstract class representing a :class:`Dataset`.
 |
 |  All datasets that represent a map from keys to data samples should subclass
 |  it. All subclasses should overwrite :meth:`__getitem__`, supporting fetching a
 |  data sample for a given key. Subclasses could also optionally overwrite
 |  :meth:`__len__`, which is expected to return the size of the dataset by many
 |  :class:`~torch.utils.data.Sampler` implementations and the default options
 |  of :class:`~torch.utils.data.DataLoader`.
 |
 |  .. note::
 |    :class:`~torch.utils.data.DataLoader` by default constructs a index
 |    sampler that yields integral indices.  To make it work with a map-style
 |    dataset with non-integral indices/keys, a custom sampler must be provided.
 |
 |  Method resolution order:
 |      Dataset
 |      typing.Generic
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __add__(self, other: 'Dataset[T_co]') -> 'ConcatDataset[T_co]'
 |
 |  __getattr__(self, attribute_name)
 |
 |  __getitem__(self, index) -> +T_co
 |
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |
 |  register_datapipe_as_function(function_name, cls_to_register, enable_df_api_tracing=False) from builtins.type
 |
 |  register_function(function_name, function) from builtins.type
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |
 |  __dict__
 |      dictionary for instance variables (if defined)
 |
 |  __weakref__
 |      list of weak references to the object (if defined)
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __annotations__ = {'functions': typing.Dict[str, typing.Callable]}
 |
 |  __orig_bases__ = (typing.Generic[+T_co],)
 |
 |  __parameters__ = (+T_co,)
 |
 |  functions = {'concat': functools.partial(<function Dataset.register_da...
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from typing.Generic:
 |
 |  __class_getitem__(params) from builtins.type
 |
 |  __init_subclass__(*args, **kwargs) from builtins.type
 |      This method is called when a class is subclassed.
 |
 |      The default implementation does nothing. It may be
 |      overridden to extend subclasses.
 |
 |  ----------------------------------------------------------------------
 |  Static methods inherited from typing.Generic:
 |
 |  __new__(cls, *args, **kwds)
 |      Create and return a new object.  See help(type) for accurate signature.

3. Reading the objects in a folder with os

import os
dir_path = "hymenoptera_data\\hymenoptera_data\\train\\ants"  # folder to read
data_dir = os.listdir(dir_path)  # get the objects (entries) inside the folder
data_dir

[‘0013035.jpg’,
‘1030023514_aad5c608f9.jpg’,
‘1095476100_3906d8afde.jpg’,
‘1099452230_d1949d3250.jpg’,
‘116570827_e9c126745d.jpg’,
‘1225872729_6f0856588f.jpg’,
‘1262877379_64fcada201.jpg’,
‘1269756697_0bce92cdab.jpg’,
‘1286984635_5119e80de1.jpg’,
‘132478121_2a430adea2.jpg’,
‘1360291657_dc248c5eea.jpg’,
‘1368913450_e146e2fb6d.jpg’,
‘1473187633_63ccaacea6.jpg’,
‘148715752_302c84f5a4.jpg’,
‘1489674356_09d48dde0a.jpg’,
‘149244013_c529578289.jpg’,
‘150801003_3390b73135.jpg’,
‘150801171_cd86f17ed8.jpg’,
‘154124431_65460430f2.jpg’,
‘162603798_40b51f1654.jpg’,
‘1660097129_384bf54490.jpg’,
‘167890289_dd5ba923f3.jpg’,
‘1693954099_46d4c20605.jpg’,
‘175998972.jpg’,
‘178538489_bec7649292.jpg’,
‘1804095607_0341701e1c.jpg’,
‘1808777855_2a895621d7.jpg’,
‘188552436_605cc9b36b.jpg’,
‘1917341202_d00a7f9af5.jpg’,
‘1924473702_daa9aacdbe.jpg’,
‘196057951_63bf063b92.jpg’,
‘196757565_326437f5fe.jpg’,
‘201558278_fe4caecc76.jpg’,
‘201790779_527f4c0168.jpg’,
‘2019439677_2db655d361.jpg’,
‘207947948_3ab29d7207.jpg’,
‘20935278_9190345f6b.jpg’,
‘224655713_3956f7d39a.jpg’,
‘2265824718_2c96f485da.jpg’,
‘2265825502_fff99cfd2d.jpg’,
‘226951206_d6bf946504.jpg’,
‘2278278459_6b99605e50.jpg’,
‘2288450226_a6e96e8fdf.jpg’,
‘2288481644_83ff7e4572.jpg’,
‘2292213964_ca51ce4bef.jpg’,
‘24335309_c5ea483bb8.jpg’,
‘245647475_9523dfd13e.jpg’,
‘255434217_1b2b3fe0a4.jpg’,
‘258217966_d9d90d18d3.jpg’,
‘275429470_b2d7d9290b.jpg’,
‘28847243_e79fe052cd.jpg’,
‘318052216_84dff3f98a.jpg’,
‘334167043_cbd1adaeb9.jpg’,
‘339670531_94b75ae47a.jpg’,
‘342438950_a3da61deab.jpg’,
‘36439863_0bec9f554f.jpg’,
‘374435068_7eee412ec4.jpg’,
‘382971067_0bfd33afe0.jpg’,
‘384191229_5779cf591b.jpg’,
‘386190770_672743c9a7.jpg’,
‘392382602_1b7bed32fa.jpg’,
‘403746349_71384f5b58.jpg’,
‘408393566_b5b694119b.jpg’,
‘424119020_6d57481dab.jpg’,
‘424873399_47658a91fb.jpg’,
‘450057712_771b3bfc91.jpg’,
‘45472593_bfd624f8dc.jpg’,
‘459694881_ac657d3187.jpg’,
‘460372577_f2f6a8c9fc.jpg’,
‘460874319_0a45ab4d05.jpg’,
‘466430434_4000737de9.jpg’,
‘470127037_513711fd21.jpg’,
‘474806473_ca6caab245.jpg’,
‘475961153_b8c13fd405.jpg’,
‘484293231_e53cfc0c89.jpg’,
‘49375974_e28ba6f17e.jpg’,
‘506249802_207cd979b4.jpg’,
‘506249836_717b73f540.jpg’,
‘512164029_c0a66b8498.jpg’,
‘512863248_43c8ce579b.jpg’,
‘518773929_734dbc5ff4.jpg’,
‘522163566_fec115ca66.jpg’,
‘522415432_2218f34bf8.jpg’,
‘531979952_bde12b3bc0.jpg’,
‘533848102_70a85ad6dd.jpg’,
‘535522953_308353a07c.jpg’,
‘540889389_48bb588b21.jpg’,
‘541630764_dbd285d63c.jpg’,
‘543417860_b14237f569.jpg’,
‘560966032_988f4d7bc4.jpg’,
‘5650366_e22b7e1065.jpg’,
‘6240329_72c01e663e.jpg’,
‘6240338_93729615ec.jpg’,
‘649026570_e58656104b.jpg’,
‘662541407_ff8db781e7.jpg’,
‘67270775_e9fdf77e9d.jpg’,
‘6743948_2b8c096dda.jpg’,
‘684133190_35b62c0c1d.jpg’,
‘69639610_95e0de17aa.jpg’,
‘707895295_009cf23188.jpg’,
‘7759525_1363d24e88.jpg’,
‘795000156_a9900a4a71.jpg’,
‘822537660_caf4ba5514.jpg’,
‘82852639_52b7f7f5e3.jpg’,
‘841049277_b28e58ad05.jpg’,
‘886401651_f878e888cd.jpg’,
‘892108839_f1aad4ca46.jpg’,
‘938946700_ca1c669085.jpg’,
‘957233405_25c1d1187b.jpg’,
‘9715481_b3cb4114ff.jpg’,
‘998118368_6ac1d91f81.jpg’,
‘ant photos.jpg’,
‘Ant_1.jpg’,
‘army-ants-red-picture.jpg’,
‘formica.jpeg’,
‘hormiga_co_por.jpg’,
‘imageNotFound.gif’,
‘kurokusa.jpg’,
‘MehdiabadiAnt2_600.jpg’,
‘Nepenthes_rafflesiana_ant.jpg’,
‘swiss-army-ant.jpg’,
‘termite-vs-ant.jpg’,
‘trap-jaw-ant-insect-bg.jpg’,
‘VietnameseAntMimicSpider.jpg’]
Note: on Windows, write path strings with a double backslash \\ (a single \ would start an escape sequence).
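Escaping is only one option; a raw string, os.path.join, or pathlib avoids the issue entirely. A small sketch (the paths reuse the folder from above):

```python
import os
from pathlib import Path

p1 = "hymenoptera_data\\hymenoptera_data\\train\\ants"   # escaped backslashes
p2 = r"hymenoptera_data\hymenoptera_data\train\ants"     # raw string, no escaping needed
p3 = os.path.join("hymenoptera_data", "hymenoptera_data", "train", "ants")  # portable join
p4 = Path("hymenoptera_data") / "hymenoptera_data" / "train" / "ants"       # pathlib equivalent

print(p1 == p2)  # True: both contain single backslashes
print(p3)        # uses the separator of the current platform
print(p4)
```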

4. Dataset

Dataset practice 1

from torch.utils.data import Dataset
import os
from PIL import Image


class Mydata(Dataset):
    def __init__(self, root_path, label_path):
        self.root_path = root_path  # e.g. hymenoptera_data/hymenoptera_data/train
        self.label_path = label_path  # e.g. ants
        self.path = os.path.join(self.root_path, self.label_path)  # path to the class folder, built from the root path
        self.image_path = os.listdir(self.path)  # list of image file names under hymenoptera_data/hymenoptera_data/train/ants
    def __getitem__(self, idx):
        image_name = self.image_path[idx]  # name of a single image
        image_item_path = os.path.join(self.root_path, self.label_path, image_name)
        img = Image.open(image_item_path)
        label = self.label_path
        return img, label
    def __len__(self):
        return len(self.image_path)

ants_root_path = "hymenoptera_data\\hymenoptera_data\\train"
ants_label_path = "ants"
Ants = Mydata(ants_root_path,ants_label_path)
Ants[0][0].show()  # the first 0 indexes the dataset and returns (image, label); the second 0 picks the image, which is then displayed


bee_root_path = "hymenoptera_data\\hymenoptera_data\\train"
bee_label_path = "bees"
Bees = Mydata(bee_root_path,bee_label_path)
Bees[0][0].show()


# build the training set

train = Ants + Bees   # simply add the two datasets together
print("the length of Ants is ",Ants.__len__())
print("the length of Bees is ",Bees.__len__())
print("the length of train is ",train.__len__())
the length of Ants is  124
the length of Bees is  121
the length of train is  245
# sanity check
train[123][0].show()  # should be an ant (the last ant image)
train[124][0].show()  # should be a bee (the first bee image)
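Adding two Dataset objects with + calls Dataset.__add__ (visible in the help output above) and returns a ConcatDataset that simply chains them. A quick check, assuming the Ants, Bees and train objects from the code above:

```python
from torch.utils.data import ConcatDataset

print(type(train))                           # <class 'torch.utils.data.dataset.ConcatDataset'>
print(isinstance(train, ConcatDataset))      # True
print(len(train) == len(Ants) + len(Bees))   # True: 124 + 121 == 245
```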


Dataset practice 2

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@Project :Pytorch学习 
@File    :task_3.py
@IDE     :PyCharm 
@Author  :咋
@Date    :2023/6/29 14:29 
"""
from torch.utils.data import Dataset
import os
from PIL import Image

class Mydata(Dataset):
    def __init__(self, root_path, image_path, label_path):
        self.root_path = root_path
        self.image_path = image_path
        self.label_path = label_path
        self.A_image_path = os.path.join(self.root_path, self.image_path)  # folder holding the images
        self.A_label_path = os.path.join(self.root_path, self.label_path)  # folder holding the label .txt files
        self.img_item = os.listdir(self.A_image_path)    # image file names
        self.label_item = os.listdir(self.A_label_path)  # label file names

    def __getitem__(self, idx):
        img_name = self.img_item[idx]
        img_path = os.path.join(self.A_image_path, img_name)
        # stems of the label files (only entries containing a single ".")
        label_list = [i.split(".")[0] for i in self.label_item if i.count(".") == 1]
        # print(label_list)
        if img_name.split(".")[0] in label_list:
            img = Image.open(img_path)
            label_path = os.path.join(self.A_label_path, img_name.split(".")[0])
            label_path += ".txt"
            file = open(label_path, 'r')
            label = file.read()
            file.close()
            return img, label
        else:
            print("{0} has no corresponding label".format(img_name))
            return 0

    def __len__(self):
        return len(self.img_item)





train_ants_root_path = "练手数据集\\train"
train_ants_image_path = "ants_image"
train_ants_label_path = "ants_label"
Ants = Mydata(train_ants_root_path,train_ants_image_path,train_ants_label_path)
for i in range(Ants.__len__()):
    try:
        print(Ants[i][1])
    except TypeError:
        print("跳过此张图片!")
# Ants[122][0].show()
# print(Ants[122][1])

ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
ants
formica.jpeg has no corresponding label
Skipping this image!
ants
imageNotFound.gif has no corresponding label
Skipping this image!
ants
ants
ants
ants
ants
ants
Exception handling was added, solving the problem of images that have no corresponding label!

Dataset practice 3

Create a dataset from one of the datasets shipped with torchvision.

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@Project :Pytorch_learn 
@File    :dataset_3.py
@IDE     :PyCharm 
@Author  :咋
@Date    :2023/7/2 14:58 
"""
import torchvision
from torch.utils.data import DataLoader
from tensorboardX import SummaryWriter
from torchvision import transforms
dataset = torchvision.datasets.MNIST("./Mnist",train=True,download=True,transform=transforms.ToTensor())
dataloader = DataLoader(dataset,batch_size=64,shuffle=False,num_workers=0)
# use tensorboard to display the contents of the dataloader
'''Approach 1
# write = SummaryWriter("log_2")
# count = 0
# for data in dataloader:
#     image,label = data
#     # print(data[1])
#     # print(image.shape)
#     write.add_images("dataloader",image,count)
#     count += 1
'''

# Approach 2
write = SummaryWriter("log_3")
for i,data in enumerate(dataloader):
    image,label = data
    write.add_images("dataloader",image,i)

write.close()

enumerate returns each item of an iterable together with its index:

For example, for a sequence seq you get:
(0, seq[0]), (1, seq[1]), (2, seq[2]), ...
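A quick illustration:

```python
seq = ["ants", "bees", "wasps"]
for i, item in enumerate(seq):
    print(i, item)
# 0 ants
# 1 bees
# 2 wasps
```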

5. DataLoader

A DataLoader wraps a Dataset and serves the samples to the downstream network in the form it needs (batched, optionally shuffled).
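As a minimal, self-contained sketch (using a synthetic TensorDataset rather than the image datasets above, so the shapes are purely illustrative), the most commonly used DataLoader arguments are batch_size, shuffle, num_workers and drop_last:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# 100 fake samples of shape (3, 32, 32) with integer labels
data = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(data, labels)

loader = DataLoader(dataset,
                    batch_size=16,    # samples per batch
                    shuffle=True,     # reshuffle at every epoch
                    num_workers=0,    # load in the main process
                    drop_last=False)  # keep the final, smaller batch

for images, targets in loader:
    print(images.shape, targets.shape)  # torch.Size([16, 3, 32, 32]) torch.Size([16])
    break
```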

Defining a custom Dataset and loading it with DataLoader

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
from net import Net
import softmax
from torch.utils.data import Dataset
import os
from PIL import Image
import numpy as np


transform_tool = transforms.ToTensor()  # create a transform tool
# # image_tensor = transform_tool(image)
with open("mnist-label.txt", 'r') as f:
    label_str = f.read().strip()   # open the label file once and cache its contents in memory
class Mydata(Dataset):
    def __init__(self, image_path):
        self.image_path = image_path
        # self.label_path = label_path
        self.image = os.listdir(self.image_path)  # list of image file names inside the folder
    def __getitem__(self, idx):
        image_name = self.image[idx]  # name of a single image
        image_item_path = os.path.join(self.image_path, image_name)
        img = Image.open(image_item_path)
        # transform_tool = transforms.ToTensor()  # create a transform tool
        img = transform_tool(img)
        labels_list = [int(label) for label in label_str.split(',')]  # parse the cached labels instead of reopening the file each time
        labels = np.array(labels_list)
        label = labels[idx]
        return img, label
    def __len__(self):
        return len(self.image)
# trainset = Mydata("mnist-dataset")

# training hyperparameters
batch_size = 32
epochs = 5
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# dataset
# transform = transforms.Compose([transforms.ToTensor(),
#                                 transforms.Normalize((0.5,), (0.5,))])
# trainset =
# trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainset = Mydata("mnist-dataset")

trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False,num_workers=0)
print(len(trainloader))
# print basic info
print("batch_size:", batch_size)
print("data_batches:", len(trainloader))
print("epochs:", epochs)

# neural network
net = Net().to(device)
# net.load_state_dict(torch.load('./model/model.pth'))

# loss function and optimizer
# negative log-likelihood loss
criterion = nn.NLLLoss()
optimizer = optim.SGD(net.parameters(), lr=0.0005, momentum=0.9)
total_correct = 0
total_samples = 0
# train the network
for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader):
        inputs, labels = data
        inputs, labels = Variable(inputs).to(device), Variable(labels).to(device)

        # backpropagate and update the parameters
        optimizer.zero_grad()
        outputs = net(inputs)
        # outputs = int(net(inputs))
        # print(outputs)
        labels = labels.long()
        # print(labels)
        # print(type(labels))
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # accumulate per-batch accuracy statistics
        _, predicted = torch.max(outputs.data, 1)
        total_samples += labels.size(0)
        total_correct += (predicted == labels).sum().item()

        if i % 5 == 0:    # print loss and accuracy every 5 batches
            accuracy = 100.0 * total_correct / total_samples
            print('[epoch: %d, batches: %d] loss: %.5f accuracy: %.2f%%' %
                  (epoch + 1, i + 1, running_loss / 5, accuracy))  # average loss over the last 5 batches
            total_correct = 0
            total_samples = 0
            running_loss = 0.0
torch.save(net.state_dict(), 'model.pth')  # save the model parameters once training has finished

print('Finished Training')

The label file can be opened once, before the class definition, and its contents read into memory; __getitem__ then reads individual labels from that cached string instead of reopening the file on every call.

6. Some os operations

On Windows, use two backslashes \\ when writing paths.
import os
dir_path = "/home/aistudio"  # folder to read
data_dir = os.listdir(dir_path)  # get the objects inside the folder
label_path = "label"
all_path = os.path.join(dir_path, label_path)
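os.listdir returns every entry in a folder, including non-image files such as the imageNotFound.gif seen earlier. As a small sketch (the folder path and extension list are illustrative assumptions), you can filter the listing before building a Dataset:

```python
import os

dir_path = "/home/aistudio"           # folder to scan (illustrative)
exts = (".jpg", ".jpeg", ".png")      # extensions to keep (adjust as needed)

image_files = [f for f in os.listdir(dir_path)
               if f.lower().endswith(exts)]  # str.endswith accepts a tuple of suffixes
print(len(image_files), "image files found")
```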
