5. Deep Dive into the PyTorch DataLoader Source Code

Table of Contents

  • 1. Key Classes
  • 2. Dataset
  • 3. DataLoader
  • 4. Python Example

These are study notes taken while following a Bilibili video tutorial, "深入剖析PyTorch DataLoader源码".
Other related notes: "pytorch数据操作—dataset, dataloader, transform"

1. Key Classes

  • DataLoader
  • Dataset
  • Sampler
  • RandomSampler
  • SequentialSampler
  • BatchSampler
  • shuffle
  • torch.randperm
  • yield from
  • iter / next
  • torch.Generator
  • collate_fn
  • multiprocessing
  • _SingleProcessDataLoaderIter
  • _get_iterator
  • _index_sampler
  • _BaseDataLoaderIter
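Several of the names above can be seen in action with a few lines of code. This is an illustrative sketch, not the DataLoader source itself: SequentialSampler and RandomSampler generate indices (RandomSampler relies on torch.randperm internally), and BatchSampler groups those indices into batches:

```python
import torch
from torch.utils.data import SequentialSampler, RandomSampler, BatchSampler

data = list(range(10))

# SequentialSampler yields indices 0 .. len-1 in order
print(list(SequentialSampler(data)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# RandomSampler yields a permutation of the indices; passing a
# torch.Generator makes the shuffle reproducible
g = torch.Generator()
g.manual_seed(0)
print(list(RandomSampler(data, generator=g)))

# BatchSampler wraps another sampler and groups its indices into batches;
# with drop_last=True the incomplete final batch is discarded
print(list(BatchSampler(SequentialSampler(data), batch_size=4, drop_last=True)))
# [[0, 1, 2, 3], [4, 5, 6, 7]]
```

This mirrors what DataLoader sets up internally: shuffle=True selects a RandomSampler, shuffle=False a SequentialSampler, and batch_size/drop_last configure the BatchSampler.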

2. Dataset

  • Dataset: produces (feature, label) pairs. A custom dataset subclasses the abstract class Dataset and must implement three methods: __init__, __len__, __getitem__.
  • __init__: defines where the features and labels come from. Features may be images, rows of a CSV file, etc., and some need transforms applied so that the resulting features end up as tensors.
  • __len__: returns the size of the whole dataset.
  • __getitem__: the most important part. It performs any per-sample preprocessing of the training features and labels and forms the (feature, label) pairs used later for training and inference, so that dataset[i] returns the i-th pair:
    dataset[i] = (feature_i, label_i)
  • Note: this belongs to the preprocessing stage of the training data. A large dataset can be processed in chunks sized to the server's capacity so that processing can go through.
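The three methods map directly onto Python's object protocol: len(dataset) calls __len__ and dataset[i] calls __getitem__. A minimal sketch (the ToyDataset name is made up for illustration; section 4 expands the same idea with NumPy data):

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Minimal Dataset: dataset[i] returns the pair (feature_i, label_i)."""

    def __init__(self, features, labels):
        assert len(features) == len(labels)
        self.features = features
        self.labels = labels

    def __len__(self):
        # size of the whole dataset
        return len(self.labels)

    def __getitem__(self, i):
        # the i-th (feature, label) pair
        return self.features[i], self.labels[i]

ds = ToyDataset(torch.arange(4.0), torch.arange(4.0) ** 2)
print(len(ds))  # 4
print(ds[3])    # (tensor(3.), tensor(9.))
```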

3. DataLoader

  • DataLoader packs samples (features together with their labels) into batches of the specified batch_size, which speeds up training. Packing can be sequential or random, which introduces the notion of sampling; to make training more robust, the data is usually shuffled before it is batched.
  • For example, with a dataset of 500 samples and batch_size=5, each batch produced by the DataLoader contains 5 dataset items, giving 100 batches in total.
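The batch arithmetic above can be sketched in plain Python (illustrative only; the real DataLoader delegates this to its samplers):

```python
import random

n_samples, batch_size = 500, 5
print(n_samples // batch_size)  # 100 batches

# Sketch of shuffle + drop_last: shuffle the indices, cut them into
# full batches, and discard the incomplete remainder.
indices = list(range(503))      # 503 samples -> 3 leftover indices
random.shuffle(indices)
usable = len(indices) - len(indices) % batch_size
batches = [indices[i:i + batch_size] for i in range(0, usable, batch_size)]
print(len(batches))  # 100 full batches; with drop_last=True the 3 extras are discarded
```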

4. Python Example

  • Python code
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# @FileName  :my_test_label.py
# @Time      :2024/11/19 20:15
# @Author    :Jason Zhang
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

np.set_printoptions(suppress=True, precision=2)
np.random.seed(43434)


class MyDataset(Dataset):
    """Wraps NumPy feature/label arrays and returns (feature, label) tensor pairs."""

    def __init__(self, in_data, in_label):
        self.data = in_data
        self.label = in_label

    def __len__(self):
        # total number of samples in the dataset
        return len(self.label)

    def __getitem__(self, item):
        # convert the item-th sample to tensors and return the (feature, label) pair
        index_data = torch.tensor(self.data[item])
        index_label = torch.tensor(self.label[item])
        return index_data, index_label


if __name__ == "__main__":
    my_data = np.linspace(1, 100, 500, dtype=np.float64)
    print(f"my_data=\n{my_data}")
    my_label = np.sin(my_data)
    print(f"my_label=\n{my_label}")
    my_dataset = MyDataset(my_data, my_label)
    # 500 samples with batch_size=5 -> 100 shuffled batches
    my_data_loader = DataLoader(my_dataset, batch_size=5, shuffle=True, drop_last=True)
    for i, loader in enumerate(my_data_loader):
        print(f"loader[{i}]={loader}")
  • Result:
my_data=
[  1.     1.2    1.4    1.6    1.79   1.99   2.19   2.39   2.59   2.79
   2.98   3.18   3.38   3.58   3.78   3.98   4.17   4.37   4.57   4.77
   4.97   5.17   5.36   5.56   5.76   5.96   6.16   6.36   6.56   6.75
   6.95   7.15   7.35   7.55   7.75   7.94   8.14   8.34   8.54   8.74
   8.94   9.13   9.33   9.53   9.73   9.93  10.13  10.32  10.52  10.72
  10.92  11.12  11.32  11.52  11.71  11.91  12.11  12.31  12.51  12.71
  12.9   13.1   13.3   13.5   13.7   13.9   14.09  14.29  14.49  14.69
  14.89  15.09  15.28  15.48  15.68  15.88  16.08  16.28  16.47  16.67
  16.87  17.07  17.27  17.47  17.67  17.86  18.06  18.26  18.46  18.66
  18.86  19.05  19.25  19.45  19.65  19.85  20.05  20.24  20.44  20.64
  20.84  21.04  21.24  21.43  21.63  21.83  22.03  22.23  22.43  22.63
  22.82  23.02  23.22  23.42  23.62  23.82  24.01  24.21  24.41  24.61
  24.81  25.01  25.2   25.4   25.6   25.8   26.    26.2   26.39  26.59
  26.79  26.99  27.19  27.39  27.59  27.78  27.98  28.18  28.38  28.58
  28.78  28.97  29.17  29.37  29.57  29.77  29.97  30.16  30.36  30.56
  30.76  30.96  31.16  31.35  31.55  31.75  31.95  32.15  32.35  32.55
  32.74  32.94  33.14  33.34  33.54  33.74  33.93  34.13  34.33  34.53
  34.73  34.93  35.12  35.32  35.52  35.72  35.92  36.12  36.31  36.51
  36.71  36.91  37.11  37.31  37.51  37.7   37.9   38.1   38.3   38.5
  38.7   38.89  39.09  39.29  39.49  39.69  39.89  40.08  40.28  40.48
  40.68  40.88  41.08  41.27  41.47  41.67  41.87  42.07  42.27  42.46
  42.66  42.86  43.06  43.26  43.46  43.66  43.85  44.05  44.25  44.45
  44.65  44.85  45.04  45.24  45.44  45.64  45.84  46.04  46.23  46.43
  46.63  46.83  47.03  47.23  47.42  47.62  47.82  48.02  48.22  48.42
  48.62  48.81  49.01  49.21  49.41  49.61  49.81  50.    50.2   50.4
  50.6   50.8   51.    51.19  51.39  51.59  51.79  51.99  52.19  52.38
  52.58  52.78  52.98  53.18  53.38  53.58  53.77  53.97  54.17  54.37
  54.57  54.77  54.96  55.16  55.36  55.56  55.76  55.96  56.15  56.35
  56.55  56.75  56.95  57.15  57.34  57.54  57.74  57.94  58.14  58.34
  58.54  58.73  58.93  59.13  59.33  59.53  59.73  59.92  60.12  60.32
  60.52  60.72  60.92  61.11  61.31  61.51  61.71  61.91  62.11  62.3
  62.5   62.7   62.9   63.1   63.3   63.49  63.69  63.89  64.09  64.29
  64.49  64.69  64.88  65.08  65.28  65.48  65.68  65.88  66.07  66.27
  66.47  66.67  66.87  67.07  67.26  67.46  67.66  67.86  68.06  68.26
  68.45  68.65  68.85  69.05  69.25  69.45  69.65  69.84  70.04  70.24
  70.44  70.64  70.84  71.03  71.23  71.43  71.63  71.83  72.03  72.22
  72.42  72.62  72.82  73.02  73.22  73.41  73.61  73.81  74.01  74.21
  74.41  74.61  74.8   75.    75.2   75.4   75.6   75.8   75.99  76.19
  76.39  76.59  76.79  76.99  77.18  77.38  77.58  77.78  77.98  78.18
  78.37  78.57  78.77  78.97  79.17  79.37  79.57  79.76  79.96  80.16
  80.36  80.56  80.76  80.95  81.15  81.35  81.55  81.75  81.95  82.14
  82.34  82.54  82.74  82.94  83.14  83.33  83.53  83.73  83.93  84.13
  84.33  84.53  84.72  84.92  85.12  85.32  85.52  85.72  85.91  86.11
  86.31  86.51  86.71  86.91  87.1   87.3   87.5   87.7   87.9   88.1
  88.29  88.49  88.69  88.89  89.09  89.29  89.48  89.68  89.88  90.08
  90.28  90.48  90.68  90.87  91.07  91.27  91.47  91.67  91.87  92.06
  92.26  92.46  92.66  92.86  93.06  93.25  93.45  93.65  93.85  94.05
  94.25  94.44  94.64  94.84  95.04  95.24  95.44  95.64  95.83  96.03
  96.23  96.43  96.63  96.83  97.02  97.22  97.42  97.62  97.82  98.02
  98.21  98.41  98.61  98.81  99.01  99.21  99.4   99.6   99.8  100.  ]
my_label=
[ 0.84  0.93  0.98  1.    0.98  0.91  0.81  0.68  0.53  0.35  0.16 -0.04
 -0.24 -0.42 -0.59 -0.74 -0.86 -0.94 -0.99 -1.   -0.97 -0.9  -0.79 -0.66
 -0.5  -0.32 -0.12  0.07  0.27  0.45  0.62  0.76  0.88  0.95  0.99  1.
  0.96  0.88  0.77  0.63  0.47  0.29  0.09 -0.11 -0.3  -0.48 -0.65 -0.78
 -0.89 -0.96 -1.   -0.99 -0.95 -0.87 -0.75 -0.61 -0.44 -0.25 -0.06  0.14
  0.33  0.51  0.67  0.8   0.9   0.97  1.    0.99  0.94  0.85  0.73  0.58
  0.41  0.22  0.03 -0.17 -0.36 -0.54 -0.69 -0.82 -0.92 -0.98 -1.   -0.98
 -0.93 -0.83 -0.71 -0.56 -0.38 -0.19  0.01  0.2   0.39  0.57  0.72  0.84
  0.93  0.98  1.    0.98  0.91  0.82  0.69  0.53  0.35  0.16 -0.04 -0.24
 -0.42 -0.59 -0.74 -0.86 -0.94 -0.99 -1.   -0.97 -0.9  -0.8  -0.66 -0.5
 -0.32 -0.13  0.07  0.27  0.45  0.62  0.76  0.87  0.95  0.99  1.    0.96
  0.88  0.78  0.64  0.47  0.29  0.09 -0.1  -0.3  -0.48 -0.64 -0.78 -0.89
 -0.96 -1.   -0.99 -0.95 -0.87 -0.75 -0.61 -0.44 -0.26 -0.06  0.14  0.33
  0.51  0.67  0.8   0.9   0.97  1.    0.99  0.94  0.85  0.73  0.58  0.41
  0.22  0.03 -0.17 -0.36 -0.54 -0.69 -0.82 -0.92 -0.98 -1.   -0.98 -0.93
 -0.83 -0.71 -0.56 -0.38 -0.19  0.    0.2   0.39  0.56  0.72  0.84  0.93
  0.98  1.    0.98  0.91  0.82  0.69  0.53  0.35  0.16 -0.04 -0.23 -0.42
 -0.59 -0.74 -0.86 -0.94 -0.99 -1.   -0.97 -0.9  -0.8  -0.66 -0.5  -0.32
 -0.13  0.07  0.26  0.45  0.62  0.76  0.87  0.95  0.99  1.    0.96  0.89
  0.78  0.64  0.47  0.29  0.1  -0.1  -0.3  -0.48 -0.64 -0.78 -0.89 -0.96
 -1.   -0.99 -0.95 -0.87 -0.76 -0.61 -0.44 -0.26 -0.06  0.13  0.33  0.51
  0.67  0.8   0.9   0.97  1.    0.99  0.94  0.85  0.73  0.59  0.41  0.23
  0.03 -0.17 -0.36 -0.54 -0.69 -0.82 -0.92 -0.98 -1.   -0.98 -0.93 -0.84
 -0.71 -0.56 -0.38 -0.19  0.    0.2   0.39  0.56  0.71  0.84  0.93  0.98
  1.    0.98  0.91  0.82  0.69  0.53  0.35  0.16 -0.04 -0.23 -0.42 -0.59
 -0.74 -0.86 -0.94 -0.99 -1.   -0.97 -0.9  -0.8  -0.66 -0.5  -0.32 -0.13
  0.07  0.26  0.45  0.62  0.76  0.87  0.95  0.99  1.    0.96  0.89  0.78
  0.64  0.47  0.29  0.1  -0.1  -0.29 -0.48 -0.64 -0.78 -0.89 -0.96 -1.
 -0.99 -0.95 -0.87 -0.76 -0.61 -0.45 -0.26 -0.06  0.13  0.33  0.51  0.67
  0.8   0.9   0.97  1.    0.99  0.94  0.85  0.74  0.59  0.42  0.23  0.03
 -0.17 -0.36 -0.53 -0.69 -0.82 -0.92 -0.98 -1.   -0.98 -0.93 -0.84 -0.71
 -0.56 -0.39 -0.2   0.    0.2   0.39  0.56  0.71  0.84  0.93  0.98  1.
  0.98  0.92  0.82  0.69  0.53  0.36  0.16 -0.03 -0.23 -0.42 -0.59 -0.74
 -0.85 -0.94 -0.99 -1.   -0.97 -0.9  -0.8  -0.67 -0.5  -0.32 -0.13  0.07
  0.26  0.45  0.61  0.76  0.87  0.95  0.99  1.    0.96  0.89  0.78  0.64
  0.48  0.29  0.1  -0.1  -0.29 -0.48 -0.64 -0.78 -0.89 -0.96 -1.   -0.99
 -0.95 -0.87 -0.76 -0.61 -0.45 -0.26 -0.07  0.13  0.32  0.5   0.66  0.8
  0.9   0.97  1.    0.99  0.94  0.86  0.74  0.59  0.42  0.23  0.03 -0.16
 -0.35 -0.53 -0.69 -0.82 -0.92 -0.98 -1.   -0.98 -0.93 -0.84 -0.71 -0.56
 -0.39 -0.2  -0.    0.2   0.39  0.56  0.71  0.84  0.93  0.98  1.    0.98
  0.92  0.82  0.69  0.53  0.36  0.17 -0.03 -0.23 -0.42 -0.59 -0.73 -0.85
 -0.94 -0.99 -1.   -0.97 -0.9  -0.8  -0.67 -0.51]
loader[0]=[tensor([84.7234, 50.5992, 31.9499, 72.4228, 30.7595], dtype=torch.float64), tensor([ 0.0994,  0.3276,  0.5090, -0.1655, -0.6103], dtype=torch.float64)]
loader[1]=[tensor([93.4529, 27.9820, 60.5190, 38.2986, 27.5852], dtype=torch.float64), tensor([-0.7138,  0.2882, -0.7371,  0.5642,  0.6359], dtype=torch.float64)]
loader[2]=[tensor([86.9058, 34.9259, 56.5511, 83.5331, 24.2124], dtype=torch.float64), tensor([-0.8718, -0.3601,  0.0024,  0.9608, -0.7958], dtype=torch.float64)]
loader[3]=[tensor([43.8537, 10.7214, 36.1162, 85.7154, 11.9118], dtype=torch.float64), tensor([-0.1282, -0.9627, -0.9999, -0.7786, -0.6088], dtype=torch.float64)]
loader[4]=[tensor([35.7194, 83.3347, 90.0802, 57.1463, 75.2004], dtype=torch.float64), tensor([-0.9176,  0.9966,  0.8552,  0.5627, -0.1965], dtype=torch.float64)]
loader[5]=[tensor([ 3.7776, 36.5130, 52.1864, 57.9399, 58.5351], dtype=torch.float64), tensor([-0.5940, -0.9269,  0.9393,  0.9839,  0.9149], dtype=torch.float64)]
loader[6]=[tensor([19.2525, 78.3747, 87.3026, 83.9299, 13.8958], dtype=torch.float64), tensor([ 0.3921,  0.1643, -0.6147,  0.7790,  0.9710], dtype=torch.float64)]
loader[7]=[tensor([70.4389, 39.6874, 68.6533, 88.6914, 94.6433], dtype=torch.float64), tensor([ 0.9697,  0.9141, -0.4455,  0.6645,  0.3853], dtype=torch.float64)]
loader[8]=[tensor([74.8036, 97.8176, 75.0020, 77.5812, 81.9459], dtype=torch.float64), tensor([-0.5602, -0.4153, -0.3859,  0.8184,  0.2614], dtype=torch.float64)]
loader[9]=[tensor([68.8517, 26.3948,  8.1423, 87.1042, 35.1242], dtype=torch.float64), tensor([-0.2603,  0.9527,  0.9587, -0.7581, -0.5369], dtype=torch.float64)]
loader[10]=[tensor([25.7996, 69.0501, 32.9419, 84.9218, 85.5170], dtype=torch.float64), tensor([ 0.6185, -0.0649,  0.9990, -0.0987, -0.6396], dtype=torch.float64)]
loader[11]=[tensor([42.8617, 54.3687, 33.1403, 35.9178, 40.4810], dtype=torch.float64), tensor([-0.9004, -0.8201,  0.9882, -0.9779,  0.3520], dtype=torch.float64)]
loader[12]=[tensor([66.0741, 78.9699, 89.4850, 28.9739, 38.1002], dtype=torch.float64), tensor([-0.1005, -0.4170,  0.9987, -0.6439,  0.3904], dtype=torch.float64)]
loader[13]=[tensor([16.0782, 54.5671, 36.7114, 74.6052,  2.9840], dtype=torch.float64), tensor([-0.3618, -0.9168, -0.8348, -0.7125,  0.1570], dtype=torch.float64)]
loader[14]=[tensor([92.2625,  3.5792, 14.6894, 47.6232, 89.0882], dtype=torch.float64), tensor([-0.9153, -0.4237,  0.8514, -0.4789,  0.9017], dtype=torch.float64)]
loader[15]=[tensor([90.4770, 66.2725, 11.5150, 11.1182, 64.4870], dtype=torch.float64), tensor([ 0.5885, -0.2947, -0.8681, -0.9925,  0.9964], dtype=torch.float64)]
loader[16]=[tensor([76.7876, 68.0581, 32.1483, 13.4990, 77.1844], dtype=torch.float64), tensor([ 0.9836, -0.8708,  0.6686,  0.8032,  0.9769], dtype=torch.float64)]
loader[17]=[tensor([18.4589, 64.6854, 73.0180, 84.3267, 19.6493], dtype=torch.float64), tensor([-0.3808,  0.9603, -0.6899,  0.4762,  0.7172], dtype=torch.float64)]
loader[18]=[tensor([73.6132, 25.6012,  1.9920, 29.9659, 75.7956], dtype=torch.float64), tensor([-0.9771,  0.4515,  0.9126, -0.9927,  0.3870], dtype=torch.float64)]
loader[19]=[tensor([53.1784, 40.0842, 84.5251, 50.4008, 31.5531], dtype=torch.float64), tensor([0.2267, 0.6864, 0.2936, 0.1349, 0.1367], dtype=torch.float64)]
loader[20]=[tensor([24.6092,  6.1583, 80.9539, 81.7475, 21.8317], dtype=torch.float64), tensor([-0.4999, -0.1245, -0.6650,  0.0660,  0.1588], dtype=torch.float64)]
loader[21]=[tensor([69.8437, 95.6353, 35.3226, 17.0701, 77.3828], dtype=torch.float64), tensor([ 0.6659,  0.9832, -0.6926, -0.9783,  0.9156], dtype=torch.float64)]
loader[22]=[tensor([41.8697, 12.1102, 91.6673, 30.9579,  3.3808], dtype=torch.float64), tensor([-0.8568, -0.4405, -0.5322, -0.4422, -0.2369], dtype=torch.float64)]
loader[23]=[tensor([76.3908, 69.2485, 66.6693, 96.8257, 90.6754], dtype=torch.float64), tensor([ 0.8374,  0.1331, -0.6411,  0.5343,  0.4176], dtype=torch.float64)]
loader[24]=[tensor([67.8597, 34.7275, 20.4429, 29.3707, 71.8277], dtype=torch.float64), tensor([-0.9506, -0.1691,  0.9997, -0.8896,  0.4159], dtype=torch.float64)]
loader[25]=[tensor([98.0160, 99.0080, 13.1022, 68.4549, 26.7916], dtype=torch.float64), tensor([-0.5864, -0.9989,  0.5106, -0.6132,  0.9961], dtype=torch.float64)]
loader[26]=[tensor([61.9078, 30.1643, 81.3507, 54.7655, 73.2164], dtype=torch.float64), tensor([-0.7980, -0.9495, -0.3247, -0.9775, -0.8191], dtype=torch.float64)]
loader[27]=[tensor([98.8096, 23.2204, 92.8577, 46.8297, 53.7735], dtype=torch.float64), tensor([-0.9887, -0.9423, -0.9837,  0.2900, -0.3583], dtype=torch.float64)]
loader[28]=[tensor([74.0100, 56.3527, 17.4669, 37.9018,  4.5711], dtype=torch.float64), tensor([-0.9834, -0.1947, -0.9823,  0.2013, -0.9900], dtype=torch.float64)]
loader[29]=[tensor([61.5110, 58.3367, 48.2184, 12.5070, 13.6974], dtype=torch.float64), tensor([-0.9689,  0.9765, -0.8887, -0.0593,  0.9048], dtype=torch.float64)]
loader[30]=[tensor([40.6794, 90.2786, 22.4269, 54.1703, 10.5230], dtype=torch.float64), tensor([ 0.1606,  0.7363, -0.4220, -0.6913, -0.8904], dtype=torch.float64)]
loader[31]=[tensor([30.5611, 80.5571, 90.8737,  9.5311, 39.0922], dtype=torch.float64), tensor([-0.7544, -0.9020,  0.2304, -0.1061,  0.9842], dtype=torch.float64)]
loader[32]=[tensor([47.0281, 95.0401, 64.2886,  1.5952, 21.6333], dtype=torch.float64), tensor([0.0957, 0.7120, 0.9935, 0.9997, 0.3503], dtype=torch.float64)]
loader[33]=[tensor([ 1.3968, 91.4689, 32.3467, 55.9559, 32.7435], dtype=torch.float64), tensor([ 0.9849, -0.3548,  0.8021, -0.5586,  0.9706], dtype=torch.float64)]
loader[34]=[tensor([20.2445, 45.0441, 65.4790,  5.5631, 44.2505], dtype=torch.float64), tensor([ 0.9846,  0.8732,  0.4746, -0.6594,  0.2650], dtype=torch.float64)]
loader[35]=[tensor([41.2745, 29.7675, 39.2906,  1.1984, 39.4890], dtype=torch.float64), tensor([-0.4204, -0.9970,  0.9998,  0.9315,  0.9761], dtype=torch.float64)]
loader[36]=[tensor([67.2645, 85.1202, 62.8998, 47.8216,  6.7535], dtype=torch.float64), tensor([-0.9611, -0.2929,  0.0679, -0.6425,  0.4532], dtype=torch.float64)]
loader[37]=[tensor([97.4208,  2.3888, 88.8898, 65.0822, 59.3287], dtype=torch.float64), tensor([-0.0315,  0.6837,  0.7987,  0.7779,  0.3538], dtype=torch.float64)]
loader[38]=[tensor([11.7134, 72.0261, 86.3106, 28.3788, 88.0962], dtype=torch.float64), tensor([-0.7532,  0.2285, -0.9965, -0.1042,  0.1312], dtype=torch.float64)]
loader[39]=[tensor([34.3307, 36.3146, 26.5932, 46.4329, 42.2665], dtype=torch.float64), tensor([ 0.2249, -0.9827,  0.9939,  0.6373, -0.9895], dtype=torch.float64)]
loader[40]=[tensor([99.4048, 76.5892, 23.0220, 24.0140,  2.1904], dtype=torch.float64), tensor([-0.9028,  0.9287, -0.8578, -0.8995,  0.8141], dtype=torch.float64)]
loader[41]=[tensor([51.1944, 92.4609, 38.6954, 51.7896, 59.7255], dtype=torch.float64), tensor([ 0.8010, -0.9767,  0.8395,  0.9989, -0.0352], dtype=torch.float64)]
loader[42]=[tensor([75.5972, 49.8056, 83.1363, 12.3086,  1.7936], dtype=torch.float64), tensor([ 0.1977, -0.4438,  0.9933, -0.2549,  0.9753], dtype=torch.float64)]
loader[43]=[tensor([65.2806, 82.3427, 14.0942,  4.9679, 49.0120], dtype=torch.float64), tensor([ 0.6388,  0.6141,  0.9991, -0.9675, -0.9501], dtype=torch.float64)]
loader[44]=[tensor([29.1723, 75.9940, 57.3447, 71.2325, 63.8918], dtype=torch.float64), tensor([-0.7821,  0.5611,  0.7146,  0.8543,  0.8723], dtype=torch.float64)]
loader[45]=[tensor([82.5411, 23.8156, 67.4629, 18.0621, 34.5291], dtype=torch.float64), tensor([ 0.7576, -0.9680, -0.9967, -0.7085,  0.0285], dtype=torch.float64)]
loader[46]=[tensor([ 94.8417, 100.0000,  51.3928,  64.0902,  30.3627], dtype=torch.float64), tensor([ 0.5596, -0.5064,  0.9033,  0.9516, -0.8690], dtype=torch.float64)]
loader[47]=[tensor([45.2425, 66.4709, 80.3587, 75.3988, 22.2285], dtype=torch.float64), tensor([ 9.5215e-01, -4.7723e-01, -9.6938e-01,  5.7391e-04, -2.3509e-01],
       dtype=torch.float64)]
loader[48]=[tensor([ 6.3567, 76.9860, 86.7074, 57.7415, 61.1142], dtype=torch.float64), tensor([ 0.0735,  0.9999, -0.9512,  0.9294, -0.9892], dtype=torch.float64)]
loader[49]=[tensor([ 5.1663, 17.8637, 50.2024, 15.4830, 79.3667], dtype=torch.float64), tensor([-0.8987, -0.8337, -0.0630,  0.2231, -0.7358], dtype=torch.float64)]
loader[50]=[tensor([96.4289,  5.9599, 33.3387, 31.3547, 52.3848], dtype=torch.float64), tensor([ 0.8195, -0.3177,  0.9387, -0.0612,  0.8533], dtype=torch.float64)]
loader[51]=[tensor([47.2265, 94.4449, 97.6192, 22.8236, 62.1062], dtype=torch.float64), tensor([-0.1024,  0.1958, -0.2278, -0.7396, -0.6636], dtype=torch.float64)]
loader[52]=[tensor([31.7515,  6.5551, 63.0982, 63.6934, 41.6713], dtype=torch.float64), tensor([ 0.3293,  0.2686,  0.2632,  0.7588, -0.7384], dtype=torch.float64)]
loader[53]=[tensor([ 9.9279, 70.2405, 56.7495, 61.3126, 50.7976], dtype=torch.float64), tensor([-0.4821,  0.9025,  0.1995, -0.9987,  0.5074], dtype=torch.float64)]
loader[54]=[tensor([52.9800, 81.1523, 98.4128, 93.0561,  2.5872], dtype=torch.float64), tensor([ 0.4142, -0.5048, -0.8539, -0.9290,  0.5264], dtype=torch.float64)]
loader[55]=[tensor([60.3206, 72.8196, 19.0541, 79.5651, 91.8657], dtype=torch.float64), tensor([-0.5895, -0.5337,  0.2031, -0.8549, -0.6886], dtype=torch.float64)]
loader[56]=[tensor([ 7.5471, 19.8477, 87.8978, 97.2224, 18.8557], dtype=torch.float64), tensor([ 0.9533,  0.8405, -0.0667,  0.1662,  0.0062], dtype=torch.float64)]
loader[57]=[tensor([26.9900, 25.9980, 27.3868, 67.0661, 73.4148], dtype=torch.float64), tensor([ 0.9593,  0.7613,  0.7755, -0.8879, -0.9161], dtype=torch.float64)]
loader[58]=[tensor([63.2966, 82.1443, 22.0301, 85.3186, 89.6834], dtype=torch.float64), tensor([ 0.4482,  0.4465, -0.0389, -0.4756,  0.9891], dtype=torch.float64)]
loader[59]=[tensor([69.4469, 61.7094, 58.1383, 59.5271, 96.2305], dtype=torch.float64), tensor([ 0.3258, -0.9012,  0.9998,  0.1625,  0.9164], dtype=torch.float64)]
loader[60]=[tensor([95.8337, 64.8838, 40.2826, 50.9960, 77.9780], dtype=torch.float64), tensor([0.9999, 0.8865, 0.5296, 0.6672, 0.5328], dtype=torch.float64)]
loader[61]=[tensor([48.0200, 78.5731, 96.0321, 37.7034, 63.4950], dtype=torch.float64), tensor([-0.7809, -0.0333,  0.9773,  0.0043,  0.6156], dtype=torch.float64)]
loader[62]=[tensor([71.6293, 20.0461, 58.7335, 57.5431, 28.5772], dtype=torch.float64), tensor([ 0.5870,  0.9308,  0.8173,  0.8384, -0.2982], dtype=torch.float64)]
loader[63]=[tensor([86.5090, 46.6313, 88.4930,  7.3487, 98.6112], dtype=torch.float64), tensor([-0.9934,  0.4729,  0.5041,  0.8750, -0.9397], dtype=torch.float64)]
loader[64]=[tensor([14.2926, 62.3046,  8.5391, 44.0521, 23.4188], dtype=torch.float64), tensor([ 0.9879, -0.5032,  0.7744,  0.0698, -0.9898], dtype=torch.float64)]
loader[65]=[tensor([18.6573, 99.6032, 32.5451, 89.2866, 55.5591], dtype=torch.float64), tensor([-0.1911, -0.8003,  0.9041,  0.9692, -0.8358], dtype=torch.float64)]
loader[66]=[tensor([40.8778,  8.3407, 25.0060, 38.4970, 43.2585], dtype=torch.float64), tensor([-0.0370,  0.8839, -0.1264,  0.7159, -0.6622], dtype=torch.float64)]
loader[67]=[tensor([79.1683, 99.8016, 29.5691,  7.9439, 65.8758], dtype=torch.float64), tensor([-0.5879, -0.6664, -0.9622,  0.9960,  0.0975], dtype=torch.float64)]
loader[68]=[tensor([15.6814, 79.9619, 52.5832,  9.7295, 43.0601], dtype=torch.float64), tensor([ 0.0266, -0.9890,  0.7338, -0.3000, -0.7969], dtype=torch.float64)]
loader[69]=[tensor([ 3.1824, 59.1303, 39.8858, 33.5371, 45.6393], dtype=torch.float64), tensor([-0.0408,  0.5312,  0.8163,  0.8523,  0.9963], dtype=torch.float64)]
loader[70]=[tensor([71.4309,  2.7856, 25.4028,  7.1503, 17.6653], dtype=torch.float64), tensor([ 0.7351,  0.3485,  0.2668,  0.7625, -0.9262], dtype=torch.float64)]
loader[71]=[tensor([25.2044, 74.4068, 16.2766, 98.2144, 12.9038], dtype=torch.float64), tensor([ 0.0716, -0.8368, -0.5384, -0.7346,  0.3311], dtype=torch.float64)]
loader[72]=[tensor([78.1764, 16.8717, 58.9319, 78.7715, 67.6613], dtype=torch.float64), tensor([ 0.3555, -0.9183,  0.6878, -0.2297, -0.9932], dtype=torch.float64)]
loader[73]=[tensor([55.7575, 41.0762, 15.2846, 47.4248, 42.6633], dtype=torch.float64), tensor([-0.7112, -0.2333,  0.4109, -0.2964, -0.9685], dtype=torch.float64)]
loader[74]=[tensor([ 4.7695, 93.2545, 94.0481, 21.2365, 48.4168], dtype=torch.float64), tensor([-0.9984, -0.8378, -0.1984,  0.6851, -0.9616], dtype=torch.float64)]
loader[75]=[tensor([69.6453, 60.7174, 13.3006, 70.0421, 93.6513], dtype=torch.float64), tensor([ 0.5058, -0.8558,  0.6700,  0.7999, -0.5617], dtype=torch.float64)]
loader[76]=[tensor([70.6373, 44.6473, 35.5210, 27.1884, 87.5010], dtype=torch.float64), tensor([ 0.9988,  0.6171, -0.8212,  0.8847, -0.4472], dtype=torch.float64)]
loader[77]=[tensor([91.2705, 43.4569, 18.2605, 44.8457, 33.7355], dtype=torch.float64), tensor([-0.1636, -0.5015, -0.5556,  0.7601,  0.7325], dtype=torch.float64)]
loader[78]=[tensor([45.8377, 46.0361, 77.7796, 53.5752, 99.2064], dtype=torch.float64), tensor([ 0.9598,  0.8856,  0.6891, -0.1673, -0.9698], dtype=torch.float64)]
loader[79]=[tensor([ 9.1343, 37.5050, 87.6994, 20.8397, 95.4369], dtype=torch.float64), tensor([ 0.2864, -0.1929, -0.2621,  0.9134,  0.9280], dtype=torch.float64)]
loader[80]=[tensor([ 1.0000, 86.1122, 71.0341, 41.4729, 31.1563], dtype=torch.float64), tensor([ 0.8415, -0.9606,  0.9400, -0.5910, -0.2567], dtype=torch.float64)]
loader[81]=[tensor([50.0040, 55.1623, 21.0381, 27.7836, 66.8677], dtype=torch.float64), tensor([-0.2585, -0.9830,  0.8152,  0.4713, -0.7798], dtype=torch.float64)]
loader[82]=[tensor([84.1283, 89.8818, 56.9479, 62.7014, 49.2104], dtype=torch.float64), tensor([ 0.6402,  0.9406,  0.3887, -0.1301, -0.8699], dtype=torch.float64)]
loader[83]=[tensor([81.5491, 53.9719, 11.3166, 28.1804, 38.8938], dtype=torch.float64), tensor([-0.1319, -0.5353, -0.9489,  0.0938,  0.9301], dtype=torch.float64)]
loader[84]=[tensor([44.4489, 91.0721, 15.0862, 96.6273, 82.7395], dtype=torch.float64), tensor([0.4499, 0.0340, 0.5825, 0.6905, 0.8714], dtype=torch.float64)]
loader[85]=[tensor([ 8.9359, 93.8497, 33.9339, 37.1082, 16.6733], dtype=torch.float64), tensor([ 0.4697, -0.3876,  0.5840, -0.5571, -0.8223], dtype=torch.float64)]
loader[86]=[tensor([52.7816, 59.9238,  4.3727, 54.9639, 60.9158], dtype=torch.float64), tensor([ 0.5855, -0.2315, -0.9429, -0.9999, -0.9410], dtype=torch.float64)]
loader[87]=[tensor([37.3066, 74.2084, 22.6253, 68.2565, 34.1323], dtype=torch.float64), tensor([-0.3825, -0.9283, -0.5925, -0.7569,  0.4126], dtype=torch.float64)]
loader[88]=[tensor([76.1924, 94.2465, 28.7756,  7.7455, 14.8878], dtype=torch.float64), tensor([ 0.7133, -0.0013, -0.4805,  0.9941,  0.7313], dtype=torch.float64)]
loader[89]=[tensor([51.5912, 95.2385, 80.1603, 42.4649,  9.3327], dtype=torch.float64), tensor([ 0.9701,  0.8364, -0.9988, -0.9986,  0.0920], dtype=torch.float64)]
loader[90]=[tensor([19.4509, 36.9098,  4.1743, 92.6593, 24.8076], dtype=torch.float64), tensor([ 0.5658, -0.7099, -0.8587, -0.9998, -0.3194], dtype=torch.float64)]
loader[91]=[tensor([72.6212, 83.7315, 21.4349, 23.6172, 79.7635], dtype=torch.float64), tensor([-0.3566,  0.8873,  0.5280, -0.9985, -0.9404], dtype=torch.float64)]
loader[92]=[tensor([49.4088, 17.2685, 62.5030, 12.7054, 82.9379], dtype=torch.float64), tensor([-0.7557, -0.9999, -0.3230,  0.1386,  0.9510], dtype=torch.float64)]
loader[93]=[tensor([10.1263,  3.9760, 65.6774, 10.9198, 45.4409], dtype=torch.float64), tensor([-0.6453, -0.7409,  0.2918, -0.9971,  0.9937], dtype=torch.float64)]
loader[94]=[tensor([49.6072, 46.2345,  8.7375, 15.8798, 73.8116], dtype=torch.float64), tensor([-0.6117,  0.7767,  0.6345, -0.1710, -0.9999], dtype=torch.float64)]
loader[95]=[tensor([55.3607,  5.3647, 80.7555, 48.6152, 48.8136], dtype=torch.float64), tensor([-0.9276, -0.7947, -0.7992, -0.9968, -0.9929], dtype=torch.float64)]
loader[96]=[tensor([97.0240, 10.3246, 43.6553, 42.0681, 85.9138], dtype=torch.float64), tensor([ 0.3573, -0.7832, -0.3212, -0.9416, -0.8870], dtype=torch.float64)]
loader[97]=[tensor([ 6.9519, 14.4910, 24.4108,  5.7615, 26.1964], dtype=torch.float64), tensor([ 0.6200,  0.9381, -0.6608, -0.4983,  0.8741], dtype=torch.float64)]
loader[98]=[tensor([53.3768, 20.6413, 56.1543, 16.4749, 70.8357], dtype=torch.float64), tensor([ 0.0303,  0.9757, -0.3842, -0.6940,  0.9888], dtype=torch.float64)]
loader[99]=[tensor([51.9880, 60.1222, 92.0641, 72.2244, 88.2946], dtype=torch.float64), tensor([ 0.9885, -0.4187, -0.8180,  0.0322,  0.3240], dtype=torch.float64)]

Process finished with exit code 0
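One knob from the class list in section 1 that the example leaves at its default is collate_fn, the function DataLoader uses to merge individual (feature, label) samples into one batch. A small sketch (the as_pairs function is made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Six (feature, label) pairs; the default collate function would stack
# them into two batch tensors, one for features and one for labels.
ds = TensorDataset(torch.arange(6.0), torch.arange(6.0) ** 2)

def as_pairs(batch):
    # keep the batch as a list of (feature, label) tuples instead of stacking
    return batch

loader = DataLoader(ds, batch_size=3, collate_fn=as_pairs)
first_batch = next(iter(loader))
print(len(first_batch))  # 3 samples in the batch
print(first_batch[0])    # (tensor(0.), tensor(0.))
```

Note the iter/next pair from section 1 in action: a DataLoader is iterable, and each step of the iterator yields one collated batch.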

