Worth bookmarking! A long-form quick-start guide to Keras

Over the years I have come to a few realizations: one person's energy is limited, and so is one person's field of view, yet you keep discovering that excellent people are right beside you.

Readers of my articles have probably heard me say this before: learn to communicate and share your craft instead of working behind closed doors. One person can travel fast; a group of people can travel far. That line sums up the experience of these years.

This article grew out of discussions and shared notes in our reader group, finally compiled into this "Keras quick-use manual". If you also enjoy learning and exchanging ideas, you can join us through the contact details at the end of the article.

Now, on to the main topic:

Keras quick-use manual

# Check the installed Keras and TensorFlow versions
import keras
import tensorflow as tf

print(keras.__version__)
print(tf.__version__)

Basic commands

import random
import os

import numpy as np
import tensorflow as tf

def seed_everything(seed):
    """Fix every relevant random seed for reproducibility."""
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

seed_everything(2019)

Defining the data inputs and outputs

Loading directly into memory

train_x = np.random.random((1000, 23))
train_y = np.random.random((1000, 1))
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.1, random_state=0, shuffle=True)

Reading data through an iterator

import os

import cv2
import keras
import numpy as np


# helper function for data visualization    
def denormalize(x):
    """Scale image to range 0..1 for correct plot"""
    x_max = np.percentile(x, 98)
    x_min = np.percentile(x, 2)    
    x = (x - x_min) / (x_max - x_min)
    x = x.clip(0, 1)
    return x
    

# classes for data loading and preprocessing
class Dataset:
    """CamVid Dataset. Read images, apply augmentation and preprocessing transformations.
    
    Args:
        images_dir (str): path to images folder
        masks_dir (str): path to segmentation masks folder
        class_values (list): values of classes to extract from segmentation mask
        augmentation (albumentations.Compose): data transformation pipeline 
            (e.g. flip, scale, etc.)
        preprocessing (albumentations.Compose): data preprocessing 
            (e.g. normalization, shape manipulation, etc.)
    
    """

    def __init__(self, images_dir, masks_dir, classes=None, augmentation=None, preprocessing=None,):
        
        self.CLASSES = ['sky', 'building', 'pole', 'road', 'pavement', 
               'tree', 'signsymbol', 'fence', 'car', 
               'pedestrian', 'bicyclist', 'unlabelled']
        self.ids = os.listdir(images_dir)
        self.images_fps = [os.path.join(images_dir, image_id) for image_id in self.ids]
        self.masks_fps = [os.path.join(masks_dir, image_id) for image_id in self.ids]
        
        # convert str names to class values on masks
        self.class_values = [self.CLASSES.index(cls.lower()) for cls in (classes or self.CLASSES)]
        
        self.augmentation = augmentation
        self.preprocessing = preprocessing
    
    def __getitem__(self, i):
        
        # read data
        image = cv2.imread(self.images_fps[i])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = cv2.resize(image,(480,480))
        mask = cv2.imread(self.masks_fps[i], 0)
        mask = cv2.resize(mask,(480,480))
        
        # extract certain classes from mask (e.g. cars)
        masks = [(mask == v) for v in self.class_values]
        mask = np.stack(masks, axis=-1).astype('float')
        
        # add background if mask is not binary
        if mask.shape[-1] != 1:
            background = 1 - mask.sum(axis=-1, keepdims=True)
            mask = np.concatenate((mask, background), axis=-1)
        
        # apply augmentations
        if self.augmentation:
            sample = self.augmentation(image=image, mask=mask)
            image, mask = sample['image'], sample['mask']
        
        # apply preprocessing
        if self.preprocessing:
            sample = self.preprocessing(image=image, mask=mask)
            image, mask = sample['image'], sample['mask']
            
        return image, mask
        
    def __len__(self):
        return len(self.ids)
    
    
class Dataloder(keras.utils.Sequence):
    """Load data from dataset and form batches
    
    Args:
        dataset: instance of Dataset class for image loading and preprocessing.
        batch_size: Integer number of images in batch.
        shuffle: Boolean, if `True` shuffle image indexes each epoch.
    """
    
    def __init__(self, dataset, batch_size=1, augment=None, shuffle=False):
        self.dataset = dataset
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.indexes = np.arange(len(dataset))
        self.augment = augment

        self.on_epoch_end()

    def __getitem__(self, i):
        
        # collect batch data
        start = i * self.batch_size
        stop = (i + 1) * self.batch_size
        data = []
        for j in range(start, stop):
            image, mask = self.dataset[self.indexes[j]]
            if self.augment is not None:
                augmented = self.augment(image=image, mask=mask)
                image, mask = augmented['image'], augmented['mask']
            data.append((image, mask))
        # transpose the list of (image, mask) pairs into batched arrays
        batch = [np.stack(samples, axis=0) for samples in zip(*data)]
        
        return batch
    
    def __len__(self):
        """Denotes the number of batches per epoch"""
        return len(self.indexes) // self.batch_size
    
    def on_epoch_end(self):
        """Callback function to shuffle indexes each epoch"""
        if self.shuffle:
            self.indexes = np.random.permutation(self.indexes)  


# Datasets for the training and validation images
train_dataset = Dataset(x_train_dir, y_train_dir)
valid_dataset = Dataset(x_valid_dir, y_valid_dir)

# check shapes for errors
BATCH_SIZE = 4
train_dataloader = Dataloder(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
valid_dataloader = Dataloder(valid_dataset, batch_size=BATCH_SIZE, shuffle=False)
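Before training, it helps to pull a single batch and confirm the shapes. A minimal check, assuming x_train_dir / y_train_dir point at real CamVid-style data:

# Fetch one batch from the Sequence and inspect it
images, masks = train_dataloader[0]
print(images.shape, masks.shape)  # e.g. (4, 480, 480, 3) and (4, 480, 480, 13)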

Data augmentation

from albumentations import (
  Compose, HorizontalFlip, CLAHE, HueSaturationValue,
  RandomBrightness, RandomContrast, RandomGamma, OneOf,
  ToFloat, ShiftScaleRotate, GridDistortion, ElasticTransform, JpegCompression,
  RGBShift, Blur, MotionBlur, MedianBlur, GaussNoise, CenterCrop,
  IAAAdditiveGaussianNoise, OpticalDistortion, RandomSizedCrop
)

# Crop sizes for RandomSizedCrop; assumed here to match the 480x480 resize used in Dataset
sz, h, w = 480, 480, 480

AUGMENTATIONS_TRAIN = Compose([
  HorizontalFlip(p=0.5),
  OneOf([
    RandomContrast(),
    RandomGamma(),
    RandomBrightness(),
  ], p=0.3),
  OneOf([
    ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
    GridDistortion(),
    OpticalDistortion(distort_limit=2, shift_limit=0.5),
  ], p=0.3),
  RandomSizedCrop(min_max_height=(sz // 2, sz), height=h, width=w, p=0.5),
  ToFloat(max_value=1)
], p=1)

AUGMENTATIONS_TEST = Compose([
  ToFloat(max_value=1)
], p=1)

train_dataloader = Dataloder(train_dataset, batch_size=BATCH_SIZE,augment=AUGMENTATIONS_TRAIN,shuffle=True)
valid_dataloader = Dataloder(valid_dataset, batch_size=BATCH_SIZE, augment=AUGMENTATIONS_TEST,shuffle=False)

Defining the network model

from keras.layers import *
from keras.models import *
import warnings
warnings.filterwarnings("ignore")

def my_model(input_shape, with_dropout= True):
    m_input = Input(input_shape)
    out = Conv2D(32, 3,padding='same', activation="relu")(m_input)
    out = BatchNormalization()(out)
    out = MaxPool2D(2)(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = BatchNormalization()(out)
    out = MaxPool2D(2)(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = Conv2D(64, 3, padding='same', activation="relu")(out)
    out = BatchNormalization()(out)
    out = MaxPool2D(2)(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = BatchNormalization()(out)
    out = MaxPool2D(2)(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = Conv2D(128, 3, padding='same', activation="relu")(out)
    out = Flatten()(out)
    out = Dense(64, activation="relu")(out)
    if with_dropout:
        out = Dropout(0.4)(out)
    out = Dense(2, activation="linear")(out)
    model = Model(m_input, out)
    return model

model = my_model((128,128,1))
model.summary()

Getting all the layers of the network

  • Get each layer's name, whether it is trainable, and its weight information
print(" model layers:",len(model.layers))
x = np.random.random((1,128,128,1))
out = model.predict(x)
print("output size:",x.shape)
for layer in model.layers:
    print(layer.name,layer.trainable)
    if len(layer.weights)>0:
        print(layer.weights[0].shape)
        break
import keras.backend as K
from keras.layers import *
import numpy as np


# Build the model first, then load previously saved weights into it
model = my_model((128, 128, 1))
model.load_weights("finall_weights.h5")


def get_layers_output(model, x_input, index):
    middle_layer = K.function([model.input], [model.layers[index].output])
    middle_out = middle_layer([np.expand_dims(x_input, axis=0)])[0]
    return middle_out


############################################################################
## Get the weights of a given layer
layer_dict = dict([(layer.name, layer) for layer in model.layers])
print(layer_dict)
print(model.layers[-1].output)
last_weights = model.layers[-1].get_weights()[0]
last_bias = model.layers[-1].get_weights()[1]
print("last layer weights:", last_weights.shape)
print("last layer bias:", last_bias, last_bias.shape)  ## [0.4908378  0.48844907]

############################################################################
## Get the output of a given layer
x_train = np.random.random((1, 128, 128, 1))
first_layer_out = get_layers_output(model, x_train[0], 1)
print(first_layer_out[:, 1, 1, :3])  ## results: [[0.07100815 0.38357598 0.        ]]
############################################################################


############################################################################
## Toggle whether the model's layers are trainable
def make_trainable(net, val):
    net.trainable = val
    for l in net.layers:
        l.trainable = val
# make_trainable(model, False)
############################################################################

Getting the network's weight parameters

t = model.get_layer('conv2d_14')  # layer names like this depend on creation order in the session
print(t)
for index in t.get_weights():
    print(index.shape)

Freezing certain layers of the network

# Freeze every layer except the BatchNormalization layers
for layer in model.layers:
    if not isinstance(layer, BatchNormalization):
        layer.trainable = False
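One caveat: trainable flags are only baked in when the model is compiled, so re-compile after toggling them. A minimal sketch:

# Changes to layer.trainable only take effect after (re-)compiling
model.compile(optimizer='adam', loss='mse')
print(len(model.trainable_weights), len(model.non_trainable_weights))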

Saving and loading weight parameters

model.save_weights("weights.h5")  ## v1
model.load_weights("weights.h5", by_name=True)  ## v2

# Custom metric that the saved model below was trained with
def IoU(y_true, y_pred, eps=1e-6):
    if K.max(y_true) == 0.0:
        return IoU(1 - y_true, 1 - y_pred)  ## empty image; calc IoU of zeros
    intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
    union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3]) - intersection
    return K.mean((intersection + eps) / (union + eps), axis=0)

Loading a model

from keras.models import load_model
model = load_model('model.h5', custom_objects={'IoU': IoU}) ## v3

model.save("path_to_my_model")

Checking that the weights were loaded successfully

model = get_model()  # get_model() is assumed to build a small compiled model
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)

# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")

# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# # Option 1: Load with the custom_object argument.
# reconstructed_model = keras.models.load_model(
#     "my_model", custom_objects={"CustomModel": CustomModel}
# )

# Let's check:
np.testing.assert_allclose(
    model.predict(test_input), reconstructed_model.predict(test_input)
)

Custom network layers

import keras.backend as K
from keras.layers import *
from keras.models import *
import numpy as np

def sub_mean(x):
    # toy Lambda op: subtracts half of the input (x - x/2)
    x -= x / 2
    return x

class MyLayer(Layer):
    def __init__(self, output_dim, **kw):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kw)

    def build(self, input_shape):
        input_dim = input_shape[1]
        # register a trainable scaling matrix, initialized to ones
        self.SCALER = self.add_weight(
            name='scaler',
            shape=(input_dim, self.output_dim),
            initializer='ones',
            trainable=True,
        )
        super(MyLayer, self).build(input_shape)

    def call(self, x, mask=None):
        return K.dot(x, self.SCALER)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)



def get_submean_model():
    model = Sequential()
    model.add(Dense(5, input_dim=7))
    model.add(MyLayer(1))
    model.add(Lambda(sub_mean,output_shape=lambda input_shape:input_shape))
    model.compile(optimizer='rmsprop', loss='mse')
    return model
model = get_submean_model()
model.summary()
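A quick smoke test of the custom-layer model on random data (the numbers are meaningless, only the shapes matter):

# Dense(5) -> MyLayer(1) -> Lambda keeps the shape, so the output is (3, 1)
x = np.random.random((3, 7))
print(model.predict(x).shape)  # (3, 1)

Custom layers can also be written with add_weight and get_config so they serialize cleanly, as the next example shows.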
import tensorflow as tf
from keras.layers import *
from keras.models import *

class CustomDense(Layer):
    def __init__(self, units=32):
        super(CustomDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
    def get_config(self):
        return {"units": self.units}


inputs = Input((4,))
outputs = CustomDense(10)(inputs)

model = Model(inputs, outputs)
model.summary()
config = model.get_config()
new_model = Model.from_config(config, custom_objects={"CustomDense": CustomDense})
new_model.summary()
out = new_model(tf.zeros((2,4)))
out.shape

Getting a given layer's parameters

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

layer = layers.Dense(3)
print(layer.weights)  # Empty: the weights are only created on the first call

x = tf.zeros((2, 3))
out = layer(x)
out.shape, layer.weights  # now the kernel and bias exist

model = keras.Sequential()
model.add(layers.Dense(2, activation="relu", input_shape=(4,)))

model.summary()

Common optimizers

from keras.optimizers import RMSprop, Adam, SGD, Adadelta
m_sgd = SGD(lr=0.01, momentum=0.9, decay=1e-3, nesterov=False)
from keras.losses import *
from keras.optimizers import *
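The compile call below assumes a functional model with two named outputs, "priority" and "department", and attaches one loss per output plus loss weights. A hypothetical sketch of such a model (the layer sizes are made up):

# Hypothetical two-output model matching the compile call below
inp = Input((128,))
h = Dense(64, activation="relu")(inp)
priority = Dense(1, name="priority")(h)
department = Dense(4, name="department")(h)
model = Model(inp, [priority, department])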
model.compile(
    optimizer=RMSprop(1e-3),
    loss={
        "priority": BinaryCrossentropy(from_logits=True),
        "department": CategoricalCrossentropy(from_logits=True),
    },
    metrics=["acc"],
    loss_weights={"priority": 1.0, "department": 0.2},
)

Common loss functions

from keras.losses import mean_squared_error,sparse_categorical_crossentropy,binary_crossentropy

from keras.optimizers import Adam,SGD

def dice_coef_loss(y_true, y_pred):
    def dice_coef(y_true, y_pred, smooth=1):
        intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
        union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3])
        return K.mean((2. * intersection + smooth) / (union + smooth), axis=0)
    return 1 - dice_coef(y_true, y_pred, smooth=1)

# Custom loss function
def IoU(y_true, y_pred, eps=1e-6):
    if K.max(y_true) == 0.0:
        return IoU(1-y_true, 1-y_pred) ## empty image; calc IoU of zeros
    intersection = K.sum(y_true * y_pred, axis=[1,2,3])
    union = K.sum(y_true, axis=[1,2,3]) + K.sum(y_pred, axis=[1,2,3]) - intersection
    return K.mean( (intersection + eps) / (union + eps), axis=0)

model.compile(optimizer=Adam(lr=0.001),loss=dice_coef_loss, metrics=[IoU])

Common metric functions

"""
categorical_accuracy
accuracy
binary_accuracy
mse
mape
top_k_categorical_accuracy
"""

model.compile(loss="mse", optimizer=RMSprop(lr=0.001),metrics=['mape'])
# Custom validation metric
def define_rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_true - y_pred), axis=-1))

model.compile(optimizer='rmsprop',metrics=[define_rmse],loss = define_rmse)

The training process

# train model
from keras.callbacks import *

callbacks = [
    ModelCheckpoint('./best_model.h5', save_weights_only=True, save_best_only=True, mode='min', verbose=1),
    ReduceLROnPlateau(monitor='val_loss', factor=0.6, patience=6, mode='min', verbose=1),
    EarlyStopping(monitor='val_loss', patience=30, verbose=1),
]

history = model.fit_generator(train_dataloader, steps_per_epoch=len(train_dataloader), epochs=EPOCHS, callbacks=callbacks,
                              validation_data=valid_dataloader,
                              validation_steps=len(valid_dataloader), verbose=1)

## train model on in-memory arrays (checkpoint, early_stopping and r_lr are
## callbacks of the same kinds as defined above)
history = model.fit(x_train, y_train, batch_size=16 * 4, epochs=120, validation_data=(x_test, y_test),
                    callbacks=[checkpoint, early_stopping, r_lr], verbose=1)

Prediction and evaluation

scores = model.evaluate_generator(val_dataloader)
scores = model.predict_generator(test_dataloader)


scores = model.evaluate(x=val_train, y=val_label, batch_size=None, verbose=1)
pred = model.predict(np.expand_dims(image, axis=0))
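For segmentation models like the Dataset above feeds, the predicted class map is usually recovered with an argmax over the channel axis. A small sketch, assuming pred has shape (1, H, W, C):

# Convert a one-hot prediction back to a class-index mask of shape (H, W)
class_mask = np.argmax(pred[0], axis=-1)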
Reusing a functional model inside a custom layer

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
units = 32
timesteps = 10
input_dim = 5

# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)


class CustomRNN(layers.Layer):
    def __init__(self):
        super(CustomRNN, self).__init__()
        self.units = units
        self.projection_1 = layers.Dense(units=units, activation="tanh")
        self.projection_2 = layers.Dense(units=units, activation="tanh")
        # Our previously-defined Functional model
        self.classifier = model

    def call(self, inputs):
        outputs = []
        state = tf.zeros(shape=(inputs.shape[0], self.units))
        for t in range(inputs.shape[1]):
            x = inputs[:, t, :]
            h = self.projection_1(x)
            y = h + self.projection_2(state)
            state = y
            outputs.append(y)
        features = tf.stack(outputs, axis=1)
        return self.classifier(features)


rnn_model = CustomRNN()
out = rnn_model(tf.zeros((1, timesteps, input_dim)))
print(out.shape)
(1, 1)

Keras models vs PyTorch models

2-D convolution layer (keras vs pytorch)

import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

#initialize the layers respectively
torch_layer = nn.Conv2d(    
    in_channels=3,
    out_channels=32,
    kernel_size=(3, 3),
    stride=(1, 1)
)
torch_model = nn.Sequential(
              # nn.ZeroPad2d((2,3,2,3)),
              torch_layer
              )

tf_layer = layers.Conv2D(
    filters=32,
    kernel_size=(3, 3),
    strides=(1, 1),
    # padding='same',
    # use_bias=False
)

tf_model = keras.Sequential([
           # layers.ZeroPadding2D((3, 3)),
           tf_layer
           ])

#setting weights in torch layer and tf layer respectively
torch_weights = np.random.rand(32, 3, 3, 3)
torch_bias = np.random.rand(32)
tf_weights = np.transpose(torch_weights, (2, 3, 1, 0))
tf_bias = torch_bias

## Assign fixed weights
torch_layer.weight = torch.nn.Parameter(torch.Tensor(torch_weights))
torch_layer.bias = torch.nn.Parameter(torch.Tensor(torch_bias))

tf_model(np.zeros((1, 256, 256, 3)))  # call once so the TF layer builds its weights
# tf_layer.kernel.assign(tf_weights)
tf_layer.weights[0].assign(tf_weights)
tf_layer.weights[1].assign(tf_bias)


tf_inputs = np.random.rand(1,  256, 256, 3)
torch_inputs =torch.Tensor(np.transpose(tf_inputs, (0, 3 ,1,2)))

with torch.no_grad():
    torch_output = torch_model(torch_inputs)
tf_output = tf_model(tf_inputs)
print(tf_output.numpy().shape, torch_output.shape)
np.allclose(tf_output.numpy() ,np.transpose(torch_output.numpy(),(0, 2, 3, 1))) #True
(1, 254, 254, 32) torch.Size([1, 32, 254, 254])

True
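Padding was left disabled above to keep the comparison simple. For a 3x3 stride-1 convolution the two paddings line up directly; asymmetric ZeroPad2d is only needed for even kernels or larger strides, where TensorFlow pads more on the bottom/right. A sketch:

# With padding enabled, these two are equivalent for a 3x3, stride-1 conv
tf_same = layers.Conv2D(32, (3, 3), strides=(1, 1), padding='same')
torch_same = nn.Conv2d(3, 32, (3, 3), stride=(1, 1), padding=1)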

BatchNorm layer (keras vs pytorch)


tf_inputs = np.random.rand(1, 12, 12, 32)
layer = layers.BatchNormalization()
layer(tf_inputs)
weights = layer.get_weights()  # gamma, beta, moving_mean, moving_var
for w in weights:
    print(w.shape)
(32,)
(32,)
(32,)
(32,)
torch_layer = nn.BatchNorm2d(32)
torch_layer.weight,torch_layer.bias

(Parameter containing:
 tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
        requires_grad=True),
 Parameter containing:
 tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True))
import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

#initialize the layers respectively
torch_layer = nn.BatchNorm2d(32, eps=1e-05, momentum=0.1)  # weight (gamma) and bias (beta) will be used
torch_model = nn.Sequential(
              # nn.ZeroPad2d((2,3,2,3)),
              torch_layer
              )

tf_layer = layers.BatchNormalization(momentum=0.1, epsilon=1e-05)
# NOTE: Keras momentum weights the old moving average while PyTorch momentum
# weights the new batch statistic, so these two 0.1 settings are not equivalent
# (the Keras counterpart of torch momentum=0.1 would be momentum=0.9).

tf_model = keras.Sequential([
           # layers.ZeroPadding2D((3, 3)),
           tf_layer
           ])

#setting weights in torch layer and tf layer respectively
weights = np.random.rand(32)
bias = np.random.rand(32)
## Assign fixed weights
torch_layer.weight = torch.nn.Parameter(torch.Tensor(weights))
torch_layer.bias = torch.nn.Parameter(torch.Tensor(bias))

tf_model(np.zeros((1,12,12,32)))
# tf_layer.kernel.assign(tf_weights)
tf_layer.weights[0].assign(weights)
tf_layer.weights[1].assign(bias)


tf_inputs = np.random.rand(1,  12, 12, 32)
torch_inputs =torch.Tensor(np.transpose(tf_inputs, (0, 3 ,1,2)))
tf_layer.weights[2].assign(tf_inputs.mean(axis=(0, 1, 2)))
tf_layer.weights[3].assign(tf_inputs.std(axis=(0, 1, 2)))  # BUG: this slot stores the moving *variance*, not the std

with torch.no_grad():
    torch_output = torch_model(torch_inputs)
tf_output = tf_model(tf_inputs)
print(tf_output.numpy().shape, torch_output.shape)
print(np.allclose(tf_output.numpy(), np.transpose(torch_output.numpy(), (0, 2, 3, 1))))  # prints False; see the note below
tf_output[0,0,0,:10],np.transpose(torch_output.numpy(),(0, 2, 3, 1))[0,0,0,:10]
(1, 12, 12, 32) torch.Size([1, 32, 12, 12])
False

(<tf.Tensor: shape=(10,), dtype=float32, numpy=
 array([ 0.45506844, -0.12573612,  0.81879985,  0.08219968,  0.08907801,
         0.88896   ,  0.6723665 ,  0.9736586 , -0.1965737 ,  0.38121822],
       dtype=float32)>,
 array([ 0.53097904, -0.34610078,  0.8367056 , -0.046165  , -0.2674462 ,
         0.953462  ,  0.69444454,  1.0367548 , -0.36438996,  0.60396177],
       dtype=float32))
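The outputs differ for two reasons: the snippet assigns the standard deviation into the slot where Keras stores the moving variance, and the two layers run in different modes (the torch model is in training mode and normalizes with batch statistics, while the Keras call uses the moving statistics). A hedged sketch of one way to make them agree, mirroring the same statistics into both layers and forcing inference mode:

# Assign the variance (not the std) and mirror the stats into the torch buffers
tf_layer.weights[3].assign(tf_inputs.var(axis=(0, 1, 2)))
torch_layer.running_mean = torch.Tensor(tf_inputs.mean(axis=(0, 1, 2)))
torch_layer.running_var = torch.Tensor(tf_inputs.var(axis=(0, 1, 2)))

torch_model.eval()  # normalize with running stats instead of batch stats
with torch.no_grad():
    torch_output = torch_model(torch_inputs)
tf_output = tf_model(tf_inputs, training=False)  # use the moving stats

print(np.allclose(tf_output.numpy(),
                  np.transpose(torch_output.numpy(), (0, 2, 3, 1)),
                  atol=1e-5))  # should now print True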

Max-pooling layer (keras vs pytorch)

import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

#prepare inputs and do inference
# torch_inputs = torch.Tensor(np.random.rand(1, 3, 256, 256))
# tf_inputs = np.transpose(torch_inputs.numpy(), (0, 2, 3, 1))
tf_inputs = np.random.rand(1,  256, 256, 3)
torch_inputs =torch.Tensor(np.transpose(tf_inputs, (0, 3 ,1,2)))


torch_layer =  nn.MaxPool2d(2)
torch_model = nn.Sequential(torch_layer)
tf_layer = layers.MaxPooling2D(pool_size=(2, 2))
tf_model = keras.Sequential([tf_layer])

with torch.no_grad():
    torch_output = torch_model(torch_inputs)
tf_output = tf_model.predict(tf_inputs)
print(tf_output.shape, torch_output.shape)
np.allclose(tf_output ,np.transpose(torch_output.numpy(),(0, 2, 3, 1))) #True
(1, 128, 128, 3) torch.Size([1, 3, 128, 128])
True

Fully-connected layer output (keras vs pytorch)

import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

#initialize the layers respectively
torch_layer = nn.Linear(256,10)
torch_model = nn.Sequential( torch_layer)

tf_layer = layers.Dense(10,activation=None)
tf_model = keras.Sequential([ tf_layer])

#setting weights in torch layer and tf layer respectively
torch_weights = np.random.rand(10,256)
torch_bias = np.random.rand(10)

tf_weights = np.transpose(torch_weights, (1, 0))
tf_bias = torch_bias


## Assign fixed weights
torch_layer.weight = torch.nn.Parameter(torch.Tensor(torch_weights))
torch_layer.bias = torch.nn.Parameter(torch.Tensor(torch_bias))
tf_model(np.zeros((4,256)))
tf_layer.weights[0].assign(tf_weights)
tf_layer.weights[1].assign(tf_bias)


inputs = torch.Tensor(np.random.rand(4,  256))
with torch.no_grad():
    torch_output = torch_model(inputs)
tf_output = tf_model(inputs.numpy())
print(tf_output.numpy().shape, torch_output.shape)
np.allclose(tf_output.numpy() ,torch_output.numpy()) #True
(4, 10) torch.Size([4, 10])

True

Flatten vs view() (keras vs pytorch)

import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
np.set_printoptions(precision=4)
torch.set_printoptions(precision=4)

#initialize the layers respectively

tf_layer = layers.Flatten()
tf_model = keras.Sequential([ tf_layer])

tf_inputs = np.random.rand(2,2,2,3)
torch_inputs =torch.Tensor(tf_inputs) #np.transpose(tf_inputs, (0, 3, 1, 2)))
print(tf_inputs.shape, torch_inputs.shape)
with torch.no_grad():
    torch_output =  torch.flatten(torch_inputs,start_dim=1)
tf_output = tf_model(tf_inputs)
print(tf_output.numpy().shape, torch_output.shape)
np.allclose(tf_output.numpy() ,torch_output.numpy()) #True
(2, 2, 2, 3) torch.Size([2, 2, 2, 3])
(2, 12) torch.Size([2, 12])
True
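Note that this example feeds the same channels-last tensor to both frameworks. If the torch tensor had been transposed to channels-first (as in the earlier examples), flattening would order the elements differently and the comparison would fail:

# Flattening a channels-first tensor yields a different element order
torch_chw = torch.Tensor(np.transpose(tf_inputs, (0, 3, 1, 2)))
flat_chw = torch.flatten(torch_chw, start_dim=1)
print(np.allclose(tf_output.numpy(), flat_chw.numpy()))  # False: order differs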

Technical exchange and resources

Technology is best learned through communication and sharing; working behind closed doors is not recommended. One person can travel fast; a group of people can travel far.

Learning materials, datasets, and technical discussion are all available in our group, which now has more than 2,000 members. When joining, the best note to add is "source + interest area", which makes it easier to find like-minded friends.

Ways to join the discussion and get the code and data:

Option 1: search WeChat for the official account "Python学习与数据挖掘" and reply "交流" in the backend.
Option 2: add the WeChat ID dkl88194 with the note "交流".

We have also put together "100个超强算法模型" (100 powerful algorithm models). Its aim is learning from zero to one: every model comes with its principles, code, and a worked case, presented at the same steady pace, so it forms a complete case library.

Many beginners share the same pain point: incomplete examples drain motivation. That is why I compiled the 100 most common algorithm models, to give your learning a push!