Category : conv-neural-network

I am building a U-Net for image segmentation; the images are grayscale (X-ray) and the masks are binary (black and white). When I run the following code I receive the error below.

```python
seed = 42
np.random.seed(seed)

IMG_WIDTH = 256
IMG_HEIGHT = 256
IMG_CHANNELS = 1
TRAIN_PATH = '/content/stage1_train/stage1_train/'
TEST_PATH = '/content/stage1_test/stage1_test/'

train_ids = next(os.walk(TRAIN_PATH))[1]
test_ids ..
```

Read more
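One detail worth flagging in the snippet above: the original wrote `np.random.seed = seed`, which assigns over the function instead of calling it, so the seed is never actually set. A minimal sketch of the difference:

```python
import numpy as np

# Correct: pass the seed as an argument. Writing `np.random.seed = seed`
# would silently replace the seed function with the integer 42 and leave
# the generator unseeded.
seed = 42
np.random.seed(seed)
a = np.random.rand(3)

# Re-seeding with the same value reproduces the same draws.
np.random.seed(seed)
b = np.random.rand(3)

print(np.allclose(a, b))  # True
```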

I am running AlexNet on the CIFAR-10 dataset using PyTorch Lightning; here is my model:

```python
class SelfSupervisedModel(pl.LightningModule):
    def __init__(self, hparams=None, num_classes=10, batch_size=128):
        super(SelfSupervisedModel, self).__init__()
        self.batch_size = batch_size
        self.loss_fn = nn.CrossEntropyLoss()
        self.hparams["lr"] = ModelHelper.Hyperparam.Learning_rate
        self.model = torchvision.models.alexnet(pretrained=False)

    def forward(self, x):
        return self.model(x)

    def training_step(self, train_batch, batch_idx):
        inputs, targets = train_batch
        predictions = self(inputs)
        loss = self.loss_fn(predictions, targets)
        ..
```

Read more

```python
from keras.models import load_model

print("[INFO] Saving model...")
pickle.dump(model, open('cnn_model.pkl', 'wb'))
```

This prints `[INFO] Saving model...` and then raises:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-95-0812efc6ef19> in <module>
      2
      3 print("[INFO] Saving model...")
----> 4 pickle.dump(model, open('cnn_model.pkl', 'wb'))

TypeError: cannot pickle 'weakref' object
```

Source: Python..

Read more
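Keras models hold internal references (including weakrefs) that `pickle` cannot serialize, which is what the traceback above is complaining about. The supported route is Keras's own save/load API; a self-contained sketch with a tiny stand-in model (file name from the post, `.keras` suffix assumed for the native format):

```python
import tensorflow as tf

# A minimal model standing in for the asker's CNN.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# model.save / load_model instead of pickle.dump / pickle.load.
model.save("cnn_model.keras")
restored = tf.keras.models.load_model("cnn_model.keras")
print(len(restored.layers))  # 1
```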

I have trained on my digit data and got high accuracy. I saved the model using .h5; below is my code to save and load it.

```python
# ###### SAVE FILE ######
# serialize model to JSON
model_json = model.to_json()
with open("modelML.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("modelML.h5")
print("Waiting to save.....")
print("Saved model ..
```

Read more
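For reference, the loading side of the JSON + HDF5 scheme above is `model_from_json` followed by `load_weights`. A self-contained round-trip sketch with a tiny stand-in model; note that Keras 3 requires weight files to end in `.weights.h5` (the post's plain `modelML.h5` works on older Keras 2 installs):

```python
import tensorflow as tf
from tensorflow.keras.models import model_from_json

# Tiny stand-in for the asker's digit model.
model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(5,))])

# Save architecture as JSON and weights as HDF5.
with open("modelML.json", "w") as f:
    f.write(model.to_json())
model.save_weights("modelML.weights.h5")

# Rebuild the architecture, then load the weights into it.
with open("modelML.json") as f:
    restored = model_from_json(f.read())
restored.load_weights("modelML.weights.h5")
print(restored.count_params() == model.count_params())  # True
```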

Hi, I am a beginner in DL and TensorFlow. I created a CNN (you can see the model below):

```python
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", input_shape=[512, 640, 3]))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.2)
..
```

Read more
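One thing that stands out above is `SGD(learning_rate=0.2)`: that step size is large for a deep CNN and often makes training diverge. A compile sketch with a smaller rate (0.01 is an assumed starting point, not from the post) and a loss matching the 2-class softmax head, shown on a minimal stand-in model:

```python
import tensorflow as tf

# Minimal stand-in model with the same 2-class softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(8, 8, 3)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# A more conservative learning rate; integer labels pair with
# sparse_categorical_crossentropy, one-hot labels with
# categorical_crossentropy.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 2)
```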

So I am trying to make a CNN model for crowd counting; this is the structure:

```python
self.base = nn.Sequential(
    Conv2d(  1,  64, 3, same_padding=True, bn=bn),
    Conv2d( 64,  64, 3, same_padding=True, bn=bn),
    nn.MaxPool2d(2),
    Conv2d( 64, 128, 3, same_padding=True, bn=bn),
    Conv2d(128, 128, 3, same_padding=True, bn=bn))
self.layer1_1 = nn.Sequential(
    nn.MaxPool2d(2),
    Conv2d(128, 256, 3, same_padding=True, bn=bn))
self.layer1_2 = nn.Sequential(
    Conv2d(256, 256, 3, same_padding=True, bn=bn))
..
```

Read more
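The `Conv2d(..., same_padding=True, bn=bn)` above is not `torch.nn.Conv2d`; crowd-counting codebases typically define a small conv + batch-norm + ReLU wrapper with that signature. A sketch of such a helper (the class name matches the post, but the body is an assumption, not the asker's exact code):

```python
import torch
import torch.nn as nn

class Conv2d(nn.Module):
    """Conv + optional BatchNorm + ReLU with optional 'same' padding."""
    def __init__(self, in_ch, out_ch, k, same_padding=False, bn=False):
        super().__init__()
        pad = (k - 1) // 2 if same_padding else 0  # keeps H, W for odd k
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=pad)
        self.bn = nn.BatchNorm2d(out_ch) if bn else None
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        return self.relu(x)

# Spatial size is preserved thanks to the 'same' padding.
block = Conv2d(1, 64, 3, same_padding=True, bn=False)
out = block(torch.randn(1, 1, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```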

So I have this code:

```python
with torch.no_grad():
    X_test = mnist_test.test_data.view(-1, 28 * 28).float().to(device)
    Y_test = mnist_test.test_labels.to(device)

    prediction = linear(X_test)
    correct_prediction = torch.argmax(prediction, 1) == Y_test
    accuracy = correct_prediction.float().mean()
    print('Accuracy:', accuracy.item())

    r = 1504207845
    X_single_data = mnist_test.test_data[r:r + 1].view(-1, 28 * 28).float().to(device)
    Y_single_data = mnist_test.test_labels[r:r + 1].to(device)

    print('Label: ', Y_single_data.item())
    single_prediction = linear(X_single_data)
    print('Prediction: ', torch.argmax(single_prediction, ..
```

Read more
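One likely problem in the snippet above: the MNIST test split has only 10,000 images, so a fixed index like `r = 1504207845` slices past the end, yields an empty tensor, and `.item()` then fails. Drawing the index with `torch.randint` keeps it in range; a sketch using a stand-in tensor for `mnist_test.test_data`:

```python
import torch

# Stand-in for mnist_test.test_data (10,000 images of 28x28).
test_data = torch.zeros(10000, 28, 28)

# A random index guaranteed to be inside the dataset.
r = torch.randint(0, len(test_data), (1,)).item()
X_single = test_data[r:r + 1].view(-1, 28 * 28).float()
print(X_single.shape)  # torch.Size([1, 784])
```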

```python
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
train_images = dataset['train']
test_images = dataset['test']

train_batches = (
    train_images
    .cache()
    .shuffle(BUFFER_SIZE)
    .batch(BATCH_SIZE)
    .prefetch(buffer_size=tf.data.AUTOTUNE))

test_batches = test_images.batch(BATCH_SIZE)
```

Now I would like to reduce the test_images size to 100 images. I am expecting some code like:

```python
test_images = test_images[100]
```

But this gives an error: 'ParallelMapDataset' object is not subscriptable ..

Read more
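`tf.data` pipelines are not subscriptable, which is exactly the error above; `Dataset.take(100)` is the idiomatic way to keep only the first 100 elements. A sketch with a range dataset standing in for `test_images`:

```python
import tensorflow as tf

# Stand-in for the tfds test split.
test_images = tf.data.Dataset.range(1000)

# take(100) returns a new dataset limited to the first 100 elements;
# the same call works before .batch() in the pipeline from the post.
subset = test_images.take(100)
print(sum(1 for _ in subset))  # 100
```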