I am a beginner with convolutional neural networks and my objective is the image segmentation of CT-scan images. After some pre-processing of the input data, my inputs are called train_patient_fin and train_mask_fin, whose dimensions come out correctly and are both (129, 128, 128, 1). The convolutional model is defined in get_model(). Here is the code: ..
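A minimal sanity-check sketch for this setup, using the array names from the question but synthetic contents: before calling model.fit on a segmentation task, it is worth confirming the images and masks pair up sample-for-sample and that the masks are actually binary.

```python
import numpy as np

# Hypothetical stand-ins for the question's arrays (names from the post,
# contents synthetic): 129 volumes of 128x128 single-channel slices.
train_patient_fin = np.zeros((129, 128, 128, 1), dtype=np.float32)
train_mask_fin = np.zeros((129, 128, 128, 1), dtype=np.float32)

# For segmentation, images and masks must have the same number of samples
# and the same spatial dimensions; channel counts may differ.
assert train_patient_fin.shape[0] == train_mask_fin.shape[0]
assert train_patient_fin.shape[1:3] == train_mask_fin.shape[1:3]

# Binary masks should contain only {0, 1} before a sigmoid/Dice loss.
assert set(np.unique(train_mask_fin)) <= {0.0, 1.0}
```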
Let’s say for example I am trying to predict the population within a given postcode at a given datetime. My data source would have multiple datetimes and multiple postcodes, with varying values for population in each row. Example dataset:

Date         Postcode   # of people (population)
01/Jan/2021  5000       100,000
01/Jan/2021  5001       97,000
01/Feb/2021  5000       100,100
01/Feb/2021  ..
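One common first step for data shaped like this, sketched with pandas using the sample rows above (the `Population` column name is my shorthand for "# of people"): pivot the long table so each postcode becomes a column of its population over time, which is a convenient layout for per-postcode forecasting.

```python
import pandas as pd

# The sample rows from the question, rebuilt as a DataFrame.
df = pd.DataFrame({
    "Date": ["01/Jan/2021", "01/Jan/2021", "01/Feb/2021"],
    "Postcode": [5000, 5001, 5000],
    "Population": [100_000, 97_000, 100_100],
})
df["Date"] = pd.to_datetime(df["Date"], format="%d/%b/%Y")

# Pivot so each postcode becomes one column of population over time.
# Missing (date, postcode) combinations show up as NaN.
wide = df.pivot(index="Date", columns="Postcode", values="Population")
print(wide.shape)  # (2, 2)
```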
I’ve been working on this CNN for a little while now. I’ve played around with a lot of different layers, optimizations, etc., and I finally figured out an architecture that seems to fit my data properly, but only to a certain extent. Model layers:

Layer (type)                 Output Shape              Param #
=================================================================
conv2d_20 (Conv2D)           (None, 251, ..
I am making a facial recognition system utilising a live webcam and a VGG16 model. When I run the code below it comes up with an error saying "NotImplementedError: When subclassing the Model class, you should implement a call method." I have tried googling to find a solution but I only found this one on ..
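The error means what it says: a subclassed tf.keras.Model must define a call() method, which is what gets invoked when the model is applied to inputs. A minimal sketch of the pattern, with a hypothetical FaceEmbedder class standing in for the poster's model (VGG16 is used with weights=None here only to keep the example self-contained and offline):

```python
import tensorflow as tf

# Sketch: subclassing tf.keras.Model requires implementing call().
class FaceEmbedder(tf.keras.Model):  # hypothetical name for illustration
    def __init__(self):
        super().__init__()
        # VGG16 backbone without the classifier head; weights=None avoids
        # downloading pretrained weights in this self-contained example.
        self.backbone = tf.keras.applications.VGG16(
            include_top=False, weights=None, input_shape=(224, 224, 3))
        self.pool = tf.keras.layers.GlobalAveragePooling2D()
        self.embed = tf.keras.layers.Dense(128)

    def call(self, inputs, training=False):
        # This is the method the NotImplementedError is asking for.
        x = self.backbone(inputs, training=training)
        return self.embed(self.pool(x))

model = FaceEmbedder()
out = model(tf.zeros((1, 224, 224, 3)))
print(out.shape)  # (1, 128)
```

Without call(), the base class has no forward pass to run, hence the NotImplementedError.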
I’m a beginner, so bear with me please!! I’m using PyTorch-based CNNs to do feature extraction on images of humans in order to re-identify the same person given a different picture. After the whole process I am left with a 1D vector of about 2048×1, which I then compare using L2 distance ..
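A small sketch of that comparison step, assuming 2048-d feature vectors as in the question; it also shows that squared L2 distance on L2-normalised embeddings is a monotonic function of cosine similarity, a common alternative metric for re-identification:

```python
import torch
import torch.nn.functional as F

# Toy 2048-d embeddings (the dimensionality mentioned in the question).
a = torch.randn(2048)
b = torch.randn(2048)

# Plain L2 (Euclidean) distance between the two feature vectors.
l2 = torch.dist(a, b, p=2)

# Cosine similarity is a common alternative for re-identification.
cos = F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).item()

# On L2-normalised vectors, squared L2 distance equals 2 - 2*cos,
# so ranking gallery images by either metric gives the same order.
an, bn = F.normalize(a, dim=0), F.normalize(b, dim=0)
l2n = torch.dist(an, bn, p=2)
```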
I am trying to classify an EEG dataset I found online (BCI comp III, Dataset V). I am just playing around with different models as a side project, and I thought I’d start with a CNN. This is the piece of code for extracting the data from the files: def ext_data(subNum, rawNum): file_name = r'C:\Users\me\OneDrive\Desktop\EEG\EEG ..
I was developing image recognition algorithms using convolutional neural networks. However, I am not sure how to handle this problem even though I defined it in my code. I created two paths for images located in the "train" and "validation" folders. X = np.array([i for i in train]).reshape(-1, IMG_SIZE, IMG_SIZE, 3) y = np.array([i for ..
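For reference, the bracketed comprehension behaves like this (a synthetic `train` list of images and a hypothetical `labels` list stand in for the real data). Note that `np.array(i for i in train)` — without brackets — receives a generator object and produces a useless 0-d object array rather than an image stack:

```python
import numpy as np

IMG_SIZE = 64
# Hypothetical stand-ins for the question's data: a list of images
# plus a matching list of labels.
train = [np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8) for _ in range(5)]
labels = [1, 0, 1, 1, 0]

# The comprehension must be wrapped in brackets so np.array receives
# a list of arrays and stacks them into (n, IMG_SIZE, IMG_SIZE, 3).
X = np.array([i for i in train]).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
y = np.array([l for l in labels])
print(X.shape, y.shape)  # (5, 64, 64, 3) (5,)
```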
I am trying to implement the convolution output using TensorFlow, but my approach involves two nested for-loops, which is much slower than tf.keras.layers.Conv2D. convolution_output = [] for x in tf.range(0, w_inp-stride, stride): convolution_output_row = [] for y in tf.range(0, h_inp-stride, stride): conv_prod = img_input[:, x:x+h_f, y:y+w_f, :, np.newaxis] * kernel_weights[np.newaxis, :, :, :] convolution_output_row.append(tf.math.reduce_sum( ..
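One common way to vectorise this, sketched with tf.image.extract_patches under assumed toy shapes (VALID padding, stride 1): gather every receptive field at once, then contract against the flattened kernel with a single einsum instead of Python loops. The result can be cross-checked against the built-in op:

```python
import numpy as np
import tensorflow as tf

# Toy shapes for illustration: batch 2, 8x8 input, 3 channels in, 4 out.
img = tf.random.normal((2, 8, 8, 3))      # (batch, h, w, c_in)
kernel = tf.random.normal((3, 3, 3, 4))   # (h_f, w_f, c_in, c_out)
stride = 1

# Gather all 3x3 receptive fields in one op; the last axis flattens each
# patch in (h_f, w_f, c_in) order, matching a reshape of the kernel.
patches = tf.image.extract_patches(
    img, sizes=[1, 3, 3, 1], strides=[1, stride, stride, 1],
    rates=[1, 1, 1, 1], padding="VALID")  # (batch, oh, ow, h_f*w_f*c_in)

# A single contraction replaces both nested loops.
out = tf.einsum("bijk,kl->bijl", patches, tf.reshape(kernel, (-1, 4)))

# Cross-check against the built-in convolution.
ref = tf.nn.conv2d(img, kernel, strides=1, padding="VALID")
print(bool(np.allclose(out.numpy(), ref.numpy(), atol=1e-4)))
```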
I am following the notebook on TF-Hub: Fast Style Transfer for Arbitrary Style and am having difficulties importing my own images. Some background: this notebook blends two input images, restyling a content image using a style image. Here is the code: https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb I am importing images through: img = cv.imread('path_to_directory') img = np.float32(img) which gives me an ndarray with shape ..
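That Hub model expects float32 RGB images with values in [0, 1] and a leading batch axis, so the cv.imread result needs a little more massaging than a bare np.float32 cast: OpenCV returns BGR uint8 with shape (h, w, 3). A sketch using a synthetic array in place of the real file:

```python
import numpy as np

# Stand-in for cv.imread(...): BGR uint8, shape (h, w, 3).
bgr = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)

rgb = bgr[..., ::-1]                   # BGR -> RGB channel reversal
img = rgb.astype(np.float32) / 255.0   # scale from [0, 255] to [0, 1]
img = img[np.newaxis, ...]             # add the batch axis
print(img.shape, img.dtype)  # (1, 240, 320, 3) float32
```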
I’m trying to train a CNN face recognition model. I’ve extracted the face embeddings of my data using openface_nn4.small2.v1.t7 and now I want to use them to train my model. But a CNN requires a 4D array as input and I have a 2D matrix. # Load the face embeddings print("[INFO] loading face embeddings…") data = ..
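A sketch of one way to bridge that gap, assuming 128-d embeddings (what openface_nn4.small2.v1.t7 produces): reshape each row of the 2D matrix into a (128, 1, 1) "image" so a Conv2D input layer accepts it. Whether a CNN is the right classifier for flat embeddings is a separate question; an SVM or a small dense network is the more typical choice:

```python
import numpy as np

# Synthetic stand-in for the loaded embeddings: 200 faces, 128-d each.
embeddings = np.random.rand(200, 128).astype(np.float32)

# Conv2D layers expect (batch, height, width, channels); treating each
# embedding as a 128x1 single-channel "image" satisfies that layout.
x_4d = embeddings.reshape(-1, 128, 1, 1)
print(x_4d.shape)  # (200, 128, 1, 1)
```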