So I want to have a Conv2D act as a window sliding over the input, so that the coefficients of each filter are shared across all of its spatial positions. Say, for example, we had a 2D single-channel image (no matter the resolution); my 3×3 convolution would then only have to learn 18 values ..
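For context, weight sharing is exactly what a standard convolution already does: one small set of kernel weights is reused at every spatial position. A minimal NumPy sketch (a single hypothetical 3×3 filter, so just 9 shared weights, bias omitted):

```python
import numpy as np

def conv2d_single_filter(image, kernel):
    """Valid 2-D cross-correlation: the same kernel weights are reused
    at every spatial position (weight sharing)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0        # a 3x3 averaging filter: only 9 learnable weights
out = conv2d_single_filter(image, kernel)
print(out.shape)    # (3, 3): 9 output positions, all computed from the same 9 weights
```

So a single-channel 3×3 filter carries only 9 weights (plus one bias) no matter how large the image is; 18 values would correspond to two such filters.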
I am getting confused by the filters parameter, which is the first argument of the Conv2D() layer in Keras. As I understand it, filters are supposed to do things like edge detection, sharpening, or blurring the image, but when I define the model with input_shape = (32, 32, 3) model ..
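Note that filters is simply the number of output channels: each filter learns its own kernel over all input channels (it is not a preset edge/blur operation), and the layer's weight count follows directly. A quick sanity check, assuming a hypothetical Conv2D(32, (3, 3)) on a (32, 32, 3) input:

```python
# Parameter count of a Conv2D layer: one (kh x kw x in_channels) kernel
# per filter, plus one bias per filter.
filters, kh, kw, in_channels = 32, 3, 3, 3
weights = filters * kh * kw * in_channels   # 864
biases = filters                            # 32
total = weights + biases
print(total)  # 896 -- what model.summary() reports for this layer
```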
The topic of this project is Pothole Detection on Roads. In this project, this dataset (https://www.kaggle.com/sovitrath/road-pothole-images-for-pothole-detection) is used. First of all, all images in this dataset were separated into two classes: Negative (no pothole) and Positive (contains potholes). After that, all images were loaded in code and a label was assigned to each image. These labels ..
I’ve built this NN with Keras that reaches 60% accuracy during training. The problem is that when I test my network on a dataset, it is extremely confident that all the images belong to the first class, n. My project has 3 classes, n, b and v, which stand for ..
I am new to PyTorch. I obtained a pretrained model only (without its Python model definition), and I can load it using the command mnet = torch.hub.load(…), which was successful, and I can pass input data to mnet. Now I would like to use the features from the third-to-last layer as output, so I am doing this: ..
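In PyTorch the usual way to capture an intermediate activation from a model you cannot edit is torch.nn.Module.register_forward_hook on the relevant submodule. This torch-free sketch (with hypothetical "layers" as plain functions) only illustrates the tapping idea, not the real API:

```python
# A toy "model" as a chain of layer functions; we capture the output of the
# layer that sits three positions from the end, while still running the full
# forward pass.
layers = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,   # third-to-last layer: this is the feature we want
    lambda x: x ** 2,
    lambda x: x / 2,
]

def forward_with_tap(x, layers, tap_index):
    captured = None
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == tap_index:
            captured = x   # the role a forward hook plays in real PyTorch
    return x, captured

out, feature = forward_with_tap(3.0, layers, tap_index=len(layers) - 3)
print(out, feature)  # final output and the tapped intermediate value
```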
I’m trying to predict future stock returns 200 days ahead from past returns using neural networks. I’ve already implemented a network consisting of several LSTM layers, and it works OK: lstm2 = Sequential([LSTM(50, input_shape=[None, 1], return_sequences=True), Dropout(0.2), LSTM(50, return_sequences=True), Dropout(0.2), LSTM(50), Dense(200)]). However, when I try to stack a Conv1D layer on top of the LSTM layers ..
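One thing that commonly breaks when mixing Conv1D with LSTMs is the sequence length: a Conv1D with kernel size k and no padding shortens each sequence by k − 1 steps, which the downstream layers must accommodate. A small NumPy sketch of that "valid" 1-D convolution:

```python
import numpy as np

def conv1d_valid(seq, kernel):
    """1-D cross-correlation without padding: output length is len(seq) - len(kernel) + 1."""
    k = len(kernel)
    return np.array([np.dot(seq[i:i + k], kernel) for i in range(len(seq) - k + 1)])

seq = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
out = conv1d_valid(seq, np.array([1.0, 0.0, -1.0]))  # a simple difference filter
print(out)       # [-2. -2. -2.]
print(len(out))  # 5 - 3 + 1 = 3 steps remain
```

With padding='same' on the Keras layer, the sequence length is preserved instead.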
So I’ve trained a model to almost 89% accuracy (36% loss) on the EMNIST balanced dataset, and most labels seem to be predicted correctly. Now I’m trying to upload a handwritten image, split it into an array of X letters, resize each to 28×28, and predict each one separately. What’s ..
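A cheap way to get each cropped letter down to 28×28 is nearest-neighbour resampling; this NumPy sketch is a stand-in for a real resizer such as cv2.resize, just to show the index mapping (also worth checking: EMNIST images are stored transposed relative to the usual orientation, so test crops may need the same transform as the training data):

```python
import numpy as np

def resize_nearest(img, size=28):
    """Nearest-neighbour resize of a 2-D grayscale crop to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

letter = np.arange(50 * 40, dtype=float).reshape(50, 40)  # a fake 50x40 letter crop
small = resize_nearest(letter)
print(small.shape)  # (28, 28), ready to feed to the classifier
```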
I have data consisting of angle and distance, and I use a CNN to train on it with another angle and a speed as the output. Is it okay to use a 2×2 kernel without padding, or should I use a 3×3 kernel and add zero padding? Source: Python..
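Either choice is valid; the main practical difference is the spatial output size, given by out = (n − k + 2p) / s + 1. A quick check of both options on a hypothetical input of width 8:

```python
def conv_out(n, k, p=0, s=1):
    """Spatial output size of a convolution: (n - k + 2p) // s + 1."""
    return (n - k + 2 * p) // s + 1

print(conv_out(8, 2))        # 7 : 2x2 kernel, no padding (shrinks by 1 per layer)
print(conv_out(8, 3, p=1))   # 8 : 3x3 kernel with zero padding ('same' size)
```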
Firstly, I was working through this notebook from Kaggle, and for some reason I couldn’t reproduce its Visualizing Filter Patterns of Convolution Layers section. I am getting this error: TypeError: Cannot convert a symbolic Keras input/output to a ..
I want to classify images belonging to five classes using a CNN. But whatever model I try, the training accuracy will not increase above 20%. Please, can someone help me overcome this? The model mostly finishes learning within 3 epochs, and when the epochs increase there is no improvement in accuracy. Can ..
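Worth noting: with five balanced classes, 20% is exactly chance level, which usually means the model has learned nothing and may have collapsed to predicting a single class (common culprits: mismatched loss/label encoding, unscaled pixel inputs, or a too-high learning rate). A NumPy illustration of why a collapsed model scores about 20%:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=10_000)   # balanced 5-class ground truth
preds = np.zeros(10_000, dtype=int)        # a model stuck predicting class 0
accuracy = (preds == labels).mean()
print(accuracy)  # ~0.2, i.e. chance level for 5 balanced classes
```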