I have two tensors: h = [[0 0 0] [0 0 1] [0 1 0]] and h2 = [[0 0 0] [0 0 1] [0 0 1]]. I want to create a third tensor holding the values where h equals h2, i.e. I want to compare h and h2 elementwise. So I want the third tensor to be: h3 = [[0 0 0] [0 0 1] ..
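The question is truncated, but the pattern it describes — keep h's value wherever h and h2 agree — is an elementwise comparison followed by a masked select. A NumPy sketch (PyTorch's `torch.where(h == h2, h, torch.zeros_like(h))` and TensorFlow's `tf.where` behave the same way):

```python
import numpy as np

h  = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
h2 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 1]])

mask = h == h2               # elementwise equality: True where the tensors agree
h3 = np.where(mask, h, 0)    # keep h's value where equal, 0 elsewhere
```

Note that the first two rows compare fully equal, so h3 begins [[0 0 0] [0 0 1] ...], matching the expected output quoted above.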
I have a large directory of 3D images (.nii). I’m looping through them all to return the per-axis maximum shape over all files (x, y, z). r = [] for subdir, dirs, files in os.walk(data_dir): for file in files: if file.endswith(".nii"): r.append(file) print(r) shapes = np.array([s.spatial_shape for s in r]) print(shapes.max(axis=0)) Produces a ..
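The loop above collects bare filename strings and then asks each string for `.spatial_shape`, which is why it fails: nothing is ever loaded. A sketch of the corrected pattern — collect full paths, then read each volume's shape. The helper name is illustrative, and the shape-reading function is injected so real code can pass e.g. nibabel's `lambda p: nib.load(p).shape` (assuming nibabel is the .nii reader in use); the demo below uses empty files and a fake loader:

```python
import os
import tempfile
import numpy as np

def max_nii_shape(data_dir, load_shape):
    """Collect full paths to every .nii under data_dir and return the
    per-axis maximum shape. load_shape is injected so real code can
    pass e.g. `lambda p: nib.load(p).shape` (nibabel)."""
    paths = []
    for subdir, dirs, files in os.walk(data_dir):
        for name in files:
            if name.endswith(".nii"):
                paths.append(os.path.join(subdir, name))  # full path, not just the name
    shapes = np.array([load_shape(p) for p in paths])
    return shapes.max(axis=0)  # per-axis maximum over all volumes

# tiny demo with empty files and a fake loader standing in for nibabel
demo_dir = tempfile.mkdtemp()
fake_shapes = {"a.nii": (64, 64, 30), "b.nii": (128, 96, 20)}
for name in fake_shapes:
    open(os.path.join(demo_dir, name), "w").close()
result = max_nii_shape(demo_dir, lambda p: fake_shapes[os.path.basename(p)])
```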
I have a dataset of numpy files listed in a csv (the path to each numpy file) together with all my labels; the feature column is the path to each file. Reading an older question (Effective way to read images from a csv file and return a tf.data.Dataset object), I thought I could use ..
I have n XY points, each with an associated color. The set of points also has some minimum and maximum along each axis. n = 100 colors = torch.rand((n, 3)) points = torch.randint(0, 10, (n, 2)) x_min, x_max = torch.min(points[:, 0]).data, torch.max(points[:, 0]).data y_min, y_max = torch.min(points[:, 1]).data, torch.max(points[:, 1]).data I would like to discretize ..
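The question is cut off, but a common way to discretize such points onto a fixed grid is to rescale each coordinate by its min/max and truncate to an integer cell index. A NumPy sketch of that step (the bin count `bins = 5` is an arbitrary assumption; the same arithmetic works on the torch tensors above, or via `torch.bucketize` against explicit bin edges):

```python
import numpy as np

rng = np.random.default_rng(0)
n, bins = 100, 5
points = rng.integers(0, 10, (n, 2))   # mirrors torch.randint(0, 10, (n, 2))
colors = rng.random((n, 3))

mins = points.min(axis=0)
maxs = points.max(axis=0)
# scale each coordinate into [0, bins) and truncate to an integer cell index;
# the clip keeps points at the maximum inside the last cell
span = np.maximum(maxs - mins, 1)
cell = np.clip(((points - mins) / span * bins).astype(int), 0, bins - 1)
```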
I have an encoder section of my network, looking like this: class Encoder(nn.Module): def __init__(self): super(Encoder, self).__init__() c = capacity self.conv1 = nn.Conv2d(in_channels=1, out_channels=c, kernel_size=4, stride=2, padding=1) # out: c x 14 x 14 self.conv2 = nn.Conv2d(in_channels=c, out_channels=c*2, kernel_size=4, stride=2, padding=1) # out: c*2 x 7 x 7 self.fc = nn.Linear(in_features=c*2*7*7, out_features=latent_dims) def forward(self, x): ..
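The spatial sizes in those comments follow from the standard Conv2d output formula, floor((size + 2·padding − kernel) / stride) + 1, assuming 28×28 inputs (e.g. MNIST, which the 14→7 progression suggests). A quick check of the arithmetic that justifies `in_features = c*2*7*7` on the linear layer:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    # standard Conv2d output-size formula: floor((size + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

after_conv1 = conv_out(28)           # 28 -> 14, matching "# out: c x 14 x 14"
after_conv2 = conv_out(after_conv1)  # 14 -> 7, so the flatten size is c*2 * 7 * 7
```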
I did the following simple calculation in TensorFlow and was surprised to see that the result is not as I expected. My code: a = tf.constant([[0.2,1],[0.3,0.75]]) b = tf.constant([[0.2,0.1],[0.8,0.2]]) print(tf.square(a)+tf.square(b)) It delivers the result: tf.Tensor( [[0.08000001 1.01 ] [0.73 0.6025 ]], shape=(2, 2), dtype=float32) However, I would have expected: tf.Tensor( [[0.08 1.0 ] [0.73 0.6025 ..
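This is ordinary float32 rounding rather than a TensorFlow bug: 0.2 has no exact binary representation, so its square carries a tiny error that surfaces when printed, landing one unit-in-the-last-place away from float32(0.08). A NumPy reproduction of the same effect:

```python
import numpy as np

# float32(0.2) is actually ~0.20000000298, so squaring and adding
# lands slightly above 0.08 rather than exactly on it
s = np.square(np.float32(0.2)) + np.square(np.float32(0.2))
print(s)  # slightly above 0.08, analogous to TF's 0.08000001
```

Computing in float64 (or tf.float64) pushes the error far below the printed precision, which is why the same arithmetic "looks right" there.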
I have a dataset which I intend to use for binary classification. However, my dataset is very unbalanced due to the very nature of the data itself (the positives are quite rare). The negatives are 99.8% and the positives are 0.02%. I have approximately 60 variables in my dataset. I would like to do a ..
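One common remedy for this kind of imbalance, whatever model ends up being used, is to reweight the classes inversely to their frequency — the "balanced" heuristic scikit-learn uses, n_samples / (n_classes * class_count). A sketch on a toy label vector (the 998:2 split is an illustrative stand-in for the ratios quoted above):

```python
import numpy as np

y = np.array([0] * 998 + [1] * 2)   # toy stand-in for a heavily negative dataset
classes, counts = np.unique(y, return_counts=True)
# "balanced" weighting: n_samples / (n_classes * count_per_class)
weights = len(y) / (len(classes) * counts)
```

The rare class ends up weighted hundreds of times more heavily, so each positive contributes as much to the loss as the negatives collectively do; these weights plug into e.g. `class_weight` in scikit-learn or Keras `fit`.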
I have a dataset consisting of timesteps=409, samples=500, features=4. I want to predict multiple time steps for all of these series: 14-day input sequences mapped to a 7-day output horizon. So as I split my data into X/Y, I am adding a fourth dimension (14 for X, 7 for Y). Keras ..
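The 14-in/7-out split described above is a sliding-window transform: each window of 14 consecutive timesteps becomes one X example and the following 7 timesteps become its Y. A sketch for a single sample's (timesteps, features) series (with 500 samples, the per-sample windows would then be stacked along the batch axis):

```python
import numpy as np

def make_windows(series, in_len=14, out_len=7):
    """Slide a window over (timesteps, features) data, returning
    X of shape (n_windows, in_len, features) and
    Y of shape (n_windows, out_len, features)."""
    X, Y = [], []
    for t in range(len(series) - in_len - out_len + 1):
        X.append(series[t : t + in_len])
        Y.append(series[t + in_len : t + in_len + out_len])
    return np.array(X), np.array(Y)

one_sample = np.zeros((409, 4))   # timesteps x features, as in the question
X, Y = make_windows(one_sample)   # 409 - 14 - 7 + 1 = 389 windows
```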
If we set the reduction of our binary cross entropy loss to tf.losses.Reduction.NONE when we compile our model, is there a way to retrieve the unreduced losses when we evaluate on a sample? (Code below) model.compile(loss= tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=False), optimizer=Adam(learning_rate=1e-4)) I’m running into an issue where the loss in my model (with zeroed-out weights and bias ..
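One way to get the unreduced values at evaluation time is to call the loss object directly on the labels and predictions — `loss_fn(y_true, model(x))` — since with `Reduction.NONE` it returns a per-sample tensor, whereas `model.evaluate` reports an averaged scalar. The per-element math it computes can be written in NumPy:

```python
import numpy as np

def bce_per_element(y_true, y_pred, eps=1e-7):
    """Unreduced binary cross entropy: what a Reduction.NONE loss yields
    per element before any averaging. eps guards log(0)."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# a model emitting 0.5 for both a positive and a negative label
losses = bce_per_element(np.array([1.0, 0.0]), np.array([0.5, 0.5]))
```

Both elements come out as ln 2 ≈ 0.693 — which is also why a model with zeroed-out weights and bias (sigmoid output 0.5 everywhere) reports a loss near 0.693.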
Why does tf.reduce_sum not work for uint8? Consider this example: >>> tf.reduce_sum(tf.ones((4, 10, 10), dtype=tf.uint8)) <tf.Tensor: shape=(), dtype=uint8, numpy=144> >>> tf.reduce_sum(tf.ones((4, 10, 10), dtype=tf.uint16)) <tf.Tensor: shape=(), dtype=uint16, numpy=400> Does someone know why that is? The docs don’t mention any incompatibility with uint8…
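The sum accumulates in the input dtype, and 400 does not fit in uint8, so it wraps modulo 256: 400 % 256 == 144. The uint16 case only looks correct because 400 < 65536; it would overflow the same way past that. A NumPy demonstration of the wrap and the usual fix, widening before summing (in TF, e.g. `tf.reduce_sum(tf.cast(x, tf.int32))`):

```python
import numpy as np

x = np.ones((4, 10, 10), dtype=np.uint8)   # 400 ones
wrapped = x.sum(dtype=np.uint8)    # accumulate in uint8: wraps to 400 % 256
widened = x.sum(dtype=np.uint32)   # widen the accumulator first: exact result
```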