I have a tensor which is like a 2D matrix, with every sublist having 4 floats. This is a sample output: [[-57.993378 16.389141 282.08826 335.57178 ] [331.8664 251.26202 467.9576 420.6745 ] [331.85397 59.201126 467.94247 228.73703 ]] I want to apply tf.math.minimum to the first two elements of every sublist and tf.math.maximum to the last two. ..
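A minimal sketch of one way to do this, assuming the minimum/maximum are taken against a scalar bound (the bound 300.0 below is purely illustrative): slice the first two and last two columns, transform them separately, and concatenate the results back together.

```python
import tensorflow as tf

# Illustrative data shaped like the sample output: [batch, 4]
t = tf.constant([[-57.993378, 16.389141, 282.08826, 335.57178],
                 [331.8664, 251.26202, 467.9576, 420.6745]])

bound = 300.0                                   # hypothetical scalar bound
first_two = tf.math.minimum(t[:, :2], bound)    # elementwise min on columns 0-1
last_two = tf.math.maximum(t[:, 2:], bound)     # elementwise max on columns 2-3
result = tf.concat([first_two, last_two], axis=1)
```

If the min/max should instead be taken against another tensor of the same shape, the same slicing pattern applies with that tensor's slices in place of `bound`.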
I am creating a tf.keras.Model which is compiled with a custom loss and a custom metrics function. I call train_on_batch on the model using x=input_batch and y=someFunction(targets). The signature of the custom loss and custom metrics functions looks like methodname(y_true, y_pred). Here y_true is fed with someFunction(targets). Is there any way to get targets in the custom metrics function? ..
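One workaround, sketched under the assumption that targets and someFunction(targets) share the batch dimension: pack the raw targets alongside the transformed ones into y, then unpack the relevant slice inside each function (`someFunction` here is a stand-in for the real transform).

```python
import numpy as np
import tensorflow as tf

def someFunction(targets):                  # stand-in for the real transform
    return targets * 2.0

def custom_loss(y_true, y_pred):
    transformed = y_true[:, :1]             # someFunction(targets)
    return tf.reduce_mean(tf.square(transformed - y_pred))

def custom_metric(y_true, y_pred):
    raw_targets = y_true[:, 1:]             # the original targets
    return tf.reduce_mean(tf.abs(raw_targets - y_pred))

model = tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=custom_loss, metrics=[custom_metric])

x = np.random.rand(8, 3).astype("float32")
targets = np.random.rand(8, 1).astype("float32")
y = np.concatenate([someFunction(targets), targets], axis=1)  # shape (8, 2)
result = model.train_on_batch(x, y)
```

The column split assumes both pieces are single-column; for wider targets the slice indices would change accordingly.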
I am trying to create checkpoints of the tensors created by the following function, but I am having a rough time (TensorFlow 1.x). Pickle does not work. def _create_discriminator(self, x, train=True, reuse=False, name="discriminator"): with tf.variable_scope(name) as scope: if reuse: scope.reuse_variables() h = x for i in range(self.num_conv_layers): h = lrelu(batch_norm(conv2d(h, self.num_dis_feature_maps * (2 ** i), ..
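A sketch of the usual TF 1.x route: rather than pickling tensors, collect the variables created under the "discriminator" scope and save them with tf.train.Saver. The single variable below is a stand-in for the conv/batch-norm layers; the checkpoint path is illustrative.

```python
import os
import tempfile
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x
tf.disable_v2_behavior()

# Stand-in for the variables _create_discriminator builds under its scope
with tf.variable_scope("discriminator"):
    w = tf.get_variable("w", shape=[3, 3])

# Checkpoint only the discriminator's variables, not the whole graph
dis_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="discriminator")
saver = tf.train.Saver(var_list=dis_vars)

ckpt_path = os.path.join(tempfile.mkdtemp(), "discriminator.ckpt")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, ckpt_path)   # writes .index/.data files
    saver.restore(sess, save_path)            # reload into the same graph
```

Saver checkpoints variables (the trainable state); intermediate activation tensors are recomputed from inputs rather than saved.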
Since my GPU only supports TensorFlow 2.x, I have to rewrite code which was written in TensorFlow 1.x. I used the following guide: https://www.tensorflow.org/guide/upgrade Everything worked fine until I came to the following function: tf.contrib.training.bucket_by_sequence_length(input_length, tensors, batch_size, bucket_boundaries, num_threads=1, capacity=32, bucket_capacities=None, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None) Unfortunately, I haven't found a solution for this one ..
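A hedged sketch of the TF 2.x counterpart: tf.data.Dataset.bucket_by_sequence_length (available as tf.data.experimental.bucket_by_sequence_length on older 2.x releases) covers the same bucketing-and-padding role. The sequence lengths, boundaries, and batch sizes below are illustrative.

```python
import tensorflow as tf

# Toy variable-length sequences standing in for the real input pipeline
sequences = [tf.range(n) for n in [3, 5, 9, 2, 7]]
ds = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec([None], tf.int32))

ds = ds.bucket_by_sequence_length(
    element_length_func=lambda x: tf.shape(x)[0],
    bucket_boundaries=[4, 8],          # buckets: len < 4, 4 <= len < 8, len >= 8
    bucket_batch_sizes=[2, 2, 2],      # one batch size per bucket (boundaries + 1)
    pad_to_bucket_boundary=False)      # pad each batch to its longest element

for batch in ds:
    pass  # each batch groups similar-length sequences, padded with zeros
```

Unlike the queue-based tf.contrib version, this composes with the rest of a tf.data pipeline (shuffle, prefetch, etc.) instead of taking num_threads/capacity arguments.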
I am trying to run code which was written with TensorFlow version 1.4.0. I am running it on Google Colab, which comes with TensorFlow 2.x. To run my code, I am using backward compatibility, i.e. replacing import tensorflow as tf with import tensorflow.compat.v1 as tf and tf.disable_v2_behavior(), which works for certain ..
I’m trying to train my model on Google Colab. When I run the training section in the notebook, it gives me an error for which I couldn’t find any solution. Can anyone explain what the problem is and how I can fix it? The error: Requirement already satisfied: tf_slim in /usr/local/lib/python3.7/dist-packages (1.1.0) ..
I installed TensorFlow using mayapy pip install tensorflow I then tried import tensorflow as tf, but Maya crashes on this call: _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) Full crash log: https://pastebin.com/yVimvEJa Am I missing a library? Tested on: Python 2.7, Maya 2020, Linux CentOS 7, TensorFlow 1.14 Source: Python..
I want to save a TensorFlow variable of type tensorflow.python.framework.ops.Tensor as a .npy file while eager execution is disabled. This works absolutely fine with eager execution enabled if I simply do tfvar.numpy() and then save it as .npy, but it doesn’t work if eager execution is disabled. Is there any way to do ..
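With eager execution disabled, .numpy() is unavailable; a sketch of the graph-mode route is to evaluate the tensor in a tf.compat.v1.Session and save the resulting ndarray with np.save. The constant below stands in for the actual tensor, and the file path is illustrative.

```python
import os
import tempfile
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()           # eager execution off, as in the question

tfvar = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # stand-in for the real tensor

with tf.Session() as sess:
    value = sess.run(tfvar)        # evaluates to a plain numpy ndarray

path = os.path.join(tempfile.mkdtemp(), "tfvar.npy")
np.save(path, value)               # np.save needs the ndarray, not the Tensor
```

If the tensor depends on variables, run the usual initializer (or restore a checkpoint) in the session before the sess.run call.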
So guys, I am using tensorflow==1.15.0 and python==3.7.10. I can't upgrade to tensorflow==2.X because I am working on code which was written in 1.X. So, here is the complete error: Type of the model_obj: <class 'CustomAutoencoder_tf21s.AutoEncoderModel'> Type of test_norm: <class 'numpy.ndarray'> The test_norm is [[-0.8249858 -1.89925946 -0.19963089 -0.72175069 -0.08146654 0.17114502 -0.78429501 -0.41983836 ..
I have a task where I have to quickly run N parallel simple optimizations with Adam optimizers. I had been doing this with TensorFlow 1.x for a while, but trying to update everything to 2.x (or alternatively modern PyTorch) has resulted in much slower behavior. I constructed minimal (at least to the best of my ability) examples demonstrating ..
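One pattern that often recovers TF 1.x-like speed in 2.x, sketched on an illustrative quadratic objective: stack the N independent problems into a single [N, d] variable so that one @tf.function-compiled Adam step updates all of them at once, instead of paying Python/eager overhead N times per iteration.

```python
import tensorflow as tf

tf.random.set_seed(0)
N, d = 4, 3                                  # illustrative problem sizes
targets = tf.random.normal([N, d])           # one target per problem
params = tf.Variable(tf.zeros([N, d]))       # all N problems in one variable
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

@tf.function                                 # compile the step into one graph call
def step():
    with tf.GradientTape() as tape:
        # Summing the per-problem losses keeps the gradients independent:
        # each row of `params` only affects its own term.
        loss = tf.reduce_sum(tf.square(params - targets))
    grads = tape.gradient(loss, [params])
    opt.apply_gradients(zip(grads, [params]))
    return loss

init_loss = float(tf.reduce_sum(tf.square(params - targets)))
for _ in range(300):
    final = step()
```

This assumes the N problems share a shape and objective form; genuinely heterogeneous problems would need padding or separate variables.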