machine learning - TensorFlow low train/test accuracy


I restored a pre-trained model in TensorFlow 1.2 to test it. I assumed the model was well trained, since the loss had decreased to a very low value (0.0001). However, for both the testing samples and the training samples, the accuracy op gives me a value of 0. Is this because I am using the wrong accuracy function, or is there a problem with the model itself?

Here is the accuracy function; test_image below is a batch containing a single test sample, and test_image_label is a single label:

correct_prediction = tf.equal(tf.argmax(googlenet(test_image), 1), tf.argmax(test_image_label, 0))
accuracy = tf.cast(correct_prediction, tf.float32)

with tf.Session() as sess:
    accuracy_vector = []
    for num in range(len(testnames)):
        accuracy_vector.append(sess.run(accuracy, feed_dict={keep_prob: 1.0}))
    print(accuracy_vector)
    mean_accuracy = sess.run(tf.divide(tf.add_n(accuracy_vector), len(testnames)))
    print("test accuracy %g" % mean_accuracy)
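For what it's worth, here is a minimal, self-contained sketch of what I understand that comparison to compute; the logits values, the number of classes, and the one-hot label below are made up purely for illustration:

import tensorflow as tf

# Made-up shapes for illustration: a logits batch of size 1 with 10 classes,
# and a single one-hot label of shape (10,).
logits = tf.constant([[0.1, 2.5, -0.3, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]])
one_hot_label = tf.constant([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])

# argmax over axis 1 of the logits gives a length-1 vector (the predicted class);
# argmax over axis 0 of the single label gives a scalar (the true class).
correct = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_label, 0))
acc = tf.cast(correct, tf.float32)

with tf.Session() as sess:
    print(sess.run(acc))  # [1.] when the prediction matches the label, [0.] otherwise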

The model is defined as googlenet(data) above; the function returns the logits of the input batch. Training is done like this:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=train_label_batch, logits=googlenet(train_batch)))
train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(cost, global_step=global_step)
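train_step is run in every iteration; a stripped-down sketch of that loop (the iteration count, the keep_prob value, and the initializer call are placeholders rather than my exact code) looks roughly like this:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Placeholder iteration count; in practice training ran until the loss was around 0.0001.
    for i in range(num_iterations):
        _, cost_value = sess.run([train_step, cost], feed_dict={keep_prob: 0.5})
        if i % 100 == 0:
            print("step %d, loss %g" % (i, cost_value))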

I think it is worth noting that after restoring the model, I cannot run print(googlenet(test_image).eval(feed_dict={keep_prob: 1.0})) in the session, which I intended to use to take a look at the output of the model. It returns this error:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Variable_213
  [[Node: Variable_213/read = Identity[T=DT_FLOAT, _class=["loc:@Variable_213"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_213)]]
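For reference, the restore itself follows (as far as I can tell) the standard tf.train.Saver pattern in TensorFlow 1.x; this is only a rough sketch with a placeholder checkpoint path, not my exact code:

saver = tf.train.Saver()

with tf.Session() as sess:
    # Placeholder checkpoint path.
    saver.restore(sess, "/path/to/model.ckpt")
    # Note: if googlenet(test_image) builds fresh variables at this point instead of
    # reusing the restored ones, those new variables are uninitialized, which might
    # explain an error like the Variable_213 one above.
    print(sess.run(googlenet(test_image), feed_dict={keep_prob: 1.0}))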

