python - Issues with implementing TensorFlow's MNIST example without feed_dict using a queue
I came across TensorFlow's Deep MNIST for Experts tutorial and wanted to adapt it to make more efficient use of GPUs. Since feed_dict seems incredibly slow, I implemented an input pipeline using tf.train.shuffle_batch and a FIFOQueue to feed data to the model.
Here's a gist of the stock implementation from the TensorFlow guide, and here's a gist of my attempt at an optimized implementation.
Now, in the example on the TensorFlow page, accuracy approaches 1 after a few thousand iterations. In my code, which aside from the queue implementation uses the same model, accuracy oscillates between ~0.05 and ~0.15. Furthermore, the loss reaches 2.3 after a couple hundred iterations and doesn't decrease any further.
Another noteworthy point: when I compare the original batch created and the batch used in subsequent iterations, they appear equivalent. Perhaps the issue lies in my queuing/dequeuing, but I'm not sure how to fix it. If anyone sees issues with my implementation, pointers would be appreciated!
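Since the gists aren't reproduced here, the suspect pattern can only be sketched. The sketch below (shapes, names, and dataset are illustrative placeholders, not taken from the actual gists) shows how routing only the image batch through an extra FIFOQueue, while the labels come straight from tf.train.shuffle_batch, can desynchronize images from labels: the two tensors end up being dequeued by different threads from different queues. It uses the TF1-era queue API via `tf.compat.v1`:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy in-memory stand-in for MNIST (hypothetical shapes: 100 examples, 784 pixels).
images = np.random.rand(100, 784).astype(np.float32)
labels = np.random.randint(0, 10, 100).astype(np.int32)

# Standard shuffled batching: this already uses a RandomShuffleQueue internally.
image, label = tf.train.slice_input_producer([images, labels], shuffle=False)
image_batch, label_batch = tf.train.shuffle_batch(
    [image, label], batch_size=32, capacity=500, min_after_dequeue=100)

# The suspect extra stage: a hand-built FIFOQueue holding ONLY the image batch.
fifo = tf.FIFOQueue(capacity=4, dtypes=[tf.float32], shapes=[[32, 784]])
enqueue_op = fifo.enqueue(image_batch)
tf.train.add_queue_runner(tf.train.QueueRunner(fifo, [enqueue_op]))

model_images = fifo.dequeue()  # images pass through the extra queue...
model_labels = label_batch     # ...labels bypass it, so the pairing is broken

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # model_images and model_labels come from separate dequeue calls on
    # separate queues, so they generally belong to DIFFERENT batches.
    imgs, lbls = sess.run([model_images, model_labels])
    coord.request_stop()
    coord.join(threads)
```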
Found the solution. It turns out that tf.train.shuffle_batch implicitly implements a RandomShuffleQueue. Loading the results of tf.train.shuffle_batch into a FIFOQueue presumably caused the FIFOQueue not to update the input batch, while the labels were being updated because they weren't being passed through the FIFOQueue. Removing the FIFOQueue entirely solved the issue.
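In other words, the outputs of tf.train.shuffle_batch can feed the model directly with no extra queue in between. A minimal sketch of the fixed pipeline (shapes and the in-memory dataset are illustrative placeholders; the TF1-era API is accessed via `tf.compat.v1`):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy in-memory stand-in for MNIST (hypothetical shapes: 100 examples, 784 pixels).
images = np.random.rand(100, 784).astype(np.float32)
labels = np.random.randint(0, 10, 100).astype(np.int32)

# shuffle_batch already wraps a RandomShuffleQueue, so both tensors below
# stay paired: no additional FIFOQueue is needed.
image, label = tf.train.slice_input_producer([images, labels], shuffle=False)
image_batch, label_batch = tf.train.shuffle_batch(
    [image, label], batch_size=32, capacity=500, min_after_dequeue=100)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # Feed image_batch / label_batch straight into the model graph;
    # each sess.run dequeues one matched (images, labels) batch.
    imgs, lbls = sess.run([image_batch, label_batch])
    coord.request_stop()
    coord.join(threads)
```

Because one `sess.run` dequeues images and labels from the same underlying queue, they always come from the same batch, which is exactly what the extra FIFOQueue broke.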