machine learning - How to use iter_size in Caffe


I don't know the exact meaning of 'iter_size' in the Caffe solver, even though I have googled a lot. From what I found, 'iter_size' is a way to increase the effective batch size without requiring more GPU memory.

I understand it like this:

If I set batch_size=10 and iter_size=10, the behavior should be the same as batch_size=100.
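Here is a small NumPy check of how I understand that equivalence (this is only my own sketch, not Caffe's actual implementation; I am assuming the accumulated gradient is divided by iter_size before the single weight update):

```python
# Toy check of the claim with plain NumPy (not Caffe): accumulating gradients
# over iter_size small batches and averaging should give the same update as
# computing the gradient once over one big batch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))          # 6 samples, 3 features
y = rng.normal(size=(6, 1))
w = np.zeros((3, 1))

def grad(w, xb, yb):
    # gradient of the mean squared error, averaged over the batch
    return 2 * xb.T @ (xb @ w - yb) / len(xb)

# Case 1: batch_size = 6, iter_size = 1 -> one gradient over the full batch
g_big = grad(w, X, y)

# Case 2: batch_size = 1, iter_size = 6 -> accumulate, then divide by iter_size
g_acc = np.zeros_like(w)
for i in range(6):
    g_acc += grad(w, X[i:i+1], y[i:i+1])
g_acc /= 6

print(np.allclose(g_big, g_acc))     # True: the resulting SGD update is identical
```

Under that assumption, a single update with batch_size=1 and iter_size=6 should be numerically identical to one with batch_size=6 and iter_size=1, which is why I expected the two training curves below to match.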

But I ran 2 tests on this:

  1. Total samples = 6, batch_size = 6, iter_size = 1; the training samples and the test samples are the same. Loss and accuracy graph:

[loss/accuracy plot]

  2. Total samples = 6, batch_size = 1, iter_size = 6; the training samples and the test samples are the same. Loss and accuracy graph:

[loss/accuracy plot]

From these 2 tests, I can see that the two settings behave differently.

So it seems I have misunderstood the true meaning of 'iter_size'. How can the behavior of gradient descent with accumulation over single samples be the same as with a real mini-batch?
Could anyone help me?

