machine learning - How to use iter_size in Caffe
I don't know the exact meaning of 'iter_size' in the Caffe solver, even though I have googled a lot. It is said that 'iter_size' is a way to increase the effective batch size without requiring more GPU memory.
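For context, here is a minimal sketch of where `iter_size` lives, a `solver.prototxt` fragment using field names from Caffe's `SolverParameter` (the net path and hyperparameter values are hypothetical placeholders):

```
# hypothetical solver.prototxt sketch
net: "train_val.prototxt"   # batch_size is set in this net's data layer
base_lr: 0.01
iter_size: 10               # accumulate gradients over 10 forward/backward passes
                            # effective batch = data-layer batch_size * iter_size
max_iter: 10000
solver_mode: GPU
```

Caffe runs `iter_size` forward/backward passes and accumulates the gradients before applying a single weight update.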
My understanding was this:
if I set batch_size=10 and iter_size=10, the behavior should be the same as batch_size=100.
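For a plain SGD step on a loss that is a mean over samples, this equivalence does hold mathematically: accumulating the per-mini-batch gradients and averaging gives the same update as one big batch. A minimal numpy sketch, using a hypothetical linear model with squared loss (not Caffe itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))   # 6 samples, 3 features
y = rng.normal(size=6)
w = rng.normal(size=3)

def grad(Xb, yb, w):
    # gradient of the mean squared error over a mini-batch
    r = Xb @ w - yb
    return Xb.T @ r / len(yb)

# batch_size = 6, iter_size = 1: one pass over the full batch
g_full = grad(X, y, w)

# batch_size = 1, iter_size = 6: accumulate six single-sample
# gradients, then average before the weight update
g_acc = np.zeros_like(w)
for i in range(6):
    g_acc += grad(X[i:i+1], y[i:i+1], w)
g_acc /= 6

print(np.allclose(g_full, g_acc))  # True
```

So for pure SGD the gradients are identical; differences in practice typically come from layers whose forward pass depends on the batch (e.g. batch normalization computes statistics per mini-batch of size 1 vs 6).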
But I ran two tests on this:
- Total samples = 6, batch_size = 6, iter_size = 1; the training samples and test samples are the same. Loss and accuracy graph:

- Total samples = 6, batch_size = 1, iter_size = 6; the training samples and test samples are the same.

From these two tests, I can see that the training behaves differently.
So I must have misunderstood the true meaning of 'iter_size'. Why isn't the behavior of gradient descent the same when gradients are accumulated over single samples rather than computed on one mini-batch?
Can anyone help?