neural network - Lua Torch normalization layer to normalize only a part of the tensor


I am using the Lua Torch normalization layer, which normalizes the input tensor. I add it to my network with self:add( nn.Normalize(2) ). I want to normalize only a part of the input tensor, but I am not sure how to specify that part in the following lines.

self:add( nn.View(-1, op_neurons) )
self:add( nn.Normalize(2) )  -- <--- how to normalize only a part of the input tensor?
self:add( nn.View(-1, no_of_objects, op_neurons) )

I think the cleanest way is to derive your own class from nn.Normalize. Create a file PartialNormalize.lua and proceed as follows (it is easy but a bit time-consuming to develop, so I am giving pseudo-code):

local PartialNormalize, parent = torch.class('nn.PartialNormalize', 'nn.Normalize')

-- You now need to override the functions __init, updateOutput and updateGradInput
-- from the parent class (I don't think any other functions need overriding,
-- but you should check). You can find the code for nn.Normalize in
-- <your_install_path>/install/share/lua/5.1/nn/Normalize.lua
-- The interval [first_index, last_index] determines the part of the input
-- vector that you want normalized.

function PartialNormalize:__init(p, eps, first_index, last_index)
   parent.__init(self, p, eps)
   self.first_index = first_index
   self.last_index = last_index
end

function PartialNormalize:updateOutput(input)
   -- In the parent class, this returns the whole input, normalized.
   -- Modify it so that it normalizes only the elements from self.first_index
   -- to self.last_index, and passes the other elements through unchanged.
end

function PartialNormalize:updateGradInput(input, gradOutput)
   -- Make the corresponding modification to the gradient: the gradient for
   -- elements self.first_index to self.last_index is computed as in the
   -- parent class, while the gradient for the other elements is the identity.
end

-- I don't think any other functions from the parent class need overriding,
-- but check just in case.
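Filling in that pseudo-code, here is one possible sketch (untested, and an assumption on my part rather than the definitive implementation). Instead of overriding the parent's methods in place, it wraps an inner nn.Normalize and derives from nn.Module, which avoids fighting the parent's internal buffers; the class name PartialNormalize2 is hypothetical, chosen only to not clash with the stub above:

```lua
require 'nn'

-- Sketch (untested): normalizes the slice [first_index, last_index] along the
-- last dimension via an inner nn.Normalize; everything else passes through.
local PartialNormalize2, parent = torch.class('nn.PartialNormalize2', 'nn.Module')

function PartialNormalize2:__init(p, eps, first_index, last_index)
   parent.__init(self)
   self.first_index = first_index
   self.len = last_index - first_index + 1
   self.normalizer = nn.Normalize(p, eps)
end

function PartialNormalize2:updateOutput(input)
   local dim = input:dim()  -- slice along the last dimension
   self.output:resizeAs(input):copy(input)
   -- :contiguous() because nn.Normalize may reshape its input internally
   local slice = input:narrow(dim, self.first_index, self.len):contiguous()
   self.output:narrow(dim, self.first_index, self.len)
              :copy(self.normalizer:updateOutput(slice))
   return self.output
end

function PartialNormalize2:updateGradInput(input, gradOutput)
   local dim = input:dim()
   -- identity gradient for the untouched elements
   self.gradInput:resizeAs(gradOutput):copy(gradOutput)
   local slice = input:narrow(dim, self.first_index, self.len):contiguous()
   local gradSlice = gradOutput:narrow(dim, self.first_index, self.len):contiguous()
   self.gradInput:narrow(dim, self.first_index, self.len)
                 :copy(self.normalizer:updateGradInput(slice, gradSlice))
   return self.gradInput
end
```

A quick way to sanity-check a module like this is nn.Jacobian, which compares updateGradInput against a finite-difference estimate.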

Hope this helps.
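If you would rather avoid a custom class altogether, a similar effect can often be had compositionally with standard nn containers: split the tensor with nn.Narrow, normalize one branch, pass the other branch through, and rejoin with nn.JoinTable. A minimal sketch, assuming a batchSize x D input whose first k columns should be L2-normalized (D, k and model are placeholder names, not from the question):

```lua
require 'nn'

local D, k = 10, 4  -- placeholder sizes: input is batchSize x D, normalize columns 1..k

local branches = nn.ConcatTable()
branches:add(nn.Sequential()
   :add(nn.Narrow(2, 1, k))             -- columns 1..k
   :add(nn.Normalize(2)))               -- L2-normalize just this slice
branches:add(nn.Narrow(2, k + 1, D - k)) -- remaining columns, passed through untouched

local model = nn.Sequential()
model:add(branches)
model:add(nn.JoinTable(2))              -- reassemble to batchSize x D
```

If the slice to normalize sits in the middle of the vector rather than at the start, use three branches (before, normalized, after) so nn.JoinTable restores the original column order.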

