numpy - How to compute gradient of a function with respect to parameters of two separate neural networks


Usually, given a neural network in TensorFlow, take the following case as an example:

self.theta = tf.contrib.layers.fully_connected(
    inputs=tf.expand_dims(self.state, 0),
    num_outputs=1,
    activation_fn=None,
    weights_initializer=tf.zeros_initializer)

If I have a function g(self.theta) and want the gradient of g(self.theta) w.r.t. the network's parameters, I can use self.network_params = tf.trainable_variables() and gradient_theta = tf.gradients(g(self.theta), self.network_params) to compute it.
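For concreteness, here is a minimal runnable TF 1.x sketch of that pattern (the state placeholder shape and the choice of g are made up for illustration):

import tensorflow as tf  # TensorFlow 1.x, where tf.contrib is available

state = tf.placeholder(tf.float32, shape=[4])  # hypothetical state shape
theta = tf.contrib.layers.fully_connected(
    inputs=tf.expand_dims(state, 0),
    num_outputs=1,
    activation_fn=None,
    weights_initializer=tf.zeros_initializer())

g = tf.reduce_sum(tf.square(theta))        # a stand-in scalar g(theta)
network_params = tf.trainable_variables()  # every trainable variable in the graph
gradient_theta = tf.gradients(g, network_params)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gradient_theta, feed_dict={state: [1., 2., 3., 4.]}))

tf.gradients returns one gradient tensor per variable in network_params, in the same order.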

I want to know how to do something similar in the following case: in addition to self.theta, I define self.sigma in the same class as follows,

self.sigma = tf.contrib.layers.fully_connected(
    inputs=tf.expand_dims(self.state, 0),
    num_outputs=1,
    activation_fn=None,
    weights_initializer=tf.zeros_initializer)

and I have a function f(self.theta, self.sigma). How do I compute the gradient of f(self.theta, self.sigma) w.r.t. the parameters of self.theta and of self.sigma separately?

Note that self.sigma and self.theta are both defined in the same class, so I think using self.network_params = tf.trainable_variables() won't work here, since it returns the parameters of both networks rather than those of each network on its own.
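One common TF 1.x pattern that addresses this (a sketch, not from the original post: the scope names, the state shape, and the stand-in f are all hypothetical) is to build each network inside its own tf.variable_scope and then collect trainable variables per scope:

import tensorflow as tf  # TensorFlow 1.x

state = tf.placeholder(tf.float32, shape=[4])  # hypothetical state shape

# Build each head under its own variable scope so its variables can be
# retrieved separately later.
with tf.variable_scope('theta_net'):
    theta = tf.contrib.layers.fully_connected(
        inputs=tf.expand_dims(state, 0),
        num_outputs=1,
        activation_fn=None,
        weights_initializer=tf.zeros_initializer())

with tf.variable_scope('sigma_net'):
    sigma = tf.contrib.layers.fully_connected(
        inputs=tf.expand_dims(state, 0),
        num_outputs=1,
        activation_fn=None,
        weights_initializer=tf.zeros_initializer())

# A stand-in scalar f(theta, sigma); the real f comes from the application.
f = tf.reduce_sum(theta * sigma)

# Collect only the variables created under each scope.
theta_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='theta_net')
sigma_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='sigma_net')

grad_f_theta = tf.gradients(f, theta_params)  # df w.r.t. theta's parameters only
grad_f_sigma = tf.gradients(f, sigma_params)  # df w.r.t. sigma's parameters only

Because each tf.gradients call is restricted to one scope's variable list, the two networks can be differentiated, and updated, independently even though both live in the same class and the same graph.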

