numpy - Using specific forms of gradients of two neural networks to minimize a loss w.r.t. its parameters -


Suppose I have a loss function f(self.theta, self.sigma), where self.theta and self.sigma are both defined by neural networks; for example, in TensorFlow:

self.sigma = tf.contrib.layers.fully_connected(
    inputs=tf.expand_dims(self.state, 0),
    num_outputs=1,
    activation_fn=None,
    weights_initializer=tf.zeros_initializer())

If I have derived forms of the gradients of f(self.theta, self.sigma) w.r.t. self.theta and self.sigma, denoted theta_gradient and sigma_gradient respectively, how can I minimize the loss function f(self.theta, self.sigma) using these given forms of the gradients?

From this answer to a similar question, it seems this can be done using the optimizer's apply_gradients method when there is one set of parameters for the neural network instead of two; for a single parameter, I understand the pattern to be something like the sketch below. But I have no idea how to solve the above problem. The main difficulty here is that I have two forms of gradients, theta_gradient for theta and sigma_gradient for sigma, and it is unclear how to apply both of them to optimize the loss function.
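(A minimal sketch of the single-parameter case, assuming the TF 1.x API; my_gradient_of is a placeholder for whatever TF op computes the derived gradient:)

w = tf.Variable(1.0)
w_grad = my_gradient_of(w)  # placeholder: your derived gradient, as a TF op
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = opt.apply_gradients([(w_grad, w)])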

Based on that, my first attempt is the following:

self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)

true_gradients_theta, true_gradients_sigma = self.optimizer.compute_gradients(
    self.loss(theta, sigma), [self.theta, self.sigma])

my_own_gradients_theta = do_some_stuff_with(true_gradients_theta)
my_own_gradients_sigma = do_some_stuff_with(true_gradients_sigma)

self.train_op = self.optimizer.apply_gradients(my_own_gradients_theta, my_own_gradients_sigma)

However, there are two main issues: the first is that TensorFlow does not allow me to write self.train_op = self.optimizer.apply_gradients(my_own_gradients_theta, my_own_gradients_sigma); the second is that I do not know the mathematical form of true_gradients_theta and true_gradients_sigma, so I do not know how to edit them into the desirable mathematical form.

compute_gradients returns a list of (gradient, variable) pairs, so you have to create an analogous structure and concatenate the two parts before passing the result to apply_gradients. If you do want to post-process the automatically computed gradients, the usual pattern keeps that pairing intact, as in the sketch below.
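(A sketch reusing the question's placeholder do_some_stuff_with; each element must remain a (gradient, variable) pair:)

grads_and_vars = self.optimizer.compute_gradients(self.loss, [self.theta, self.sigma])
modified_grads_and_vars = [(do_some_stuff_with(g), v) for g, v in grads_and_vars]
self.train_op = self.optimizer.apply_gradients(modified_grads_and_vars)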

To be more specific, apply_gradients accepts an argument grads_and_vars of the form:

grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().

So you need to prepare a list of that form, for example:

theta = tf.Variable(...)
sigma = tf.Variable(...)

theta_grad = compute_my_gradient_for_theta(theta)  # note: these have to be TF ops
sigma_grad = compute_my_sigma_gradient(sigma)

opt = tf.train.AdamOptimizer(...)
opt.apply_gradients([(theta_grad, theta), (sigma_grad, sigma)])
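Putting it all together, here is a self-contained toy sketch (assuming TF 1.x; the loss and the hand-written gradient formulas are invented for illustration and stand in for your derived theta_gradient and sigma_gradient):

import tensorflow as tf

# Toy loss f(theta, sigma) = theta^2 + sigma^2, minimized with
# hand-supplied gradients 2*theta and 2*sigma instead of autodiff.
theta = tf.Variable(5.0)
sigma = tf.Variable(-3.0)
loss = tf.square(theta) + tf.square(sigma)

theta_grad = 2.0 * theta  # your derived form of df/dtheta goes here
sigma_grad = 2.0 * sigma  # your derived form of df/dsigma goes here

opt = tf.train.AdamOptimizer(learning_rate=0.1)
train_op = opt.apply_gradients([(theta_grad, theta), (sigma_grad, sigma)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_op)
    print(sess.run([loss, theta, sigma]))  # loss should be near 0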
