tf.raw_ops.SparseApplyRMSProp

Update '*var' according to the RMSProp algorithm.
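For reference, the call signature implied by the arguments documented below (tf.raw_ops endpoints accept keyword arguments only):

```python
tf.raw_ops.SparseApplyRMSProp(
    var, ms, mom, lr, rho, momentum, epsilon, grad, indices,
    use_locking=False, name=None
)
```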

Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero. A concrete sketch of this row-wise behavior follows the update equations below.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

\[ms \leftarrow rho \cdot ms_{t-1} + (1 - rho) \cdot grad \cdot grad\]

\[mom \leftarrow momentum \cdot mom_{t-1} + lr \cdot grad / \sqrt{ms + epsilon}\]

\[var \leftarrow var - mom\]
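To make the sparse semantics concrete, here is a minimal NumPy sketch of the row-wise update (an illustrative reference, not the TF kernel; the function name is hypothetical):

```python
import numpy as np

def sparse_rmsprop_step(var, ms, mom, lr, rho, momentum, epsilon,
                        grad, indices):
    # Only the rows named in `indices` are touched; all other rows of
    # var, ms, and mom are left exactly as they were, matching the
    # sparse behavior described in the note above.
    for g, i in zip(grad, indices):
        ms[i] = rho * ms[i] + (1.0 - rho) * g * g
        mom[i] = momentum * mom[i] + lr * g / np.sqrt(ms[i] + epsilon)
        var[i] -= mom[i]
    return var, ms, mom
```

Because rows absent from indices keep their accumulators untouched, a row's effective decay depends only on the steps in which it actually receives a gradient.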

Args:

var: A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
ms: A mutable Tensor. Must have the same type as var. Should be from a Variable().
mom: A mutable Tensor. Must have the same type as var. Should be from a Variable().
lr: A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
rho: A Tensor. Must have the same type as var. Decay rate. Must be a scalar.
momentum: A Tensor. Must have the same type as var.
epsilon: A Tensor. Must have the same type as var. Ridge term. Must be a scalar.
grad: A Tensor. Must have the same type as var. The gradient.
indices: A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms, and mom.
use_locking: An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).

Returns:

A mutable Tensor. Has the same type as var.
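A minimal usage sketch, assuming a TF1-style graph: this non-resource raw op takes reference-typed var/ms/mom inputs, so the example disables eager execution and creates non-resource variables via tf.compat.v1 (the setup details and values are assumptions for illustration, not part of the op's documentation):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the ref-variable op runs in graph mode

# Ref-typed (non-resource) variables, as the op's var/ms/mom inputs require.
var = tf.compat.v1.Variable(np.ones((4, 2), np.float32), use_resource=False)
ms = tf.compat.v1.Variable(np.zeros((4, 2), np.float32), use_resource=False)
mom = tf.compat.v1.Variable(np.zeros((4, 2), np.float32), use_resource=False)

# Gradients for rows 0 and 2 only; rows 1 and 3 are not updated.
update = tf.raw_ops.SparseApplyRMSProp(
    var=var, ms=ms, mom=mom,
    lr=0.01, rho=0.9, momentum=0.0, epsilon=1e-7,
    grad=np.array([[0.1, 0.1], [0.2, 0.2]], np.float32),
    indices=np.array([0, 2], np.int32),
)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(update))  # the updated contents of var
```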