tf.raw_ops.ResourceApplyAdagradV2

Update '*var' according to the adagrad scheme.

    accum += grad * grad
    var -= lr * grad * (1 / (sqrt(accum) + epsilon))
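The same update can be written as a minimal NumPy sketch (illustrative only; the function name and in-place array updates are assumptions, not part of the op):

    import numpy as np

    def adagrad_v2_step(var, accum, lr, epsilon, grad):
        # Accumulate squared gradients, then scale the step by the
        # inverse root of the accumulator, as in the formulas above.
        accum += grad * grad
        var -= lr * grad * (1.0 / (np.sqrt(accum) + epsilon))
        return var, accum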

Args:
  var: A Tensor of type resource. Should be from a Variable().
  accum: A Tensor of type resource. Should be from a Variable().
  lr: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
  epsilon: A Tensor. Must have the same type as lr. Constant factor. Must be a scalar.
  grad: A Tensor. Must have the same type as lr. The gradient.
  use_locking: An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
  update_slots: An optional bool. Defaults to True.
  name: A name for the operation (optional).

Returns:
  The created Operation.
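A short usage sketch (assumes eager TensorFlow 2.x; the variable, gradient, and scalar values here are illustrative only):

    import tensorflow as tf

    var = tf.Variable([1.0, 2.0, 3.0])
    accum = tf.Variable([0.1, 0.1, 0.1])  # Adagrad accumulator slot
    grad = tf.constant([0.5, -0.5, 1.0])

    # The raw op takes resource handles for var and accum; lr and
    # epsilon must be scalars of the same dtype as grad.
    tf.raw_ops.ResourceApplyAdagradV2(
        var=var.handle,
        accum=accum.handle,
        lr=tf.constant(0.01),
        epsilon=tf.constant(1e-7),
        grad=grad,
        use_locking=False,
        update_slots=True,
    )
    print(var.numpy())  # var has been updated in place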