torch.qr

torch.qr(input, some=True, *, out=None) -> (Tensor, Tensor)

Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices.

If some is True, then this function returns the thin (reduced) QR factorization. Otherwise, if some is False, this function returns the complete QR factorization.
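The two modes differ only in the shapes of Q and R; the following sketch (input values are arbitrary) shows both for a tall 5 × 3 matrix:

>>> import torch
>>> a = torch.randn(5, 3)
>>> q, r = torch.qr(a)              # some=True: thin factorization
>>> q.shape, r.shape
(torch.Size([5, 3]), torch.Size([3, 3]))
>>> q, r = torch.qr(a, some=False)  # complete factorization
>>> q.shape, r.shape
(torch.Size([5, 5]), torch.Size([5, 3]))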
Warning
If you plan to backpropagate through QR, note that the current backward implementation is only well-defined when the first min(input.size(-1), input.size(-2)) columns of input are linearly independent. This behavior will probably change once QR supports pivoting.
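As an illustration of the supported case (a random square matrix has linearly independent columns with probability 1, so the gradient here is well-defined):

>>> a = torch.randn(4, 4, requires_grad=True)
>>> q, r = torch.qr(a)
>>> r.diagonal().sum().backward()   # any scalar function of Q, R works
>>> a.grad.shape
torch.Size([4, 4])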
Note
Precision may be lost if the magnitudes of the elements of input are large.
Note
While it should always give you a valid decomposition, it may not give you the same one across platforms - it will depend on your LAPACK implementation.
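Consequently, a portable test should verify the defining properties rather than compare Q or R elementwise across platforms; a minimal sketch:

>>> a = torch.randn(4, 4)
>>> q, r = torch.qr(a)
>>> torch.allclose(q @ r, a, atol=1e-6)                 # reconstructs the input
True
>>> torch.allclose(q.t() @ q, torch.eye(4), atol=1e-6)  # Q is orthogonal
True
>>> torch.equal(r, r.triu())                            # R is upper triangular
True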
Parameters
- input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n
- some (bool, optional) – Set to True for reduced QR factorization and False for complete QR factorization.
Keyword Arguments
- out (tuple, optional) – tuple of Q and R tensors satisfying input = torch.matmul(Q, R). The dimensions of Q and R are (*, m, k) and (*, k, n) respectively, where k = min(m, n) if some is True and k = m otherwise.
Example:
>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571,  0.3943,  0.3314],
        [-0.4286, -0.9029, -0.0343],
        [ 0.2857, -0.1714,  0.9429]])
>>> r
tensor([[ -14.0000,  -21.0000,   14.0000],
        [   0.0000, -175.0000,   70.0000],
        [   0.0000,    0.0000,  -35.0000]])
>>> torch.mm(q, r).round()
tensor([[  12.,  -51.,    4.],
        [   6.,  167.,  -68.],
        [  -4.,   24.,  -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1.,  0.,  0.],
        [ 0.,  1., -0.],
        [ 0., -0.,  1.]])
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(5))
True
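The out keyword can reuse preallocated tensors; a minimal sketch (the tensor names and shapes here are illustrative):

>>> a = torch.randn(3, 3)
>>> Q, R = torch.empty(3, 3), torch.empty(3, 3)
>>> result = torch.qr(a, out=(Q, R))   # writes into Q and R and also returns them
>>> torch.allclose(Q @ R, a, atol=1e-6)
True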