API



torch.qr

torch.qr(input, some=True, *, out=None) -> (Tensor, Tensor)

Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices.

If some is True, this function returns the thin (reduced) QR factorization; if some is False, it returns the complete QR factorization.
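
To make the two modes concrete, the sketch below contrasts the output shapes for a tall input; the shapes follow the k = min(m, n) / k = m rule documented under the out argument:

>>> a = torch.randn(5, 3)  # tall matrix: m = 5, n = 3
>>> q, r = torch.qr(a, some=True)   # reduced: Q is (m, k), R is (k, n), k = min(m, n)
>>> q.shape, r.shape
(torch.Size([5, 3]), torch.Size([3, 3]))
>>> q, r = torch.qr(a, some=False)  # complete: k = m, so Q is (m, m), R is (m, n)
>>> q.shape, r.shape
(torch.Size([5, 5]), torch.Size([5, 3]))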

Warning

If you plan to backpropagate through QR, note that the current backward implementation is only well-defined when the first min(input.size(-1), input.size(-2)) columns of input are linearly independent. This behavior will probably change once QR supports pivoting.
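
As a minimal sketch of a well-defined backward pass (the columns of a random Gaussian matrix are linearly independent with probability one):

>>> a = torch.randn(4, 3, requires_grad=True)  # columns independent almost surely
>>> q, r = torch.qr(a)
>>> loss = q.sum() + r.diagonal().sum()  # any scalar function of Q and R
>>> loss.backward()
>>> a.grad.shape
torch.Size([4, 3])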

Note

Precision may be lost if the magnitudes of the elements of input are large.

Note

While it should always give you a valid decomposition, it may not give you the same one across platforms; what you get depends on your LAPACK implementation.
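
Consequently, tests should verify the defining properties of the factorization rather than compare Q and R elementwise against values recorded on another platform. A sketch of such a check:

>>> a = torch.randn(6, 4)
>>> q, r = torch.qr(a)
>>> torch.allclose(q @ r, a, atol=1e-6)                 # Q R reconstructs the input
True
>>> torch.allclose(q.t() @ q, torch.eye(4), atol=1e-6)  # Q has orthonormal columns
True
>>> torch.allclose(r.tril(-1), torch.zeros(4, 4))       # R is upper triangular
True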

Parameters
  • input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.

  • some (bool, optional) – Set to True for reduced QR decomposition and False for complete QR decomposition. Default: True.

Keyword Arguments

out (tuple, optional) – tuple of Q and R tensors satisfying input = torch.matmul(Q, R). The dimensions of Q and R are (*, m, k) and (*, k, n) respectively, where k = min(m, n) if some is True and k = m otherwise.
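
A minimal sketch of the keyword form, assuming the usual out= behavior of resizing zero-element output tensors:

>>> a = torch.randn(3, 3)
>>> q_out, r_out = torch.empty(0), torch.empty(0)  # resized by torch.qr to (3, 3) each
>>> _ = torch.qr(a, out=(q_out, r_out))
>>> torch.allclose(torch.matmul(q_out, r_out), a)
True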

Example:

>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571,  0.3943,  0.3314],
        [-0.4286, -0.9029, -0.0343],
        [ 0.2857, -0.1714,  0.9429]])
>>> r
tensor([[ -14.0000,  -21.0000,   14.0000],
        [   0.0000, -175.0000,   70.0000],
        [   0.0000,    0.0000,  -35.0000]])
>>> torch.mm(q, r).round()
tensor([[  12.,  -51.,    4.],
        [   6.,  167.,  -68.],
        [  -4.,   24.,  -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1.,  0.,  0.],
        [ 0.,  1., -0.],
        [ 0., -0.,  1.]])
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True
