Idris2Doc : Tensor

Tensor

The `Tensor` API. A `Tensor` is an array (or scalar) of numbers or booleans. spidr tracks
tensor shape and data type in the types, so you can be sure that if your tensor code compiles,
the shapes and types are consistent.

spidr achieves efficient reuse of tensor computations with `Tag`. See the tutorial
_Nuisances in the Tensor API_ for a discussion of pitfalls to avoid when using `Tag`.

Reexports

import public Control.Monad.State
import public Data.List
import public Data.List.Elem
import public Literal
import public Primitive
import public Types
import public Util

Definitions

data Tensor : Shape -> Type -> Type
  A scalar or array. Construct a `Tensor` with the function `tensor`.

@shape The `Tensor` shape.
@dtype The element type.

Totality: total
Visibility: export
Constructor: 
MkTensor : Expr -> Tensor shape dtype

Hints:
Show (Tag (Tensor shape dtype))
Taggable (Tensor shape dtype)
data TagT : (Type -> Type) -> Type -> Type
  The effect of tagging nodes in a computational graph.

Totality: total
Visibility: export
Constructor: 
MkTagT : StateT Env m a -> TagT m a

Hints:
Monad m => Applicative (TagT m)
Functor m => Functor (TagT m)
Monad m => Monad (TagT m)
MonadTrans TagT
0 Tag : Type -> Type
Totality: total
Visibility: public export
interface Taggable : Type -> Type
Parameters: a
Methods:
tag : Monad m => a -> TagT m a
  Mark an expression to be efficiently reused. For example, in
```
bad : Tensor [9999999] F64
bad = let x = fill {shape = [9999999]} 1.0 in x + x

good : Tag $ Tensor [9999999] F64
good = do x <- tag $ fill {shape = [9999999]} 1.0
          pure (x + x)
```
the large vector `x` is calculated twice in `bad`, but once in `good`, as `tag` marks it for
sharing.

Types that implement this interface should `tag` the constituent components they deem worth sharing.
For example, see the implementation for tuples.

See tutorial [_Nuisances in the Tensor API_](https://github.com/joelberkeley/spidr/blob/master/tutorials/Nuisances.md) for details.

Implementations:
Taggable (Dataset f t)
Taggable (Gaussian event dim)
Taggable (Tensor shape dtype)
(Taggable a, Taggable b) => Taggable (a, b)
tag : Taggable a => Monad m => a -> TagT m a
  Mark an expression to be efficiently reused. For example, in
```
bad : Tensor [9999999] F64
bad = let x = fill {shape = [9999999]} 1.0 in x + x

good : Tag $ Tensor [9999999] F64
good = do x <- tag $ fill {shape = [9999999]} 1.0
          pure (x + x)
```
the large vector `x` is calculated twice in `bad`, but once in `good`, as `tag` marks it for
sharing.

Types that implement this interface should `tag` the constituent components they deem worth sharing.
For example, see the implementation for tuples.

See tutorial [_Nuisances in the Tensor API_](https://github.com/joelberkeley/spidr/blob/master/tutorials/Nuisances.md) for details.

Totality: total
Visibility: public export
tensor : PrimitiveRW dtype a => Literal shape a -> Tensor shape dtype
  Construct a `Tensor` from `Literal` data. For example
```
x : Tensor [2, 3] S32
x = tensor [[1, 2, 3],
            [4, 5, 6]]
```

Totality: total
Visibility: export
fromDouble : Double -> Tensor [] F64
Totality: total
Visibility: export
fromInteger : Integer -> Tensor [] S32
Totality: total
Visibility: export
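Together, these implementations let bare numeric literals denote scalar tensors (cf. the desugaring noted under `negate`). As a sketch:
```
two : Tensor [] S32
two = 2     -- desugars to fromInteger 2

half : Tensor [] F64
half = 0.5  -- desugars to fromDouble 0.5
```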
eval : Device -> PrimitiveRW dtype ty => Tag (Tensor shape dtype) -> IO (Literal shape ty)
  Evaluate a `Tensor`, returning its value as a `Literal`. This function builds and executes the
computational graph.

**Note:** Each call to `eval` will rebuild and execute the graph; multiple calls to `eval` on
different tensors, even if they are in the same computation, will be treated independently.
To efficiently evaluate multiple tensors at once, use `TensorList.Tag.eval`.

Visibility: export
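As a sketch of typical usage, assuming a `Device` obtained from spidr's device API:
```
main : Device -> IO ()
main device = do
  x <- eval device $ do x <- tag $ tensor {dtype = S32} [1, 2, 3]
                        pure (x + x)
  printLn x
```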
eval : Device -> PrimitiveRW dtype ty => Tensor shape dtype -> IO (Literal shape ty)
  A convenience wrapper for `Tag.eval`, for use with a bare `Tensor`.

Visibility: export
data TensorList : List Shape -> List Type -> Type
  A list of `Tensor`s, along with the conversions needed to evaluate them to `Literal`s.
The list is parametrized by the shapes and types of the resulting `Literal`s.

Totality: total
Visibility: public export
Constructors:
Nil : TensorList [] []
(::) : PrimitiveRW dtype ty => Tensor shape dtype -> TensorList shapes tys -> TensorList (shape :: shapes) (ty :: tys)
eval : Device -> Tag (TensorList shapes tys) -> IO (All2 Literal shapes tys)
  Evaluate a list of `Tensor`s as a list of `Literal`s. Tensors in the list can have different
shapes and element types. For example,
```
main : Device -> IO ()
main device = do
  [x, y] <- eval device $ do
    let x = tensor {dtype = F64} [1.2, 3.4]
    y <- reduce @{Sum} [0] x
    pure [x, y]
  printLn x
  printLn y
```
In contrast to `Tensor.eval` when called on multiple tensors, this function constructs and
compiles the graph just once.

Visibility: export
eval : Device -> TensorList shapes tys -> IO (All2 Literal shapes tys)
  A convenience wrapper for `TensorList.Tag.eval`, for use with a bare `TensorList`.

Visibility: export
castDtype : Integral a => Tensor shape a -> Tensor shape F64
  Cast the element type. For example, `castDtype (tensor {dtype = S32} [1, -2])` is
`tensor {dtype = F64} [1.0, -2.0]`.

Totality: total
Visibility: export
reshape : Primitive dtype => {auto 0 _ : product from = product to} -> Tensor from dtype -> Tensor to dtype
  Reshape a `Tensor`. For example, `reshape {to = [2, 1]} (tensor [3, 4])` is
`tensor [[3], [4]]`. The output can have a different rank to the input.

Totality: total
Visibility: export
expand : Primitive dtype => (axis : Nat) -> {auto 0 inBounds : LTE axis (length shape)} -> Tensor shape dtype -> Tensor (insertAt axis 1 shape) dtype
  Add a dimension of length one at the specified `axis`. The new dimension will be at the
specified `axis` in the new `Tensor` (as opposed to the original `Tensor`). For example,
`expand 1 $ tensor [[1, 2], [3, 4], [5, 6]]` is `tensor [[[1, 2]], [[3, 4]], [[5, 6]]]`.

Totality: total
Visibility: export
data Squeezable : (0 _ : Shape) -> (0 _ : Shape) -> Type
  A `Squeezable from to` constitutes proof that the shape `from` can be squeezed to the
shape `to`. Squeezing is the process of removing any number of dimensions of length one.

Totality: total
Visibility: public export
Constructors:
Same : Squeezable x x
  Proof that a shape can be squeezed to itself. For example:

[] to []
[3, 4] to [3, 4]
Match : Squeezable from to -> Squeezable (x :: from) (x :: to)
  Proof that any dimensions (including those of length 1) can be preserved in the process of
squeezing. For example:

...
Nest : Squeezable from to -> Squeezable (1 :: from) to
  Proof that any dimensions of length one can be squeezed out. For example:

[1, 3, 1, 1, 4] to [3, 4]
squeeze : Primitive dtype => {auto 0 _ : Squeezable from to} -> Tensor from dtype -> Tensor to dtype
  Remove dimensions of length one from a `Tensor` such that it has the desired shape. For example:

```
x : Tensor [2, 1, 3, 1] S32
x = tensor [[[[4], [5], [6]]],
            [[[7], [8], [9]]]]

y : Tensor [2, 1, 3] S32
y = squeeze x
```
is
```
y : Tensor [2, 1, 3] S32
y = tensor [[[4, 5, 6]],
            [[7, 8, 9]]]
```

Totality: total
Visibility: export
data SliceOrIndex : Nat -> Type
  A `SliceOrIndex d` is a valid slice or index into a dimension of size `d`. See `slice` for
details.

Totality: total
Visibility: export
Constructors:
Slice : (from : Nat) -> (to : Nat) -> {auto 0 _ : from + size = to} -> {auto 0 _ : LTE to d} -> SliceOrIndex d
Index : (idx : Nat) -> {auto 0 _ : LT idx d} -> SliceOrIndex d
DynamicSlice : Tensor [] U64 -> (size : Nat) -> {auto 0 _ : LTE size d} -> SliceOrIndex d
DynamicIndex : Tensor [] U64 -> SliceOrIndex d
at : (idx : Nat) -> {auto 0 _ : LT idx d} -> SliceOrIndex d
  Index at `idx`. See `slice` for details.

Totality: total
Visibility: public export
at : Tensor [] U64 -> SliceOrIndex d
  Index at the specified index. See `slice` for details.

Totality: total
Visibility: public export
.to : (from : Nat) -> (to : Nat) -> {auto 0 _ : from + size = to} -> {auto 0 _ : LTE to d} -> SliceOrIndex d
  Slice from `from` (inclusive) to `to` (exclusive). See `slice` for details.

Totality: total
Visibility: public export
.size : Tensor [] U64 -> (size : Nat) -> {auto 0 _ : LTE size d} -> SliceOrIndex d
  Slice `size` elements starting at the specified scalar `U64` index. See `slice` for details.

Totality: total
Visibility: public export
all : SliceOrIndex d
  Slice across all indices along an axis. See `slice` for details.

Totality: total
Visibility: public export
data MultiSlice : Shape -> Type
  A `MultiSlice shape` is a valid multi-dimensional slice into a tensor with shape `shape`.
See `slice` for details.

Totality: total
Visibility: public export
Constructors:
Nil : MultiSlice ds
(::) : SliceOrIndex d -> MultiSlice ds -> MultiSlice (d :: ds)
slice : MultiSlice shape -> Shape
  The shape of a tensor produced by slicing with the specified multi-dimensional slice. See
`Tensor.slice` for details.

Totality: total
Visibility: public export
slice : Primitive dtype => (at : MultiSlice shape) -> Tensor shape dtype -> Tensor (slice at) dtype
  Slice or index `Tensor` axes. Each axis can be sliced or indexed, and this can be done with
either static (`Nat`) or dynamic (scalar `U64`) indices.

**Static indices**

Static indices are `Nat`s. For example, for
```
x : Tensor [5, 6] S32
x = tensor [[ 0,  1,  2,  3,  4,  5],
            [ 6,  7,  8,  9, 10, 11],
            [12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23],
            [24, 25, 26, 27, 28, 29]]
```
we can index as `slice [at 1] x` to get
```
x : Tensor [6] S32
x = tensor [6, 7, 8, 9, 10, 11]
```
or we can slice as `slice [2.to 4] x` to get
```
x : Tensor [2, 6] S32
x = tensor [[12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23]]
```
Note that in `2.to 4`, the 2 is inclusive, and the 4 exclusive, so we return indices 2 and 3.

**Dynamic indices**

Dynamic indices are scalar `U64` values, and the API works slightly differently because we
can't know the value of dynamic indices until the graph is executed. For indexing, with scalar
`U64` index `i` in `slice [at i] x`, `i` is clamped to be a valid index into that dimension.
For example, for `i = tensor 1`, `slice [at i] x` is
```
x : Tensor [6] S32
x = tensor [6, 7, 8, 9, 10, 11]
```
as in the static case. However, for `i = tensor 10`, `slice [at i] x` returns the last row
```
x : Tensor [6] S32
x = tensor [24, 25, 26, 27, 28, 29]
```
We can also slice by specifying a scalar `U64` start index, and a static size, as
`slice [i.size 2] x` with `i = tensor 2` to get
```
x : Tensor [2, 6] S32
x = tensor [[12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23]]
```
For a given slice `size`, the dynamic start index is clamped such that we always get `size`
elements along that axis. For example, `slice [i.size 2] x` with `i = tensor 4` is
```
x : Tensor [2, 6] S32
x = tensor [[18, 19, 20, 21, 22, 23],
            [24, 25, 26, 27, 28, 29]]
```
which starts at index 3 rather than index 4.

**Mixed static, dynamic, slicing and indexing**

Each axis can only be sliced or indexed, and must use only static or dynamic indices. However,
across axes, we can mix these four arbitrarily. For example, with `slice [2.to 4, at 1] x` to
get
```
x : Tensor [2] S32
x = tensor [13, 19]
```
or with `i = tensor 2` in `slice [at 1, i.size 2] x` to get
```
x : Tensor [2] S32
x = tensor [8, 9]
```

Slices and indices apply to the leading axes of the tensor. For trailing axes omitted from the
multi-dimensional slice, the whole of the axis is returned. If we want to slice or index over
later axes and retain all indices in a leading axis, we can use the convenience function `all`,
as `slice [all, at 3] x` to get
```
x : Tensor [5] S32
x = tensor [3, 9, 15, 21, 27]
```
This is exactly the same as the more manual `slice [0.to 5, at 3] x` and
`slice [(tensor 0).size 5, at 3] x`.

@at The multi-dimensional slices and indices at which to slice the tensor.

Totality: total
Visibility: export
concat : Primitive dtype => (axis : Nat) -> Tensor s dtype -> Tensor s' dtype -> {auto 0 inBounds : (InBounds axis s, InBounds axis s')} -> {auto 0 _ : deleteAt axis s = deleteAt axis s'} -> Tensor (replaceAt axis (index axis s + index axis s') s) dtype
  Concatenate two `Tensor`s along the specified `axis`. For example,
`concat 0 (tensor [[1, 2], [3, 4]]) (tensor [[5, 6]])` is
`tensor [[1, 2], [3, 4], [5, 6]]`, and `concat 1 (tensor [[3], [6]]) (tensor [[4, 5], [7, 8]])`
is `tensor [[3, 4, 5], [6, 7, 8]]`.

Totality: total
Visibility: export
diag : Primitive dtype => Tensor [n, n] dtype -> Tensor [n] dtype
  The diagonal of a matrix as a vector. For example, for
```
x : Tensor [3, 3] S32
x = tensor [[0, 1, 2],
            [3, 4, 5],
            [6, 7, 8]]
```
`diag x` is `tensor [0, 4, 8]`.

Totality: total
Visibility: export
data Triangle : Type
  Represents the upper- or lower-triangular component of a matrix.

Totality: total
Visibility: public export
Constructors:
Upper : Triangle
Lower : Triangle
triangle : Primitive dtype => Triangle -> Tensor [n, n] dtype -> Tensor [n, n] dtype
  Get the upper- or lower-triangular component of a matrix. For example, for
```
x : Tensor [3, 3] S32
x = tensor [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
```
`triangle Lower x` is
```
x : Tensor [3, 3] S32
x = tensor [[1, 0, 0],
            [4, 5, 0],
            [7, 8, 9]]
```

Totality: total
Visibility: export
.T : Tensor [m, n] dtype -> Tensor [n, m] dtype
  Transpose a matrix. For example, `(tensor [[1, 2], [3, 4]]).T` is `tensor [[1, 3], [2, 4]]`.

Totality: total
Visibility: export
transpose : (ordering : List Nat) -> Tensor shape dtype -> {auto 0 _ : length ordering = length shape} -> {auto 0 _ : unique ordering = True} -> {auto 0 inBounds : All (flip InBounds shape) ordering} -> Tensor (multiIndex ordering shape) dtype
  Transpose axes of a tensor. This is a more general version of `(.T)`, in which you can
transpose any number of axes in a tensor of arbitrary rank. The i'th axis in the resulting
tensor corresponds to the `index i ordering`'th axis in the input tensor. For example, for
```
x : Tensor [2, 3, 4] S32
x = tensor [[[ 0,  1,  2,  3],
             [ 4,  5,  6,  7],
             [ 8,  9, 10, 11]],
            [[12, 13, 14, 15],
             [16, 17, 18, 19],
             [20, 21, 22, 23]]]
```
`transpose [0, 2, 1] x` is
```
x : Tensor [2, 4, 3] S32
x = tensor [[[ 0,  4,  8],
             [ 1,  5,  9],
             [ 2,  6, 10],
             [ 3,  7, 11]],
            [[12, 16, 20],
             [13, 17, 21],
             [14, 18, 22],
             [15, 19, 23]]]
```
`transpose [2, 0, 1] x` is
```
x : Tensor [4, 2, 3] S32
x = tensor [[[ 0,  4,  8],
             [12, 16, 20]],
            [[ 1,  5,  9],
             [13, 17, 21]],
            [[ 2,  6, 10],
             [14, 18, 22]],
            [[ 3,  7, 11],
             [15, 19, 23]]]
```

In order to see what effect transposing a tensor has, it can help to bear in mind the following:
* if an element can be found with `slice [at 3, at 4, at 5] x` in the original tensor,
that same element can instead be found with `slice [at 5, at 3, at 4]` given a
`transpose [2, 0, 1]`. That is, transposing axes re-orders indices when indexing.
* with `transpose [2, 0, 1]`, traversing the first axis in the result is equivalent to
traversing the last axis in the input. Similarly, traversing the last axis in the result is
equivalent to traversing the second axis in the input.

Totality: total
Visibility: export
identity : Num dtype => Tensor [n, n] dtype
  The identity tensor, with inferred shape and element type. For example,
```
x : Tensor [2, 2] S32
x = identity
```
is
```
x : Tensor [2, 2] S32
x = tensor [[1, 0],
            [0, 1]]
```

Totality: total
Visibility: export
data DimBroadcastable : (0 _ : Nat) -> (0 _ : Nat) -> Type
  A `DimBroadcastable from to` proves that a dimension of size `from` can be broadcast to a
dimension of size `to`.

Totality: total
Visibility: public export
Constructors:
Same : DimBroadcastable x x
  Proof that any dimension can be broadcast to itself. For example in shapes `[2, 3]` to
`[2, 3]`.
Stack : DimBroadcastable 1 _
  Proof that a dimension of length one can be broadcast to any size. For example in shapes
`[2, 1]` to `[2, 3]`
Zero : DimBroadcastable _ 0
  Proof that any dimension can be broadcast to zero. For example in shapes `[2, 3]` to `[2, 0]`.
data Broadcastable : (0 _ : Shape) -> (0 _ : Shape) -> Type
  A `Broadcastable from to` constitutes proof that the shape `from` can be broadcast to the
shape `to`.

Totality: total
Visibility: public export
Constructors:
Same : Broadcastable x x
  Proof that a shape can be broadcast to itself. For example:

[] to []
[3, 4] to [3, 4]

Implementation note: we could have used `Broadcast [] []`, which would have resulted in more
atomic constructors for `Broadcastable`, but the author guesses that this implementation helps
the type checker avoid applications of `Match`.
Match : {auto 0 _ : length from = length to} -> {auto 0 _ : DimBroadcastable f t} -> Broadcastable from to -> Broadcastable (f :: from) (t :: to)
  Proof that a dimension of size `f` can be broadcast to size `t` if these dimensions
are `DimBroadcastable f t`. For example:

[2, 3] to [2, 3]
[2, 1] to [2, 3]
[2, 1] to [2, 0]
Nest : Broadcastable f t -> Broadcastable f (_ :: t)
  Proof that broadcasting can add outer dimensions i.e. nesting. For example:

[3] to [1, 3]
[3] to [5, 3]

Hint: 
(to : Shape) -> Broadcastable [] to
broadcastableByLeading : (leading : List Nat) -> Broadcastable shape (leading ++ shape)
  A shape can be extended with any number of leading dimensions.

@leading The leading dimensions.

Totality: total
Visibility: export
scalarToAnyOk : (to : Shape) -> Broadcastable [] to
  A scalar can be broadcast to any shape.

Totality: total
Visibility: export
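As a sketch, the proof from `scalarToAnyOk` can be supplied to `broadcast` explicitly (though `broadcast` will typically find it by proof search):
```
threes : Tensor [2, 3] S32
threes = broadcast @{scalarToAnyOk [2, 3]} (tensor 3)
```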
broadcast : Primitive dtype => Broadcastable from to => Tensor from dtype -> Tensor to dtype
  Broadcast a `Tensor` to a new compatible shape. For example,
```
x : Tensor [2, 3] S32
x = broadcast (tensor [4, 5, 6])
```
is
```
x : Tensor [2, 3] S32
x = tensor [[4, 5, 6], [4, 5, 6]]
```

Totality: total
Visibility: export
fill : PrimitiveRW dtype ty => ty -> Tensor shape dtype
  A `Tensor` where every element has the specified value. For example,
```
fives : Tensor [2, 3] S32
fives = fill 5
```
is
```
fives : Tensor [2, 3] S32
fives = tensor [[5, 5, 5],
                [5, 5, 5]]
```

Totality: total
Visibility: export
iota : Num dtype => (axis : Nat) -> {auto 0 _ : InBounds axis shape} -> Tensor shape dtype
  A constant where values increment from zero along the specified `axis`. For example,
```
x : Tensor [3, 5] S32
x = iota 1
```
is the same as
```
x : Tensor [3, 5] S32
x = tensor [[0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4]]
```
and
```
x : Tensor [3, 5] S32
x = iota 0
```
is the same as
```
x : Tensor [3, 5] S32
x = tensor [[0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1],
            [2, 2, 2, 2, 2]]
```

Totality: total
Visibility: export
map : (Primitive a, Primitive b) => (Tensor [] a -> Tag (Tensor [] b)) -> Tensor shape a -> Tag (Tensor shape b)
  Lift a unary function on scalars to an element-wise function on `Tensor`s of arbitrary shape.
For example,
```
recip : Tensor [] F64 -> Tag $ Tensor [] F64
recip x = pure $ 1.0 / x
```
can be lifted to an element-wise reciprocal function as `map recip (tensor [-2, 0.4])`,
which produces `tensor [-0.5, 2.5]`.

**Note:** Values tagged in the same scope as `map` cannot then be used within the scalar
function passed to `map`. This is due to an [issue in XLA](https://github.com/openxla/xla/issues/14299).

Totality: total
Visibility: export
map2 : (Primitive a, (Primitive b, Primitive c)) => (Tensor [] a -> Tensor [] b -> Tag (Tensor [] c)) -> Tensor shape a -> Tensor shape b -> Tag (Tensor shape c)
  Lift a binary function on scalars to an element-wise function on `Tensor`s of arbitrary shape.
For example,
```
addRecip : Tensor [] F64 -> Tensor [] F64 -> Tag $ Tensor [] F64
addRecip x y = pure $ x + 1.0 / y
```
can be lifted to an element-wise function as
`map2 addRecip (tensor [3.0, -3.0]) (tensor [-2.0, 0.4])`, which produces `tensor [2.5, -0.5]`.

**Note:** Values tagged in the same scope as `map2` cannot then be used within the scalar
function passed to `map2`. This is due to an [issue in XLA](https://github.com/openxla/xla/issues/14299).

Totality: total
Visibility: export
reduce : Monoid (Tensor [] dtype) => Primitive dtype => (axes : List Nat) -> {auto 0 _ : Sorted LT axes} -> {auto 0 axesInBounds : All (flip InBounds shape) axes} -> Tensor shape dtype -> Tag (Tensor (deleteAt axes shape) dtype)
  Reduce elements along one or more `axes` of a `Tensor` according to a specified `reducer`
`Monoid`. For example, if `x = tensor [[0, 1, 2], [3, 4, 5]]`, then `reduce @{Sum} [0] x`
produces `tensor [3, 5, 7]`, and `reduce @{Sum} [1] x` produces `tensor [3, 12]`.

**Note:** `Semigroup` doesn't use `Tag`, which limits the functions that can be used in
`reduce`. However, the most commonly used semigroups don't need `Tag`, including `Sum`,
`Prod`, `Min` and `Max`, so for ergonomics, we have opted to use `Monoid` as is. We can
provide an overloaded variant if requested.

**Note:** Values tagged in the same scope as `reduce` cannot then be used within the binary
function supplied by the `Monoid`. This is due to an [issue in XLA](https://github.com/openxla/xla/issues/14299).

@reducer How to reduce elements along the given `axes`.
@axes The axes along which to reduce elements.

Totality: total
Visibility: export
sort : Primitive dtype => (Tensor [] dtype -> Tensor [] dtype -> Tensor [] PRED) -> (dimension : Nat) -> Tensor shape dtype -> {auto 0 _ : InBounds dimension shape} -> Tag (Tensor shape dtype)
  Sort the elements of a `Tensor` along a specified `dimension` according to a scalar-wise
ordering. For sorting function `f`, elements are sorted such that for consecutive sorted
elements `a` and `b`, either `f a b` is true, or `f a b` *and* `f b a` are false.

**Note:** Sorting is not stable, meaning elements that compare equal according to the ordering
may appear in the output in a different order than they appear in the input.

**Note:** `sort` is limited to comparison functions that don't use `Tag`. However, since the most
commonly-used functions, including (>), (<), (>=), and (<=), don't use `Tag`, we have opted to
omit it for ergonomics. We can trivially provide an overloaded variant if requested.

**Note:** Values tagged in the same scope as `sort` cannot then be used within the scalar
function passed to `sort`. This is due to an [issue in XLA](https://github.com/openxla/xla/issues/14299).

For example, for `x = tensor [[1, 6, 4], [3, 2, 5]]`, `sort (<) 0 x` produces
`tensor [[1, 2, 4], [3, 6, 5]]`, while `sort (<) 1 x` produces
`tensor [[1, 4, 6], [2, 3, 5]]`.

Totality: total
Visibility: export
reverse : (axes : List Nat) -> {auto 0 _ : Sorted LT axes} -> {auto 0 _ : All (flip InBounds shape) axes} -> Tensor shape dtype -> Tensor shape dtype
  Reverse elements along the specified axes. For example, for
```
x : Tensor [2, 3] S32
x = tensor [[-2, -1, 0],
            [ 1,  2, 3]]
```
`reverse [0] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 1,  2, 3],
            [-2, -1, 0]]
```
`reverse [1] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 0, -1, -2],
            [ 3,  2,  1]]
```
and `reverse [0, 1] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 3,  2,  1],
            [ 0, -1, -2]]
```

**Note:** This function requires `axes` is ordered simply so that elements are unique.
The ordering itself is irrelevant to the implementation, but ensures uniqueness without using
proofs of contradiction that can be difficult for Idris to construct.

Totality: total
Visibility: export
(==) : Eq dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise equality. For example, `tensor [1, 2] == tensor [1, 3]` is
`tensor [True, False]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(/=) : Eq dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise inequality. For example, `tensor [1, 2] /= tensor [1, 3]` is
`tensor [False, True]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(<) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise less than. For example, `tensor [1, 2, 3] < tensor [2, 2, 2]` is
`tensor [True, False, False]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(>) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise greater than. For example, `tensor [1, 2, 3] > tensor [2, 2, 2]` is
`tensor [False, False, True]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(<=) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise less than or equal. For example, `tensor [1, 2, 3] <= tensor [2, 2, 2]`
is `tensor [True, True, False]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(>=) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED
  Element-wise greater than or equal. For example,
`tensor [1, 2, 3] >= tensor [2, 2, 2]` is `tensor [False, True, True]`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(&&) : Tensor shape PRED -> Tensor shape PRED -> Tensor shape PRED
  Element-wise boolean and. For example,
`tensor [True, True, False, False] && tensor [True, False, True, False]` is
`tensor [True, False, False, False]`.

Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 5
(||) : Tensor shape PRED -> Tensor shape PRED -> Tensor shape PRED
  Element-wise boolean or. For example,
`tensor [True, True, False, False] || tensor [True, False, True, False]` is
`tensor [True, True, True, False]`.

Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 4
not : Tensor shape PRED -> Tensor shape PRED
  Element-wise boolean negation. For example, `not (tensor [True, False])` is
`tensor [False, True]`.

Totality: total
Visibility: export
select : Primitive dtype => Tensor shape PRED -> Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  Choose elements from two `Tensor`s based on a `Tensor` of predicates. For each element in the
predicates, the output will use the corresponding element from `onTrue` if the element is
truthy, else the element from `onFalse`. For example, for
```
preds : Tensor [3] PRED
preds = tensor [False, True, False]

onTrue : Tensor [3] S32
onTrue = tensor [1, 2, 3]

onFalse : Tensor [3] S32
onFalse = tensor [4, 5, 6]
```
`select preds onTrue onFalse` is `tensor [4, 2, 6]`.

@onTrue The elements to choose where the predicate elements are truthy.
@onFalse The elements to choose where the predicate elements are falsy.

Totality: total
Visibility: export
cond : (Primitive tt, (Primitive ft, Primitive dtype)) => Tensor [] PRED -> (Tensor ts tt -> Tag (Tensor shape dtype)) -> Tensor ts tt -> (Tensor fs ft -> Tag (Tensor shape dtype)) -> Tensor fs ft -> Tag (Tensor shape dtype)
  Use a scalar predicate to choose which of two functions to evaluate. If the predicate is truthy,
evaluate `onTrue` on the corresponding specified argument, otherwise evaluate `onFalse` on the
corresponding specified argument. The result of the evaluated function is returned. For example,
for
```
x : Tensor [2] S32
x = tensor [2, -1]

y : Tensor [2, 2] S32
y = tensor [[5, 6],
            [7, 8]]
```
`cond (tensor True) (pure . (tensor 2 *)) x (pure . diag) y` produces `tensor [4, -2]` and
`cond (tensor False) (pure . (tensor 2 *)) x (pure . diag) y` produces `tensor [5, 8]`.

While both functions will be called for the purposes of defining the computation, only one will
be evaluated with its specified argument. That is, this function short-circuits.

**Note:** Values tagged in the same scope as `cond` cannot then be used in either function
passed to `cond`. This is due to an [issue in XLA](https://github.com/openxla/xla/issues/14299).

@onTrue The function to execute if the predicate is truthy.
@onFalse The function to execute if the predicate is falsy.

Totality: total
Visibility: export
(@@) : Num dtype => Tensor [S m] dtype -> Tensor [S m] dtype -> Tensor [] dtype
  Vector dot product. For example, `tensor [0, 1, 2] @@ tensor [-1, -3, -1]` is
`-5`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
(@@) : (Primitive dtype, Num dtype) => Tensor [n, S m] dtype -> Tensor (S m :: tl) dtype -> {auto 0 _ : LTE (length tl) 1} -> Tensor (n :: tl) dtype
  Matrix multiplication with a matrix or vector. Contraction is along the last axis of the first
and the first axis of the last. For example,
```
x : Tensor [2, 3] S32
x = tensor [[-1, -2, -3],
            [ 0,  1,  2]]

y : Tensor [3, 1] S32
y = tensor [[4],
            [0],
            [5]]

z : Tensor [2, 1] S32
z = x @@ y
```
is
```
z : Tensor [2, 1] S32
z = tensor [[-19],
            [ 10]]
```

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
contract : (lBatch : List Nat) -> (rBatch : List Nat) -> (lContract : List Nat) -> (rContract : List Nat) -> (ls : Shape) -> (rs : Shape) -> {auto 0 _ : All (flip InBounds ls) lBatch} -> {auto 0 _ : All (flip InBounds rs) rBatch} -> {auto 0 _ : All (flip InBounds ls) lContract} -> {auto 0 _ : All (flip InBounds rs) rContract} -> Shape
  The output shape of a `dotGeneral` operation.

Totality: total
Visibility: public export
dotGeneral : (lBatch : List Nat) -> (rBatch : List Nat) -> (lContract : List Nat) -> (rContract : List Nat) -> {auto 0 _ : unique (lBatch ++ lContract) = True} -> {auto 0 _ : unique (rBatch ++ rContract) = True} -> {auto 0 lInBoundsBatch : All (flip InBounds ls) lBatch} -> {auto 0 rInBoundsBatch : All (flip InBounds rs) rBatch} -> {auto 0 lInBoundsContract : All (flip InBounds ls) lContract} -> {auto 0 rInBoundsContract : All (flip InBounds rs) rContract} -> {auto 0 _ : multiIndex lBatch ls = multiIndex rBatch rs} -> {auto 0 _ : multiIndex lContract ls = multiIndex rContract rs} -> Tensor ls dtype -> Tensor rs dtype -> Tensor (contract lBatch rBatch lContract rContract ls rs) dtype
  Matrix multiplication.

This is a much more general version of `(@@)`, in which you can specify any number of batch
and contracting axes. Matrix multiplication is done over each contracting axis.
The operation is vectorized over batch axes. For each contracting axis on the left-hand
operand, there is one contracting axis on the right-hand operand. These can be different axes
in each operand. The same is true for each batch axis.

For example, we can vectorize over a typical rank-two matrix multiplication as follows: given
two inputs tensors
```
let x : Tensor [3, 4, 5, 6] F64
    y : Tensor [3, 4, 6, 7] F64
```
we do
```
let z : Tensor [3, 4, 5, 7] F64 = dotGeneral [0, 1] [0, 1] [3] [2] x y
```
Here, we vectorized over the first two axes `[0, 1]`, and do standard matrix multiplication
over the remaining axes by specifying the axes 3 and 2 respectively as contracting axes. Notice
how the batch axes appear once each at the start of the output shape, and the contracting axis
disappears. Remaining axes appear in order from left to right.

Note this API is somewhat of a quickfix to bring general matrix multiplication to the tensor
API. It is not thoroughly tested. Expect it to change in the future.

Totality: total
Visibility: export
(+) : Num dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  Element-wise addition. For example, `tensor [1, 2] + tensor [3, 4]` is
`tensor [4, 6]`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 8
negate : Neg dtype => Tensor shape dtype -> Tensor shape dtype
  Element-wise negation. For example, `- tensor [1, -2]` is `tensor [-1, 2]`.

Totality: total
Visibility: export
Fixity Declaration: prefix operator, level 10
negate : Neg ty => ty -> ty
  The function underlying unary minus. `-5` desugars to `negate (fromInteger 5)`.

Totality: total
Visibility: public export
Fixity Declaration: prefix operator, level 10
(-) : Neg dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  Element-wise subtraction. For example, `tensor [3, 4] - tensor [4, 2]` is
`tensor [-1, 2]`.

Totality: total
Visibility: export
Fixity Declarations:
infixl operator, level 8
prefix operator, level 10
(*) : Num dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  Element-wise multiplication. For example, `tensor [2, 3] * tensor [4, 5]` is
`tensor [8, 15]`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
(*) : Num dtype => Tensor [] dtype -> Tensor (d :: ds) dtype -> Tensor (d :: ds) dtype
  Multiplication by a scalar. For example, `tensor 2 * tensor [3, 5]` is
`tensor [6, 10]`.

The RHS is required to be non-scalar simply to avoid ambiguities with element-wise `(*)`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
(/) : Fractional dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  Element-wise floating point division. For example, `tensor [2, 3] / tensor [4, 5]` is
`tensor [0.5, 0.6]`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
(/) : Fractional dtype => Tensor (d :: ds) dtype -> Tensor [] dtype -> Tensor (d :: ds) dtype
  Floating point division by a scalar. For example, `tensor [3.4, -5.6] / tensor 2` is
`tensor [1.7, -2.8]`.

The LHS is required to be non-scalar simply to avoid ambiguities with element-wise `(/)`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
div : Tensor shape U64 -> (denom : Literal shape Nat) -> {auto 0 _ : All IsSucc denom} -> Tensor shape U64
  Element-wise division of natural numbers. For example,
`div (tensor [Scalar 13, Scalar 8]) [3, 4]` is `tensor [4, 2]`.

Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
rem : Tensor shape U64 -> (denom : Literal shape Nat) -> {auto 0 _ : All IsSucc denom} -> Tensor shape U64
  Element-wise remainder for natural numbers. For example,
`rem (tensor [Scalar 13, Scalar 8]) [3, 4]` is `tensor [1, 0]`.

Totality: total
Visibility: export
recip : Tensor shape F64 -> Tensor shape F64
  The element-wise reciprocal. For example, `recip (tensor [-2, 0, 0.2])`
is `tensor [-0.5, nan, 5]`.

Totality: total
Visibility: export
(^) : Tensor shape F64 -> Tensor shape F64 -> Tensor shape F64
  Each element in `base` raised to the power of the corresponding element in `exponent`. For
example, `tensor [2, 25, -9] ^ tensor [3, -0.5, 0.5]` is `tensor [8, 0.2, nan]`.

Note: The behaviour of this function is not well-defined at negative or positive infinity, or
NaN.

Note: The first root is used.

Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 9
abs : Abs dtype => Tensor shape dtype -> Tensor shape dtype
  Element-wise absolute value. For example, `abs (tensor [-2, 3])` is `tensor [2, 3]`.

Totality: total
Visibility: export
exp : Tensor shape F64 -> Tensor shape F64
  The element-wise natural exponential. For example, `exp (tensor [-1, 0, 2])` is
`tensor [1 / euler, 1, pow euler 2]`.

Totality: total
Visibility: export
floor : Tensor shape F64 -> Tensor shape F64
  The element-wise floor function. For example,
`floor (tensor [-1.6, -1.5, -1.4, -1.0, 1.0, 1.4, 1.5, 1.6])` is
`tensor [-2.0, -2.0, -2.0, -1.0, 1.0, 1.0, 1.0, 1.0]`.

Totality: total
Visibility: export
ceil : Tensor shape F64 -> Tensor shape F64
  The element-wise ceiling function. For example,
`ceil (tensor [-1.6, -1.5, -1.4, -1.0, 1.0, 1.4, 1.5, 1.6])` is
`tensor [-1.0, -1.0, -1.0, -1.0, 1.0, 2.0, 2.0, 2.0]`.

Totality: total
Visibility: export
log : Tensor shape F64 -> Tensor shape F64
  The element-wise natural logarithm. Negative inputs yield NaN output. For example,
`log (tensor [1 / euler, 1, euler * euler])` is `tensor [-1, 0, 2]`.

Totality: total
Visibility: export
logistic : Tensor shape F64 -> Tensor shape F64
  The element-wise logistic function, equivalent to `1 / (1 + exp (-x))`.

Totality: total
Visibility: export
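As an illustrative sketch (not from the library source), `logistic` maps inputs into the open
interval (0, 1), with an input of zero mapping to 0.5:

```
-- A sketch: large negative inputs approach 0, large positive inputs approach 1.
sigmoids : Tensor [3] F64
sigmoids = logistic (tensor [-100.0, 0.0, 100.0])  -- approximately tensor [0.0, 0.5, 1.0]
```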
sin : Tensor shape F64 -> Tensor shape F64
  The element-wise sine.

Totality: total
Visibility: export
cos : Tensor shape F64 -> Tensor shape F64
  The element-wise cosine.

Totality: total
Visibility: export
tan : Tensor shape F64 -> Tensor shape F64
  The element-wise tangent.

Totality: total
Visibility: export
asin : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse sine.

Totality: total
Visibility: export
acos : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse cosine.

Totality: total
Visibility: export
atan : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse tangent.

Totality: total
Visibility: export
sinh : Tensor shape F64 -> Tensor shape F64
  The element-wise hyperbolic sine.

Totality: total
Visibility: export
cosh : Tensor shape F64 -> Tensor shape F64
  The element-wise hyperbolic cosine.

Totality: total
Visibility: export
tanh : Tensor shape F64 -> Tensor shape F64
  The element-wise hyperbolic tangent.

Totality: total
Visibility: export
asinh : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse hyperbolic sine.

Totality: total
Visibility: export
acosh : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse hyperbolic cosine.

Totality: total
Visibility: export
atanh : Tensor shape F64 -> Tensor shape F64
  The element-wise inverse hyperbolic tangent.

Totality: total
Visibility: export
erf : Tensor shape F64 -> Tensor shape F64
  An approximation to the element-wise error function.

Totality: total
Visibility: export
square : Tensor shape F64 -> Tensor shape F64
  The element-wise square. For example, `square (tensor [-2, 0, 3])`
is `tensor [4, 0, 9]`.

Totality: total
Visibility: export
sqrt : Tensor shape F64 -> Tensor shape F64
  The element-wise square root. The first root is used. Negative inputs yield NaN output.
For example, `sqrt (tensor [0, 9])` is `tensor [0, 3]`.

Totality: total
Visibility: export
min : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  The element-wise minimum of the first argument compared to the second. For example,
`min (tensor [-3, -1, 3]) (tensor [-1, 0, 1])` is `tensor [-3, -1, 1]`.

Totality: total
Visibility: export
max : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype
  The element-wise maximum of the first argument compared to the second. For example,
`max (tensor [-3, -1, 3]) (tensor [-1, 0, 1])` is `tensor [-1, 0, 3]`.

Totality: total
Visibility: export
argmax : Ord dtype => Tensor [S n] dtype -> Tag (Tensor [] U64)
  The first index of the maximum value in a vector. For example,
`argmax (tensor [-1, 3, -2, -2, 3])` produces `tensor 1`. If the vector contains NaN values,
`argmax` returns the index of the first NaN.

**Note:** `argmax` uses `Tag` to work around what we believe to be an inconsistency in the XLA
compiler's handling of NaN. Specifically, we have modified `argmax` to return the first index of
the value returned by `reduce @{Max}`.

Totality: total
Visibility: export
argmin : Ord dtype => Tensor [S n] dtype -> Tag (Tensor [] U64)
  The first index of the minimum value in a vector. For example,
`argmin (tensor [-1, 3, -2, -2, 3])` produces `tensor 2`. If the vector contains NaN values,
`argmin` returns the index of the first NaN.

**Note:** `argmin` uses `Tag` to work around what we believe to be an inconsistency in the XLA
compiler's handling of NaN. Specifically, we have modified `argmin` to return the first index of
the value returned by `reduce @{Min}`.

Totality: total
Visibility: export
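Because `argmax` and `argmin` return their results in `Tag`, they compose with do-notation. A
hedged sketch (the vector and `the`-ascription are illustrative, not from the library source):

```
-- Sketch: indices of the extremes of a vector, composed in the `Tag` monad.
extremes : Tag (Tensor [] U64, Tensor [] U64)
extremes = do let xs = the (Tensor [5] F64) (tensor [-1, 3, -2, -2, 3])
              i <- argmax xs
              j <- argmin xs
              pure (i, j)
```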
cholesky : Tensor [S n, S n] F64 -> Tensor [S n, S n] F64
  Cholesky decomposition. Computes the lower triangular matrix `L` from the symmetric, positive
semi-definite matrix `X` s.t. `X = L @@ L.T`. Values will be NaN if the input matrix is not
positive semi-definite. The remaining matrix components - those not in the lower triangle or
diagonal - will always be zero.

Totality: total
Visibility: export
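An illustrative sketch, decomposing a small positive-definite matrix (the input values are
hypothetical):

```
-- Sketch: `l` is lower triangular and satisfies x = l @@ l.T
-- for the positive-definite matrix x below.
l : Tensor [2, 2] F64
l = cholesky (tensor [[4.0, 2.0], [2.0, 3.0]])
```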
(|\) : Tensor [m, m] F64 -> Tensor [m, n] F64 -> Tensor [m, n] F64
  Solve the set of linear equations `a @@ x = b` for `x` where `a` is a lower-triangular matrix.
`a` is given by the lower-triangular elements of the first argument. Values in the
upper-triangular part are ignored. If `a` is lower-triangular already,
this is written `a |\ b`.

The operator is shaped like the lower-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(\|)`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9
(\|) : Tensor [m, m] F64 -> Tensor [m, n] F64 -> Tensor [m, n] F64
  Solve the set of linear equations `a @@ x = b` for `x` where `a` is an upper-triangular
matrix. `a` is given by the upper-triangular elements of the first argument. Values in the
lower-triangular part are ignored. If `a` is upper-triangular already, this is written
`a \| b`.

The operator is shaped like the upper-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(|\)`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9
(|\) : Tensor [m, m] F64 -> Tensor [m] F64 -> Tensor [m] F64
  Solve the set of linear equations `a @@ x = b` for `x` where `a` is a lower-triangular matrix.
`a` is given by the lower-triangular elements of the first argument. Values in the
upper-triangular part are ignored. If `a` is lower-triangular already,
this is written `a |\ b`.

The operator is shaped like the lower-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(\|)`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9
(\|) : Tensor [m, m] F64 -> Tensor [m] F64 -> Tensor [m] F64
  Solve the set of linear equations `a @@ x = b` for `x` where `a` is an upper-triangular
matrix. `a` is given by the upper-triangular elements of the first argument. Values in the
lower-triangular part are ignored. If `a` is upper-triangular already, this is written
`a \| b`.

The operator is shaped like the upper-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(|\)`.

Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9
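These primitives compose into a common pattern: solving `a @@ x = b` for symmetric
positive-definite `a` via its Cholesky factor. This is a sketch, assuming a transpose `(.T)` as
referenced in the `cholesky` docstring; `solvePosDef` is a hypothetical helper, not part of this
API:

```
-- Sketch: with a = l @@ l.T, first solve l @@ y = b, then l.T @@ x = y.
solvePosDef : Tensor [S n, S n] F64 -> Tensor [S n] F64 -> Tensor [S n] F64
solvePosDef a b = let l = cholesky a in l.T \| (l |\ b)
```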
trace : (Num dtype, Num a) => PrimitiveRW dtype a => Tensor [S n, S n] dtype -> Tag (Tensor [] dtype)
  Sum the elements along the diagonal of the input. For example,
`trace (tensor [[-1, 5], [1, 4]])` produces `tensor 3`.

Totality: total
Visibility: export
0 Rand : Type -> Type
  A `Rand a` produces a pseudo-random value of type `a` from a `Tensor [1] U64` state.
The state is updated each time a new value is generated.

Totality: total
Visibility: public export
uniform : Tensor [] U64 -> Tensor shape F64 -> Tensor shape F64 -> Tag (Rand (Tensor shape F64))
  Generate independent and identically distributed (IID) uniform samples bounded element-wise
between `bound` and `bound'`.

`bound` and `bound'` need not be ordered: samples are generated, element-wise, in
[min bound bound', max bound bound'). If the bounds are equal, samples are generated at the
common bound when it is finite, and are NaN otherwise.

The generated samples are a deterministic function of the input key and state, but may vary
between backends and library versions.

Example usage, multiplying two uniform samples
```
x : Tag $ Tensor [3] F64
x = do let key = tensor (Scalar 2)
           initialState = tensor [Scalar 0]
       rng <- uniform key (fill 0.0) (fill 1.0)
       evalStateT initialState [| rng * rng |]
```

@key Determines the stream of generated samples.
@bound A bound of the samples. See full docstring for details.
@bound' A bound of the samples. See full docstring for details.

Totality: total
Visibility: export
normal : Tensor [] U64 -> Rand (Tensor shape F64)
  Generate independent and identically distributed (IID) samples from the standard normal
distribution.

The generated samples are a deterministic function of the input key and state, but may vary
between backends and library versions.

Example usage, multiplying two normal samples
```
x : Tag $ Tensor [3] F64
x = let key = tensor (Scalar 2)
        rng = normal key
        initialState = tensor [Scalar 0]
    in evalStateT initialState [| rng * rng |]
```

@key Determines the stream of generated samples.

Totality: total
Visibility: export