Core

Transform(p: float = 1.0)
Base Transform class. Every Transform inherits from this class and implements get_params() and apply(); apply() is always based on the functional counterpart of the Transform class. A Transform accepts multiple batches of point clouds (typically sources and targets), since it is often desirable to apply the same random transform to several batches of point clouds. If multiple point clouds are passed, they MUST all have the same length. A Transform is applied with the provided probability self.p.
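For illustration, here is a minimal sketch of a custom Transform subclass, assuming the base class and its helpers are exposed under the polar.train.data.transforms namespace and that point clouds carry 3-D coordinates; RandomJitter and its sigma parameter are hypothetical names, not part of the library.

import torch
from torch import Tensor

from polar.train.data import transforms as T


class RandomJitter(T.Transform):
    """Hypothetical Transform adding the same Gaussian jitter to every batch."""

    def __init__(self, sigma: float = 0.01, p: float = 1.0):
        super().__init__(p=p)
        self.sigma = sigma

    def get_params(self, **data: Tensor) -> dict:
        # Draw the noise once so that sources and targets receive the same jitter.
        batch_size = self.get_batch_size(**data)
        num_points = self.get_num_points(**data)
        return {"noise": self.sigma * torch.randn(batch_size, num_points, 3)}

    def apply(self, pointclouds: Tensor, **params) -> Tensor:
        # The functional counterpart is inlined here for brevity: shift each
        # point by the shared noise tensor.
        return pointclouds + params["noise"]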
get_batch_size(**data: Tensor) -> int
staticmethod

Static method. Assumes every value in the data dict has the same shape.
get_num_points(**data: Tensor) -> int
staticmethod

Static method. Assumes every value in the data dict has the same shape.
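Neither helper's body is reproduced here; the following is a plausible sketch (written as free functions for brevity), assuming every value in data has shape (batch_size, num_points, *).

from torch import Tensor


def get_batch_size(**data: Tensor) -> int:
    # All values are assumed to share the same shape, so inspecting any one suffices.
    return next(iter(data.values())).shape[0]


def get_num_points(**data: Tensor) -> int:
    return next(iter(data.values())).shape[1]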
get_params(**data: Tensor) -> dict

Shared parameters for one apply (usually random values).

Parameters:
- **data (Tensor, default: {}) – Dictionary with str keys and batches of point clouds of shape (batch_size, num_points, *) as values, where * denotes spatial coordinates. Typically, data = {'source': ..., 'target': ...}.

Returns:
- params (dict) – Params used by the transform (e.g. Euler angles for rotation).
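As a concrete illustration (hypothetical transform and key name, not taken from the library), a random-rotation transform could return one set of Euler angles per batch item, so that every entry of data is later rotated identically:

import torch

# Illustrative params dict for a batch of 4 point-cloud pairs:
# one (batch_size, 3) tensor of Euler angles shared by source and target.
params = {"angles": torch.rand(4, 3) * 2 * torch.pi}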
apply(pointclouds: Tensor, **params) -> Tensor

Apply the functional transform, with the params obtained by self.get_params(), to one batch of point clouds.

Parameters:
- pointclouds (Tensor) – Batch of point clouds of shape (batch_size, num_points, *) where * denotes spatial coordinates.

Returns:
- Transformed tensor – Transformed batch of point clouds of shape (batch_size, num_points, *) where * denotes spatial coordinates.
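The sketch below shows the get_params / apply handshake that __call__ performs internally, reusing the hypothetical RandomJitter subclass from above; only the documented signatures are assumed.

import torch

source = torch.randn(8, 1024, 3)  # (batch_size, num_points, 3)
target = torch.randn(8, 1024, 3)

jitter = RandomJitter(sigma=0.02)

# Draw the random params once, then reuse them so both batches undergo
# exactly the same perturbation.
params = jitter.get_params(source=source, target=target)
jittered_source = jitter.apply(source, **params)
jittered_target = jitter.apply(target, **params)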
__call__(**data: Tensor) -> dict[str, Tensor]

Call self.apply with a probability self.p on every value in the provided dictionary.

Returns:
- dict[str, Tensor] – Transformed data: same dictionary structure as the input, with values transformed (with a certain probability).
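In practice one usually goes through __call__ directly, which draws the params, handles the probability check, and iterates over every key; a sketch reusing the hypothetical RandomJitter from above:

import torch

jitter = RandomJitter(sigma=0.02, p=0.5)  # applied to roughly half of the calls

data = {
    "source": torch.randn(8, 1024, 3),
    "target": torch.randn(8, 1024, 3),
}

# Same keys in, same keys out; both values are transformed together (or neither is).
out = jitter(**data)
print(out["source"].shape)  # torch.Size([8, 1024, 3])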
Compose(transforms: Sequence[Transform], p: float = 1.0)

Bases: Transform

Very simple mechanism to chain Transforms: nothing more than a wrapper that stores a sequence of Transforms to be applied iteratively to every value in a provided dictionary. It also has a probability, typically used so that only a portion of a dataset is augmented during training.
Example
from polar.train.data import transforms as T
center_normalize = T.Compose((T.Center(), T.Normalize()))
Parameters:
- transforms (Sequence[Transform]) – Transformations to be randomly composed during a call.
- p (float, default: 1.0) – Probability to apply the provided sequence. Defaults to 1.0.
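Building on the example above, the composed transform can then be called like any other Transform, with keyword batches of point clouds; the source/target names, shapes, and the p=0.8 value are purely illustrative.

import torch
from polar.train.data import transforms as T

center_normalize = T.Compose((T.Center(), T.Normalize()), p=0.8)

out = center_normalize(
    source=torch.randn(8, 1024, 3),
    target=torch.randn(8, 1024, 3),
)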