Core¤

Transform(p: float = 1.0) ¤

Base Transform class. Every Transform inherits from this class and implements get_params() and apply(); apply() always wraps the functional counterpart of the Transform class. A Transform accepts multiple batches of point clouds (typically sources and targets), since it is often desirable to apply the same random transform to several batches of point clouds. If multiple point clouds are passed, they MUST all have the same length. Every Transform is applied with a provided probability self.p.

Source code in src/polar/train/data/transforms/core.py
def __init__(self, p: float = 1.0) -> None:
    self.p = p
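A subclass is expected to supply get_params() and apply(); the base class only stores the probability. Below is a minimal, framework-free sketch of that contract, using plain nested lists in place of torch Tensors; the RandomShift class is hypothetical and not part of polar.

```python
import random

class Transform:
    """Framework-free stand-in for polar's base Transform (sketch)."""
    def __init__(self, p: float = 1.0) -> None:
        self.p = p

class RandomShift(Transform):
    """Hypothetical subclass: shift all points by one shared random offset."""
    def get_params(self, **data) -> dict:
        # One offset drawn per call, shared across all batches (source, target, ...).
        return {"offset": random.uniform(-1.0, 1.0)}

    def apply(self, pointclouds, **params):
        off = params["offset"]
        return [[[c + off for c in pt] for pt in cloud] for cloud in pointclouds]

source = [[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]]  # shape (batch_size=1, num_points=2, 3)
t = RandomShift()
params = t.get_params(source=source)
shifted = t.apply(source, **params)
```

Because get_params() is called once per invocation, the same offset would be applied to every batch passed in, which is the point of separating parameter sampling from application.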
get_batch_size(**data: Tensor) -> int staticmethod ¤

Static method. Assumes every value in the data dict has the same shape.

Source code in src/polar/train/data/transforms/core.py
@staticmethod
def get_batch_size(**data: Tensor) -> int:
    """ Static method. Assume every values in data dict is of same shape. """
    return list(data.values())[0].shape[0]
get_num_points(**data: Tensor) -> int staticmethod ¤

Static method. Assumes every value in the data dict has the same shape.

Source code in src/polar/train/data/transforms/core.py
@staticmethod
def get_num_points(**data: Tensor) -> int:
    """ Static method. Assume every values in data dict is of same shape. """
    return list(data.values())[0].shape[1]
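Both helpers simply read the leading dimensions of the first value in the dict, relying on the assumption that every value shares the shape (batch_size, num_points, *). A sketch with a shape-carrying object standing in for a torch Tensor:

```python
class FakeTensor:
    """Shape-carrying stand-in for a torch Tensor (sketch)."""
    def __init__(self, shape):
        self.shape = shape

def get_batch_size(**data) -> int:
    # Leading dimension of the first value; all values assumed same shape.
    return list(data.values())[0].shape[0]

def get_num_points(**data) -> int:
    return list(data.values())[0].shape[1]

data = {"source": FakeTensor((8, 1024, 3)), "target": FakeTensor((8, 1024, 3))}
```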
get_params(**data: Tensor) -> dict ¤

Shared parameters for one apply (usually random values).

Parameters:

  • **data (Tensor, default: {} ) –

    Dictionary with str as keys and batch of point clouds of shape (batch_size, num_points, *) where * denotes spatial coordinates as values. Typically, data = {'source': ..., 'target': ...}.

Returns:

  • params ( dict ) –

    Params used by the transform (e.g. Euler angles for rotation).

Source code in src/polar/train/data/transforms/core.py
def get_params(self, **data: Tensor) -> dict:
    """ Shared parameters for one apply (usually random values).

    Args:
        **data (Tensor): Dictionary with str as keys and batch of point clouds of shape
                         `(batch_size, num_points, *)` where `*` denotes spatial coordinates as
                         values. Typically, `data = {'source': ..., 'target': ...}`.

    Returns:
        params (dict): Params used by the transform (e.g. Euler angles for rotation).
    """
    raise NotImplementedError
apply(pointclouds: Tensor, **params) -> Tensor ¤

Apply the functional transform with the params obtained by self.get_params() to one batch of point clouds.

Parameters:

  • pointclouds (Tensor) –

    Batch of point clouds of shape (batch_size, num_points, *) where * denotes spatial coordinates.

Returns:

  • Tensor –

    Transformed batch of point clouds of shape (batch_size, num_points, *) where * denotes spatial coordinates.

Source code in src/polar/train/data/transforms/core.py
def apply(self, pointclouds: Tensor, **params) -> Tensor:
    """ Apply the functional transform with the params obtained by `self.get_params()` to
        one batch of point clouds.

    Args:
        pointclouds (Tensor): Batch of point clouds of shape `(batch_size, num_points, *)`
                              where `*` denotes spatial coordinates.
    Returns:
        Transformed tensor: Transformed batch of point clouds of shape `(batch_size,
                            num_points, *)` where `*` denotes spatial coordinates.
    """
    raise NotImplementedError
__call__(**data: Tensor) -> dict[str, Tensor] ¤

Call self.apply with probability self.p on every value in the provided dictionary.

Returns:

  • dict[str, Tensor]

    Transformed data: Same dictionary structure as input. The values have been transformed (with a certain probability).

Source code in src/polar/train/data/transforms/core.py
def __call__(self, **data: Tensor) -> dict[str, Tensor]:
    """ Call `self.apply` with a probability `self.p` on every values in the provided
        dictionary.

    Returns:
        Transformed data: Same dictionary structure as input. The values have been
                          transformed (with a certain probability).
    """
    if torch.rand(size=(1, )) < self.p:
        params = self.get_params(**data)
        self.register_params(params)
        for k, v in data.items():
            data[k] = self.apply(v, **params)
    return data
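Note that a single random draw gates the whole call: either every value in the dict is transformed with the same params, or none is. A sketch of that gating, with random.random() standing in for torch.rand and a plain callable in place of self.apply:

```python
import random

def gated_call(p, apply_fn, **data):
    """Sketch of Transform.__call__'s gating: one draw decides for all values."""
    if random.random() < p:  # random.random() is uniform in [0, 1)
        for k, v in data.items():
            data[k] = apply_fn(v)
    return data

# p=1.0 always fires (draw is < 1.0); p=0.0 never fires (draw is >= 0.0).
always = gated_call(1.0, lambda v: [x + 1 for x in v], source=[0, 1], target=[2, 3])
never = gated_call(0.0, lambda v: [x + 1 for x in v], source=[0, 1])
```

Gating all values together matters for registration-style data: transforming the source but not the target would silently break source/target correspondence.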

Compose(transforms: Sequence[Transform], p: float = 1.0) ¤

Bases: Transform

Very simple mechanism to chain Transforms: nothing more than a wrapper that stores a sequence of Transforms to be applied iteratively on every value in a provided dictionary. It also has a probability, typically used so that only a portion of a dataset is augmented during training.

Example

from polar.train.data import transforms as T
center_normalize = T.Compose((T.Center(), T.Normalize()))

Parameters:

  • transforms (Sequence[Transform]) –

    Transformations to be randomly composed during a call.

  • p (float, default: 1.0 ) –

    Probability to apply the provided sequence. Defaults to 1.0.

Source code in src/polar/train/data/transforms/core.py
def __init__(self, transforms: Sequence[Transform], p: float = 1.0) -> None:
    """_summary_

    Args:
        transforms (Sequence[Transform]): Transformations to be randomly composed during a
                                          call.
        p (float, optional): Probability to apply the provided sequence. Defaults to 1.0.
    """
    super(Compose, self).__init__()
    self.transforms = {t.__class__.__name__: t for t in transforms}
    self.p = p
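One consequence of storing transforms in a dict keyed by class name is that each Transform class can appear at most once in a Compose: a second instance of the same class would overwrite the first. A self-contained sketch (Center and Normalize here are bare stand-ins for polar's classes):

```python
class Transform:
    """Stand-in for polar's base Transform (sketch)."""
    def __init__(self, p: float = 1.0) -> None:
        self.p = p

class Center(Transform):
    """Stand-in for polar's Center transform."""

class Normalize(Transform):
    """Stand-in for polar's Normalize transform."""

class Compose(Transform):
    def __init__(self, transforms, p: float = 1.0) -> None:
        super().__init__()
        # Keyed by class name: two instances of the same class would collide.
        self.transforms = {t.__class__.__name__: t for t in transforms}
        self.p = p

pipeline = Compose((Center(), Normalize()), p=0.5)
```

Since Python dicts preserve insertion order, iteration over self.transforms still applies the transforms in the order they were passed.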