linmult.core.temporal¶
Temporal reducers, aligners, and composite modules (TRM, TAM).
Classes¶
- TemporalFactory – Factory for creating temporal signal aligners and reducers.
- TemporalPadding – Temporal aligner via truncation or zero-padding.
- AdaptiveMaxPooling – Temporal aligner via adaptive max pooling.
- AdaptiveAvgPooling – Temporal aligner via adaptive average pooling.
- LastTimestamp – Temporal reducer that extracts the last valid timestamp.
- GlobalAvgPooling – Temporal reducer via masked global average pooling.
- GlobalMaxPooling – Temporal reducer via masked global max pooling.
- AttentionPooling – Temporal reducer via learned attention-weighted pooling.
- TRM – Time Reduce Module: aggregates the time dimension of a sequence tensor.
- TAM – Time Align Module: aligns the time dimensions of multiple tensors.
Module Contents¶
- class linmult.core.temporal.TemporalFactory[source]¶
Factory for creating temporal signal aligners and reducers.
- static create_aligner(method: str = 'aap') → torch.nn.Module[source]¶
Create a temporal aligner module.
- Parameters:
method (str) – Aligner type. One of:
- "aap": Adaptive average pooling.
- "amp": Adaptive max pooling.
- "padding": Zero-padding / truncation.
- Returns:
The constructed aligner module.
- Return type:
nn.Module
- Raises:
ValueError – If method is not one of the supported values.
- static create_reducer(d_model: int, reducer: str) → torch.nn.Module[source]¶
Create a temporal reducer module.
- Parameters:
d_model (int) – Feature dimensionality of the input tensor. Required for AttentionPooling; ignored by other reducers.
reducer (str) – Reducer type. One of "attentionpool", "gmp", "gap", "last".
- Returns:
The constructed reducer module.
- Return type:
nn.Module
- Raises:
ValueError – If reducer is not one of the supported values.
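The factory's dispatch can be pictured with a minimal sketch. The stand-in modules below are hypothetical placeholders (the real factory returns the reducer classes documented further down this page); only the branch-and-raise shape mirrors the documented behavior:

```python
import torch.nn as nn

def create_reducer_sketch(d_model: int, reducer: str) -> nn.Module:
    """Illustrative dispatch only; not the linmult source."""
    if reducer == "attentionpool":
        # AttentionPooling is the one reducer that needs d_model (for its score head).
        return nn.Linear(d_model, 1)
    if reducer in ("gmp", "gap", "last"):
        # These reducers ignore d_model; Identity stands in for the real modules.
        return nn.Identity()
    raise ValueError(f"unsupported reducer: {reducer!r}")
```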
- class linmult.core.temporal.TemporalPadding(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal aligner via truncation or zero-padding.
Adjusts the time dimension of a tensor to exactly time_dim by truncating if the sequence is too long, or zero-padding if too short. The mask is updated accordingly (padded positions are False).
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: torch.Tensor, time_dim: int, mask: torch.Tensor | None = None) → tuple[torch.Tensor, torch.Tensor][source]¶
Truncate or pad the time dimension.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
time_dim (int) – Target time dimension.
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid. If None, all input positions are treated as valid.
- Returns:
- Output tensor of shape (B, time_dim, F) and updated mask of shape (B, time_dim).
- Return type:
tuple[torch.Tensor, torch.Tensor]
- class linmult.core.temporal.AdaptiveMaxPooling(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal aligner via adaptive max pooling.
Resizes the time dimension of a tensor to time_dim using F.adaptive_max_pool1d. Masked (padded) positions are filled with -inf before pooling so they never win the max, and the output mask is derived from the result.
- forward(x: torch.Tensor, time_dim: int, mask: torch.Tensor | None = None) → tuple[torch.Tensor, torch.Tensor][source]¶
Apply adaptive max pooling along the time dimension.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
time_dim (int) – Target time dimension.
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid.
- Returns:
- Pooled tensor of shape (B, time_dim, F) and updated mask of shape (B, time_dim).
- Return type:
tuple[torch.Tensor, torch.Tensor]
- class linmult.core.temporal.AdaptiveAvgPooling(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal aligner via adaptive average pooling.
Resizes the time dimension of a tensor to time_dim using F.adaptive_avg_pool1d. Masked positions contribute zero to the average and the output is renormalized by the fraction of valid input positions in each output bin.
- forward(x: torch.Tensor, time_dim: int, mask: torch.Tensor | None = None) → tuple[torch.Tensor, torch.Tensor][source]¶
Apply adaptive average pooling along the time dimension.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
time_dim (int) – Target time dimension.
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid.
- Returns:
- Pooled tensor of shape (B, time_dim, F) and updated mask of shape (B, time_dim).
- Return type:
tuple[torch.Tensor, torch.Tensor]
- class linmult.core.temporal.LastTimestamp(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal reducer that extracts the last valid timestamp.
If a mask is provided, selects the feature vector at the last True position for each sample. Fully-masked samples (all False) return a zero vector.
- forward(x: torch.Tensor, mask: torch.Tensor | None = None) → torch.Tensor[source]¶
Extract the last valid timestamp.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). If None, the final timestep is selected for all samples.
- Returns:
Output tensor of shape (B, F).
- Return type:
torch.Tensor
- class linmult.core.temporal.GlobalAvgPooling(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal reducer via masked global average pooling.
Computes the mean over valid (unmasked) timesteps. If no mask is provided, averages over all timesteps.
- forward(x: torch.Tensor, mask: torch.Tensor | None = None) → torch.Tensor[source]¶
Apply global average pooling.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid. If None, all positions are treated as valid.
- Returns:
Pooled output of shape (B, F).
- Return type:
torch.Tensor
- class linmult.core.temporal.GlobalMaxPooling(*args: Any, **kwargs: Any)[source]¶
Bases: torch.nn.Module
Temporal reducer via masked global max pooling.
Computes the max over valid (unmasked) timesteps. Masked positions are filled with -inf before the max, and fully-masked samples return zero.
- forward(x: torch.Tensor, mask: torch.Tensor | None = None) → torch.Tensor[source]¶
Apply global max pooling.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid. If None, all positions are treated as valid.
- Returns:
Pooled output of shape (B, F).
- Return type:
torch.Tensor
- class linmult.core.temporal.AttentionPooling(d_model: int)[source]¶
Bases: torch.nn.Module
Temporal reducer via learned attention-weighted pooling.
Learns a scalar attention score per timestep and computes a weighted sum of the input features. Masked positions receive -inf before the softmax so their weight is zero. Fully-masked samples return a zero vector.
- Parameters:
d_model (int) – Input feature dimensionality.
Initialize AttentionPooling.
- forward(x: torch.Tensor, mask: torch.Tensor | None = None) → torch.Tensor[source]¶
Apply attention-weighted pooling.
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, T, F).
mask (torch.BoolTensor, optional) – Validity mask of shape (B, T). True = valid.
- Returns:
Pooled output of shape (B, F).
- Return type:
torch.Tensor
- class linmult.core.temporal.TRM(d_model: int, reducer: str)[source]¶
Bases: torch.nn.Module
Time Reduce Module: aggregates the time dimension of a sequence tensor.
Transforms (B, T, F) → (B, F) using a configurable pooling strategy.
- Parameters:
d_model (int) – Input feature dimensionality. Required for "attentionpool"; ignored by "gap", "gmp", and "last".
reducer (str) – Pooling strategy. One of "attentionpool", "gmp", "gap", "last".
Initialize TRM.
- forward(x: torch.Tensor, mask: torch.Tensor | None = None) → torch.Tensor[source]¶
Reduce the time dimension.
- Parameters:
x (torch.Tensor) – Input of shape (B, T, F).
mask (torch.Tensor, optional) – Validity mask of shape (B, T). True = valid.
- Returns:
Reduced output of shape (B, F).
- Return type:
torch.Tensor
- apply_to_list(x_list: list[torch.Tensor], mask_list: list[torch.Tensor | None]) → list[torch.Tensor][source]¶
Apply time reduction independently to each tensor in a list.
- Parameters:
x_list (list[torch.Tensor]) – List of tensors, each of shape (B, T, F).
mask_list (list[torch.Tensor | None]) – Corresponding masks, each of shape (B, T).
- Returns:
List of reduced tensors, each of shape (B, F).
- Return type:
list[torch.Tensor]
- class linmult.core.temporal.TAM(input_dim: int, output_dim: int, aligner: str, time_dim: int, num_layers: int = 6, num_heads: int = 8, attention_config: linmult.core.attention.AttentionConfig | None = None, dropout_pe: float = 0.0, dropout_ffn: float = 0.1, dropout_out: float = 0.1, name: str = '')[source]¶
Bases: torch.nn.Module
Time Align Module: aligns the time dimensions of multiple tensors.
Transforms a list of (B, T_i, F) tensors into a single fused tensor (B, time_dim, tgt_dim) by pooling/padding each sequence to a common time_dim, concatenating along the feature axis, processing with a transformer, and projecting to tgt_dim.
- Parameters:
input_dim (int) – Concatenated input dimensionality (sum of feature dims across modalities).
output_dim (int) – Output feature dimensionality after projection.
aligner (str) – Temporal alignment strategy. One of "aap", "amp", "padding".
time_dim (int) – Target time dimension after alignment.
num_layers (int) – Depth of the internal transformer encoder. Defaults to 6.
num_heads (int) – Number of attention heads in the internal encoder. Defaults to 8.
attention_config (AttentionConfig, optional) – Attention type and parameters for the internal encoder. Defaults to AttentionConfig() (linear attention).
dropout_pe (float) – Positional-encoding dropout for the internal encoder. Defaults to 0.0.
dropout_ffn (float) – FFN dropout for the internal encoder. Defaults to 0.1.
dropout_out (float) – Dropout in the output projector. Defaults to 0.1.
name (str) – Module name shown in repr. Defaults to "".
Initialize TAM.
- forward(x_list: list[torch.Tensor], mask_list: list[torch.Tensor | None]) → tuple[torch.Tensor, torch.Tensor][source]¶
Align, fuse, and project multiple sequences.
- Parameters:
x_list (list[torch.Tensor]) – Input tensors, each of shape (B, T_i, F).
mask_list (list[torch.BoolTensor | None]) – Corresponding masks, each of shape (B, T_i) or None (treated as all-valid).
- Returns:
- Aligned tensor of shape (B, time_dim, output_dim) and validity mask of shape (B, time_dim).
- Return type:
tuple[torch.Tensor, torch.Tensor]
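The first align-and-fuse stage of TAM can be sketched as follows. This is an illustration only: it uses average-pool alignment and omits the mask handling, the internal transformer encoder, and the output projection that the real module applies afterwards:

```python
import torch
import torch.nn.functional as F

def align_and_concat(x_list: list[torch.Tensor], time_dim: int) -> torch.Tensor:
    """Sketch: resample each (B, T_i, F_i) to a common time_dim, then concat features."""
    aligned = [
        F.adaptive_avg_pool1d(x.transpose(1, 2), time_dim).transpose(1, 2)
        for x in x_list
    ]
    # Result: (B, time_dim, sum of F_i) -- the input the internal encoder would see.
    return torch.cat(aligned, dim=-1)
```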