lossFunctions module

Package with a bunch of loss function callbacks. If you’re planning to write your own loss function classes, you have to set the Learner l’s loss and lossG fields. lossG is the original loss, still attached to the computation graph (hence the “G”), while loss is just lossG.detach().item(). This lets other utilities share a single detached loss value, for performance reasons.
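To make the convention concrete, here’s a minimal sketch using a plain-Python stand-in for a tensor. Assumptions: the FakeTensor class and the endLoss checkpoint name are illustrative only; in real code lossG is a torch tensor attached to the autograd graph, and l is the parent Learner that the Callback machinery wires up:

```python
class FakeTensor:
    """Plain-Python stand-in for a torch tensor, just to illustrate the convention."""
    def __init__(self, value, attached=True):
        self.value = value
        self.attached = attached          # "attached to the autograd graph"
    def detach(self):
        # returns a copy that's no longer attached to the graph
        return FakeTensor(self.value, attached=False)
    def item(self):
        return self.value

class Learner: pass                        # stand-in for k1lib's Learner

class MyLossCb:
    """Hypothetical custom loss callback following the loss/lossG convention."""
    def endLoss(self):
        lossG = FakeTensor(0.25)           # the loss, still attached to the graph
        self.l.lossG = lossG               # kept attached, for backprop
        self.l.loss = lossG.detach().item()  # shared detached float for other utilities

cb = MyLossCb(); cb.l = Learner(); cb.endLoss()
assert cb.l.loss == 0.25 and not isinstance(cb.l.loss, FakeTensor)
```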

shorts module

For not very complicated loss functions

class k1lib.callbacks.lossFunctions.shorts.LossF(lossF: Callable[[Tuple[torch.Tensor, torch.Tensor]], float])[source]

Bases: k1lib.callbacks.callbacks.Callback

__init__(lossF: Callable[[Tuple[torch.Tensor, torch.Tensor]], float])[source]

Creates a generic loss function that takes in the output y and the correct answer yb, and returns a single loss float (still attached to the graph).
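For example, a mean-squared-error loss could be passed in like this (sketched with plain Python lists so it runs standalone; in practice y and yb are torch tensors and the returned value stays attached to the graph):

```python
def mseLossF(tup):
    """lossF-style callable: takes (y, yb), returns a single loss value."""
    y, yb = tup
    # mean of elementwise squared errors
    return sum((a - b) ** 2 for a, b in zip(y, yb)) / len(y)

# would be registered as: LossF(mseLossF)
loss = mseLossF(([1.0, 2.0], [1.0, 4.0]))  # mean of [0.0, 4.0] -> 2.0
```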

class k1lib.callbacks.lossFunctions.shorts.LossNLLCross(nll: bool, integrations: bool)[source]

Bases: k1lib.callbacks.callbacks.Callback

__init__(nll: bool, integrations: bool)[source]

Adds a cross-entropy/negative log-likelihood loss function.

Parameters
  • nll – whether to use negative log-likelihood loss (True) or cross-entropy loss (False)

  • integrations – whether to enable the callback’s default integrations
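The practical difference the nll flag toggles (an assumption based on standard PyTorch semantics, not on this class’s source): cross-entropy takes raw logits and applies log-softmax internally, while NLL loss expects log-probabilities directly. A plain-Python sketch for a single sample:

```python
import math

def crossEntropy(logits, target):
    """Cross-entropy from raw logits: log-softmax, then negate the target's log-prob."""
    m = max(logits)  # subtract max for numerical stability
    logZ = m + math.log(sum(math.exp(x - m) for x in logits))
    logProbs = [x - logZ for x in logits]
    return -logProbs[target]

def nllLoss(logProbs, target):
    """NLL loss: inputs are assumed to already be log-probabilities."""
    return -logProbs[target]
```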
detach()[source]

Detaches from the parent Callbacks

accuracy module

For not very complicated accuracy functions

class k1lib.callbacks.lossFunctions.accuracy.AccF(predF: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, accF: Optional[Callable[[Tuple[torch.Tensor, torch.Tensor]], float]] = None, integrations: bool = True)[source]

Bases: k1lib.callbacks.callbacks.Callback

__init__(predF: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, accF: Optional[Callable[[Tuple[torch.Tensor, torch.Tensor]], float]] = None, integrations: bool = True)[source]

Creates a generic Callback accuracy function.

The built-in default accuracy functions are fine if you don’t do anything too dramatic/different. They expect:

  • y: to have shape (*N, C)

  • yb: to have shape (*N,)

Where:

  • N is the batch size. Can be multidimensional, but has to agree between y and yb

  • C is the number of categories

If these shapes don’t match your setup, pass in your own predF and accF.

Deposits these variables into Learner:

  • preds: detached, batched tensor output of predF

  • accuracies: detached, batched tensor output of accF

  • accuracy: detached, single float, mean of accuracies

Parameters
  • predF – takes in y, returns predictions (tensor with int elements indicating the categories)

  • accF – takes in (predictions, yb), returns accuracies (tensor with 0 or 1 elements)

  • integrations – whether to integrate ConfusionMatrix or not.
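A plain-Python sketch of what the default predF/accF pair roughly does for the 1-D batch case (this mirrors the shape contract above; the real defaults operate on torch tensors):

```python
def predF(y):
    """Default-style predF: argmax over the category axis. y has shape (N, C)."""
    return [max(range(len(row)), key=row.__getitem__) for row in y]

def accF(tup):
    """Default-style accF: elementwise 0/1 correctness -> accuracies of shape (N,)."""
    preds, yb = tup
    return [1 if p == t else 0 for p, t in zip(preds, yb)]

y  = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]  # (N=3, C=2)
yb = [1, 0, 0]                              # (N,)
accuracies = accF((predF(y), yb))           # [1, 1, 0]
accuracy = sum(accuracies) / len(accuracies)  # mean, like Learner's `accuracy`
```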

detach()[source]

Detaches from the parent Callbacks