lossFunctions module¶
Package containing a collection of loss function callbacks. If you're planning to write your own loss function classes, you have to set the Learner's (l's) loss and lossG fields. lossG is the original loss, still attached to the computation graph (hence the "G"), while loss is just lossG.detach().item(). This way, other utilities can share a single detached loss value, for performance reasons.
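The lossG/loss contract above can be sketched in plain PyTorch. This is an illustrative sketch of the contract only, not k1lib's actual code; the toy tensors and the squared-error loss are assumptions for the example:

```python
import torch

# A toy prediction and target, with gradients enabled on the prediction
y = torch.tensor([1.0, 2.0], requires_grad=True)
yb = torch.tensor([1.5, 1.5])

# lossG: the original loss, still attached to the autograd graph (hence "G")
lossG = ((y - yb) ** 2).mean()
assert lossG.requires_grad  # backward() still works on this

# loss: a plain detached float, cheap for other utilities to read and log
loss = lossG.detach().item()
assert isinstance(loss, float)
```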
shorts module¶
For simple, uncomplicated loss functions
class k1lib.callbacks.lossFunctions.shorts.LossF(lossF: Callable[[Tuple[torch.Tensor, torch.Tensor]], float])[source]¶
Bases: k1lib.callbacks.callbacks.Callback

__init__(lossF: Callable[[Tuple[torch.Tensor, torch.Tensor]], float])[source]¶
Generic loss function.
Expected variables in Learner:
- y: result of the model. Auto-included in CoreNormal and CoreRNN.
Deposits variables into Learner at checkpoint inLoss:
- lossG: single float tensor value, attached to the graph
- loss: lossG, but as a single float value
Parameters:
- lossF – takes in (y, yb) and returns lossG
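As an illustration of the documented lossF signature, here is a mean-squared-error callable that takes (y, yb) and returns a graph-attached lossG. The function name and sample tensors are hypothetical; only the (y, yb) -> lossG contract comes from the docs above:

```python
import torch
import torch.nn.functional as F

def mse_lossF(y_yb):
    """Matches the documented contract: takes (y, yb), returns lossG."""
    y, yb = y_yb
    return F.mse_loss(y, yb)

y = torch.tensor([0.0, 1.0], requires_grad=True)  # model output
yb = torch.tensor([0.0, 0.0])                     # label batch
lossG = mse_lossF((y, yb))    # still attached to the graph
loss = lossG.detach().item()  # the shared detached value
```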
class k1lib.callbacks.lossFunctions.shorts.LossNLLCross(nll: bool, integrations: bool)[source]¶
Bases: k1lib.callbacks.callbacks.Callback

__init__(nll: bool, integrations: bool)[source]¶
Adds a cross-entropy/negative log likelihood loss function.
Parameters:
- nll – if True, use torch.nn.NLLLoss, else use torch.nn.CrossEntropyLoss
- integrations – whether to integrate with the AccF callback
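The nll flag switches between two equivalent formulations: torch.nn.CrossEntropyLoss applied to raw logits gives the same value as torch.nn.NLLLoss applied to log-softmax outputs. A quick check (the sample logits and targets are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])  # shape (N=2, C=3)
targets = torch.tensor([0, 1])

# CrossEntropyLoss expects raw logits...
ce = nn.CrossEntropyLoss()(logits, targets)
# ...while NLLLoss expects log-probabilities
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)

assert torch.allclose(ce, nll)
```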
accuracy module¶
For simple, uncomplicated accuracy functions
class k1lib.callbacks.lossFunctions.accuracy.AccF(predF: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, accF: Optional[Callable[[Tuple[torch.Tensor, torch.Tensor]], float]] = None, integrations: bool = True)[source]¶
Bases: k1lib.callbacks.callbacks.Callback

__init__(predF: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, accF: Optional[Callable[[Tuple[torch.Tensor, torch.Tensor]], float]] = None, integrations: bool = True)[source]¶
Generic accuracy function. The built-in default accuracy functions are fine, as long as you don't do anything too dramatic/different.
Expected variables in Learner:
- y: output of the model
- yb: the label batch
Deposits variables into Learner:
- preds: detached, batched tensor output of predF
- accuracies: detached, batched tensor output of accF
- accuracy: detached, single float, mean of accuracies
Where:
- N is the batch size. Can be multidimensional, but has to agree between y and yb
- C is the number of categories
Parameters:
- predF – takes in y, returns predictions (tensor with int elements indicating the categories)
- accF – takes in (predictions, yb), returns accuracies (tensor with 0 or 1 elements)
- integrations – whether to integrate with the ConfusionMatrix callback or not
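The predF/accF contract described above can be sketched for plain N-by-C classification. These two functions are illustrative stand-ins following the documented signatures, not AccF's actual defaults:

```python
import torch

def predF(y):
    # y: (N, C) scores -> (N,) int tensor of predicted categories
    return y.argmax(dim=-1)

def accF(preds_yb):
    # (predictions, yb) -> (N,) tensor of 0/1 accuracies
    preds, yb = preds_yb
    return (preds == yb).float()

y = torch.tensor([[0.1, 0.9],
                  [0.8, 0.2],
                  [0.3, 0.7]])   # N=3, C=2
yb = torch.tensor([1, 0, 0])

preds = predF(y).detach()            # deposited as `preds`
accuracies = accF((preds, yb))       # deposited as `accuracies`
accuracy = accuracies.mean().item()  # deposited as `accuracy`
```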