
Here is a Python loss function from a neural network I have implemented. I feel it has become a bit cumbersome. The intention is to weight positive labels in channels 1:end higher than background pixels, hence the distinction between foreground and background.

def loss_function(
        self,
        x: torch.Tensor,
        groundtruth: torch.Tensor,
        weight: float
) -> torch.Tensor:

    delta = 0.00001
    groundtruth_pos = groundtruth[:, 1:, :, :] == 1
    groundtruth_neg = groundtruth[:, 1:, :, :] == 0
    foreground_gt = groundtruth[:, 1:, :, :]
    background_gt = groundtruth[:, 0, :, :]

    foreground_x = x[:, 1:, :, :]
    background_x = x[:, 0, :, :]
    loss_foreground_pos = -(torch.sum(foreground_gt[groundtruth_pos] * torch.log(foreground_x[groundtruth_pos] + delta))
                            + torch.sum((1 - foreground_gt[groundtruth_pos]) * torch.log(1 - foreground_x[groundtruth_pos] + delta)))
    loss_foreground_neg = -(torch.sum(foreground_gt[groundtruth_neg] * torch.log(foreground_x[groundtruth_neg] + delta))
                            + torch.sum((1 - foreground_gt[groundtruth_neg]) * torch.log(1 - foreground_x[groundtruth_neg] + delta)))

    loss_background = -(torch.sum(background_gt * torch.log(background_x + delta))
                        + torch.sum((1 - background_gt) * torch.log(1 - background_x + delta)))
    return weight * loss_foreground_pos + loss_foreground_neg + loss_background
The code does not compile since it is missing the import statements. Here at Code Review we prefer self-contained compilable code snippets that allow us reviewers to run the code with some example data. (This is just a remark for your future questions. Since this question already has an answer, leave the question as it is now.) Commented May 14, 2021 at 12:47

1 Answer


It looks like you calculate the same thing over and over again. Instead of doing this separately for the positive labels, the negative labels and the background, you could compute the per-pixel terms once and reuse them when you need the sums:

def loss_function(
        self,
        x: torch.Tensor,
        groundtruth: torch.Tensor,
        weight: float
) -> torch.Tensor:

    delta = 0.00001  # small constant to keep the log arguments positive
    foreground_gt = groundtruth[:, 1:, :, :]
    background_gt = groundtruth[:, 0, :, :]
    groundtruth_pos = foreground_gt == 1
    groundtruth_neg = foreground_gt == 0

    foreground_x = x[:, 1:, :, :]
    background_x = x[:, 0, :, :]

    # Per-pixel cross-entropy terms, computed once and reused for both sums.
    # Keep delta inside the logs: adding it to x up front would flip its sign
    # in the 1 - x terms and could produce the log of a negative number.
    a = foreground_gt * torch.log(foreground_x + delta)
    b = (1 - foreground_gt) * torch.log(1 - foreground_x + delta)
    c = a + b

    loss_foreground_pos = -torch.sum(c[groundtruth_pos])
    loss_foreground_neg = -torch.sum(c[groundtruth_neg])

    loss_background = -torch.sum(background_gt * torch.log(background_x))
                                 + (1 - background_gt) * torch.log(1 - background_x))
    return weight * loss_foreground_pos + loss_foreground_neg + loss_background
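
To sanity-check a refactor like this, you can run both versions on the same dummy data and compare the results. A minimal sketch with made-up shapes (batch of 2, channel 0 as background plus two foreground channels, 8x8 pixels), assuming loss_function is in scope; since self is unused in the body, passing None works for a quick test:

import torch

x = torch.rand(2, 3, 8, 8)                             # predictions, assumed already in [0, 1]
groundtruth = (torch.rand(2, 3, 8, 8) > 0.5).float()   # binary 0/1 labels per channel

loss = loss_function(None, x, groundtruth, weight=5.0)
print(loss.item())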

Note that sum(a) + sum(b) is the same as sum(a + b) = sum(c).
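
That identity is easy to check numerically (arbitrary tensors, purely for illustration):

import torch

a = torch.rand(4, 4)
b = torch.rand(4, 4)
print(torch.allclose(torch.sum(a) + torch.sum(b), torch.sum(a + b)))  # True, up to float rounding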

