Contrastive Loss

April 7, 2024

Contrastive Loss Basics

Contrastive loss is typically used in setups where the goal is to measure the similarity or dissimilarity between data points. It is most common in metric learning, where a model learns a function that scores how similar or dissimilar two data points are.

The basic idea behind contrastive loss is that two data points of the same class should lie close together in the latent space, whereas two points of different classes should lie farther apart.
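
As a rough illustration of this idea, the sketch below uses made-up 2-D embeddings (the vectors and class labels are assumptions for illustration, not learned values) and shows that same-class points end up with a small distance while different-class points end up far apart.

```python
import numpy as np

# Hypothetical 2-D latent embeddings for three samples.
# cat_1 and cat_2 share a class; dog_1 belongs to a different class.
cat_1 = np.array([0.9, 1.1])
cat_2 = np.array([1.0, 1.0])
dog_1 = np.array([-1.2, 0.1])

def distance(u, v):
    """Euclidean distance between two embeddings."""
    return np.linalg.norm(u - v)

print(distance(cat_1, cat_2))  # small: same class, close in latent space
print(distance(cat_1, dog_1))  # large: different class, far apart
```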

Triplets Optimization

A common approach to minimizing the contrastive loss is to optimize over triplets of samples.

A triplet is $(x_a, x_b, x_c)$, where:

  • $x_a$ is the anchor sample, or the sample that you start with.
  • $x_b$ is the positive sample, a sample of the same class as the anchor.
  • $x_c$ is the negative sample, a sample from a different class than the anchor.

The equation for the contrastive loss is as follows:

$$\max(0, 1 - f(x_a, x_b; w) + f(x_a, x_c; w))$$

In the equation:

  • $f(x_a, x_c; w)$ represents the similarity between $x_a$ and $x_c$, which should be minimized.
  • $f(x_a, x_b; w)$ represents the similarity between $x_a$ and $x_b$, which should be maximized.
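
Below is a minimal sketch of this loss. It assumes $f$ is a dot product between linearly embedded inputs; the embedding matrix, input dimensions, and toy triplet are all assumptions made purely for illustration.

```python
import numpy as np

def f(x, y, w):
    """Similarity between x and y: dot product of their (assumed) linear embeddings w @ x and w @ y."""
    return np.dot(w @ x, w @ y)

def contrastive_triplet_loss(x_a, x_b, x_c, w, margin=1.0):
    """max(0, margin - f(x_a, x_b; w) + f(x_a, x_c; w))."""
    return max(0.0, margin - f(x_a, x_b, w) + f(x_a, x_c, w))

# Toy triplet: the positive is a small perturbation of the anchor,
# the negative is an unrelated sample.
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 4))               # hypothetical embedding matrix
x_a = rng.normal(size=4)                  # anchor
x_b = x_a + 0.05 * rng.normal(size=4)     # positive: close to the anchor
x_c = rng.normal(size=4)                  # negative: different class

print(contrastive_triplet_loss(x_a, x_b, x_c, w))
```

The loss is zero whenever the anchor-positive similarity exceeds the anchor-negative similarity by at least the margin, so training pushes $f(x_a, x_b; w)$ up and $f(x_a, x_c; w)$ down only for triplets that violate that margin.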