Our Blog

Having fun with CLIP features — Part I

By: Ido Ben-Shaul

It’s been a bit over a year since OpenAI released the CLIP model for connecting images and caption texts. This massive model was trained on 400M(!) image–caption pairs collected from the web. In this post, we’ll add some visualisations and insights using dimensionality reduction and the open-sourced CLIP models.
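As a minimal sketch of the kind of analysis the post explores, the snippet below projects a batch of image embeddings down to 2D with PCA for plotting. The random array here is a hypothetical stand-in for real CLIP features (CLIP ViT-B/32 outputs 512-dimensional vectors); in practice you would encode images with the open-sourced CLIP model first.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for real CLIP image embeddings:
# 100 feature vectors, 512 dimensions each (as produced by CLIP ViT-B/32).
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 512))

# Reduce the features to 2D so they can be scatter-plotted.
pca = PCA(n_components=2)
coords = pca.fit_transform(features)

print(coords.shape)  # (100, 2)
```

With real CLIP features, nearby points in this 2D projection tend to correspond to semantically similar images, which is what makes such visualisations informative.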

Full Blog Post >>