From 13c75fa06c2a030883b47be6415198581adb13bf Mon Sep 17 00:00:00 2001
From: Ekaterina Aidova
Date: Wed, 23 Oct 2024 10:37:25 +0400
Subject: [PATCH] fix broken link in jina clip (#2467)

---
 notebooks/jina-clip/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/notebooks/jina-clip/README.md b/notebooks/jina-clip/README.md
index 150ebcb406d..d512cb09a9a 100644
--- a/notebooks/jina-clip/README.md
+++ b/notebooks/jina-clip/README.md
@@ -4,7 +4,7 @@ This tutorial will show how to run CLIP model pipeline with [jina-clip-v1](https
 
 ## Notebook Contents
 
-[jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) is a state-of-the-art English multimodal(text-image) embedding model trained by [Jina AI](https://aimodels.fyi/creators/huggingFace/jinaai). It bridges the gap between traditional text embedding models, which excel in text-to-text retrieval but are incapable of cross-modal tasks, and models that effectively align image and text embeddings but are not optimized for text-to-text retrieval. jina-clip-v1 offers robust performance in both domains. Its dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, allowing seamless text-to-text and text-to-image searches within a single model. jina-clip-v1 can be used for a variety of multimodal applications, such as: image search by describing them in text, multimodal question answering, multimodal content generation. Jina AI has also provided the Embeddings API as an easy-to-use interface for working with jina-clip-v1 and their other embedding models.
+[jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) is a state-of-the-art English multimodal (text-image) embedding model introduced in the [paper](https://arxiv.org/abs/2405.20204). It bridges the gap between traditional text embedding models, which excel in text-to-text retrieval but cannot handle cross-modal tasks, and models that effectively align image and text embeddings but are not optimized for text-to-text retrieval. jina-clip-v1 offers robust performance in both domains. Its dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, allowing seamless text-to-text and text-to-image search within a single model. jina-clip-v1 can be used for a variety of multimodal applications, such as searching for images by describing them in text, multimodal question answering, and multimodal content generation. Jina AI also provides the Embeddings API as an easy-to-use interface for working with jina-clip-v1 and their other embedding models.
 
 The notebook contains the following steps:
 1. Download the model and instantiate the PyTorch model.
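
For readers unfamiliar with the model the patched README describes, the snippet below is a minimal sketch (not part of the patch or the notebook) of driving jina-clip-v1 from Python. It assumes the `encode_text`/`encode_image` helpers that the model ships as custom code on its Hugging Face page (hence `trust_remote_code=True`); the image path is a placeholder.

```python
# Minimal sketch: text and image embeddings from jina-clip-v1.
# Assumes the custom encode helpers shipped with the model
# (hence trust_remote_code=True); "raindrop.png" is a placeholder path.
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-clip-v1", trust_remote_code=True)

# Both helpers return arrays of shape (n_inputs, embedding_dim).
text_embeddings = model.encode_text(["A raindrop on a green leaf"])
image_embeddings = model.encode_image(["raindrop.png"])  # local path, URL, or PIL image

# Text and image embeddings share one vector space, so a dot product
# compares across modalities; with L2-normalized outputs this is the
# cosine similarity.
print(text_embeddings[0] @ image_embeddings[0].T)
```

Because one model produces both embedding types in a shared space, a single vector index can serve text-to-text and text-to-image queries, which is the dual capability the README paragraph highlights.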