A backend for a search engine that uses the CLIP model and the Qdrant vector database to match text queries with relevant images. Ideal for product discovery and other cross-modal search applications.
- Converts text queries into image-space embeddings.
- Retrieves images that best match the textual description.
- Built on CLIP and Qdrant for fast, scalable vector search.
To run the backend locally:

- Clone the repository:

  ```bash
  git clone https://github.com/tandalalam/CLIP-image-search-backend.git
  cd CLIP-image-search-backend/src
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up a Qdrant instance and point the `.env` file at it (see the sketch after this list for one way the server might read this value):

  ```
  QDRANT_HOST=<your_qdrant_host>:<port>
  ```

- Run the server:

  ```bash
  python main.py
  ```
You can also run the project with Docker (note the `.` build context, which the command requires):

```bash
docker build -t clip-search .
docker run -p 8080:8080 clip-search
```
Send a GET request with a text query to retrieve matching images:

```bash
curl -X GET -H "Content-Type: application/json" \
  -d '{"query": "red shoes"}' \
  http://localhost:8080/search
```