Project 3:
Vision-Language Models for Biomedicine

Supervisor:
Type:
Requirements:

Recent advances in training large models for biology have leveraged microscopy images across different scales to learn tissue representations and structural patterns via self-supervised or unsupervised learning. However, medical images are often accompanied by textual information authored by medical experts, such as medical reports, related conditions, and possible treatments. Integrating this textual information with microscopy images would allow us to build more robust models of tissue representations. In this project, we aim to use large vision-language models to learn aligned vision and language patient representations, which can serve a variety of downstream tasks and significantly improve our understanding of patient health and disease progression.
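Aligned vision-language representations of this kind are commonly learned with a CLIP-style contrastive objective, where matched image-report pairs are pulled together and mismatched pairs pushed apart. The sketch below is a minimal, dependency-free illustration of that symmetric contrastive (InfoNCE) loss on toy embeddings; it is an assumption about the training objective, not the project's prescribed method, and the function names are hypothetical.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length so dot products become cosine similarities."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def clip_style_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss: the i-th image and i-th text
    (e.g. a microscopy patch and its report) are the matched pair."""
    imgs = [l2_normalize(v) for v in image_embs]
    txts = [l2_normalize(v) for v in text_embs]
    n = len(imgs)
    # Temperature-scaled cosine similarity matrix: rows = images, cols = texts.
    sims = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
             for j in range(n)] for i in range(n)]
    # Image-to-text direction: the correct text sits on the diagonal.
    loss_i2t = -sum(math.log(softmax(sims[i])[i]) for i in range(n)) / n
    # Text-to-image direction: classify over columns instead of rows.
    cols = [[sims[i][j] for i in range(n)] for j in range(n)]
    loss_t2i = -sum(math.log(softmax(cols[j])[j]) for j in range(n)) / n
    return (loss_i2t + loss_t2i) / 2
```

As a sanity check, perfectly aligned pairs should yield a lower loss than deliberately mismatched ones, since the diagonal of the similarity matrix then dominates each softmax.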