Learning with partial annotations

How can we leverage annotations from existing datasets that are task-specific (only structures or only lesions are annotated), hetero-modal (different sets of imaging modalities), and domain-shifted (different acquisition protocols) to train joint models? This project provides a principled formulation for learning joint tasks under these conditions, demonstrated on joint brain structure and lesion segmentation.
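To make the partial-annotation setting concrete, here is a minimal sketch of one common way to handle task-specific labels: when a dataset only annotates a subset of the classes (e.g. lesions but not structures), the predicted probabilities of the unannotated classes are marginalized into the background class before computing the cross-entropy. This is an illustrative assumption, not the project's exact formulation; the function name, the NumPy implementation, and the `annotated_classes` convention are all hypothetical.

```python
import numpy as np

def partial_label_loss(probs, labels, annotated_classes):
    """Cross-entropy under partial annotation.

    probs:  (N, C) softmax outputs over the full (global) label set.
    labels: (N,) integer labels in the dataset's OWN label space,
            where index 0 means background / "everything else".
    annotated_classes: global class indices this dataset labels,
            with annotated_classes[0] == 0 (background).

    Classes the dataset did not annotate are marginalized into the
    background class, so the loss never penalizes predictions the
    annotator could not have distinguished.
    """
    n, c = probs.shape
    foreground = annotated_classes[1:]
    unannotated = [k for k in range(c) if k not in foreground]

    merged = np.zeros((n, len(annotated_classes)))
    merged[:, 0] = probs[:, unannotated].sum(axis=1)  # marginalize
    for j, k in enumerate(foreground, start=1):
        merged[:, j] = probs[:, k]

    eps = 1e-12  # numerical safety for log
    return -np.mean(np.log(merged[np.arange(n), labels] + eps))
```

For example, with three global classes (background, structure, lesion) and a lesion-only dataset (`annotated_classes=[0, 2]`), predicted structure probability is folded into background, so a voxel labelled "background" is not penalized for being predicted as a structure.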