AIMA Research


Mentorship Program


Openings

  • For AIO 2025 students only: please fill out the form here to register for the AIO 2025 AIMA Research Mentorship Program

  • Application Deadline: January 28th, 2026 (hard deadline)

  • Interview Period: from January 29th to February 1st

Please find the project descriptions below:

1) Project 1: Segment Anything Model under Noisy Medical Radiology Imaging Conditions

  • Description: Recent advances in Segment Anything Models (SAM) have demonstrated strong generalization across natural images, and more recently, across medical imaging domains through variants such as MedSAM and SAM2. However, the robustness of these models under realistic noisy medical imaging conditions remains insufficiently studied. This project aims to systematically evaluate the robustness, stability, and failure modes of SAM-based models on 2D medical imaging datasets under controlled noise perturbations, simulating common acquisition artifacts encountered in clinical practice. The final objective is to produce a high-quality benchmark study suitable for submission to a CVPR workshop, followed by an extended version targeting a Q1 journal.
  • Team: 5-6 AIO 2025 members
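To give a concrete flavor of the "controlled noise perturbations" mentioned above, here is a minimal sketch of one common perturbation, additive Gaussian noise at several severity levels. This is an illustrative assumption, not the project's actual evaluation code; the function name and severity values are hypothetical.

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise with std `sigma` to an image scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid intensity range so downstream models see legal inputs.
    return np.clip(noisy, 0.0, 1.0)

# Example: perturb a synthetic 2D "scan" at increasing severity levels.
image = np.full((64, 64), 0.5)
for sigma in (0.05, 0.1, 0.2):
    noisy = add_gaussian_noise(image, sigma)
    print(sigma, round(float(np.abs(noisy - image).mean()), 3))
```

A real benchmark would sweep such severities over a medical imaging dataset and record segmentation metrics (e.g. Dice) per level.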

2) Project 2: Segment Anything Model under Occluded Medical Radiology Imaging Conditions

  • Description: Foundation segmentation models such as the Segment Anything Model (SAM) and its medical variants (e.g., MedSAM, SAM2) have shown strong generalization capabilities across medical imaging datasets. However, in real clinical environments, anatomical regions of interest are frequently partially occluded due to surgical tools, imaging probes, catheters, markers, implants, or overlapping anatomical structures. This project aims to systematically evaluate the robustness and failure modes of SAM-based models under controlled occlusion scenarios in 2D medical images, using synthetic but clinically motivated occlusion strategies such as Cutout, CutMix, and copy–paste of surgical tools. The ultimate goal is to establish a standardized occlusion robustness benchmark for medical segmentation, suitable for CVPR workshop submission and later extension to a Q1 journal paper.
  • Team: 5-6 AIO 2025 members
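Of the occlusion strategies named above, Cutout is the simplest; a minimal sketch (an illustrative assumption, not the project's actual pipeline) zeroes out a random square patch of the image:

```python
import numpy as np

def cutout(image: np.ndarray, size: int, seed: int = 0) -> np.ndarray:
    """Occlude a random size-by-size square patch with zeros (Cutout-style)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    occluded = image.copy()
    occluded[top:top + size, left:left + size] = 0.0
    return occluded
```

CutMix and copy-paste occlusion follow the same pattern, but fill the patch with pixels from another image (or a surgical-tool crop) instead of zeros.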

3) Project 3: Brain Image Segmentation under Limited Annotation Availability

  • Description: Accurate brain image segmentation is a foundational task in medical image analysis, supporting applications such as disease diagnosis, treatment planning, and longitudinal analysis. While deep learning–based segmentation models have achieved strong performance, they typically rely on fully annotated datasets, which are expensive and time-consuming to obtain in clinical practice. This project investigates how segmentation performance degrades under limited annotation availability, and how different model architectures cope with varying proportions of labeled data. By systematically reducing the percentage of labeled samples in a standard brain imaging dataset, this study aims to benchmark classical and advanced segmentation models and establish a clear performance–annotation trade-off. The project will first target an international conference and then be extended toward a Q2 journal publication with more advanced learning paradigms.
  • Team: 4 AIO 2025 members

4) Project 4: Liver and Liver Tumor Segmentation under Limited Annotation Availability

  • Description: Accurate liver and liver tumor segmentation from abdominal CT images is a critical task in medical image analysis, supporting applications such as surgical planning, tumor burden assessment, and treatment response evaluation. Despite recent advances in deep learning–based segmentation models, high-quality voxel-wise annotations for liver imaging remain costly, time-consuming, and dependent on expert radiologists, limiting scalability in real-world clinical settings. This project investigates how liver segmentation performance degrades under limited annotation availability, and how different segmentation architectures cope with varying proportions of labeled training data. By systematically reducing the percentage of labeled samples from the Liver Tumor Segmentation (LiTS) dataset, this study aims to benchmark classical and advanced segmentation models and establish a clear performance–annotation trade-off for liver CT segmentation. The project will first target an international conference and then be extended toward a Q2 journal publication with more advanced learning paradigms.
  • Team: 4 AIO 2025 members

5) Project 5: Breast Cancer Ultrasound Classification under Limited Annotation Availability

  • Description: Breast cancer ultrasound imaging plays a critical role in early detection and diagnosis, particularly for distinguishing between benign and malignant lesions. Deep learning–based classification models have shown promising results in this domain; however, their success typically depends on large, well-annotated datasets curated by expert radiologists. In practice, such annotations are expensive, time-consuming, and often limited in availability. This project investigates how breast cancer ultrasound classification performance degrades under limited annotation availability, and how different convolutional and transformer-based architectures cope with varying proportions of labeled data. By systematically reducing the percentage of labeled samples in a breast ultrasound dataset, this study aims to benchmark classical CNN models and modern Vision Transformer (ViT) models, establishing a clear performance–annotation trade-off. The project will first target an international conference and then be extended toward a Q2 journal publication, where classification will be further integrated with segmentation in a multi-task learning framework.
  • Team: 4 AIO 2025 members

6) Project 6: White Blood Cell Classification under Limited Annotation Availability

  • Description: White blood cell (WBC) classification from microscopic blood smear images is a fundamental task in hematology, supporting disease screening, diagnosis of blood disorders, and clinical decision-making. Deep learning–based image classification models have demonstrated strong performance in automated WBC recognition; however, these models typically require large, well-annotated datasets curated by trained experts, which are costly and time-consuming to obtain. This project investigates how white blood cell classification performance degrades under limited annotation availability, and how different convolutional neural network (CNN) and transformer-based architectures cope with varying proportions of labeled training data. By systematically reducing the percentage of labeled samples in a WBC classification dataset, this study aims to benchmark classical CNN models and modern Vision Transformer (ViT) models, establishing a clear performance–annotation trade-off. The project will first target an international conference and then be extended toward a Q2 journal publication, incorporating more advanced learning strategies.
  • Team: 4 AIO 2025 members
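Projects 3 through 6 share the same reduced-annotation protocol: train on progressively smaller labeled fractions and chart the performance-annotation trade-off. A minimal sketch of the subsampling step (an illustrative assumption; the function name and fractions are hypothetical, and a real study would stratify by class and repeat over seeds):

```python
import numpy as np

def subsample_labeled(n_samples: int, fraction: float, seed: int = 0) -> np.ndarray:
    """Return indices of a random labeled subset of size round(n_samples * fraction)."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(n_samples * fraction)))
    return rng.choice(n_samples, size=n_keep, replace=False)

# Example: labeled-set sizes for a hypothetical 1000-sample dataset.
for frac in (1.0, 0.5, 0.1):
    idx = subsample_labeled(1000, frac)
    print(frac, len(idx))  # 1.0 -> 1000, 0.5 -> 500, 0.1 -> 100
```

Each model in the benchmark would then be trained on the same index subsets so that results are comparable across architectures.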

If you have any questions, please contact us by email:

Email: aima.research.aivn@gmail.com