Jie Liu (刘杰)

Ph.D. Student

University of Amsterdam

VISLAB

j.liu5@uva.nl

About Me

I am a fourth-year ELLIS Ph.D. candidate at VISLAB, University of Amsterdam (UvA), the Netherlands. I am fortunate to be supervised by Prof. Efstratios Gavves and Prof. Jan-Jakob Sonke. I also work closely with Prof. Pan Zhou.

Research Interests

My research aims to develop human-centered machines that augment human capabilities in perception, reasoning, and interaction with the world. To achieve this, I focus on the following topics:

Human-AI Interaction: interactive segmentation, LLM-based multi-agent cooperation

Efficient Learning: few-shot segmentation, prompt learning, probabilistic modeling

Foundation Models: multi-modal learning, vision-language models, healthcare applications

Hobbies

I am a sports enthusiast with a passion for snowboarding and badminton, and I hold a 2nd-degree black belt in Taekwondo. Whether on the slopes, on the court, or in training, I love pushing my limits and sharpening my skills.

News

  • [Jan 2025] CaPo, our first work on embodied agents, was accepted at ICLR 2025. See you in Singapore!
  • [Sept 2024] I am visiting Prof. Pan Zhou in Singapore.
  • [July 2024] We will present CPlot, our work on interactive segmentation, at ECCV 2024. See you in Milan!
  • [June 2024] One paper with Wenzhe Yin on domain adaptation was accepted at UAI 2024. See you in Barcelona!
  • [Sept 2023] We will present prototype adaptation for few-shot point cloud segmentation at 3DV 2024. See you in Davos!
  • [Sept 2022] Our work on few-shot segmentation with a support-induced graph convolutional network was accepted to BMVC 2022.
  • [June 2022] One paper with Haochen Wang on few-shot instance segmentation was accepted to ACM MM 2022.
  • [March 2022] Our work on few-shot segmentation with prototype convolution was accepted to CVPR 2022.
  • [Sept 2021] I joined VISLAB as a Ph.D. candidate.

Recent Projects

  • CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-agent Cooperation
    Jie Liu, Pan Zhou, Yingjun Du, Ah-Hwee Tan, Cees G. M. Snoek, Jan-Jakob Sonke, Efstratios Gavves
    International Conference on Learning Representations (ICLR), 2025
    [Paper] [Code]
  • CPlot
    CPlot: Click Prompt Learning with Optimal Transport for Interactive Segmentation
    Jie Liu, Haochen Wang, Wenzhe Yin, Jan-Jakob Sonke, Efstratios Gavves
    European Conference on Computer Vision (ECCV), 2024
    [Paper] [Code]
  • DPA
    Dynamic Prototype Adaptation with Distillation for Few-shot Point Cloud Segmentation
    Jie Liu, Wenzhe Yin, Haochen Wang, Yunlu Chen, Jan-Jakob Sonke, Efstratios Gavves
    International Conference on 3D Vision (3DV), 2024
    [Paper] [Code]
  • SiGCN
    Few-shot Semantic Segmentation with Support-Induced Graph Convolutional Network
    Jie Liu, Yanqi Bao, Haochen Wang, Wenzhe Yin, Jan-Jakob Sonke, Efstratios Gavves
    British Machine Vision Conference (BMVC), 2022
    [Paper] [Code]
  • DPCN
    Dynamic Prototype Convolution Network for Few-shot Semantic Segmentation
    Jie Liu, Yanqi Bao, Guo-Sen Xie, Huan Xiong, Jan-Jakob Sonke, Efstratios Gavves
    Conference on Computer Vision and Pattern Recognition (CVPR), 2022
    [Paper] [Code]
  • DA
    Domain Adaptation with Cauchy-Schwarz Divergence
    Wenzhe Yin, Shujian Yu, Yicong Lin, Jie Liu, Jan-Jakob Sonke, Efstratios Gavves
    Conference on Uncertainty in Artificial Intelligence (UAI), 2024
    [Paper] [Code]
  • PiClick
    PiClick: Picking the Desired Mask in Click-based Interactive Segmentation
    Cilin Yan, Haochen Wang, Jie Liu, Xiaolong Jiang, Yao Hu, Xu Tang, Guoliang Kang, Efstratios Gavves
    IEEE Transactions on Multimedia (TMM), 2023
    [Paper] [Code]
  • FIS
    Dynamic Transformer for Few-shot Instance Segmentation
    Haochen Wang, Jie Liu, Yongtuo Liu, Subhransu Maji, Jan-Jakob Sonke, Efstratios Gavves
    ACM International Conference on Multimedia (ACM MM), 2022
    [Paper] [Code]
  • SAGNN
    Scale-aware Graph Neural Network for Few-shot Semantic Segmentation
    Guo-Sen Xie*, Jie Liu*, Huan Xiong, Ling Shao
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021
    [Paper] [Code]