Hi there πŸ‘‹

I'm ✨ Kunchang Li ✨, a researcher at Seed, ByteDance.

Please check my website for more details.

πŸ’Ό Experiences

  • [2016-2020]πŸŽ‰ I received my B.E. degree in Software Engineering at Beihang University.
  • [2020-2025]πŸ’ͺ I received my Ph.D. degree in Computer Science at SIAT, UCAS, advised by Prof. Yu Qiao and Prof. Yali Wang. I have been a Research Intern at MEGVII, SenseTime, Shanghai AI Lab and Bytedance.

πŸ”­ Research Interests

  • πŸ“Ή Video Understanding
  • ⚡️ Efficient Architecture
  • πŸ“š Multi-modality Learning
  • πŸ¦„ Video Generation
  • πŸ€” Unified Model

Pinned Repositories

  1. ByteDance-Seed/Bagel: Open-source unified multimodal model (Python, 5.6k stars, 497 forks)

  2. OpenGVLab/InternVideo: [ECCV2024] Video Foundation Models & Data for Multimodal Understanding (Python, 2.2k stars, 138 forks)

  3. OpenGVLab/Ask-Anything: [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. (Python, 3.3k stars, 268 forks)

  4. OpenGVLab/unmasked_teacher: [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models (Python, 347 stars, 18 forks)

  5. OpenGVLab/VideoMamba: [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding (Python, 1.1k stars, 91 forks)

  6. Sense-X/UniFormer: [ICLR2022] official implementation of UniFormer (Python, 896 stars, 116 forks)