Welcome to Chenge’s Website

About me

Hello World! 😎

I am Chenge Li, a fifth-year PhD student studying machine learning and computer vision at the Video Lab at New York University. My supervisor is Prof. Yao Wang. Prior to NYU, I obtained a Bachelor’s degree in Communication Engineering from Tianjin University, China. I also spent a wonderful junior year as an exchange student at The University of Hong Kong.

I am looking for a full-time position after my expected graduation in December 2018. 🎓

Research Projects

TrackNet: Simultaneous Detection and Tracking of Multiple Objects

Object detection and object tracking are usually treated as two separate processes. Object detection in still images relies on spatial appearance features, whereas object tracking in videos relies on both spatial appearance and temporal motion features. Significant progress has been made for object detection in 2D images (or video frames) using deep learning networks such as the region-based CNN (R-CNN) and its subsequent variants. The usual pipeline for object tracking requires that the object be successfully detected in the first frame or in every frame, and tracking is done by “associating” the per-frame detection results. However, performing object detection and object tracking jointly through a single network remains a challenging open problem.
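
To make this tracking-by-detection baseline concrete, here is a minimal sketch (not code from this project) of the “associating detection results” step: detections are produced independently per frame, and each detection is greedily linked to the existing track whose most recent box overlaps it the most. The function names, the greedy strategy, and the IoU threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(per_frame_detections, iou_thresh=0.3):
    """Greedy tracking-by-detection: per_frame_detections is a list of
    per-frame lists of boxes; returns tracks as lists of (frame, box)."""
    tracks = []
    for t, detections in enumerate(per_frame_detections):
        for box in detections:
            # link to the track whose latest box best overlaps this detection
            best, best_iou = None, iou_thresh
            for track in tracks:
                score = iou(track[-1][1], box)
                if score > best_iou:
                    best, best_iou = track, score
            if best is not None:
                best.append((t, box))
            else:
                tracks.append([(t, box)])   # start a new track
    return tracks


# two frames, one object drifting to the right
print(associate([[(10, 10, 50, 50)], [(14, 10, 54, 50)]]))
```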

We propose a novel network structure that can directly detect a 3D tube enclosing a moving object in a video, by extending the region-CNN framework for object detection in an image. The proposed TrackNet operates over short video segments and outputs a bounding tube for each detected moving object, which consists of shifted bounding boxes covering the detected object in successive frames. A Tube Proposal Network (TPN) inside TrackNet predicts the objectness of each candidate tube, together with the location parameters specifying the bounding tube for candidates with high objectness scores.
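
As a rough sketch of how such a tube proposal head might be wired up (my own illustration under assumed shapes and hyper-parameters, not the TrackNet implementation), the PyTorch snippet below assumes the backbone outputs a spatio-temporal feature volume of shape (N, C, T, H, W); for each of A anchor tubes per spatial location, the head predicts one objectness score plus 4 box offsets per frame, i.e. a shifted box for every frame of the segment.

```python
import torch
import torch.nn as nn


class TubeProposalHead(nn.Module):
    """Hypothetical TPN-style head: objectness + per-frame box offsets per anchor tube."""

    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        # shared 3D conv over the spatio-temporal feature volume
        self.conv = nn.Conv3d(in_channels, in_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # one objectness logit per anchor tube at each (t, h, w) location
        self.cls = nn.Conv3d(in_channels, num_anchors, kernel_size=1)
        # 4 box offsets per frame for each anchor tube
        self.reg = nn.Conv3d(in_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feats):                     # feats: (N, C, T, H, W)
        x = self.relu(self.conv(feats))
        logits = self.cls(x)                      # (N, A, T, H, W)
        tube_scores = logits.mean(dim=2)          # one score per tube: (N, A, H, W)
        offsets = self.reg(x)                     # (N, 4*A, T, H, W)
        return tube_scores, offsets


feats = torch.randn(1, 256, 8, 14, 14)            # dummy feature volume, T = 8 frames
scores, offsets = TubeProposalHead()(feats)
print(scores.shape, offsets.shape)                # (1, 9, 14, 14) and (1, 36, 8, 14, 14)
```

Candidate tubes whose objectness score is high would then be kept and refined, mirroring the two-stage region-CNN pipeline mentioned above.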

Model Architecture

[Figure: TrackNet architecture flowchart]

Joint Detection & Tracking by Bounding Tubes

[Figure: example detection and tracking results with bounding tubes]


Saliency Detection in 360-Degree Images

Reference: all images displayed below are from the ICME 2018 Grand Challenge Salient360! 2018: Visual attention modeling for 360 Images (https://salient360.ls2n.fr/).

Qualitative saliency detection results from our model: [Figure: three example saliency maps]


Robust Vehicle Tracking at Urban Intersections

Vehicle Tracking site



Semantic Grouping

Semantic Grouping site


Human Upper Body Segmentation

Poster


Internship

Apple Inc., Cupertino, CA

Jun 2017 – Aug 2017: Computer vision research intern (summer internship). Worked on single-image super-resolution using convolutional neural networks.


Publications

Prize 🏆

Grand Prize for the MLBAM Automatic Video Annotation Challenge, held by NYC Media Lab, May 11, 2015

Last Update: June 8, 2018