Career Profile

I am currently a postdoctoral fellow at the Robotics Institute, Carnegie Mellon University (CMU). I received my Ph.D. from Nanyang Technological University (NTU), Singapore, in 2019 and my Bachelor's degree from Beijing Institute of Technology (BIT) in 2014. I was previously a senior researcher at Tencent. My research interests include robot perception, vision, and machine learning.

Selected Publications

My full publication list is available on Google Scholar, arXiv, and dblp.
Source code for the papers is available on GitHub.

Visual Memorability for Robotic Interestingness via Unsupervised Online Learning

May 2020
Chen Wang, Wenshan Wang, Yuheng Qiu, Yafei Hu, Sebastian Scherer
European Conference on Computer Vision (ECCV 2020)
  • Selected for oral presentation at the conference (2% acceptance rate).
  • Interesting scene prediction for mobile robots.
  • Unsupervised online learning via visual memorability.

Kervolutional Neural Networks

Jun. 2019
Chen Wang, Jianfei Yang, Lihua Xie, and Junsong Yuan
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019)
  • Selected for oral presentation at the conference (5.6% acceptance rate).
  • Biologically inspired; extends convolution to kernel space while keeping linear complexity.

Correlation Flow: Robust Optical Flow using Kernel Cross-Correlators

May 2018
Chen Wang*, Tete Ji*, Thien-Minh Nguyen, and Lihua Xie
International Conference on Robotics and Automation (ICRA 2018)
  • Reduce the complexity of joint rotation-scale prediction from O(n log n + mn) to O(n log n).
  • Improve accuracy by 50% compared to the state-of-the-art PX4Flow.

Non-iterative RGB-D-inertial Odometry

Apr. 2018
Chen Wang, Minh-Chung Hoang, Lihua Xie, and Junsong Yuan
arXiv preprint arXiv:1710.05502
  • Find the first closed-form solution for RGB-D-inertial odometry.
  • Achieve real-time performance even on an ultra-low power processor.

Kernel Cross-Correlator

Feb. 2018
Chen Wang, Le Zhang, Lihua Xie, and Junsong Yuan
The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018)
  • Break the theoretical limitation of KCF, which can only predict translation.
  • Predict affine transforms (e.g. translation, rotation, and scale) with O(n log n) complexity.
  • Improve accuracy by 15% and reduce computational time by 25% compared to DTW.

Ultra-Wideband Aided Fast Localization and Mapping System

Sep. 2017
Chen Wang, Handuo Zhang, Thien-Minh Nguyen, and Lihua Xie
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)
  • Propose the first UWB-aided visual SLAM system.
  • Take advantage of the local smoothness of visual odometry and the global robustness of UWB.

Non-Iterative SLAM

Jul. 2017
Chen Wang, Junsong Yuan, and Lihua Xie
The 18th International Conference on Advanced Robotics (ICAR 2017)
  • Receive the Best Paper Award in robotic planning at the conference.

* joint first authors

Open Source Projects

Git Helper - This project is extremely helpful when you use Git, especially on Ubuntu and Mac. After a simple installation, it clears the tedious leading path in the bash prompt and displays your current location, modification status, and branch. It has dramatically improved my working efficiency, so I recommend it to all Linux and Mac users!
Thesis LaTeX Template - This thesis LaTeX template is designed according to the new submission guidelines of Nanyang Technological University (NTU). It should be useful for both Master's and Ph.D. students. This version has been verified by the NTU library.
Range-based Localization - To enable fast and accurate localization for micro-robots in GPS-denied environments, we designed and implemented a range-based localization algorithm using a graph optimization approach. The framework is compatible with ROS and flexible for multi-sensor fusion. The code has been tested with ultra-wideband (UWB) sensors.
Formation/Rendezvous for Multi-Agent Systems - This ROS package provides a controller for formation and rendezvous of multiple drones. It is designed based on the fastest hunting and surrounding direction relative to a moving target. You can visualize the convergence in RViz and Gazebo, and test other controllers by modifying a few lines of code.


Interestingness - Supported by the Air Lab at Carnegie Mellon University, we released a visual interestingness dataset based on the DARPA Subterranean Challenge to promote the development of interesting-scene prediction for mobile robots. The dataset description, development tools, and our paper on visual interestingness are also available.
TartanAir - TartanAir is a very challenging dataset for visual navigation and beyond. It is collected in AirSim and contains various lighting conditions, weather, moving objects, challenging viewpoints, and diverse motion patterns. The dataset provides stereo RGB and depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. It is supported by the Air Lab at Carnegie Mellon University. You may find the dataset description, development tools, and paper.


Patents

Simultaneous Localization and Mapping Methods and Apparatus

Chen Wang*, Lihua Xie*, and Junsong Yuan
Application Number:  US16/329,118
Filing Date:  Sep 7, 2017
Publication Date:  Jul 25, 2019

* joint first authors