CVPR 2018 Papers

Image Captioning

  1. GroupCap: Group-Based Image Captioning With Structured Relevance and Diversity Constraints. Fuhai Chen (Xiamen Univ.); Rongrong Ji; Xiaoshuai Sun (Harbin Inst. of Technology); Yongjian Wu; Jinsong Su (Xiamen Univ.).
  2. Convolutional Image Captioning. Jyoti Aneja (UIUC); Aditya Deshpande (UIUC); Alexander G. Schwing. [PDF]
  3. Learning to Evaluate Image Captioning. Yin Cui (Cornell Tech); Guandao Yang (Cornell Univ.); Andreas Veit (Cornell Tech); Xun Huang; Serge Belongie. [PDF]
  4. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. Peter Anderson (Australian National Univ.); Xiaodong He; Chris Buehler; Damien Teney (Univ. of Adelaide); Mark Johnson (Macquarie Univ.); Stephen Gould (Australian National Univ.); Lei Zhang (Microsoft). [PDF]
  5. Discriminability Objective for Training Descriptive Captions. Ruotian Luo (Toyota Technological Inst.); Brian Price (Adobe Research); Scott Cohen (Adobe Research); Gregory Shakhnarovich. [PDF]
  6. Regularizing RNNs for Caption Generation by Reconstructing the Past With the Present. Xinpeng Chen (Wuhan Univ.); Lin Ma (Tencent AI Lab); Wenhao Jiang (Tencent AI Lab); Jian Yao; Wei Liu. [PDF]
  7. SemStyle: Learning to Generate Stylised Image Captions Using Unaligned Text. Alexander Mathews (Australian National Univ.); Lexing Xie (Australian National Univ., Data61); Xuming He (ShanghaiTech Univ.).
  8. Neural Baby Talk. Jiasen Lu (Georgia Tech); Jianwei Yang (Georgia Tech); Dhruv Batra (Georgia Tech); Devi Parikh (Georgia Tech). [PDF][Code]

Video Captioning

  1. Video Captioning via Hierarchical Reinforcement Learning. Xin Wang (UC Santa Barbara); Wenhu Chen; Jiawei Wu (UC Santa Barbara); Yuan-Fang Wang (UC Santa Barbara); William Yang Wang (UC Santa Barbara). [PDF]
  2. Fine-Grained Video Captioning for Sports Narrative. Huanyu Yu (Shanghai Jiao Tong Univ.); Shuo Cheng (Shanghai Jiao Tong Univ.); Bingbing Ni; Minsi Wang (Shanghai Jiao Tong Univ.); Jian Zhang (Shanghai Jiao Tong Univ.); Xiaokang Yang.
  3. Interpretable Video Captioning via Trajectory Structured Localization. Xian Wu (Sun Yat-Sen Univ.); Guanbin Li; Qingxing Cao (Sun Yat-Sen Univ.); Qingge Ji; Liang Lin.
  4. Bidirectional Attentive Fusion With Context Gating for Dense Video Captioning. Jingwen Wang (South China Univ. of Technology); Wenhao Jiang (Tencent AI Lab); Lin Ma (Tencent AI Lab); Wei Liu; Yong Xu (South China Univ. of Technology). [PDF]
  5. Jointly Localizing and Describing Events for Dense Video Captioning. Yehao Li (Sun Yat-Sen Univ.); Ting Yao (Microsoft Research Asia); Yingwei Pan (Univ. of Science and Technology of China); Hongyang Chao (Sun Yat-Sen Univ.); Tao Mei (Microsoft Research Asia). [PDF]
  6. M3: Multimodal Memory Modelling for Video Captioning. Junbo Wang (Inst. of Automation, Chinese Academy of Sciences); Wei Wang; Yan Huang; Liang Wang; Tieniu Tan (NLPR). [PDF]
  7. Reconstruction Network for Video Captioning. Bairui Wang; Lin Ma (Tencent AI Lab); Wei Zhang; Wei Liu. [PDF]
  8. End-to-End Dense Video Captioning With Masked Transformer. Luowei Zhou (Univ. of Michigan); Yingbo Zhou (Salesforce); Jason J. Corso; Richard Socher (Salesforce). [PDF]