
Learning Transferable Architectures for Scalable Image Recognition

Posted on 22/04/2019, in Paper.
  • Overview: This paper introduces a scalable AutoML strategy: search for the structure of a repeatable cell (block) rather than the overall neural network architecture.
  • Two cells: The two cell structures the authors search for are 1) the Normal Cell - a convolutional cell that returns a feature map with the same dimensions as its input; and 2) the Reduction Cell - a convolutional cell that returns a feature map whose height and width are reduced by a factor of two. The architecture stacked on top of these cells is fixed for a given dataset.
  • Search space: The initial two hidden states are the inputs to the current and previous cells. The RL search strategy repeatedly selects two hidden states, applies an operation to each, and combines the two outputs with element-wise addition or concatenation. A 1×1 convolution is sometimes used to match dimensions (a minimal sketch of one such block is given below).
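To make the search space concrete, here is a minimal PyTorch-style sketch of a single block inside a searched cell, assuming element-wise addition as the combination method. The specific operations (a separable 3×3 convolution and a 3×3 max pooling) are just illustrative picks from the candidate operation set, not the cell the controller actually found.

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise-separable 3x3 convolution, one candidate operation."""
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return self.op(x)

class CellBlock(nn.Module):
    """One block: apply an operation to each of two chosen hidden states,
    then combine the results by element-wise addition."""
    def __init__(self, channels):
        super().__init__()
        self.op_a = SeparableConv(channels)
        self.op_b = nn.MaxPool2d(3, stride=1, padding=1)  # shape-preserving pooling

    def forward(self, h_i, h_j):
        return self.op_a(h_i) + self.op_b(h_j)

# The two initial hidden states come from the previous two cells; a 1x1
# convolution would be inserted here if their channel counts did not match.
h_prev = torch.randn(1, 32, 16, 16)
h_cur = torch.randn(1, 32, 16, 16)
print(CellBlock(32)(h_prev, h_cur).shape)  # torch.Size([1, 32, 16, 16])
```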

  • This paper mentions a scalable meta-learning paper I want to check out later: O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
  • ScheduledDropPath (a modified drop-path; see the sketch below): G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
  • Proximal Policy Optimization (PPO), used to train the controller: J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
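As a note to self on ScheduledDropPath: the paper modifies FractalNet's drop-path so that each path in a cell is dropped with a probability that increases linearly over the course of training. A rough sketch of that idea follows; the function name and signature are made up for illustration, not taken from any released code.

```python
import torch

def scheduled_drop_path(x, final_drop_prob, progress, training=True):
    """Drop an entire path output with a linearly increasing probability.

    x: output tensor of one path, shape (batch, channels, height, width)
    final_drop_prob: drop probability reached at the end of training
    progress: fraction of training completed, in [0, 1]
    """
    drop_prob = final_drop_prob * progress  # linearly increasing schedule
    if not training or drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per example, broadcast over channels and spatial dims.
    mask = torch.bernoulli(
        torch.full((x.size(0), 1, 1, 1), keep_prob, device=x.device)
    )
    return x / keep_prob * mask
```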