
Efficacy of using a dynamic length representation vs. a fixed-length for neuroarchitecture search...

Publication Type
Conference Paper
Book Title
Proceedings of the Genetic and Evolutionary Computation Conference Companion
Publication Date
Page Numbers
1888–1894
Publisher Location
New York, New York, United States of America
Conference Name
Genetic and Evolutionary Computation Conference (GECCO)
Conference Location
Melbourne, Australia
Conference Sponsor
Association for Computing Machinery
Conference Date
-

Neuroarchitecture and hyperparameter search are important in deep learning for finding the configuration that maximizes learned model accuracy. However, the number of layer types, their associated hyperparameters, and the myriad ways to connect layers pose a significant computational challenge in discovering ideal model configurations. Here, we assess two approaches to neuroarchitecture search for a LeNet-style neural network: a fixed-length approach, in which a preset number of possible layers can be toggled on or off via mutation, and a variable-length approach, in which layers can be freely added or removed via special mutation operators. We found that the variable-length implementation trained better models while discovering unusual layer configurations worth further exploration.
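
The contrast between the two representations can be made concrete with a small sketch. The following Python snippet is illustrative only, not the authors' implementation; all names (random_layer, mutate_fixed, mutate_variable, the layer types, and the hyperparameter values) are invented for this example. It shows a fixed-length genome whose preset layer slots are toggled on or off by mutation, versus a variable-length genome that special insert/delete operators can grow or shrink.

import random

LAYER_TYPES = ["conv", "pool", "dense"]

def random_layer():
    """Create a random layer gene: a type plus one toy hyperparameter."""
    return {"type": random.choice(LAYER_TYPES),
            "units": random.choice([16, 32, 64])}

# --- Fixed-length representation --------------------------------------
# A preset number of layer slots; mutation can only toggle a slot on or
# off (or perturb its hyperparameters), so genome length never changes.

def init_fixed(num_slots=8):
    return [{"enabled": random.random() < 0.5, **random_layer()}
            for _ in range(num_slots)]

def mutate_fixed(genome, rate=0.2):
    for gene in genome:
        if random.random() < rate:
            gene["enabled"] = not gene["enabled"]  # toggle slot on/off
    return genome

# --- Variable-length representation ------------------------------------
# Special insert/delete operators let the genome grow or shrink, so
# architectures of arbitrary depth can be reached by mutation alone.

def mutate_variable(genome, rate=0.2):
    if random.random() < rate and len(genome) > 1:
        del genome[random.randrange(len(genome))]       # remove a layer
    if random.random() < rate:
        genome.insert(random.randrange(len(genome) + 1),
                      random_layer())                    # add a layer
    return genome

if __name__ == "__main__":
    random.seed(0)
    fixed = mutate_fixed(init_fixed())
    variable = mutate_variable([random_layer() for _ in range(4)])
    print("fixed slots :", [(g["type"], g["enabled"]) for g in fixed])
    print("variable    :", [g["type"] for g in variable])

Under this framing, the fixed-length genome bounds the search to architectures no deeper than its preset slot count, while the variable-length genome makes depth itself an evolvable property, which is consistent with the abstract's observation that the variable-length implementation discovered unusual layer configurations.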