The success of deep learning in machine learning applications has encouraged the scientific and engineering communities to develop deep-learning-based predictive models for a wide range of applications. Designing a deep neural network (DNN) architecture for a particular modeling task, however, requires significant architecture engineering by a deep learning expert. While several recent works discuss automating neural architecture search (NAS), they have focused mainly on the traditional machine learning tasks of image and text classification. In this talk, we will present a scalable NAS approach that automatically generates DNN models for predictive modeling in science and engineering applications. We will discuss a recurrent-neural-network-based architecture generator that produces a multilayer perceptron with skip connections. We leverage a manager-worker-based distributed reinforcement-learning approach that uses the proximal policy optimization method to iteratively improve the generated DNN architectures. We demonstrate the effectiveness of the proposed NAS approach for multivariate and multioutput regression problems across diverse applications. The generated architectures obtain high accuracy with significantly fewer parameters, and the search achieves 70% to 80% node utilization on 256 to 1,024 nodes of the Theta supercomputer at ALCF.
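To make the search loop concrete, the sketch below is a minimal, self-contained illustration of reinforcement-learning-driven architecture search: a softmax policy samples multilayer-perceptron descriptions (hidden units per layer and skip-connection flags) and is updated from a reward. It deliberately simplifies the approach described in the talk: the RNN controller, the PPO update, and the manager-worker distribution are replaced here by a tabular policy with a plain REINFORCE update, and `evaluate_architecture` is a placeholder standing in for training a candidate network on the target regression data.

```python
"""Illustrative NAS loop (simplified sketch, not the system described in the talk)."""
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 3
UNIT_CHOICES = [16, 32, 64, 128]   # candidate hidden units per layer
SKIP_CHOICES = [0, 1]              # 1 = add a skip connection from the layer before last

# One softmax "head" per decision: units for each layer, a skip flag for each layer > 0.
logits = {("units", i): np.zeros(len(UNIT_CHOICES)) for i in range(NUM_LAYERS)}
logits.update({("skip", i): np.zeros(len(SKIP_CHOICES)) for i in range(1, NUM_LAYERS)})

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_architecture():
    """Sample a multilayer-perceptron description: units per layer plus skip flags."""
    arch, choices = {}, {}
    for key, logit in logits.items():
        p = softmax(logit)
        idx = rng.choice(len(p), p=p)
        choices[key] = idx
        arch[key] = (UNIT_CHOICES if key[0] == "units" else SKIP_CHOICES)[idx]
    return arch, choices

def evaluate_architecture(arch):
    """Placeholder reward: in the real workflow this trains the candidate DNN on the
    science/engineering regression data and returns a validation score. Here we just
    favor mid-sized layers and skip connections so the loop has something to optimize."""
    score = sum(1.0 - abs(arch[("units", i)] - 64) / 128 for i in range(NUM_LAYERS))
    score += 0.2 * sum(arch[("skip", i)] for i in range(1, NUM_LAYERS))
    return score

baseline, lr = 0.0, 0.1
for step in range(200):
    arch, choices = sample_architecture()
    reward = evaluate_architecture(arch)
    baseline = 0.9 * baseline + 0.1 * reward        # moving-average baseline
    advantage = reward - baseline
    for key, idx in choices.items():                # REINFORCE update per decision head
        p = softmax(logits[key])
        grad = -p
        grad[idx] += 1.0                            # grad of log-prob = one_hot - p
        logits[key] += lr * advantage * grad

best, _ = sample_architecture()
print("sampled architecture after search:", best)
```

In the full system, each candidate evaluation would be dispatched by the manager to a worker node, so many architectures train in parallel while the controller is updated from the returned rewards.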