Nonlinearity gives rise to diverse dynamical behaviors across science and engineering. While the analysis and control of linear systems are well understood, there is no general framework for nonlinear systems. According to Koopman theory, there exist coordinate transformations under which strongly nonlinear dynamics become approximately linear. Such transformations have the potential to enable nonlinear prediction, estimation, and control using linear theory. However, they are challenging to find. This work leverages deep learning to discover such coordinate transformations from data. Our transformations are parsimonious and interpretable by construction, embedding the dynamics in a low-dimensional space. Using a modified autoencoder, we identify nonlinear coordinates on which the dynamics are globally linear. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra while maintaining a compact and efficient embedding. Thus, we benefit from the power of deep learning while retaining the physical interpretability of Koopman embeddings.
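To illustrate the kind of architecture described above, the following is a minimal sketch of an autoencoder whose latent dynamics are constrained to be linear: an encoder lifts the state into learned coordinates, a single linear map advances those coordinates one time step, and a decoder maps back to the original state. The layer sizes, loss terms, and their equal weighting are assumptions for illustration, not the exact configuration used in this work (which also includes an auxiliary treatment of continuous spectra).

```python
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    """Sketch of an autoencoder with a linear latent-dynamics layer."""

    def __init__(self, state_dim: int, latent_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Encoder: nonlinear coordinate transformation y = phi(x)
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Approximate Koopman operator: linear advance of the latent state
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)
        # Decoder: inverse transformation x = phi^{-1}(y)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, x_t: torch.Tensor):
        y_t = self.encoder(x_t)        # lift to (approximately) linear coordinates
        y_next = self.K(y_t)           # advance linearly by one time step
        x_recon = self.decoder(y_t)    # reconstruct the current state
        x_pred = self.decoder(y_next)  # predict the next state
        return x_recon, x_pred, y_t, y_next


def loss_fn(model: KoopmanAutoencoder, x_t: torch.Tensor, x_next: torch.Tensor):
    """Combine reconstruction, state prediction, and latent-linearity losses."""
    x_recon, x_pred, y_t, y_next = model(x_t)
    recon = nn.functional.mse_loss(x_recon, x_t)            # x ≈ decode(encode(x))
    pred = nn.functional.mse_loss(x_pred, x_next)            # predicted next state
    lin = nn.functional.mse_loss(y_next, model.encoder(x_next))  # linearity in latent space
    return recon + pred + lin
```

The key design choice is that all temporal evolution happens in the latent space through the single linear layer `K`, so prediction, estimation, and control can in principle reuse linear-systems tools (e.g., the eigendecomposition of `K`) once the encoder and decoder are trained.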