Deep neural networks have become a cornerstone of modern machine learning, yet their multi-layer structure, nonlinearities, and intricate optimization processes pose considerable theoretical challenges. In this talk, I will review recent advances in random matrix analysis that shed new light on these complex ML models. Starting from the foundational case of linear regression, I will demonstrate how the proposed analysis extends naturally to shallow nonlinear networks and, ultimately, to deep nonlinear network models. I will also discuss practical implications that arise from these theoretical insights, such as compressing neural network models or designing "equivalent" ones.