In systems with many parameters, when no single parameter dominates, or when their influences “average out”, one often sees concentration and universal laws, such as central limit behavior. As the system is tuned, it can also exhibit sharp thresholds and phase transitions, like the connectivity threshold in random graphs. High-dimensional probability aims to quantify these effects: to identify which events are typical, how strongly they concentrate, and how they depend on the relevant parameters. These tools help explain complex systems, inform algorithm design, and can be used, via the probabilistic method, to construct objects with properties that are difficult to achieve deterministically.
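As one concrete instance of such a sharp threshold (a classical Erdős–Rényi fact, stated here only for illustration, with log denoting the natural logarithm): if the edge probability is p = (log n + c)/n, then

    Pr[ G(n, p) is connected ] → exp(−e^{−c})   as n → ∞,

so moving c from very negative to very positive switches the graph from disconnected to connected with probability approaching one.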
In this talk, I will present these themes through four case studies in which high-dimensional probability plays a central role. In numerical linear algebra, probabilistic analysis helps explain why Gaussian elimination with partial pivoting, despite severe worst-case examples, typically behaves stably (see the sketch after this paragraph). In random geometric graphs, it enables recovery of hidden geometry from sparse, noisy connectivity data by turning reliable local statistics into global distance information. In inference on trees, it clarifies when information truly propagates and when it provably cannot be extracted by limited-complexity computations, linking probabilistic thresholds to depth advantages. Finally, in convex geometry, probabilistic constructions reveal intrinsic barriers to approximating high-dimensional bodies by simple polytopes and clarify why natural choices of a scaling “center” can fail in asymmetric settings.
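A minimal numerical sketch of the first contrast, included here only for illustration (it assumes NumPy and SciPy are available, and uses the final-stage ratio max|U| / max|A| as a standard proxy for the pivot growth factor), compares a Gaussian random matrix with Wilkinson's classical worst-case example for partial pivoting:

    import numpy as np
    from scipy.linalg import lu

    def growth_factor(A):
        # Proxy for the GEPP growth factor: max_ij |U_ij| / max_ij |A_ij|.
        # (The textbook definition also tracks intermediate entries; this
        # final-stage ratio is a standard surrogate.)
        P, L, U = lu(A)  # LU with partial pivoting: A = P @ L @ U
        return np.abs(U).max() / np.abs(A).max()

    rng = np.random.default_rng(0)
    n = 200

    # Gaussian random matrix: growth is typically modest, far from exponential.
    A = rng.standard_normal((n, n))
    print("Gaussian random matrix:", growth_factor(A))

    # Wilkinson's worst-case matrix: unit diagonal, -1 below the diagonal,
    # last column all ones; partial pivoting produces growth 2^(n-1).
    W = np.tril(-np.ones((n, n)), -1) + np.eye(n)
    W[:, -1] = 1.0
    print("Wilkinson worst case:", growth_factor(W), "vs 2^(n-1) =", 2.0 ** (n - 1))

Running this typically shows single- or double-digit growth for the random matrix, against the astronomically large 2^(n-1) growth of the worst-case example, which is the gap the probabilistic analysis explains.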
