In recent years, it has been extensively demonstrated that phase transitions can be detected from data by analyzing the output of neural networks (NNs) trained to solve specific classification problems. In this talk, we present a framework for the autonomous detection of phase transitions based on analytical solutions to these classification problems. We discuss the conditions that enable such approaches and, using our Julia implementation, showcase their computational advantage over NN-based methods.
The identification of phase transitions and the classification of different phases of matter from data are among the most popular applications of machine learning (ML) in condensed matter physics. NN-based approaches have proven particularly powerful due to the ability of NNs to approximate arbitrary functions. Many such approaches work by computing indicators of phase transitions from the output of NNs trained to solve specific classification problems.
The optimal solutions to these classification problems are given by Bayes classifiers, which take into account the probability distributions underlying the physical system under consideration. We show that in many scenarios arising in (quantum) many-body physics, the Bayes-optimal indicators can be well approximated (or even computed exactly) from readily available data by leveraging prior system knowledge. This constitutes an alternative to NN-based classification for detecting phase transitions from data. Here, we contrast these two approaches based on our Julia implementation.
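To make the idea concrete, the following is a minimal toy sketch (not the authors' Julia code, and a deliberately simplified stand-in for their framework) of computing a Bayes-optimal indicator when the underlying distributions are known. In this hypothetical model, samples at tuning parameter p follow a Gaussian whose mean jumps at the critical point p_c = 0.5; for each tentative split point p*, the Bayes decision rule for the two classes "p < p*" vs. "p >= p*" is evaluated directly from the likelihoods, with no NN training, and the mean classification accuracy peaks near the true transition:

```python
# Illustrative toy example: Bayes-optimal detection of a phase transition.
# All model details (Gaussian samples, p_c = 0.5, the grid) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

ps = np.arange(21) / 20.0  # grid of tuning-parameter values in [0, 1]

def mu(p):
    # Toy order parameter: the sample mean jumps at the transition p_c = 0.5.
    return 0.0 if p < 0.5 else 2.0

def likelihood(x, p):
    # Known Gaussian likelihood of samples x at parameter p (unit variance).
    return np.exp(-0.5 * (x - mu(p)) ** 2) / np.sqrt(2 * np.pi)

def mean_accuracy(pstar, n_samples=2000):
    # Bayes classifier for "p < pstar" vs. "p >= pstar": compare the mean
    # likelihood of each sample under the two classes (uniform prior over p).
    errs = []
    for p in ps:
        x = rng.normal(mu(p), 1.0, n_samples)
        lo = np.mean([likelihood(x, q) for q in ps[ps < pstar]], axis=0)
        hi = np.mean([likelihood(x, q) for q in ps[ps >= pstar]], axis=0)
        errs.append(np.mean((hi > lo) != (p >= pstar)))
    return 1.0 - np.mean(errs)

# The indicator: mean accuracy as a function of the tentative split point,
# which is maximal when pstar coincides with the true critical point.
accs = [mean_accuracy(pstar) for pstar in ps[1:-1]]
p_est = ps[1:-1][int(np.argmax(accs))]
```

Because the distributions are available in closed form here, the indicator is evaluated exactly once per grid point, whereas an NN-based scheme would retrain a classifier for every tentative split, which is the source of the computational advantage discussed in the talk.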