About

You can reach me at micragone (at) berkeley (dot) edu.

CV (Mar 2024), Twitter, LinkedIn, Publications (scroll to bottom), Research Statement

My name is Michael Ragone, and I’m a Morrey visiting assistant professor in mathematics at UC Berkeley. I work at the intersection of quantum many-body physics and quantum computation/information theory, with a strong emphasis on applied Lie representation theory. Recently, I’ve been thinking about open quantum systems and methods of preparing quantum thermal states.

Dissertation work: SO(n) spin chains

I completed my dissertation (arXiv:2403.09951) under Bruno Nachtergaele at UC Davis, and I still dabble in closely related problems. We study a fascinating class of ground state structure problems for quantum spin systems with natural Lie group symmetries. “Quantum spin systems” refers to a mathematically rigorous framework, arising from condensed matter and quantum information theory, which models particles with finite degrees of freedom on lattices (think cold atoms in an array, or vacancies in a crystal, that sort of thing). When these systems have a Lie group symmetry, like a rotational SO(n) symmetry, tricky problems like determining spectral gaps or uniqueness of ground states become much more tractable. A classic symmetry I like to think about is rotational symmetry: “rotated ferromagnets are still ferromagnets”. My main work is on a class of SO(n)-invariant ground states which closely resemble the AKLT chain, the prototype model for symmetry protected topological (SPT) phases. My thesis is chock-full of details and specific examples to help make the story clear. I hope it helps!
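For readers who want a formula to anchor on, the classic example is worth writing down. The spin-1 AKLT Hamiltonian (the SO(3) case that the SO(n) chains generalize) is a sum over neighboring pairs of the projector onto combined spin 2, which works out to a quadratic polynomial in the rotationally invariant Heisenberg term:

```latex
% Spin-1 AKLT chain: each term projects a neighboring pair onto
% total spin 2, and depends only on the SO(3)-invariant S_i . S_{i+1}.
H_{\mathrm{AKLT}} \;=\; \sum_{i} P^{(2)}_{i,i+1}
  \;=\; \sum_{i} \left[\, \tfrac{1}{3}
  \;+\; \tfrac{1}{2}\, \vec{S}_i \cdot \vec{S}_{i+1}
  \;+\; \tfrac{1}{6}\, \bigl( \vec{S}_i \cdot \vec{S}_{i+1} \bigr)^{2} \right]
```

Since each term depends only on the scalar product of neighboring spins, the rotational invariance is manifest, and that is exactly the structure the SO(n) generalizations preserve.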

Los Alamos National Laboratory and Pacific Northwest National Laboratory

Throughout my time at LANL and PNNL, I worked on a handful of teams that aimed to use tools from representation theory and Lie group geometry to study the potential performance of variational quantum algorithms and quantum machine learning. Much of the analysis revolved around the “dynamical Lie algebra”, a central object in quantum control theory (and other branches of physics) that lets us rigorously describe the somewhat vague notion of “expressibility” of a parameterized quantum circuit. Parameterized quantum circuits are essentially quantum circuits with little dials on them, and these little dials allow us to explore spaces of quantum gates, which are just collections of unitary maps. The expressibility of a circuit describes how much of the space of unitaries it can explore: circuits with high expressibility can reach many unitaries, while circuits with low expressibility can only reach a few.
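To make “dynamical Lie algebra” concrete: given the Hamiltonians that generate a circuit’s gates, the DLA is the Lie algebra you get by repeatedly taking commutators until nothing new appears. Here is a toy sketch of that closure computation; the two-qubit generator set (an XX coupling plus local Z fields) is purely my own illustrative choice, not one taken from the papers below.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def independent(basis, candidate, tol=1e-10):
    """True if `candidate` lies outside the linear span of `basis`."""
    stack = np.array([b.ravel() for b in basis] + [candidate.ravel()])
    return np.linalg.matrix_rank(stack, tol=tol) > len(basis)

def dynamical_lie_algebra(generators):
    """Close the generators under commutators [A, B] = AB - BA."""
    basis = []
    for g in generators:
        if not basis or independent(basis, g):
            basis.append(g)
    frontier = list(basis)
    while frontier:
        fresh = []
        # Commute every newly found element against the current basis.
        for a, b in [(a, b) for a in frontier for b in basis]:
            c = a @ b - b @ a
            if np.linalg.norm(c) > 1e-10 and independent(basis, c):
                basis.append(c)
                fresh.append(c)
        frontier = fresh
    return basis

# Toy two-qubit generators: i*(X⊗X), i*(Z⊗I), i*(I⊗Z).
gens = [1j * kron(X, X), 1j * kron(Z, I), 1j * kron(I, Z)]
print("DLA dimension:", len(dynamical_lie_algebra(gens)))
```

For this generating set the closure stops at dimension 6, well short of the 15-dimensional su(4) of all two-qubit gates; that gap is precisely the sense in which such a circuit has limited expressibility.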

What the field has realized is that, for the existing common models of quantum machine learning, there is a fundamental tradeoff: highly expressible circuits can become prohibitively hard to train, due to the barren plateau phenomenon. But circuits with low expressibility are typically classically simulable, meaning we don’t really need a quantum computer to perform these computations. One may hope that a happy middle ground exists, but detailed classifications of dynamical Lie algebras seem to suggest that it does not.
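Here is a rough numerical illustration of the trainability half of that tradeoff, a barren plateau in miniature. The ansatz (random RY rotations followed by CZ ladders) and all parameter choices are my own stand-ins, not a construction from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = np.array([[0, -1j], [1j, 0]])

def rot_y(theta):
    """Single-qubit RY(theta) = exp(-i * theta * Y / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y

def cz_ladder(n):
    """Product of CZ gates on neighboring pairs (q, q+1)."""
    U = np.eye(2 ** n, dtype=complex)
    for q in range(n - 1):
        for b in range(2 ** n):
            # Flip the sign where qubits q and q+1 are both |1>.
            if (b >> (n - 1 - q)) & 1 and (b >> (n - 2 - q)) & 1:
                U[b, b] *= -1
    return U

def apply_circuit(thetas, n, psi):
    """Layers of RY on every qubit followed by a CZ ladder."""
    ladder = cz_ladder(n)
    for layer in thetas:
        U = np.array([[1.0 + 0j]])
        for t in layer:
            U = np.kron(U, rot_y(t))
        psi = ladder @ (U @ psi)
    return psi

def gradient_variance(n, depth, samples=200):
    """Var of dC/dtheta_1 (parameter shift) for C = <Z on qubit 0>."""
    Z0 = np.diag([(-1.0) ** ((b >> (n - 1)) & 1) for b in range(2 ** n)])
    psi0 = np.zeros(2 ** n, dtype=complex)
    psi0[0] = 1.0
    grads = []
    for _ in range(samples):
        thetas = rng.uniform(0, 2 * np.pi, size=(depth, n))
        def cost(shift):
            th = thetas.copy()
            th[0, 0] += shift  # shift only the first parameter
            psi = apply_circuit(th, n, psi0)
            return np.real(psi.conj() @ (Z0 @ psi))
        grads.append(0.5 * (cost(np.pi / 2) - cost(-np.pi / 2)))
    return np.var(grads)

for n in range(2, 7):
    print(n, gradient_variance(n, depth=2 * n))  # variance shrinks with n
```

The printed variances shrink quickly as the qubit count grows, which is the practical content of the barren plateau statement: the cost landscape flattens, and gradient-based training starves.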

This doesn’t necessarily mean that quantum machine learning is doomed, but it does mean that to make real progress we need at least one of two things (and my personal suspicion is that we will need both):

  1. We need more thoughtful models for quantum machine learning: different architectures, different loss functions, different optimization routines…something to evade the no-go theorems of the last 5-10 years. As to what to try, that’s a great question.
  2. We need good heuristics for quantum machine learning. Classical machine learning owes much of its success to years of thorough experimentation and the development of good heuristics. The development of rigorous mathematical theory lags by years and is ultimately guided by what works, in some analogy to the relationship between physics and mathematical physics. But despite incredible progress in building larger quantum computers with lower error rates, we’re still a long way from having a machine we can play with the way we can play on our laptops.

Publications

The Many-Body Ground State Manifold of Flat Band Interacting Hamiltonian for Magic Angle Twisted Bilayer Graphene, K. Stubbs, MR, A. MacDonald, L. Lin. (2025) (arXiv:2503.20060)

SO(n) AKLT Chains as Symmetry Protected Topological Quantum Ground States, MR. Dissertation work under B. Nachtergaele. (2024) (arXiv:2403.09951)

O(n)-to-SO(n) Symmetry Breaking Quantum Ground States, B. Nachtergaele, MR (in preparation)

A Unified Theory of Barren Plateaus for Deep Parameterized Quantum Circuits, MR, B. N. Bakalov, F. Sauvage, A. F. Kemper, C. Ortiz Marrero, M. Larocca, M. Cerezo (2024) (https://rdcu.be/dRXRa)

A Theory for Equivariant Quantum Neural Networks, Q. Nguyen, L. Schatzki, P. Braccia, MR, F. Sauvage, P. Coles, M. Larocca, M. Cerezo (2024) (https://link.aps.org/doi/10.1103/PRXQuantum.5.020328)

Representation Theory for Geometric Quantum Machine Learning, MR, P. Braccia, Q. Nguyen, L. Schatzki, P. Coles, F. Sauvage, M. Larocca, M. Cerezo (2023) (arXiv:2210.07980)

The Power of Quantum Convolutional Neural Networks, P. Braccia, F. Sauvage, Q. Nguyen, L. Schatzki, MR, P. Coles, M. Larocca, M. Cerezo (in preparation)

Selected Talk Slides

UC Davis PhD Exit Seminar, March 2024: The Curious Symmetry Breaking of O(n) Quantum Spin Chains (slides)

QIP 2024: Dynamical Lie Algebras and Barren Plateaus (joint talk with Enrico Fontana) (slides, recorded talk)

2023 NC State Quantum Workshop on Quantum Machine Learning: Representation Theory for Geometric Quantum Machine Learning (slides, recorded talk)

Old Stuff: Computational Neuroscience

For a good chunk of my college career at the University of Arizona, I researched computational neuroscience in the Computational and Experimental Neuroscience Lab (CENL) under Dr. Jean-Marc Fellous. We developed a biophysical model of the rat’s spatial navigation system, and we collaborated with the Laboratory for Information Processing Systems (LIPS) to investigate coding theoretic properties of place cell networks and sharp-wave ripple events. Here’s our abstract from SFN 2016. There’s lots of other interesting work happening in both labs—check them out!

Old Stuff: Engineering Senior Design

My engineering senior design team at the University of Arizona created a machine learning denoiser for General Dynamics, built for the Coast Guard. You see, the Coast Guard regularly receives distress calls from boats that are…well, distressed. These radio signals are often corrupted by atmospheric interference, so the Coast Guard manually filters them until they are listenable. We were tasked with finding a better solution using machine learning. So we created a framework wherein noisy audio signals are processed, converted into a form that highlights vocal features, and fed to a specially trained autoencoder. Our results show promise, and I suspect that, following some refinement at General Dynamics, the Coast Guard may have a powerful new tool for handling incoming calls.
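For the curious, here is a minimal sketch of the general shape of that pipeline; the feature dimensions, architecture, and training loop below are stand-ins I have made up, not the team’s actual design.

```python
import torch
import torch.nn as nn

class SpectrogramDenoiser(nn.Module):
    """Denoising autoencoder over per-frame spectrogram features."""
    def __init__(self, n_bins=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, n_bins),
        )

    def forward(self, x):  # x: (batch, frames, n_bins)
        return self.decoder(self.encoder(x))

model = SpectrogramDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for (noisy, clean) spectrogram pairs from real recordings.
noisy = torch.randn(8, 100, 128)
clean = torch.randn(8, 100, 128)

for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
```

The real system would compute spectrogram-style features from the radio audio and train on paired noisy/clean recordings; the toy tensors here just mark where those would plug in.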

Random stuff

I’m also a huge coffee and food nerd. I’ve scattered pictures of stuff I’ve made around the website—if you have coffee/food/music suggestions, I’d love to hear them!