Hi! I am a second-year Ph.D. student in Computer Science at Northwestern University. My research focuses on quantum simulation and high-dimensional data processing. I am particularly interested in developing the algorithms and software needed to efficiently learn from and compute with quantum information. I am grateful for the support of the SandboxAQ Research Excellence Scholarship.
I graduated with a B.Sc. in mathematics and a minor in computer science from the University of Chicago in June 2024. Since then, I have been studying the theory of algorithms and topological quantum error correction. As an undergraduate, my research focused on quantum adiabatic optimization and mathematical physics. Before that, I worked on machine learning models for accelerator physics and cellular agriculture.
I love to collaborate. Please feel free to email me at `kabir [at] u [dot] northwestern [dot] edu`. I am also on LinkedIn and GitHub.
Abstract: The theory of quantum signal processing (QSP) enables quantum computers to implement a generic class of scalar polynomials by encoding them as unitary transformations on a single qubit. We introduce a protocol that extends QSP by configuring the signal operator, which encodes the polynomial's input, with QCrank, a quantum read-only memory (QROM) primitive for parallel access to real numbers. When given a QSP-achievable polynomial and a vector of scalars, our protocol evaluates the polynomial at each component of the vector to additive precision $\varepsilon$ in $O(1/\varepsilon^{2})$ measurement shots. We compare vectorized QSP over a length-$n$ vector with executing $n$ independent scalar QSP circuits and analyze the costs incurred in circuit width and entangling gate count. For experiments over practical workloads, we develop a comprehensive testbed of shot-based QSP simulations in Qiskit, which we can provide early access to upon request. This is joint work with Jan Balewski and Daan Camps from LBNL, supported by NERSC.
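For background, scalar QSP itself fits in a few lines of numpy. The sketch below is a toy illustration of textbook single-qubit QSP, not our vectorized protocol or QCrank: it interleaves the signal operator $W(x)$ with $Z$-phase rotations, and with all phases set to zero the sequence reproduces Chebyshev polynomials.

```python
import numpy as np

def signal_op(x):
    # Signal operator W(x): encodes the scalar x in [-1, 1] on one qubit.
    s = np.sqrt(1.0 - x * x)
    return np.array([[x, 1j * s], [1j * s, x]])

def phase_op(phi):
    # Signal-processing rotation exp(i * phi * Z).
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def qsp_unitary(x, phases):
    # QSP sequence e^{i phi_0 Z} W(x) e^{i phi_1 Z} ... W(x) e^{i phi_d Z};
    # its <0|U|0> entry is a degree-d polynomial in x.
    U = phase_op(phases[0])
    for phi in phases[1:]:
        U = U @ signal_op(x) @ phase_op(phi)
    return U

# With all phases zero, <0|W(x)^d|0> = T_d(x), the degree-d Chebyshev
# polynomial of the first kind: T_3(0.3) = 4 * 0.3**3 - 3 * 0.3 = -0.792.
x, d = 0.3, 3
poly_val = qsp_unitary(x, [0.0] * (d + 1))[0, 0].real
```

Estimating $|\langle 0|U|0\rangle|^{2}$ from measurement shots, rather than reading it off the matrix as here, is what introduces the $O(1/\varepsilon^{2})$ shot count quoted above.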
Abstract: Fault-tolerant quantum memory is essential for large-scale quantum computer systems and has recently achieved major experimental and theoretical advances. In 2023, McEwen, Bacon, and Gidney showed that walking code circuits move surface code logical qubits diagonally while maintaining logical performance comparable to standard surface code circuits. Building on this work, we apply gliding codes to create access hallways in densely packed qubit arrays using minimal ancilla space. This approach provides arbitrary access to stored qubits and supports cache-like eviction of qubits from the storage array. For a storage layout of $l\times w$ surface code logical qubits, our design reduces spacetime volume by $O(lw)$ when compared to loose packings, which allocate rows and columns of ancilla qubit patches between neighboring logical qubits.
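As a back-of-the-envelope illustration of the $O(lw)$ saving (my own toy patch count, not the paper's layout or cost model): a loose packing that interleaves an ancilla row and column between every pair of neighboring logical patches occupies roughly a $(2l-1)\times(2w-1)$ grid, while a dense packing needs only the $l\times w$ storage block plus a thin access hallway.

```python
def loose_patch_count(l, w):
    # Loose packing: ancilla rows/columns interleaved between logical
    # patches -> roughly a (2l - 1) x (2w - 1) grid of patches.
    return (2 * l - 1) * (2 * w - 1)

def dense_patch_count(l, w, hallway_rows=1):
    # Dense packing: l x w storage block plus a thin hallway opened on
    # demand by gliding codes (toy assumption: one extra row of patches).
    return l * w + hallway_rows * w

l, w = 10, 10
saved = loose_patch_count(l, w) - dense_patch_count(l, w)
# The difference grows like 3 * l * w, i.e. the saving is Theta(lw) per round.
```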
Abstract: Combinatorial optimization problems that arise in science and industry typically have constraints. Yet the presence of constraints makes them challenging to tackle using both classical and quantum optimization algorithms. We propose a new quantum algorithm for constrained optimization, which we call quantum constrained Hamiltonian optimization (Q-CHOP). Our algorithm leverages the observation that for many problems, while the best solution is difficult to find, the worst feasible (constraint-satisfying) solution is known. The basic idea is to enforce a Hamiltonian constraint at all times, thereby restricting evolution to the subspace of feasible states, and slowly "rotate" an objective Hamiltonian to trace an adiabatic path from the worst feasible state to the best feasible state. We additionally propose a version of Q-CHOP that can start in any feasible state. Finally, we benchmark Q-CHOP against the commonly-used adiabatic algorithm with constraints enforced using a penalty term and find that Q-CHOP performs consistently better on a wide range of problems, including textbook problems on graphs, knapsack, and combinatorial auctions, as well as a real-world financial use case, namely bond exchange-traded fund basket optimization.
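The adiabatic rotation can be caricatured with exact diagonalization on a toy knapsack instance. Everything below is my own illustration under simplifying assumptions: I pin the known worst feasible solution with a rank-one Hamiltonian and add a small mixer that hops only between feasible states; the paper's Hamiltonians and schedule differ.

```python
import numpy as np
from itertools import combinations

# Toy knapsack instance (for illustration): maximize value, weight <= capacity.
weights, values, capacity = [2, 3, 4], [3, 4, 5], 5

n = len(weights)
feasible = [s for r in range(n + 1) for s in combinations(range(n), r)
            if sum(weights[i] for i in s) <= capacity]
dim = len(feasible)

# Objective Hamiltonian on the feasible subspace: diagonal, energy = -value,
# so the best solution is the ground state and the worst is the top state.
obj = np.array([-sum(values[i] for i in s) for s in feasible], dtype=float)
H_obj = np.diag(obj / np.abs(obj).max())

# Pin the known worst feasible solution (here, the empty set) at theta = 0.
w = int(np.argmax(obj))
H_pin = np.zeros((dim, dim))
H_pin[w, w] = -1.0

# Small mixer hopping between feasible states that differ by one item;
# it never leaves the feasible subspace (a toy stand-in for the paper's
# constraint-preserving dynamics).
H_mix = np.zeros((dim, dim))
for a, sa in enumerate(feasible):
    for b, sb in enumerate(feasible):
        if len(set(sa) ^ set(sb)) == 1:
            H_mix[a, b] = -1.0

# Rotate from the pinning Hamiltonian to the objective, following the
# instantaneous ground state and recording the spectral gap.
gaps = []
for theta in np.linspace(0.0, np.pi / 2, 101):
    H = np.cos(theta) * H_pin + np.sin(theta) * H_obj + 0.1 * H_mix
    vals_, vecs = np.linalg.eigh(H)
    gaps.append(vals_[1] - vals_[0])
    ground = vecs[:, 0]

# The final ground state is concentrated on the optimal feasible subset.
best = feasible[int(np.argmax(np.abs(ground)))]
```

The recorded gap stays open along the whole path, which is what lets a sufficiently slow rotation track the ground state from the worst feasible solution to the best one.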
Abstract: Future improvements in particle accelerator performance are predicated on increasingly accurate online modeling of accelerators. Hysteresis effects in magnetic, mechanical, and material components of accelerators are often neglected in online accelerator models used to inform control algorithms, even though reproducibility errors from systems exhibiting hysteresis are not negligible in high-precision accelerators. In this Letter, we combine the classical Preisach model of hysteresis with machine learning techniques to efficiently create nonparametric, high-fidelity models of arbitrary systems exhibiting hysteresis. We experimentally demonstrate how these methods can be used in situ, where a hysteresis model of an accelerator magnet is combined with a Bayesian statistical model of the beam response, allowing characterization of magnetic hysteresis solely from beam-based measurements. Finally, we explore how using these joint hysteresis-Bayesian statistical models allows us to overcome optimization performance limitations that arise when hysteresis effects are ignored.
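The classical Preisach model at the heart of this approach is easy to sketch. Below is a minimal numpy toy (my own illustration, not the paper's calibrated model): a grid of bistable relay operators ("hysterons") whose weighted sum produces a hysteresis loop, so the response at a given input depends on the input's history.

```python
import numpy as np

def preisach_response(h_sequence, alphas, betas, weights):
    # Discrete Preisach model: a weighted superposition of relay hysterons.
    # Relay (alpha, beta), with alpha >= beta, switches up (+1) when the
    # input reaches alpha, down (-1) when it falls to beta, and otherwise
    # remembers its current state.
    states = -np.ones_like(weights)  # start in negative saturation
    outputs = []
    for h in h_sequence:
        states = np.where(h >= alphas, 1.0,
                 np.where(h <= betas, -1.0, states))
        outputs.append(np.sum(weights * states))
    return np.array(outputs)

# Uniform grid of hysterons on the Preisach triangle alpha >= beta.
# A uniform weight density is a toy choice; learning this density from
# data is where the nonparametric machine-learning model would come in.
grid = np.linspace(-1.0, 1.0, 21)
A, B = np.meshgrid(grid, grid, indexing="ij")
alphas, betas = A[A >= B], B[A >= B]
weights = np.ones_like(alphas) / alphas.size

# Sweep the input up and back down: the two branches disagree at the
# same input value, tracing out the hysteresis loop.
h_seq = np.concatenate([np.linspace(-1.0, 1.0, 41),
                        np.linspace(1.0, -1.0, 41)])
out = preisach_response(h_seq, alphas, betas, weights)
# out[20] (h = 0 on the upsweep) differs from out[61] (h = 0 on the downsweep)
```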