A few words on past and present projects with links to more information
There is a subtle problem in quantum computing: there is more than one way to realize the same quantum gate. Many different physical processes can realize the same transformation on the state of a qubit, so the question arises: which way is best? The "Space Curve Quantum Control" (SCQC) formalism is an approach to finding these "best" choices.
In particular, it lets us translate a quantum gate into a curve in 3D space (thus, a space curve). This translation turns previously hard-to-quantify properties of a "good" gate into nice geometric quantities like arclength, curvature, and area, so we can use tools from fields like differential geometry and calculus to find these "best" versions of quantum gates.
Within the SCQC formalism, my project focuses on gate sequences and feasibility. Prior work only considers one quantum gate at a time, so I study how composing multiple gates introduces more freedom and additional considerations. Additionally, existing methods can produce gates which aren't always possible to run on current quantum computers, so I am studying what feasible gates look like under the transformation to a space curve and how to build gates that are both robust and feasible within the formalism.
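As a small illustration of the geometric side (not the full SCQC machinery), the sketch below numerically computes the arclength and curvature of a discretized 3D curve; in the formalism, quantities like these encode properties of the underlying control. The helix example is purely illustrative.

```python
import numpy as np

def curve_geometry(r, t):
    """Estimate arclength and curvature of a discretized 3D curve.

    r : (N, 3) array of points along the curve
    t : (N,) array of parameter values (e.g. time)
    """
    dr = np.gradient(r, t, axis=0)      # velocity r'(t)
    ddr = np.gradient(dr, t, axis=0)    # acceleration r''(t)
    speed = np.linalg.norm(dr, axis=1)

    # Arclength: trapezoid-rule integral of |r'(t)| dt
    arclength = float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t)))

    # Curvature: |r' x r''| / |r'|^3
    curvature = np.linalg.norm(np.cross(dr, ddr), axis=1) / speed**3
    return arclength, curvature

# Example: a helix, whose curvature is constant (= 0.8 for these parameters)
t = np.linspace(0, 4 * np.pi, 2000)
r = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)
L, kappa = curve_geometry(r, t)
print(f"arclength ~ {L:.3f}, mean curvature ~ {kappa.mean():.3f}")
```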
This is an ongoing research project under Prof. Murphy Niu.
An instance of the PPO architecture on a digital twin of the physical system, with noise included. The actor (policy NN) can be pretrained on known control solutions or restricted to a Hamiltonian ansatz to guide PPO convergence as desired.
As quantum devices realize more and more physical qubits, they are still limited to running circuits of one- and two-qubit gates. While this is sufficient for universality, it can lead to circuits of high depth, which are subject to more errors. Multi-qubit gates could significantly reduce circuit depth. However, multi-qubit gates come with a much larger control space than smaller gates, and this increased complexity is a fundamental barrier to quantum control optimization (choosing how the gate is realized): the space is complicated and too large to sweep.
This is where the reinforcement learning approach is effective. When the control parameter space grows far too large for a team to sweep manually, RL-based gate design can still identify effective controls. Prior work also indicates that this approach is robust and that RL models are effective at control optimization.
Our work combines gradient-based reinforcement learning with techniques such as supervised pretraining to warm-start methods like Proximal Policy Optimization (PPO) and tailor policy solutions towards the goal at hand. This allows us to improve upon various previously known control solutions while focusing on reducing leakage, gate times, and gate infidelities.
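For a flavor of the setup, here is a minimal, hypothetical sketch (not our training code): a toy single-qubit pulse-shaping environment in the gymnasium API, trained with stable-baselines3's PPO. The real project targets multi-qubit gates on a noisy digital twin and adds supervised pretraining of the actor; the environment, reward, and hyperparameters below are illustrative only.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from scipy.linalg import expm
from stable_baselines3 import PPO

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

class ToyGateEnv(gym.Env):
    """Toy single-qubit pulse shaping: reach a target unitary in n_slices steps."""

    def __init__(self, target=SX, n_slices=20, dt=0.1):
        super().__init__()
        self.target, self.n_slices, self.dt = target, n_slices, dt
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(9,), dtype=np.float32)

    def _obs(self):
        u = self.U.flatten()
        frac = self.step_idx / self.n_slices
        return np.concatenate([u.real, u.imag, [frac]]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.U = np.eye(2, dtype=complex)
        self.step_idx = 0
        return self._obs(), {}

    def step(self, action):
        # Piecewise-constant control Hamiltonian for this time slice
        H = float(action[0]) * SX + float(action[1]) * SY
        self.U = expm(-1j * H * self.dt) @ self.U
        self.step_idx += 1
        done = self.step_idx >= self.n_slices
        # Terminal reward: a simple gate-fidelity proxy |Tr(target^† U)| / 2
        reward = float(np.abs(np.trace(self.target.conj().T @ self.U)) / 2) if done else 0.0
        return self._obs(), reward, done, False, {}

model = PPO("MlpPolicy", ToyGateEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```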
This project is an ongoing collaboration with PhD student Adyant Kamdar and Prof. Murphy Niu. It was also awarded 10,000 GPU hours by the NVIDIA Academic Grants Program.
This project was part of the Research in Industrial Projects for Students (RIPS) Research Experience for Undergraduates (REU) hosted at the Institute for Pure and Applied Mathematics (IPAM) at UCLA. Our industry sponsor was The Aerospace Corporation.
A lunar time estimation system, which keeps precise and synchronized time estimates across a network of lunar base stations, would be an essential part of any future prolonged lunar mission or lunar surface network. Bhamidipati et al. (2022) propose a modified Diffusion Kalman Filter for lunar timing, which uses intermittently available Earth-GPS signals and measurements from neighboring stations to maintain accurate, synchronized time estimates.
A Kalman filter is a statistical algorithm which balances measurements and model estimates to produce a more accurate estimate than either could give individually. A Diffusion Kalman Filter is a network of Kalman filters which use each other's estimates as additional measurements factoring into their own. My group implemented the Diffusion Kalman Filter proposed by Bhamidipati et al., then optimized the weights each station gives to its neighbors' estimates when forming its own. We evaluated and compared the performance of a range of cost functions and optimization techniques, with applications to optimization of generalized diffusion algorithms.
We also studied the proposed diffusion filter from a statistical perspective, most notably deriving the bias of our filter's estimate and showing that it is non-zero, unlike that of the traditional Kalman filter.
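As a rough sketch of the idea (a simplified scalar version, not the exact formulation from Bhamidipati et al.), each station runs a local Kalman update when a measurement is available and then mixes in its neighbors' intermediate estimates using diffusion weights:

```python
import numpy as np

def diffusion_kalman_step(x, P, z, C, F=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One step of a scalar diffusion Kalman filter over a network of nodes.

    x : (n,) prior state estimates (e.g. clock bias) for n nodes
    P : (n,) prior error variances
    z : (n,) local measurements (np.nan if a node has none this step)
    C : (n, n) row-stochastic diffusion weights; C[i, j] is the weight
        node i gives to node j's intermediate estimate
    """
    n = len(x)
    psi, P_psi = np.empty(n), np.empty(n)
    for i in range(n):
        # Predict
        x_pred = F * x[i]
        P_pred = F * P[i] * F + Q
        if np.isnan(z[i]):
            # No measurement (e.g. Earth-GPS unavailable): keep the prediction
            psi[i], P_psi[i] = x_pred, P_pred
        else:
            # Local Kalman update
            K = P_pred * H / (H * P_pred * H + R)
            psi[i] = x_pred + K * (z[i] - H * x_pred)
            P_psi[i] = (1 - K * H) * P_pred
    # Diffusion: each node combines its neighbors' intermediate estimates
    x_new = C @ psi
    return x_new, P_psi  # variance bookkeeping is simplified in this sketch
```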
Wildfires are an ever-present and dangerous natural phenomenon, especially in California, and their recent rise warrants extra attention. In particular, when fires start it's important to understand how they spread. This project explores the relationship between the spread of invasive weed species across California and the regions with frequent fires.
To perform our analysis, we used linear algebra tools such as projection, linear regression, kNN, and PCA, combining geographical wildfire data with records of weed coverage to correlate the spread of weeds with the frequency of fires.
Although this data doesn't imply that certain weeds cause more or larger fires, we can analyze what makes a plant more likely to have been in a fire. Notable contributing factors include disturbed land, climate conditions, and biome changes (e.g. dried swamps). Using comparisons to data found online as well as to other research on invasive grasses, we can largely confirm our conclusions for several species, notably Ricinus communis (castor bean), Festuca myuros (rat-tail fescue), and Verbascum thapsus (great mullein). Overall, it is evident that certain weeds pose a greater risk than others, even among invasive species as a whole.
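As a rough illustration of the kind of pipeline described above (with placeholder data and hypothetical features, not our actual dataset), PCA for dimensionality reduction can be chained with a kNN classifier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per weed observation (coverage,
# climate variables, land disturbance, ...) with a binary label for
# whether that location was later in a fire.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # placeholder features
y = rng.integers(0, 2, size=500)     # placeholder burned / not-burned labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=3).fit(scaler.transform(X_train))

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(pca.transform(scaler.transform(X_train)), y_train)
print("held-out accuracy:", knn.score(pca.transform(scaler.transform(X_test)), y_test))
```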
Quantum computers offer true randomness: output which is completely independent of all prior information. True randomness has a plethora of applications across cryptography, in hashing algorithms, key generation, and more. However, if someone sends you a number and swears it is truly random and was measured from a quantum computer, how do you know?
This is where the need for a "Proof of Quantumness" lies: a way for someone to prove to you that they have a quantum device. The general structure of such a proof is simple: we send a quantum circuit, they run it on a quantum computer and send us back the sampled outputs. If the outputs match the expected distribution, and we received them faster than a classical computer could have simulated them, then we accept that they have a quantum device.
However, there is another issue here: how do we know the expected distribution of outputs? How can we efficiently verify the information? My research project was a deep dive into one of the most promising candidates for an efficiently verifiable protocol: the Instantaneous Quantum Polynomial (IQP) protocol. The general premise is that you have some secret key which describes the circuit you send. The circuit is hard to simulate in isolation, but when equipped with the secret, you know that the outputs should have a bias towards your secret.
So you simply check the sampled outputs for this bias and accept or reject based on that. However, the initial protocol had a weakness: an adversary could uncover your secret just by seeing the circuit. So the protocol has been adapted and adjusted over time, using clever techniques from coding theory and abstract algebra, to circumvent these "secret-extraction" attacks.
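As a sketch of what the verifier's bias check amounts to (the exact form of the bias and the acceptance threshold depend on the protocol variant; the numbers below are placeholders, not the values from any specific paper):

```python
import numpy as np

def verify_iqp_samples(samples, secret, threshold=0.75):
    """Check returned samples for the expected bias relative to the secret.

    samples   : (m, n) array of 0/1 output bitstrings from the prover
    secret    : (n,) 0/1 secret vector known only to the verifier
    threshold : acceptance cutoff on the observed bias (protocol-dependent)
    """
    # Fraction of samples whose inner product (mod 2) with the secret is 0
    bias = float(np.mean((samples @ secret) % 2 == 0))
    return bias >= threshold, bias

# A prover returning uniformly random bitstrings lands near bias 0.5 and is rejected.
rng = np.random.default_rng(1)
cheater_samples = rng.integers(0, 2, size=(1000, 32))
secret = rng.integers(0, 2, size=32)
print(verify_iqp_samples(cheater_samples, secret))
```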
This project was done as part of a course on Post-Quantum Cryptography
The Stable Matching problem is a famous problem with many applications, the best known of which is matching medical school graduates to hospital residencies. The premise is simple: you have two groups, and every individual ranks the members of the opposite group by preference. With these rankings, you must pair individuals up such that no two individuals who aren't paired together both prefer each other to their current matches.
There is a famously simple algorithm for finding these solutions, the Gale-Shapley algorithm. However, there is an interesting caveat to the solutions it gives: of all the possible "stable matchings", or solutions to the problem, it will always choose the one that is the "best case" for one group and the "worst case" for the other. So the obvious question arises: are there simple ways to get solutions that are more fair?
This question reveals a surprising depth to the problem: every possible "stable" configuration sits in a lattice structure, and with simple modifications to the Gale-Shapley algorithm you can traverse the lattice to find matchings which are optimal by different metrics (best average match, best possible worst match, group equality, etc.).
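For reference, here is a compact implementation of the classic proposer-optimal Gale-Shapley algorithm (the lattice-traversal modifications discussed above are not shown):

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching for complete preference lists.

    proposer_prefs : dict mapping each proposer to a ranked list of receivers
    receiver_prefs : dict mapping each receiver to a ranked list of proposers
    Returns a dict {receiver: proposer} describing the stable matching.
    """
    # Precompute receiver rankings for O(1) "does r prefer p over q?" checks
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # index of next receiver to propose to
    engaged_to = {}                                # receiver -> current proposer
    free = list(proposer_prefs)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p                      # r was free, accepts provisionally
        elif rank[r][p] < rank[r][engaged_to[r]]:  # r prefers the new proposer
            free.append(engaged_to[r])
            engaged_to[r] = p
        else:
            free.append(p)                         # rejected, p stays free
    return engaged_to

# Tiny example: every proposer ends up with their top choice,
# while both receivers get their second choice.
proposers = {"a": ["X", "Y"], "b": ["Y", "X"]}
receivers = {"X": ["b", "a"], "Y": ["a", "b"]}
print(gale_shapley(proposers, receivers))
```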
This project was done as part of a course on Combinatorial Algorithms
I undertook this as a freshman-year project in the CCS Computing Lab course. My approach was divided into two halves: first, designing and training a convolutional neural network to recognize individual mathematical symbols; second, building an image segmentation algorithm to divide a mathematical expression into individual symbols and capture their structure (fractions, exponents, subscripts, etc.).
I used the Kaggle mathematical expressions dataset, which stores the math in a handwriting-based format: the data comes as "brushstrokes", short paths the pencil took, in the order they were drawn. I transformed these brushstrokes into pictures by plotting the strokes and saving the plot as an image. I also augmented the dataset by adding extra entries for underrepresented symbols (e.g. Greek letters), rotating existing entries and adding noise to the images. This was used to train a convolutional neural network (CNN) which could identify mathematical symbols with ~95% accuracy.
The recursive segmentation algorithm was designed to identify superscripts, subscripts, fractions, and square roots. It cropped out individual characters by building bounding boxes around connected regions of the image and fed those crops to the CNN. Of the 11,000 entries in the Kaggle dataset, ~7,500 had formats recognizable by my segmentation algorithm, and the segmenter + CNN pair achieved 88% accuracy on those expressions.
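The CNN was along the lines of the sketch below (the layer sizes, input resolution, and class count here are illustrative, not the exact architecture I used):

```python
import torch
import torch.nn as nn

class SymbolCNN(nn.Module):
    """Small CNN for classifying single handwritten math symbols.

    Assumes 32x32 grayscale crops and a hypothetical 80 symbol classes;
    the real model's input size, depth, and class count may differ.
    """
    def __init__(self, n_classes=80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One training step with cross-entropy loss on a dummy batch
model = SymbolCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images, labels = torch.randn(16, 1, 32, 32), torch.randint(0, 80, (16,))
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```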
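The bounding-box step can be sketched with connected-component labeling, roughly as below (the recursive logic that infers fractions, exponents, and subscripts from relative box positions is the more involved part and is omitted here):

```python
import numpy as np
from scipy import ndimage

def character_bounding_boxes(image, threshold=0.5):
    """Crop candidate symbols out of an expression image.

    image : 2D array with ink intensity in [0, 1] (dark strokes = high values)
    Returns (bounding_box_slices, cropped_symbol) pairs, ordered left to right.
    """
    binary = image > threshold
    labeled, n_components = ndimage.label(binary)   # connected ink regions
    boxes = ndimage.find_objects(labeled)           # (row_slice, col_slice) per region
    boxes = sorted(boxes, key=lambda b: b[1].start) # sort left to right
    return [(box, image[box]) for box in boxes]

# Each crop is resized and passed to the CNN; the relative positions of the
# boxes (vertical offsets, containment) drive the structure detection.
```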
Pipeline stages: image segmentation, neural network character recognition, and reconstructing the full expression.