Bespoke research in the fields of Quantum Machine Learning, Non-equilibrium thermodynamics, and the Physics of Information.
QuantumFlow v0.8.0: Automatic differentiation of quantum circuits and SGD training of quantum networks. Now with TensorFlow 2.0 backend.
Install the latest TensorFlow 2.0 alpha with
> pip install -U --pre tensorflow
and set the QUANTUMFLOW_BACKEND environment variable to tensorflow2.
> QUANTUMFLOW_BACKEND=tensorflow2 make test
A survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.
What's New: Added Porter-Thomas, Epanechnikov, biweight, triweight, Libby-Novick, Gauss hypergeometric, confluent hypergeometric, Johnson SU, and log-Cauchy distributions.
Full LaTeX source distributed on github: https://github.com/gecrooks/fieldguide
[ Full Text ]
Tech. Note 012v1
The Weyl chamber of canonical non-local 2-qubit gates. Papercraft meets quantum computing. Print, cut, fold, and paste. (Should look like Fig. 4 of quant-ph/0209120)
A 2-qubit gate has 15 free parameters. But local 1-qubit gates applied before and after (four gates, three parameters each) account for 12 of them, which leaves a 15-4×3=3-parameter space of non-local gates. Once you remove a bunch of symmetries, you’re left with a tetrahedral chamber in which all your favorite 2-qubit gates live.
Gavin E. Crooks, arXiv:1811.08419 (2018)
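The canonical class of a gate can be extracted numerically: conjugating into the magic (Bell) basis turns local 1-qubit gates into real orthogonal matrices, so the spectrum of MᵀM is invariant under local gates and pins down where a gate sits in the chamber. A minimal sketch along the lines of quant-ph/0209120 (conventions for the magic basis and phase fixing vary between papers):

```python
import numpy as np

# Magic (Bell) basis: local 1-qubit unitaries become real orthogonal here.
Q = np.array([[1,  1j,  0,  0],
              [0,  0,  1j,  1],
              [0,  0,  1j, -1],
              [1, -1j,  0,  0]]) / np.sqrt(2)

def local_invariant_phases(U):
    """Sorted eigenvalue phases of M^T M in the magic basis.

    These are unchanged by local 1-qubit gates, so they label the
    canonical (non-local) class of a 2-qubit gate."""
    U = np.asarray(U, dtype=complex)
    U = U / np.linalg.det(U) ** 0.25          # fix global phase: det(U) = 1
    M = Q.conj().T @ U @ Q
    return np.sort(np.angle(np.linalg.eigvals(M.T @ M)))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
phases = local_invariant_phases(CNOT)
```

In this convention CNOT yields the phases (±π/2, ±π/2), and CZ, which is locally equivalent to CNOT, yields the identical spectrum even though the matrices look nothing alike.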
The Quantum Approximate Optimization Algorithm (QAOA) is a promising approach for programming a near-term gate-based hybrid quantum computer to find good approximate solutions of hard combinatorial problems. However, little is currently known about the capabilities of QAOA, or of the difficulty of the requisite parameter optimization. Here, we study the performance of QAOA on the MaxCut combinatorial optimization problem, optimizing the quantum circuits on a classical computer by automatic differentiation and stochastic gradient descent, using QuantumFlow, a quantum circuit simulator implemented with TensorFlow. Continue reading
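The setup can be sketched end to end on the smallest MaxCut instance, a single edge. QuantumFlow differentiates the circuit with TensorFlow; the sketch below swaps in plain NumPy with finite-difference gradients, so the ansatz and the optimization loop are the point, not the backend. The graph, starting angles, and learning rate are illustrative choices:

```python
import numpy as np

# MaxCut on the smallest graph, a single edge: cut values for |00>,|01>,|10>,|11>
cut = np.array([0.0, 1.0, 1.0, 0.0])

def qaoa_state(gamma, beta):
    """One QAOA layer: diagonal cost-phase separator, then an X-mixer per qubit."""
    psi = np.full(4, 0.5, dtype=complex)          # |++>, uniform superposition
    psi = np.exp(-1j * gamma * cut) * psi         # e^{-i gamma C}
    c, s = np.cos(beta), np.sin(beta)
    rx = np.array([[c, -1j * s], [-1j * s, c]])   # e^{-i beta X}
    return np.kron(rx, rx) @ psi

def expected_cut(params):
    psi = qaoa_state(*params)
    return float(np.sum(cut * np.abs(psi) ** 2))

params = np.array([0.4, 0.3])                     # arbitrary starting angles
eps, lr = 1e-6, 0.1
for _ in range(1000):                             # plain gradient ascent
    grad = np.zeros(2)
    for i in range(2):
        step = np.zeros(2)
        step[i] = eps
        grad[i] = (expected_cut(params + step)
                   - expected_cut(params - step)) / (2 * eps)
    params += lr * grad
```

For a single edge a depth-1 circuit suffices: the optimizer drives the expected cut from roughly 0.3 at the starting angles to essentially the optimal value of 1.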
Announcing QuantumFlow, a Python package that emulates a gate-based quantum computer using modern optimized tensor libraries (numpy, TensorFlow, or torch). The TensorFlow backend can calculate the analytic gradient of a quantum circuit with respect to the circuit’s parameters, and circuits can be optimized to perform a function using (stochastic) gradient descent. The torch backend can accelerate the quantum simulation using commodity classical GPUs.
Various other features include quantum circuits, circuit visualization, noisy quantum operations, gate decompositions, sundry metrics and measures, and an interface to Rigetti’s Forest infrastructure.
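That gradients of circuits are analytic, not merely numerical, can be checked on a one-qubit example. For rotation gates the parameter-shift rule recovers the exact derivative from two circuit evaluations; this is a generic property of such gates, shown here in plain NumPy rather than through QuantumFlow's TensorFlow backend:

```python
import numpy as np

def rx(theta):
    """Single-qubit X rotation, exp(-i theta X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def expect_z(theta):
    """<Z> after RX(theta) acts on |0>; analytically this is cos(theta)."""
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ (np.diag([1, -1]) @ psi)))

theta = 0.7
# Parameter-shift rule: the exact gradient from two shifted evaluations,
# which should match d/dtheta cos(theta) = -sin(theta).
grad = 0.5 * (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2))
```

Unlike a finite-difference stencil, the π/2 shifts give the derivative to machine precision, which is what makes gradient-based training of circuits well conditioned.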
This technical note describes the Drazin pseudo-inverse, which is an under-appreciated mathematical gadget that has several interesting applications to non-equilibrium thermodynamics.
[ Full Text ]
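For a diagonalizable matrix the Drazin inverse is easy to state: invert the nonzero eigenvalues and zero out the rest. A minimal sketch (the rate matrix below is an illustrative two-state Markov generator, chosen because generators always carry a zero eigenvalue; defective matrices need the full Jordan or Schur construction):

```python
import numpy as np

def drazin(A, tol=1e-9):
    """Drazin inverse of a diagonalizable matrix via its eigendecomposition:
    invert eigenvalues away from zero, send (near-)zero eigenvalues to zero."""
    evals, V = np.linalg.eig(np.asarray(A, dtype=complex))
    inv = np.array([1 / l if abs(l) > tol else 0 for l in evals])
    return V @ np.diag(inv) @ np.linalg.inv(V)

# Two-state Markov rate matrix (columns sum to zero), eigenvalues 0 and -3.
W = np.array([[-1.0, 2.0],
              [1.0, -2.0]])
WD = drazin(W)
```

The result satisfies the defining group-inverse identities W WᴰW = W, Wᴰ W Wᴰ = Wᴰ, and W Wᴰ = Wᴰ W, which is what makes it the right gadget for inverting a rate matrix on the complement of its stationary state.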
A brief overview of information measures on classical, discrete probability distributions. 009 v0.7 [ Full Text ]
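The basic quantities the note surveys take only a few lines for discrete distributions; a sketch, measuring in bits and using the usual 0·log 0 = 0 convention:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # 0 log 0 = 0 convention
    return float(-np.sum(nz * np.log2(nz)))

def kl_divergence(p, q):
    """Relative entropy D(p||q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
```

As a sanity check, a fair four-sided die has entropy of exactly 2 bits, and for any p on four outcomes, D(p‖uniform) = 2 − H(p).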
Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such a framework requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Continue reading
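The classical half, random kitchen sinks (random Fourier features), is easy to sketch: random cosine features whose inner products approximate a Gaussian kernel. A NumPy sketch (the feature count, bandwidth, and test points are illustrative; the quantum version replaces this random feature map with a parameterized circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_features, sigma = 3, 20_000, 1.0

# Random kitchen sinks: z(x) = sqrt(2/D) cos(Wx + b), W ~ N(0, 1/sigma^2),
# b ~ Uniform[0, 2pi). Then z(x).z(y) approximates the Gaussian kernel.
W = rng.normal(scale=1.0 / sigma, size=(n_features, dim))
b = rng.uniform(0, 2 * np.pi, size=n_features)

def features(x):
    return np.sqrt(2.0 / n_features) * np.cos(W @ x + b)

x, y = rng.normal(size=dim), rng.normal(size=dim)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))   # Gaussian kernel
approx = features(x) @ features(y)
```

The approximation error shrinks like 1/√D in the number of features, so a linear model trained on these features behaves like kernel regression at a fraction of the cost.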
We consider the entropy production of a strongly coupled bipartite system. The total entropy production can be partitioned into various components, which we use to define local versions of the Second Law that are valid without the usual idealizations. The key insight is that the joint trajectory probability of interacting systems can be split into terms representing the dynamics of the individual systems without feedback.
Josh Fass, David A. Sivak, Gavin E. Crooks, Kyle A. Beauchamp, Benedict Leimkuhler, and John D. Chodera Entropy, 20(5):318 (2018).
While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al., introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback-Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Continue reading
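The timestep-induced error the paper targets is visible in the simplest possible case: Euler–Maruyama sampling of a harmonic well broadens the stationary density, and since both sampled and target densities are Gaussian, the configuration-space KL divergence is available in closed form. A sketch (unit-variance target; the timesteps and run length are illustrative choices, and this is the crudest integrator rather than the schemes analyzed in the paper):

```python
import numpy as np

def sampled_variance(dt, n_steps=200_000, seed=1):
    """Euler-Maruyama for dx = -x dt + sqrt(2) dW; the target density is N(0,1),
    but the finite timestep inflates the sampled variance to ~1/(1 - dt/2)."""
    rng = np.random.default_rng(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(2 * dt) * rng.normal()
        total += x * x
    return total / n_steps

def kl_gaussian(var):
    """KL divergence from N(0, var) to the N(0, 1) target."""
    return 0.5 * (var - 1.0 - np.log(var))

coarse = kl_gaussian(sampled_variance(0.5))    # large timestep: biased density
fine = kl_gaussian(sampled_variance(0.05))     # small timestep: nearly unbiased
```

Shrinking the timestep by a factor of ten drops the configuration-space KL divergence by orders of magnitude, which is the kind of error-versus-efficiency tradeoff the paper quantifies for more sophisticated integrators.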
3.6 (2017-12-29) [Gavin Crooks, Melissa Fabros]
* refactor version string creation
* update testing framework for use with tox and pytest
* weblogo is centered in its PNG file (Kudos: Gert Huselmans)
* Miscellaneous minor bug fixes and refactoring (Kudos: Jerry Caskey, Coby Viner)
* fix headings in README.md
* Weblogo 3.6 runs under python 2.7, 3.4, 3.5 & 3.6
Version: 0.11 beta
In a desperate attempt to preserve my own sanity, a survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.
What's New: Added hyperbola, hyperbolic, Halphen, Halphen B, inverse Halphen B, generalized Halphen, Sichel, Appell Beta, K and generalized K distributions. Thanks to Saralees Nadarajah and Harish Vangala.
[ Full Text ]
Optimal control of nanomagnets has become an urgent problem for the field of spintronics as technological tools approach thermodynamically determined limits of efficiency. In complex, fluctuating systems, like nanomagnetic bits, finding optimal protocols is challenging, requiring detailed information about the dynamical fluctuations of the controlled system. We provide a new, physically transparent derivation of a metric tensor for which the length of a protocol is proportional to its dissipation. This perspective simplifies nonequilibrium optimization problems by recasting them in a geometric language. We then describe a numerical method, an instance of geometric minimum action methods, that enables computation of geodesics even when the number of control parameters is large. We apply these methods to two models of nanomagnetic bits: a simple Landau-Lifshitz-Gilbert description of a single magnetic spin controlled by two orthogonal magnetic fields and a two-dimensional Ising model in which the field is spatially controlled. These calculations reveal nontrivial protocols for bit erasure and reversal, providing important, experimentally testable predictions for ultra-low power computing.
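The geometric recasting has a concrete payoff: once a metric g(λ) over the control parameters is in hand, the minimum-dissipation protocol is the geodesic traversed at constant metric speed, and its cost is L²/Δt, where L is the thermodynamic length. A one-control-parameter sketch with a hypothetical metric g(λ) = 1/λ (purely illustrative, not one of the paper's nanomagnetic models):

```python
import numpy as np

def dissipation(path, dt, metric):
    """Dissipation proxy: the integral of g(lambda) * (dlambda/dt)^2 along a protocol."""
    lam = np.asarray(path)
    vel = np.gradient(lam, dt)
    return float(np.sum(metric(lam) * vel ** 2) * dt)

metric = lambda lam: 1.0 / lam      # hypothetical control-space metric
t = np.linspace(0.0, 1.0, 20_001)   # protocol duration Delta t = 1
dt = t[1] - t[0]

# Naive protocol: drive the control linearly from lambda = 1 to lambda = 4.
linear = 1.0 + 3.0 * t
# Geodesic protocol: constant speed in the length coordinate l = 2*sqrt(lambda),
# i.e. lambda(t) = (1 + t)^2, with thermodynamic length L = l(4) - l(1) = 2.
geodesic = (1.0 + t) ** 2

cost_linear = dissipation(linear, dt, metric)
cost_geodesic = dissipation(geodesic, dt, metric)
```

The geodesic protocol saturates the bound at L²/Δt = 4, while the naive linear drive dissipates 3 ln 4 ≈ 4.16; the gap grows for metrics that vary more strongly across the control space, which is exactly the regime the paper's minimum action methods address.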