QuantumFlow v0.8.0: Automatic differentiation of quantum circuits and SGD training of quantum networks. Now with TensorFlow 2.0 backend.

Install the latest TensorFlow 2.0 alpha with

`> pip install -U --pre tensorflow`

and set the `QUANTUMFLOW_BACKEND` environment variable to `tensorflow2`.

`> QUANTUMFLOW_BACKEND=tensorflow2 make test`

Version: 0.12

A survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.

What's New: Added the Porter-Thomas, Epanechnikov, biweight, triweight, Libby-Novick, Gauss hypergeometric, confluent hypergeometric, Johnson SU, and log-Cauchy distributions.
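Several of these have pleasantly simple closed forms. The Epanechnikov distribution, for instance, has density f(x) = ¾(1 − x²) on [−1, 1]; a minimal sketch (illustrative only, not code from the field guide):

```python
def epanechnikov_pdf(x: float) -> float:
    """Density of the standard Epanechnikov distribution:
    f(x) = (3/4) * (1 - x^2) for |x| <= 1, and 0 otherwise."""
    if abs(x) > 1.0:
        return 0.0
    return 0.75 * (1.0 - x * x)

# The density peaks at x = 0 with value 3/4, and integrates to 1 over [-1, 1].
```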

Full LaTeX source distributed on GitHub: https://github.com/gecrooks/fieldguide

[ Full Text ]

Tech. Note 012v1

PDF:

http://threeplusone.com/weyl

Source code:

https://github.com/gecrooks/on_weyl

The Weyl chamber of canonical non-local 2-qubit gates. Papercraft meets quantum computing. Print, cut, fold, and paste. (Should look like Fig. 4 of quant-ph/0209120)

A 2-qubit gate has 15 free parameters. But you can absorb local 1-qubit gates applied before and after (four gates, 3 parameters each), which leaves a 15 − 4×3 = 3-parameter space of non-local gates. Once you quotient out the remaining symmetries, you’re left with a tetrahedral chamber in which all your favorite 2-qubit gates live.
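The parameter count is easy to verify: a general n-qubit gate lives in SU(2ⁿ), which has (2ⁿ)² − 1 real parameters, and each local 1-qubit gate is an element of SU(2) with 3 parameters. A quick check of the arithmetic:

```python
def su_dimension(n: int) -> int:
    """Number of real parameters of the special unitary group SU(n): n^2 - 1."""
    return n * n - 1

two_qubit = su_dimension(4)          # a general 2-qubit gate: 15 parameters
local = 4 * su_dimension(2)          # 4 local 1-qubit gates, 3 parameters each: 12
nonlocal_params = two_qubit - local  # 3 non-local parameters remain
```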

**Abstract**:

The Quantum Approximate Optimization Algorithm (QAOA) is a promising approach for programming a near-term gate-based hybrid quantum computer to find good approximate solutions of hard combinatorial problems. However, little is currently known about the capabilities of QAOA, or of the difficulty of the requisite parameter optimization. Here, we study the performance of QAOA on the MaxCut combinatorial optimization problem, optimizing the quantum circuits on a classical computer using automatic differentiation and stochastic gradient descent, using QuantumFlow, a quantum circuit simulator implemented with TensorFlow. We find that we can amortize the training cost by optimizing on batches of problem instances; that QAOA can exceed the performance of the classical polynomial-time Goemans-Williamson algorithm with modest circuit depth; and that performance with fixed circuit depth is insensitive to problem size. Moreover, MaxCut QAOA can be efficiently implemented on a gate-based quantum computer with limited qubit connectivity, using a qubit swap network. These observations support the prospects that QAOA will be an effective method for solving interesting problems on near-term quantum computers.
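For reference, the MaxCut objective that QAOA approximates simply counts the edges crossing a bipartition of the graph's nodes. A brute-force evaluation (illustrative only, not QuantumFlow code, and only feasible for tiny graphs) looks like:

```python
from itertools import product

def cut_value(edges, bits):
    """Number of edges whose endpoints fall on opposite sides of the cut."""
    return sum(1 for (u, v) in edges if bits[u] != bits[v])

def max_cut(n_nodes, edges):
    """Exhaustive search over all 2^n bipartitions."""
    return max(cut_value(edges, bits) for bits in product((0, 1), repeat=n_nodes))

# Example: a 5-node ring. Being an odd cycle, its best cut crosses 4 of the 5 edges.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```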

Announcing QuantumFlow, a Python package that emulates a gate-based quantum computer using modern optimized tensor libraries (numpy, TensorFlow, or torch). The TensorFlow backend can calculate the analytic gradient of a quantum circuit with respect to the circuit's parameters, and circuits can be optimized to perform a function using (stochastic) gradient descent. The torch backend can accelerate the quantum simulation using commodity classical GPUs.
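The analytic-gradient idea can be illustrated without any tensor library. For a single-qubit rotation R_x(θ) acting on |0⟩, the expectation ⟨Z⟩ = cos θ, so the exact gradient is −sin θ, recoverable from two shifted circuit evaluations. A standalone sketch in plain Python (not QuantumFlow's actual API):

```python
import math

def rx(theta):
    """2x2 matrix of the single-qubit rotation Rx(theta) = exp(-i theta X / 2)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def z_expectation(theta):
    """<Z> measured on Rx(theta)|0>; analytically this equals cos(theta)."""
    gate = rx(theta)
    psi = [gate[0][0], gate[1][0]]  # first column of Rx(theta) is Rx(theta)|0>
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

def gradient(theta):
    """Exact 'parameter-shift' gradient, valid for rotation gates:
    d<Z>/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2 = -sin(theta)."""
    return 0.5 * (z_expectation(theta + math.pi / 2) - z_expectation(theta - math.pi / 2))
```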

Various other features include quantum circuits, circuit visualization, noisy quantum operations, gate decompositions, sundry metrics and measures, and an interface to Rigetti’s Forest infrastructure.

This technical note describes the Drazin pseudo-inverse, which is an under-appreciated mathematical gadget that has several interesting applications to non-equilibrium thermodynamics.

[ Full Text ]

A brief overview of information measures on classical, discrete probability distributions. 009 v0.7 [ Full Text ]
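Two of the most common such measures are the Shannon entropy and the Kullback-Leibler divergence; a minimal sketch for discrete distributions (illustrative only, not code from the note):

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i log2(p_i), in bits, with 0 log 0 taken as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i log2(p_i / q_i), in bits.
    Assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin carries exactly one bit of entropy; the KL divergence of a
# distribution from itself is zero.
```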

C. M. Wilson, J. S. Otterbach, N. Tezak, Robert S. Smith, Gavin E. Crooks, and Marcus P. da Silva, arXiv:1806.08321 (2018)

[ Full Text ]

**Abstract**:

Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such a framework requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. We show, in particular, that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from 50% to < 0.1%, and from 4.1% to 1.4% in these two examples, respectively.
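The classical ancestor of this technique, random kitchen sinks (random Fourier features), is easy to sketch: random cosine projections whose inner products approximate an RBF kernel. The quantum version replaces this feature map with a parameterized circuit. An illustrative classical version (an assumption-laden sketch, not the paper's implementation):

```python
import math
import random

def random_features(x, n_features, scale=1.0, seed=0):
    """Map an input vector x to n_features random cosine features,
    z_j(x) = cos(w_j . x + b_j), with Gaussian weights w_j and uniform
    phase offsets b_j. A linear classifier trained on these features
    approximates kernel methods with a Gaussian (RBF) kernel."""
    rng = random.Random(seed)
    features = []
    for _ in range(n_features):
        w = [rng.gauss(0.0, scale) for _ in x]
        b = rng.uniform(0.0, 2.0 * math.pi)
        features.append(math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b))
    return features
```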

**Abstract**:

We consider the entropy production of a strongly coupled bipartite system. The total entropy production can be partitioned into various components, which we use to define local versions of the Second Law that are valid without the usual idealizations. The key insight is that the joint trajectory probability of interacting systems can be split into terms representing the dynamics of the individual systems without feedback.

Josh Fass, David A. Sivak, Gavin E. Crooks, Kyle A. Beauchamp, Benedict Leimkuhler, and John D. Chodera, Entropy 20(5):318 (2018).

**Abstract**:

While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al. introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback-Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Here, we introduce a variant of this near-equilibrium estimator capable of measuring the error in the configuration-space marginal density, validating it against a complex but exact nested Monte Carlo estimator to show that it reproduces the KL divergence with high fidelity. To illustrate its utility, we employ this new near-equilibrium estimator to assess a claim that a recently proposed Langevin integrator introduces extremely small configuration-space density errors up to the stability limit at no extra computational expense. Finally, we show how this approach to quantifying sampling bias can be applied to a wide variety of stochastic integrators by following a straightforward procedure to compute the appropriate shadow work, and describe how it can be extended to quantify the error in arbitrary marginal or conditional distributions of interest.

3.6 (2017-12-29) [Gavin Crooks, Melissa Fabros]

* refactor version string creation

* update testing framework for use with tox and pytest

* weblogo is centered in its PNG file (Kudos: Gert Huselmans)

* Miscellaneous minor bug fixes and refactoring (Kudos: Jerry Caskey, Coby Viner)

* fix headings in README.md

* WebLogo 3.6 runs under Python 2.7, 3.4, 3.5 & 3.6

Version: 0.11 beta

In a desperate attempt to preserve my own sanity, a survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.

What's New: Added the hyperbola, hyperbolic, Halphen, Halphen B, inverse Halphen B, generalized Halphen, Sichel, Appell Beta, K, and generalized K distributions. Thanks to Saralees Nadarajah and Harish Vangala.

[ Full Text ]

Phys. Rev. E 95, 012148 (2017)

[Full text | Journal | arXiv ]

**Abstract**:

Optimal control of nanomagnets has become an urgent problem for the field of spintronics as technological tools approach thermodynamically determined limits of efficiency. In complex, fluctuating systems, like nanomagnetic bits, finding optimal protocols is challenging, requiring detailed information about the dynamical fluctuations of the controlled system. We provide a new, physically transparent derivation of a metric tensor for which the length of a protocol is proportional to its dissipation. This perspective simplifies nonequilibrium optimization problems by recasting them in a geometric language. We then describe a numerical method, an instance of geometric minimum action methods, that enables computation of geodesics even when the number of control parameters is large. We apply these methods to two models of nanomagnetic bits: a simple Landau-Lifshitz-Gilbert description of a single magnetic spin controlled by two orthogonal magnetic fields, and a two-dimensional Ising model in which the field is spatially controlled. These calculations reveal nontrivial protocols for bit erasure and reversal, providing important, experimentally testable predictions for ultra-low-power computing.
