Announcing QuantumFlow, a Python package that emulates a gate-based quantum computer using modern optimized tensor libraries (NumPy, TensorFlow, or PyTorch). The TensorFlow backend can calculate the analytic gradient of a quantum circuit with respect to the circuit's parameters, and circuits can be optimized to perform a function using (stochastic) gradient descent. The PyTorch backend can accelerate the quantum simulation using commodity classical GPUs.

Various other features include quantum circuits, circuit visualization, noisy quantum operations, gate decompositions, sundry metrics and measures, and an interface to Rigetti’s Forest infrastructure.
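For readers unfamiliar with how such emulators work, the core technique is easy to sketch in bare NumPy: store the N-qubit state vector as a rank-N tensor of shape (2,)*N, and apply each gate by tensor contraction over the target qubit axes. This is a minimal illustration of the general approach, not QuantumFlow's actual API.

```python
import numpy as np

def apply_gate(state, gate, qubits):
    """Apply a k-qubit gate (given as a 2^k x 2^k matrix) to the chosen
    qubits of a state vector stored as a rank-N tensor of shape (2,)*N."""
    k = len(qubits)
    gate = gate.reshape((2,) * (2 * k))
    # Contract the gate's input indices with the target qubit axes...
    state = np.tensordot(gate, state, axes=(list(range(k, 2 * k)), qubits))
    # ...then move the gate's output indices back into place.
    return np.moveaxis(state, list(range(k)), qubits)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard
CNOT = np.array([[1.0, 0, 0, 0],
                 [0, 1.0, 0, 0],
                 [0, 0, 0, 1.0],
                 [0, 0, 1.0, 0]])

# Prepare a Bell state on two qubits: |00> -> (|00> + |11>) / sqrt(2).
state = np.zeros((2, 2))
state[0, 0] = 1.0
state = apply_gate(state, H, [0])
state = apply_gate(state, CNOT, [0, 1])
print(state.reshape(-1))   # [0.707..., 0, 0, 0.707...]
```

The tensor representation is what lets a backend swap NumPy for TensorFlow or PyTorch: the same contractions become differentiable and GPU-accelerated for free.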

This technical note describes the Drazin pseudo-inverse, an under-appreciated mathematical gadget with several interesting applications to non-equilibrium thermodynamics.

[ Full Text ]
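As a concrete handle on the gadget, here is a small NumPy sketch that computes the Drazin inverse via the standard identity A^D = A^k (A^(2k+1))^+ A^k, valid for any k at least the index of A. The function and the choice k = n are my own illustration, not taken from the note.

```python
import numpy as np

def drazin(A):
    """Drazin pseudo-inverse of a square matrix, via
    A^D = A^k (A^(2k+1))^+ A^k with k >= index(A).
    Taking k = n (the matrix dimension) always suffices. A dense-algebra
    sketch only; not suited to large or badly conditioned matrices."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak

# The Drazin inverse inverts the invertible part of A and zeroes the
# nilpotent part: for diag(2, 0) it returns diag(0.5, 0).
print(drazin(np.diag([2.0, 0.0])))
```

For an invertible matrix this reduces to the ordinary inverse, and for a nilpotent matrix it is identically zero, which is what makes it useful where singular generators appear.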

A brief overview of information measures on classical, discrete probability distributions. v0.7 [ Full Text ]

C. M. Wilson, J. S. Otterbach, N. Tezak, Robert S. Smith, Gavin E. Crooks, and Marcus P. da Silva, arXiv:1806.08321 (2018)

[ Full Text ]

**Abstract**:

Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such a framework requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. We show, in particular, that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from 50% to < 0.1%, and from 4.1% to 1.4% in these two examples, respectively.
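The classical algorithm the paper builds on is itself easy to sketch: random kitchen sinks draw a random, untrained feature map and fit only a linear model on top. Below is a minimal NumPy illustration on an invented toy dataset that is not linearly separable in the raw coordinates; the quantum version replaces the random cosine features with the outputs of parameterized quantum circuits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: label a point by whether it falls inside a disc of
# radius 0.6. Not linearly separable in the raw coordinates.
n = 400
X = rng.uniform(-1, 1, size=(n, 2))
y = (np.linalg.norm(X, axis=1) < 0.6).astype(float)

# Classical random kitchen sinks: random Fourier features
# z(x) = cos(W x + b), with W and b drawn once and never trained.
D = 200
W = rng.normal(scale=4.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.cos(X @ W + b)

# Only a linear model is fit on top of the random features.
w, *_ = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)
train_accuracy = ((Z @ w > 0).astype(float) == y).mean()
print(train_accuracy)   # well above the all-negative baseline
```

The striking point of the paper is that the random feature map can be a small quantum circuit, with the classical linear fit left unchanged.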

**Abstract**:

We consider the entropy production of a strongly coupled bipartite system. The total entropy production can be partitioned into various components, which we use to define local versions of the Second Law that are valid without the usual idealizations. The key insight is that the joint trajectory probability of interacting systems can be split into terms representing the dynamics of the individual systems without feedback.

Josh Fass, David A. Sivak, Gavin E. Crooks, Kyle A. Beauchamp, Benedict Leimkuhler, and John D. Chodera, Entropy 20(5):318 (2018).

**Abstract**:

While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al. introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback-Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Here, we introduce a variant of this near-equilibrium estimator capable of measuring the error in the configuration-space marginal density, validating it against a complex but exact nested Monte Carlo estimator to show that it reproduces the KL divergence with high fidelity. To illustrate its utility, we employ this new near-equilibrium estimator to assess a claim that a recently proposed Langevin integrator introduces extremely small configuration-space density errors up to the stability limit at no extra computational expense. Finally, we show how this approach to quantifying sampling bias can be applied to a wide variety of stochastic integrators by following a straightforward procedure to compute the appropriate shadow work, and describe how it can be extended to quantify the error in arbitrary marginal or conditional distributions of interest.
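For concreteness, here is a sketch of the Leimkuhler-Matthews BAOAB splitting, an example of the class of Langevin integrators whose configuration-space error the paper assesses. The harmonic test problem and all parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
    """One step of the BAOAB splitting of Langevin dynamics
    (Leimkuhler & Matthews): half kick (B), half drift (A), exact
    Ornstein-Uhlenbeck velocity update (O), half drift, half kick."""
    v += 0.5 * dt * force(x) / mass                 # B
    x += 0.5 * dt * v                               # A
    c = np.exp(-gamma * dt)                         # O (exact OU update)
    v = c * v + np.sqrt((1 - c * c) * kT / mass) * rng.normal()
    x += 0.5 * dt * v                               # A
    v += 0.5 * dt * force(x) / mass                 # B
    return x, v

# Harmonic oscillator U(x) = x^2 / 2 at kT = 1: the sampled
# configuration-space variance <x^2> should come out close to kT
# even at this fairly large timestep.
rng = np.random.default_rng(1)
x, v = 0.0, 0.0
samples = []
for i in range(60_000):
    x, v = baoab_step(x, v, lambda x: -x, dt=0.2, mass=1.0,
                      gamma=1.0, kT=1.0, rng=rng)
    if i >= 2_000:                                  # discard burn-in
        samples.append(x)
print(np.mean(np.square(samples)))   # close to 1
```

The paper's estimator quantifies exactly how far such a sampled configuration-space density is from the target, without needing an exact reference like this harmonic case provides.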

3.6 (2017-12-29) [Gavin Crooks, Melissa Fabros]

* refactor version string creation

* update testing framework for use with tox and pytest

* WebLogo is centered in its PNG file (Kudos: Gert Huselmans)

* Miscellaneous minor bug fixes and refactoring (Kudos: Jerry Caskey, Coby Viner)

* fix headings in README.md

* WebLogo 3.6 runs under Python 2.7, 3.4, 3.5 & 3.6

Version: 0.11 beta

In a desperate attempt to preserve my own sanity, a survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.

What's New: Added hyperbola, hyperbolic, Halphen, Halphen B, inverse Halphen B, generalized Halphen, Sichel, Appell Beta, K and generalized K distributions. Thanks to Saralees Nadarajah and Harish Vangala.

[ Full Text ]

Phys. Rev. E 95, 012148 (2017)

[ Full Text | Journal | arXiv ]

**Abstract**:

Optimal control of nanomagnets has become an urgent problem for the field of spintronics as technological tools approach thermodynamically determined limits of efficiency. In complex, fluctuating systems, like nanomagnetic bits, finding optimal protocols is challenging, requiring detailed information about the dynamical fluctuations of the controlled system. We provide a new, physically transparent derivation of a metric tensor for which the length of a protocol is proportional to its dissipation. This perspective simplifies nonequilibrium optimization problems by recasting them in a geometric language. We then describe a numerical method, an instance of geometric minimum action methods, that enables computation of geodesics even when the number of control parameters is large. We apply these methods to two models of nanomagnetic bits: a simple Landau-Lifshitz-Gilbert description of a single magnetic spin controlled by two orthogonal magnetic fields and a two-dimensional Ising model in which the field is spatially controlled. These calculations reveal nontrivial protocols for bit erasure and reversal, providing important, experimentally testable predictions for ultra-low power computing.
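In the single-control-parameter case, the "length of a protocol is proportional to its dissipation" statement can be written compactly. The notation below is generic near-equilibrium linear response, not copied from the paper:

```latex
% Near-equilibrium dissipation and thermodynamic length for one control
% parameter \lambda(t); \zeta is the friction coefficient (the 1x1 metric).
W_{\mathrm{diss}} \approx \int_0^{\Delta t} \zeta\!\left(\lambda(t)\right) \dot\lambda(t)^2 \, dt ,
\qquad
\mathcal{L} = \int_0^{\Delta t} \sqrt{\zeta\!\left(\lambda(t)\right)}\, \bigl|\dot\lambda(t)\bigr| \, dt .
% Cauchy--Schwarz gives  W_{\mathrm{diss}} \, \Delta t \ge \mathcal{L}^2 ,
% with equality when \sqrt{\zeta}\,|\dot\lambda| is constant in time:
% the minimum-dissipation protocol is a geodesic of the metric \zeta.
```

With several control parameters the scalar friction becomes the metric tensor g_ij of the abstract, and the dissipation is approximately the integral of g_ij λ̇^i λ̇^j along the protocol.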

**Abstract**:

We consider the entropy production of a strongly coupled bipartite system. The total entropy production can be partitioned into various components, which we use to define local versions of the Second Law that are valid without the usual idealizations. The key insight is that the joint trajectory probability of interacting systems can be split into terms representing the dynamics of the individual systems without feedback.

[ Full Text | Journal | arXiv ]

The last paper from our time working together in Berkeley. Ironically, also the first project David worked on during his postdoc. But a 7-year lag from inception to completion matches my previous records [Crooks2008a, Crooks2008b].

**Abstract**:

We explore the thermodynamic geometry of a simple system that models the bistable dynamics of nucleic acid hairpins in single molecule force-extension experiments. Near equilibrium, optimal (minimum-dissipation) driving protocols are governed by a generalized linear response friction coefficient. Our analysis demonstrates that the friction coefficient of the driving protocols is sharply peaked at the interface between metastable regions, which leads to minimum-dissipation protocols that drive rapidly within a metastable basin, but then linger longest at the interface, giving thermal fluctuations maximal time to kick the system over the barrier. Intuitively, the same principle applies generically in free energy estimation (both in steered molecular dynamics simulations and in single-molecule experiments), provides a design principle for the construction of thermodynamically efficient coupling between stochastic objects, and makes a prediction regarding the construction of evolved biomolecular motors.
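The "linger at the interface" behavior falls out of a short computation. Assuming the standard near-equilibrium form W ≈ ∫ ζ(λ) λ̇² dt, a friction profile sharply peaked at the interface already shows the effect. The profile and every number below are invented for illustration, not the paper's hairpin model:

```python
import numpy as np

# Friction coefficient sharply peaked at an interface at lambda = 0.5
# (an invented stand-in for the barrier region of the hairpin model).
lam = np.linspace(0.0, 1.0, 2001)
dlam = lam[1] - lam[0]
zeta = 1.0 + 50.0 * np.exp(-((lam - 0.5) / 0.05) ** 2)

dt = 1.0   # fixed protocol duration

# Naive protocol: constant speed, so W = (1/dt) * integral of zeta dλ.
W_naive = np.sum(zeta) * dlam / dt

# Minimum-dissipation protocol: constant "thermodynamic velocity",
# λ̇ ∝ ζ(λ)^(-1/2), so the protocol lingers where the friction peaks.
# Its dissipation saturates the Cauchy-Schwarz bound W = L²/dt, where
# L = integral of sqrt(ζ) dλ is the thermodynamic length.
L = np.sum(np.sqrt(zeta)) * dlam
W_opt = L ** 2 / dt

print(W_opt, "<", W_naive)   # the geodesic protocol dissipates less
```

The more sharply ζ peaks at the interface, the larger the gap between the naive and optimal protocols, which is the design principle the abstract points to.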

Version: 0.9 beta

In a desperate attempt to preserve my own sanity, a survey of probability distributions used to describe a single, continuous, unimodal, univariate random variable.

What's New: Added pseudo-Voigt and Student's t_3 distributions. Reparameterized hyperbolic sine distribution. Derived limit of unit gamma to log-normal. Corrected spelling of "arrises" (sharp edges formed by the meeting of surfaces) to "arises" (emerge; become apparent). Added Moyal distribution, a special case of the gamma-exponential distribution. Corrected spelling of "principle" to "principal" (Kudos: Matthew Hankins, Mara Averick).

[ Full Text ]

**Abstract**:

The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. We describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. We show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.
