Tuesday, 16 June 2015

Using a single molecular reaction to perform a complex calculation

Much of the research in molecular computing is based on making molecules imitate the simplest logical operations that underlie conventional digital electronics. For example, we can design molecular systems that mimic "AND", "OR" and "NOT" gates. These gates can then be joined together to perform more complex tasks (e.g. here). But at some level, physical systems naturally perform complex calculations. For example, the energy levels of a hydrogen atom are predicted theoretically through a number of complex equations, and in principle you could infer the answers to these equations by measuring the energies.

Of course, there are a number of issues with this. You need to be confident about your theory that relates measurements to equations, and you also need to be confident about the measurements themselves. But perhaps the most important caveat is that the equations that you can solve by performing measurements are often only interesting in predicting the outcome of the measurements themselves; the whole thing becomes rather incestuous and not obviously useful.

In a recent paper (here), Manoj Gopalkrishnan shows how the complexity of a single chemical reaction can be harnessed to perform a useful computation. The paper is quite involved, but the essence is the following. Let's say I have three chemical species, X1, X2 and X3, that can interconvert via the reaction X1 + X3 ⇌ 2 X2, with both left and right sides being equally stable (such a system could, in principle, be created from DNA). If I start with certain initial concentrations (x1, x2, x3), the system will evolve to reach some equilibrium (x'1, x'2, x'3) in the long-time limit. This equilibrium represents a maximization of the system's entropy, subject to the constraint that (x1, x2, x3) can only be changed via the X1 + X3 ⇌ 2 X2 reaction.
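To make this concrete, here is a minimal sketch of the relaxation to equilibrium, using simple mass-action kinetics with equal forward and backward rate constants (so both sides are "equally stable"). The function name and the Euler time-stepping are my own illustration, not from the paper:

```python
def equilibrate(x1, x2, x3, k=1.0, dt=1e-3, steps=200_000):
    """Mass-action simulation of X1 + X3 <=> 2 X2.

    Forward flux is k*x1*x3, backward flux is k*x2**2; with equal
    rate constants the equilibrium condition is x1*x3 = x2**2.
    """
    for _ in range(steps):
        flux = k * (x1 * x3 - x2 * x2)  # net forward reaction rate
        x1 -= flux * dt                 # X1 consumed going forward
        x3 -= flux * dt                 # X3 consumed going forward
        x2 += 2 * flux * dt             # two X2 produced per event
    return x1, x2, x3

# Starting from (2, 0, 2); note the reaction conserves both
# x1 + x2 + x3 and x1 - x3, so the steady state is pinned down
# by those totals plus the condition x1*x3 = x2**2.
eq = equilibrate(2.0, 0.0, 2.0)
```

Starting from (2, 0, 2), the conserved quantities force the steady state to (4/3, 4/3, 4/3), which indeed satisfies x1·x3 = x2².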

Dr Gopalkrishnan shows that the constrained optimization performed by the chemical reaction solves a problem of wider interest. Let's say we have a random variable that can take three distinct values y1, y2 and y3 (in essence, a three-sided die). This die might be biased, with the probabilities of seeing each side not equal to 1/3. Further, we might have a reason to think that the probabilities are constrained by underlying physics, so that only certain combinations of probabilities p(y1), p(y2) and p(y3) are possible. So we know something about our die, but not the specifics. If someone rolls the die several times and presents us with the results, can we estimate the most likely values of p(y1), p(y2) and p(y3)?
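For orientation, if we place no constraint at all on the probabilities, the maximum-likelihood estimate is simply the empirical frequency of each face. A small illustrative snippet (the function name is mine):

```python
from collections import Counter

def unconstrained_mle(rolls):
    """With no constraint on (p(y1), p(y2), p(y3)), the
    maximum-likelihood estimate is just the observed frequency
    of each face among the rolls."""
    counts = Counter(rolls)
    n = len(rolls)
    return {face: counts[face] / n for face in ("y1", "y2", "y3")}

est = unconstrained_mle(["y1", "y2", "y2", "y3", "y2"])
```

The interesting case in the paper is when the constraint rules out the raw frequencies, so the estimate must be projected onto the allowed set of probability distributions.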

Dr Gopalkrishnan shows that, for a certain very general type of constraint on probabilities (a "log-linear model"), the procedure for finding this "maximum likelihood estimate" of p(y1), p(y2) and p(y3) is identical to the one performed by an appropriate chemical system in maximizing its entropy under constrained particle numbers. Different log-linear constraints on p(y1), p(y2) and p(y3) translate into different choices of the reaction, which doesn't have to be X1 + X3 ⇌ 2 X2. Given the appropriate reaction, all we have to do is feed in the data (the results of the die rolls) as the initial conditions (x1,x2,x3), and the eventual steady state (x'1,x'2,x'3) gives us our estimate of (p(y1),p(y2),p(y3)).
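As a sketch of how this works for the running example (function name and numerical scheme are my own, not the paper's): for X1 + X3 ⇌ 2 X2, the conserved quantities are x1 + x2 + x3 and x1 − x3, and the equilibrium satisfies x1·x3 = x2². Feeding in the die-roll counts and normalizing the steady state should therefore give the maximum-likelihood estimate within the log-linear family satisfying p(y1)·p(y3) = p(y2)²:

```python
def mle_via_reaction(n1, n2, n3, dt=1e-4, steps=500_000):
    """Feed die-roll counts in as initial concentrations, let
    X1 + X3 <=> 2 X2 relax to equilibrium, then normalise.

    At equilibrium x1*x3 = x2**2, so the normalised steady state
    is the MLE within {p : p(y1)*p(y3) = p(y2)**2} consistent
    with the conserved quantities fixed by the data.
    """
    x1, x2, x3 = float(n1), float(n2), float(n3)
    for _ in range(steps):
        flux = x1 * x3 - x2 * x2  # net forward reaction rate
        x1 -= flux * dt
        x3 -= flux * dt
        x2 += 2 * flux * dt
    total = x1 + x2 + x3
    return x1 / total, x2 / total, x3 / total

# Hypothetical data: 100 rolls landing on faces 1, 2, 3
# thirty, fifty and twenty times respectively.
p = mle_via_reaction(30, 50, 20)
```

Note that the raw frequencies (0.3, 0.5, 0.2) do not satisfy the constraint, so the reaction "projects" the data onto the allowed family while preserving the conserved combinations of counts.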

In principle, this argument generalizes to much more complex systems than the illustration provided here, and the principle of maximum likelihood estimation isn't only applicable to biased dice. Of course, this is a long way from a physical implementation, and even further from an actual useful device, but it does illustrate the potential of harnessing the complexity of physical systems rather than trying to force them to approximate digital electronics. Going further in this direction, the chemical system will fluctuate about its steady-state average (x'1,x'2,x'3); it may be that these fluctuations can be directly related to our uncertainty in our estimate of (p(y1),p(y2),p(y3)).*

*In detail, the distribution of (x1,x2,x3) in the steady state may be related to the posterior probability of (p(y1),p(y2),p(y3)) given a flat prior and the data.
