Sunday 13 September 2015

Shameless self-promotion: Engineering self-assembly pathways

Over the last two weeks we've finally managed to get our paper on self-assembly pathways published in Nature ("we" includes the Turberfield and Kwiatkowska groups in Oxford). In the paper, we look at a type of nanostructure called DNA origami. DNA origami is a wonderfully versatile approach to creating structures with a typical size of a few hundred nanometres, but with details added at a precision of a few nanometres. For real "wow-factor", just type "DNA origami" into Google image search.

The principle is remarkably simple. We start with one long DNA strand (see the pink and green object in the above picture) - this is known as the "scaffold". We then introduce a number of much shorter strands called "staples" (blue in the above picture). These staples are designed to stick to two (or more) specific parts of the scaffold very strongly, bringing them together and folding the scaffold into a complicated shape.

A number of people have made incredibly sophisticated structures, in both 2d and 3d, with this technique. However, we were more interested in understanding how the structures formed, and whether we could design them to form more efficiently. We therefore started with a simple rectangular design. Our unusual step was then to double the scaffold, so that it contained two identical halves (pink and green sections in the picture above). This meant that each staple could stick in a number of configurations, because each binding site appeared twice on the double scaffold.
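To make the combinatorics concrete, here is a toy sketch in Python (my own illustration, not code from the paper - the four-domain halves and domain names are invented). It treats the doubled scaffold as a list of binding domains and counts the ways a two-domain staple can attach:

from itertools import product

# A doubled scaffold: domains d0-d3 appear once in each identical half.
scaffold = ["d0", "d1", "d2", "d3", "d0", "d1", "d2", "d3"]

def binding_configurations(staple):
    """Return all pairs of scaffold positions that the staple's two
    domains can bridge simultaneously."""
    dom_a, dom_b = staple
    sites_a = [i for i, d in enumerate(scaffold) if d == dom_a]
    sites_b = [i for i, d in enumerate(scaffold) if d == dom_b]
    return [(i, j) for i, j in product(sites_a, sites_b) if i != j]

configs = binding_configurations(("d1", "d3"))  # a staple joining d1 to d3
print(configs)       # [(1, 3), (1, 7), (5, 3), (5, 7)]
print(len(configs))  # 4 in total

With every site duplicated, even this simplest two-domain staple has four options - two keeping it within one half, two bridging the halves - and staples with more binding domains multiply the possibilities further.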

Despite this, most scaffolds folded into one of a small number of distinct shapes, which looked like two rectangles lying side-by-side: example microscope images are shown in (b) and (c) above. We saw that certain shapes were more favourable than others. More interestingly, we were able to change which shapes were most common by making small adjustments to the staple strands. We were able to predict the changes using a theoretical model that we discuss in an even newer paper. The success of our model gives us some hope that we might be able to design origami rationally to improve the reliability of self-assembly.

From a general perspective of using molecules to achieve complex tasks, the most interesting thing is that we were able to manipulate self-assembly outcomes by interfering with the folding pathway, rather than with the stability of the final structures. The obvious way to force a system to assemble into a certain shape is to design it so that your chosen structure has by far the lowest energy (technically, free energy) of all configurations. However, the alterations we made to the staples should have had almost no effect on the relative energies of the different configurations. Instead, our modifications changed the order in which staples attached to the scaffold - and this order is crucial, as the early staples shape the scaffold, determining which of the possible structures will eventually form.
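To illustrate how kinetics alone can select the product, here is a deliberately minimal sketch in Python (my own toy model, not the simulations behind the paper): two structures, A and B, are equally stable, but whichever pathway's staple binds first commits the scaffold. Changing the binding rates then shifts the outcome without touching the final free energies:

import random

def fold_one_scaffold(rate_A, rate_B, rng):
    """The first binding event selects the pathway; later staples follow,
    so the final structure equals the initial choice. For two competing
    Poisson processes, A fires first with probability rate_A/(rate_A+rate_B)."""
    return "A" if rng.random() < rate_A / (rate_A + rate_B) else "B"

def fraction_A(rate_A, rate_B, n=100_000, seed=1):
    rng = random.Random(seed)
    return sum(fold_one_scaffold(rate_A, rate_B, rng) == "A" for _ in range(n)) / n

print(fraction_A(1.0, 1.0))  # ~0.50: equal rates, equal outcomes
print(fraction_A(3.0, 1.0))  # ~0.75: speeding up pathway A makes structure A
                             # dominate, with no change to final stabilities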

Saturday 8 August 2015

Whaling on the nanoscale: Molecular harpoons

[Image of Type VI secretion system from the homepage of the Jensen Lab]

Greetings from Virginia, where I am currently at the Q-bio conference. I've seen plenty of great talks, and one audience member in the front row who slept the whole way through mine (he probably got up early to listen to the Ashes as well).

One talk by David Bruce Borenstein from the Wingreen group in Princeton brought the "Type VI secretion system" to my attention (for a more scientific discussion, see this review). This is a rather dry name for a pretty amazing bacterial weapon. Certain types of bacteria (in fact, quite a lot of them) have a harpoon concealed within their cell membrane that can be thrust into neighbouring cells, allowing delivery of toxic biochemicals. Bacteria use this weapon against each other and more complex organisms such as humans. It is similar to (and components may even have been directly stolen from) mechanisms by which some viruses inject genetic material into hosts.

For me, the interesting thing about this device is its ability to convert chemical changes of proteins within the cell into the rapid motion of the harpoon. For example, how exactly is chemical fuel involved, and how much fuel must the cell use to achieve a certain force? How much of an advantage does this active puncturing give?
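To get a feel for the scale of the fuel question, here is a back-of-the-envelope estimate in Python. The force and travel distance are placeholder values I've invented for illustration, not measured properties of the secretion system; the point is just the conversion from mechanical work to a minimum count of fuel molecules:

k_B = 1.38e-23      # Boltzmann constant, J/K
T = 300.0           # temperature, K

force = 100e-12     # assumed puncturing force: 100 pN
distance = 50e-9    # assumed travel of the harpoon tip: 50 nm

work = force * distance        # mechanical work done against the target, J
atp_energy = 20 * k_B * T      # free energy per ATP, roughly 20 kT in a cell

print(f"work done        = {work:.2e} J ({work / (k_B * T):.0f} kT)")
print(f"minimum ATP cost = {work / atp_energy:.0f} molecules (at 100% efficiency)")

Even at perfect efficiency, this hypothetical thrust would cost tens of ATP-equivalents, which suggests the energy must be stored up in advance rather than supplied on the fly.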

Tuesday 23 June 2015

Dissipative self-assembly

I'm currently at a conference on Engineering of Chemical Complexity hosted by the TUM in Munich. Today we had an interesting talk from Thomas Hermans on "dissipative self-assembly". To understand this concept, we need to think about steady states.

At first glance, many of the things around us don't change over time - they are in steady states. For example, the hotel I'm currently sitting in looks pretty much the same as it did five minutes ago. A fundamental principle of physics is that isolated systems tend to relax towards an "equilibrium state" (this is essentially the famous second law of thermodynamics). Equilibrium is an example of a steady state, because when a system reaches equilibrium, it has nowhere else to go. It isn't, however, the only example. In fact, most objects that we see in steady states are actually stuck in "kinetic traps", including the hotel. Really, the equilibrium state of the materials that make up this hotel wouldn't look very welcoming! For a start, everything would be much closer to the ground. If we wait long enough, of course, the hotel will fall down, but this happens so slowly that to the casual observer it appears "trapped" in a hotel-like steady state.

Dr. Hermans wants us to consider a third, fundamentally distinct type of "dissipative" steady state. In kinetically trapped or equilibrium steady states, the system maintains itself - you don't need to supply anything to keep it in the steady state. However, think about a human body, which is, roughly speaking, in a steady state; it needs to be constantly supplied with food to stay that way. This situation is typical of many biological systems - they are out of equilibrium, but not kinetically trapped, and they rapidly relax towards equilibrium unless they are fed with fuel in some form. Feeding them with fuel keeps them in a "dissipative" steady state, so called because fuel gets used up in the process.
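Here is a minimal numerical sketch of the distinction in Python (my own toy chemistry, not anything presented in the talk): a species B is produced by a fuel-consuming reaction A + fuel → B and spontaneously decays back to A. With the fuel topped up, B sits at a high, constantly dissipating steady state; cut off the supply and it relaxes towards equilibrium:

def simulate(feed_fuel, steps=100_000, dt=1e-3):
    a, b, fuel = 1.0, 0.0, 1.0    # the assembled species B starts empty
    k_act, k_decay = 5.0, 1.0     # fuel-driven activation vs spontaneous decay
    for _ in range(steps):
        activation = k_act * a * fuel            # A + fuel -> B (burns fuel)
        decay = k_decay * b                      # B -> A
        a += dt * (decay - activation)
        b += dt * (activation - decay)
        refill = 0.5 * (1.0 - fuel) if feed_fuel else 0.0
        fuel += dt * (refill - activation)
    return b

print(simulate(feed_fuel=True))   # ~0.43: B held high by constant fuel turnover
print(simulate(feed_fuel=False))  # ~0: B decays away once the fuel is exhausted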

Dr. Hermans is looking to design artificial biochemical assemblies that exist in such dissipative steady states. Why might this be worthwhile? Hallmarks of biological systems include their flexibility, repairability and adaptability, features that are probably much more natural in dissipative assemblies in which there is a constant turn-over of material. As yet, the results seem preliminary and I can't find any publications - although a discussion of the principle of dissipative self-assembly can be found in this article, "Droplets out of equilibrium".

Tuesday 16 June 2015

Using a single molecular reaction to perform a complex calculation

Much of the research in molecular computing is based on making molecules imitate the simplest logical operations that underlie conventional digital electronics. For example, we can design molecular systems that mimic "AND", "OR" and "NOT" gates. These gates can then be joined together to perform more complex tasks (e.g. here). But at some level, physical systems naturally perform complex calculations. For example, the energy levels of a hydrogen atom are predicted theoretically through a number of complex equations, and in principle you could infer the solutions of these equations by measuring the energies.
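To put the hydrogen example in numbers: the bound-state energies follow the textbook formula E_n = -13.6 eV / n², and a couple of lines of Python reproduce the values that a spectroscopic measurement would, in principle, let you infer:

RYDBERG_EV = 13.6057  # ionization energy of hydrogen, in electron-volts

for n in range(1, 5):
    print(f"E_{n} = {-RYDBERG_EV / n**2:.3f} eV")

# A spectrometer actually measures transition energies, i.e. differences
# of these levels (e.g. Lyman-alpha is the n=2 to n=1 gap).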

Of course, there are a number of issues with this. You need to be confident about your theory that relates measurements to equations, and you also need to be confident about the measurements themselves. But perhaps the most important caveat is that the equations that you can solve by performing measurements are often only interesting in predicting the outcome of the measurements themselves; the whole thing becomes rather incestuous and not obviously useful.

In a recent paper (here), Manoj Gopalkrishnan shows how the complexity of a single chemical reaction can be harnessed to perform a useful computation. The paper is quite involved, but the essence is the following. Let's say I have three chemical species, X1, X2 and X3, that can interconvert via the reaction X1 + X3 ⇌ 2 X2, with both left and right sides being equally stable (such a system could, in principle, be created from DNA). If I start with certain initial concentrations of molecules, x1, x2 and x3, the system will evolve to reach some equilibrium in the long-time limit, x'1, x'2 and x'3. This equilibrium represents a maximization of the system's entropy, subject to the constraint that (x1,x2,x3) can only be changed via the X1 + X3 ⇌ 2 X2 reaction.
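To spell out that constraint (my own working, in the notation above): each forward reaction changes (x1,x2,x3) by (-1,+2,-1), so two quantities are untouched however far the reaction runs - the total x1 + x2 + x3 and the difference x1 - x3. Maximizing the entropy subject to these two conservation laws (or, equivalently, imposing detailed balance with equal forward and reverse rate constants, since both sides are equally stable) pins the equilibrium down to (x'2)² = x'1 × x'3.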

Dr Gopalkrishnan shows that the constrained optimization performed by the chemical reaction solves a problem of wider interest. Let's say we have a random variable that can take three distinct values y1, y2 and y3 (in essence, a three-sided die). This die might be biased, with the probabilities of seeing each side not equal to 1/3. Further, we might have a reason to think that the probabilities are constrained by underlying physics, so that only certain combinations of probabilities p(y1), p(y2) and p(y3) are possible. So we know something about our die, but not the specifics. If someone rolls the die several times and presents us with the results, can we estimate the most likely values of p(y1), p(y2) and p(y3)?

Dr Gopalkrishnan shows that, for a certain very general type of constraint on probabilities (a "log-linear model"), the procedure for finding this "maximum likelihood estimate" of p(y1), p(y2) and p(y3) is identical to that performed by an appropriate chemical system in maximizing its entropy under constrained particle numbers. Different log-linear constraints on p(y1), p(y2) and p(y3) translate into different choices of reaction, which doesn't have to be X1 + X3 ⇌ 2 X2. Given the appropriate reaction, all we have to do is feed in the data (the results of the die rolls) as the initial conditions (x1,x2,x3), and the eventual steady state (x'1,x'2,x'3) gives us our estimate of (p(y1),p(y2),p(y3)).
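As an end-to-end sketch of this recipe (my own toy implementation in Python, not code from the paper, with made-up die-roll counts), the following feeds the data in as initial concentrations, integrates the mass-action dynamics of X1 + X3 ⇌ 2 X2 to its steady state, and normalizes to obtain the estimate:

import numpy as np

def chemical_mle(counts, dt=1e-4, steps=100_000):
    """Relax X1 + X3 <=> 2 X2 (equal rate constants) starting from the data."""
    x = np.asarray(counts, dtype=float)     # (x1, x2, x3) <- die-roll counts
    for _ in range(steps):
        flux = x[0] * x[2] - x[1] ** 2      # net forward mass-action rate
        x += dt * np.array([-flux, 2.0 * flux, -flux])
    return x / x.sum()                      # normalize to a probability vector

counts = [30, 50, 20]            # hypothetical tallies of y1, y2 and y3
p = chemical_mle(counts)
print(p)                         # ~ [0.385, 0.331, 0.285]
print(p[1]**2 / (p[0] * p[2]))   # ~ 1: the log-linear constraint
                                 # p(y2)^2 = p(y1)p(y3) holds at the estimate

Note that the estimate preserves the data's sufficient statistics (here p(y1) - p(y3) keeps its empirical value of 0.1) while enforcing the model constraint - exactly the behaviour expected of a maximum likelihood estimate in a log-linear model.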

In principle, this argument generalizes to much more complex systems than the illustration provided here, and the principle of maximum likelihood estimation isn't only applicable to biased dice. Of course, this is a long way from a physical implementation, and even further from an actual useful device, but it does illustrate the potential of harnessing the complexity of physical systems rather than trying to force them to approximate digital electronics. Going further in this direction, the chemical system will fluctuate about its steady-state average (x'1,x'2,x'3); it may be that these fluctuations can be directly related to our uncertainty in our estimate of (p(y1),p(y2),p(y3)).*


*In detail, the distribution of (x1,x2,x3) in the steady state may be related to the posterior probability of (p(y1),p(y2),p(y3)) given a flat prior and the data.