Tuesday, 2 January 2018

Why biomolecular computing could pose a cybersecurity threat, why it does not, and why it is still an interesting question.

By Ismael Mullor Ruiz

Not much is needed to construct a computer. We are not referring to the hardware here, but to the basic operations that the computer must be able to implement. Every computer configuration is, to some extent, a physical approximation to the one envisioned by Alan Turing in his seminal 1936 paper [1]: the Turing machine. Turing imagined an infinitely long paper tape containing a series of sites, each of which could encode one of two different states (the minimum unit of information, later called a bit). In addition, he considered a “head” that could move along the tape performing operations such as reading the encoded information, copying it and writing over the tape (the basic algorithmic operations), following defined rules. Such a device could, in principle, perform all possible computational tasks (it is “Turing complete”).
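The tape-and-head model described above is simple enough to sketch in a few lines of code. The following is a minimal illustration, not any particular formalization: a transition table maps (state, symbol) pairs to (new state, symbol to write, direction to move), and the example program (chosen here for illustration) inverts the bits of a binary string.

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Simulate a one-tape Turing machine until it reaches the 'halt' state."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        # Each rule maps (state, symbol) -> (new state, symbol to write, move)
        state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example program: read each bit, write its complement, move right;
# halt on reaching a blank cell at the end of the tape.
invert_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", invert_rules))  # -> 0100_
```

The interesting point is how little machinery the model demands: a readable/writable substrate, a movable head, and a fixed rule table are all that is required, which is exactly why so many physical systems can host it.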

As noted above, this setup is only a theoretical model, but to obtain a physical computer we only need a physical system that can emulate this behaviour. The requirements are actually quite loose and allow Turing-complete machines to be implemented in a wide array of phenomena, including the familiar electronic devices being used to read this post, but also unconventional configurations such as biomolecular computing, membrane computing and optical computing.

For any reader with a deep knowledge of molecular biology, the resemblance between the theoretical Turing machine and life’s basic molecular machinery is striking. Consider, for example, the presence of an information-coding substrate – nucleic acids in biology – on which different machines – such as enzymes, ribozymes or certain oligonucleotides – can operate. These natural devices can read the stored information, copy it into different formats, and overwrite or remove information according to defined rules.

Similarities between theoretical and biomolecular Turing machines. Adapted from Shapiro and Benenson (2006) [2].

Despite these similarities, it took almost 30 years from the publication of the central dogma of molecular biology, which outlines how the information in DNA is used by the cell, and almost 20 years from the birth of genetic engineering, to actually exploit biomolecules for computation. Biomolecular computing was born in 1994 with the foundational paper by Leonard Adleman, in which he used DNA to solve a classic computational problem known for its difficulty: the Hamiltonian path problem [3]. In this problem, a network of n nodes is defined, with the particularity that only certain transitions between the nodes are allowed. A path through the network that visits every node exactly once is said to be Hamiltonian. In the years since, the field has grown beyond the architecture designed and developed by Adleman, giving birth to a range of underlying biomolecular methods for computation [4].

(Left): Directed graph of the Hamiltonian path problem from Adleman's seminal paper [3]. (Right): DNA implementation of the nodes and transitions of the graph. Adapted from Parker (2003) [5].
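On a conventional computer, the search Adleman ran in a test tube amounts to trying candidate orderings of the nodes one after another. The sketch below does exactly that by brute force; the small directed graph is chosen for illustration and is not Adleman's exact instance.

```python
from itertools import permutations

def hamiltonian_path(nodes, edges, start, end):
    """Brute-force search: try every ordering of the nodes and return one
    whose consecutive pairs are all allowed transitions. This is the
    sequential analogue of the massively parallel molecular search."""
    for path in permutations(nodes):
        if path[0] != start or path[-1] != end:
            continue
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            return path
    return None

# A small illustrative directed graph (not Adleman's original instance).
nodes = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (3, 1)}
print(hamiltonian_path(nodes, edges, start=0, end=3))  # -> (0, 1, 2, 3)
```

The number of orderings grows factorially with n, which is precisely what made the problem attractive as a demonstration for a computer that explores candidates in parallel.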

These methods, often based on DNA, have an extraordinarily high density of information, dwarfing all current optical and magnetic devices, as well as very low energy consumption compared to conventional computers and the capacity to perform astronomical numbers of operations in parallel. Whereas a conventional computing architecture executes its operations sequentially, performing only one at a given instant, in a biomolecular computer every single molecule in solution is executing operations and attempting to find a solution to the given problem at the same time.
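A rough back-of-the-envelope calculation shows what sequential execution costs as the search space grows. The throughput figure below is an illustrative assumption, not a benchmark of any real machine:

```python
# Sequential brute force: a conventional machine tries candidates one at a
# time, so the search time doubles with every extra bit of problem size.
# In a molecular computer, every strand in solution tests a candidate
# simultaneously. The rate below is an illustrative assumption.
ops_per_second = 10**9  # assumed: one billion sequential trials per second

for bits in (20, 40, 56):
    candidates = 2**bits
    seconds = candidates / ops_per_second
    print(f"{bits}-bit search space: {candidates:.2e} candidates, "
          f"~{seconds:.2e} s sequentially")
```

The 56-bit row is not arbitrary: it is the key size of DES, the cipher targeted by the molecular attack proposed by Boneh and colleagues [6].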

These architectural features make DNA computers potentially efficient at brute-forcing the solution of a set of computational challenges known as “NP problems”. For NP problems, the time needed to find a solution with known algorithms grows exponentially with the size of the problem, which can make them impractical to solve with a conventional computer. The difficulty of solving NP problems is at the core of the encryption protocols used, for example, in digital transactions and cryptocurrency mining. It is easy to see that a tool allowing those problems to be solved quickly would pose a serious security threat. But although DNA computers have been designed to break cryptographic protocols [6,7], they are not currently a serious threat, due to a range of practical physical issues:

-The first problem comes from the fact that the frequently cited information-density calculations assume an essentially solid density of DNA – but such conditions are impractical for computation with any biomolecular architecture.

-Another issue with massively parallel molecular computation is that only a small number of “correct” solution molecules may be produced in a test tube containing astronomical quantities of unwanted molecules. Recovering and reading this output is an enormous challenge, as it requires picking these needles from the haystack using molecular interactions alone.

-Related to the previous issue is the question of how to make the output readable and useful. The molecular interactions used to separate and read the output are never completely specific. This results in an imperfect separation of the pools of molecules, as well as a significant chance of reading an incorrect output as correct. This misreading problem becomes more prevalent as the differences between correct and incorrect outputs become smaller.

-Another substantial issue is that most DNA-computing architectures are not reprogrammable: once the program has been run, it cannot be run again. One must design new DNA strands and order them from a supplier for each new instance of a problem, making the process of iterating an algorithm over a new set of inputs time- and resource-consuming. Some recent work on out-of-equilibrium chemical reaction networks aims to introduce reprogrammability [8], but it is still very far from the versatility of any personal computer.

-The last issue that makes biomolecular computers impractical for breaking cryptographic protocols is that, although algorithmic operations are easy to implement in biomolecular systems, translating those algorithmic operations into arithmetic operations is hard in itself (as shown by Qian and Winfree in their paper encoding an algorithm for calculating the square root of a four-bit integer using a 130-strand reaction network [9]).
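To see why the separation and misreading issues above bite so hard, consider the absolute numbers involved. Even a tiny per-molecule error rate produces an enormous count of false positives when incorrect molecules vastly outnumber correct ones; every figure below is an illustrative assumption, not a measured value.

```python
# Illustrative assumption: a handful of true solution strands in an
# astronomical background, filtered by an imperfectly specific interaction.
correct = 100            # assumed number of true solution strands
incorrect = 10**15       # assumed background of non-solution strands
error_rate = 1e-6        # assumed chance a wrong strand passes the filter

false_positives = incorrect * error_rate
print(f"true positives:  {correct}")
print(f"false positives: {false_positives:.0e}")  # dwarfs the 100 true strands
```

Under these assumptions the false positives outnumber the genuine solutions by seven orders of magnitude, which is why readout, not raw parallelism, is often the bottleneck.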
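The square-root circuit mentioned in the last point computes the floor of the square root of a four-bit input. A conventional-code sketch of that same function (not the strand-displacement implementation itself) makes plain how little arithmetic those 130 strands encode:

```python
def floor_sqrt_4bit(n):
    """Floor of the square root of a 4-bit integer (0-15), found by
    counting up until the next square would overshoot. This is the same
    input/output behaviour as the Qian-Winfree DNA circuit, expressed in
    conventional code rather than strand-displacement logic."""
    assert 0 <= n <= 15
    root = 0
    while (root + 1) ** 2 <= n:
        root += 1
    return root

for n in (0, 4, 9, 15):
    print(n, "->", floor_sqrt_4bit(n))  # 0->0, 4->2, 9->3, 15->3
```

That a one-line loop on silicon required 130 interacting DNA strands illustrates the gap between implementing algorithmic steps and implementing arithmetic in molecules.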

Despite all these issues, valuable lessons can be drawn. The analogy with the theoretical Turing machine underlies every conceivable computer to some extent, but the physical characteristics of the substrate in which the machine is implemented deeply affect the efficiency of any program it is given. As we have seen, meaningful analogies can be drawn between life’s molecular machinery and a Turing machine. But the similarities between life’s molecular machinery and network computing are even more relevant [10,11]. Life does indeed perform certain computational tasks – including inference, learning and signal processing – and it is incredibly efficient at doing so compared to man-made computers; these remarkable capabilities are observed at all evolutionary levels. It is not hard to conclude that a deeper understanding of the underlying biophysics of genuine biomolecular computation would yield insights improving the design and capabilities of programmable biomolecular systems, which by their very nature can directly interact with living systems and have applications in fields like synthetic biology and biomedicine.

1. Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London mathematical society, 2(1), 230-265.

2. Shapiro, E., & Benenson, Y. (2006). Bringing DNA computers to life. Scientific American, 294(5), 44-51.

3. Adleman, L. M. (1994). Molecular computation of solutions to combinatorial problems. Science, 266(5187), 1021-1024.

4. Benenson, Y. (2012). Biomolecular computing systems: principles, progress and potential. Nature Reviews Genetics, 13(7), 455-468.

5. Parker, J. (2003). Computing with DNA. EMBO reports, 4(1), 7-10.

6. Boneh, D., Dunworth, C., & Lipton, R. J. (1995). Breaking DES using a molecular computer. DNA Based Computers, 27, 37-66.

7. Chang, W. L., Guo, M., & Ho, M. H. (2005). Fast parallel molecular algorithms for DNA-based computation: factoring integers. IEEE Transactions on Nanobioscience, 4(2), 149-163.

8. Del Rosso, N. V., Hews, S., Spector, L., & Derr, N. D. (2017). A Molecular Circuit Regenerator to Implement Iterative Strand Displacement Operations. Angewandte Chemie International Edition, 129(16), 4514-4517.

9. Qian, L., & Winfree, E. (2011). Scaling up digital circuit computation with DNA strand displacement cascades. Science, 332(6034), 1196-1201.

10. Bray, D. (1995). Protein molecules as computational elements in living cells. Nature, 376(6538), 307-312.

11. Kim, J., Hopfield, J., & Winfree, E. (2005). Neural network computation by in vitro transcriptional circuits. In Advances in neural information processing systems (pp. 681-688).
