Introduction to Reversible Computing (Chapman & Hall/CRC Computational Science)
Kalyan S. Perumalla
Few books comprehensively cover the software and programming aspects of reversible computing. Filling this gap, Introduction to Reversible Computing offers an expanded view of the field that includes the traditional energy-motivated hardware viewpoint as well as the emerging application-motivated software approach.
Collecting scattered knowledge into one coherent account, the book provides a compendium of both classical and recently developed results on reversible computing. It explores up-and-coming theories, techniques, and tools for the application of reversible computing: the logical next step in the evolution of computing systems.
The book covers theory, hardware and software aspects, fundamental limits, complexity analyses, practical algorithms, compilers, efficiency improvement techniques, and application areas. The topics span several areas of computer science, including high-performance computing, parallel/distributed systems, computational theory, compilers, power-aware computing, and supercomputing.
The book presents sufficient material for newcomers to easily get started. It provides citations to original articles on seminal results so that readers can consult the corresponding publications in the literature. Pointers to additional resources are included for more advanced topics. For those already familiar with a certain topic within reversible computing, the book can serve as a one-stop reference to other topics in the field.
Fault tolerance is the ability to gracefully continue execution of an application despite transient faults or failures of system components at runtime. It is an extremely difficult capability to achieve in parallel systems, particularly when the number of components in the system is very large. Simplistic schemes rely on periodically saving the entire application state to persistent storage and restoring this state at all processors to recover from failures. However, such schemes are woefully inefficient at large scale.
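The periodic full-state scheme described above can be sketched as follows. This is a minimal illustration under our own assumptions; the class and method names are ours, not from the book.

```python
import copy

# Minimal sketch of periodic full-state checkpointing (illustrative
# names). The entire application state is saved every `interval`
# steps; on failure, execution resumes from the latest snapshot.

class Checkpointer:
    def __init__(self, interval):
        self.interval = interval
        self.snapshot = None
        self.snapshot_step = 0

    def maybe_save(self, step, state):
        # Save the ENTIRE state at fixed intervals; the cost of this
        # full copy is what makes the scheme inefficient at scale.
        if step % self.interval == 0:
            self.snapshot = copy.deepcopy(state)
            self.snapshot_step = step

    def restore(self):
        # Roll back to the most recent snapshot after a failure.
        return self.snapshot_step, copy.deepcopy(self.snapshot)

state = {"x": 0}
cp = Checkpointer(interval=2)
for step in range(5):
    cp.maybe_save(step, state)
    state["x"] += 1            # simulated computation
step, state = cp.restore()     # recover after a simulated failure
```

Note that every processor must pay the full-copy cost at each checkpoint, regardless of how little state actually changed since the last one.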
the other. In the initial conditions of interest, N1 ≪ N. Due to the symmetrical and complementary nature of the two urns, the model can focus on tracking the occupancy of only one of the urns, say, the first urn. In each step, an N-way random number, RN ∈ [1, N], is thrown, and the ball numbered RN is moved from its current urn to the other urn. As the number of steps increases, the number of balls in the first urn changes. The function of interest is the probability, Pj(t), 1 ≤ j ≤ N, that the first urn contains j balls after t steps.
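The urn dynamics above lend themselves to a direct Monte Carlo estimate of Pj(t). The sketch below is our own illustration (the function and variable names are assumptions, not from the book): each trial starts with N1 balls in the first urn, moves a uniformly chosen ball at every step, and the occupancy of the first urn after t steps is tallied across trials.

```python
import random

# Monte Carlo sketch of the two-urn model: N balls split between two
# urns; each step moves the ball numbered RN to the other urn. We
# estimate P_j(t), the probability that the first urn holds j balls
# after `steps` steps.

def simulate_urns(N, N1, steps, trials, seed=0):
    rng = random.Random(seed)
    counts = [0] * (N + 1)  # counts[j]: trials ending with j balls in urn 1
    for _ in range(trials):
        in_urn1 = [True] * N1 + [False] * (N - N1)
        for _ in range(steps):
            rn = rng.randrange(N)           # the N-way random number RN
            in_urn1[rn] = not in_urn1[rn]   # move ball RN to the other urn
        counts[sum(in_urn1)] += 1
    return [c / trials for c in counts]     # estimated P_j(steps)

probs = simulate_urns(N=10, N1=1, steps=100, trials=2000)
```

With N1 ≪ N, the occupancy of the first urn drifts toward the equilibrium around N/2, which the estimated distribution reflects as t grows.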
irreversible programs that can be augmented with runtime instrumentation via compiler techniques to enable simulation of their execution for reversibility. It is also possible to identify a reversible subset of an irreversible language, yielding a sub-language that is reversible by construction, although the expressive power of the subset may significantly limit the types of programs that can be written. Several reversible languages have also been defined over the past few decades, with varying levels of
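One way to picture such a reversible subset is to restrict programs to constructive updates, each of which has a mechanical inverse. The toy interpreter below is our own sketch, not a language from the book: destructive assignment (x = e) is deliberately absent because it discards the previous value of x.

```python
# Sketch of a tiny reversible sub-language: programs are sequences of
# constructive updates, and reversal means inverting each statement
# and reversing the statement order.

FORWARD = {
    "add": lambda env, x, v: env.__setitem__(x, env[x] + v),
    "sub": lambda env, x, v: env.__setitem__(x, env[x] - v),
    "xor": lambda env, x, v: env.__setitem__(x, env[x] ^ v),
}
INVERSE = {"add": "sub", "sub": "add", "xor": "xor"}  # xor is self-inverse

def run(program, env):
    for op, x, v in program:
        FORWARD[op](env, x, v)

def invert(program):
    # Invert each statement and reverse the statement order.
    return [(INVERSE[op], x, v) for op, x, v in reversed(program)]

env = {"a": 3, "b": 5}
prog = [("add", "a", 10), ("xor", "b", 7), ("sub", "a", 2)]
run(prog, env)          # forward execution
run(invert(prog), env)  # reverse execution restores the state
```

The limited expressive power mentioned above is visible here: useful constructs such as arbitrary assignment or data-dependent loops do not fit this subset without additional machinery.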
pre-modification copy ∆i of the set of modified variables, taken relative to the initial value of the state. This is different from incremental checkpointing in that the snapshot of the system before action ai is obtained by taking the difference from the initial state, as opposed to the value after the most recent action. Thus, Li = S ⊕ ∆i, where ∆i is the set of differences between S and the value after action ai is applied, and the operation S ⊕ ∆ indicates applying a change set ∆ to S.
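The difference-to-initial-state scheme can be sketched concretely. In this illustration (names are ours), each log entry ∆i records only the variables whose values after action ai differ from the initial state S, so any Li can be reconstructed from S and ∆i alone, without replaying intermediate actions.

```python
# Sketch of difference-to-initial-state logging: Delta_i holds the
# variables whose value after action a_i differs from the INITIAL
# state S, and L_i = S (+) Delta_i overlays that change set on S.

def diff_from_initial(initial, current):
    # Delta: variables whose current value differs from the initial one.
    return {k: v for k, v in current.items() if initial[k] != v}

def apply_delta(initial, delta):
    # S (+) Delta: start from the initial state and apply the change set.
    restored = dict(initial)
    restored.update(delta)
    return restored

S = {"x": 0, "y": 0, "z": 0}
state = dict(S)
log = []
for var, val in [("x", 1), ("y", 2), ("x", 5)]:
    state[var] = val
    log.append(diff_from_initial(S, state))  # Delta_i after action a_i

# Reconstruct the state after the second action from S and Delta_2 alone.
L2 = apply_delta(S, log[1])
```

Contrast this with incremental checkpointing, where each delta is taken against the previous action's state and restoration requires composing a chain of deltas rather than a single application.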
x. Similarly, RESTORE BITS(x,n) restores only n bits into the integral lvalue x. The bit tape is primarily used for conditional statements (to store the truth value) and for destructive assignments to bit variables, while the byte tape is used for other statements.

10.2.4 Compilation Phases

The compilation is divided into three phases, as illustrated in Figure 10.2. The phases delineate three distinct functionalities of the compiler. The normalization phase is needed to reduce the
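The bit-tape behavior of SAVE BITS and RESTORE BITS described above can be sketched as follows. This is an illustrative model under our own assumptions (class name, bit ordering, and the return-value convention for restore are ours): the bit tape stores individual saved bits, while a separate byte tape would hold whole values for other statements.

```python
# Illustrative sketch of the two-tape layout: a bit tape for truth
# values of conditionals and destructive bit assignments, and a byte
# tape for whole values saved by other statements.

class Tapes:
    def __init__(self):
        self.bits = []    # bit tape: one entry per saved bit
        self.bytes = []   # byte tape: whole saved values

    def save_bits(self, x, n):
        # Model of SAVE BITS(x, n): push the low n bits of x, LSB first.
        for i in range(n):
            self.bits.append((x >> i) & 1)

    def restore_bits(self, n):
        # Model of RESTORE BITS(x, n): pop n bits off the tape and
        # rebuild the integral value they encode.
        x = 0
        for i in reversed(range(n)):
            x |= self.bits.pop() << i
        return x

tapes = Tapes()
tapes.save_bits(0b1011, 4)   # e.g., a destructive 4-bit assignment
value = tapes.restore_bits(4)  # reverse execution recovers the bits
```

Storing single bits on a dedicated tape keeps the trace for conditionals compact: each branch decision costs one bit rather than a full machine word on the byte tape.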