
Transactional memories: A new abstraction for parallel processing
Author(s) -
Joseph H. Fasel,
O. Lubeck,
Divyakant Agrawal,
John Bruno,
Amr El Abbadi
Publication year - 1997
Language(s) - English
Resource type - Reports
DOI - 10.2172/563279
Subject(s) - computer science , transactional memory , software transactional memory , consistency model , cache coherence , programming language , parallel computing , programming paradigm , transaction processing , thread (computing) , abstraction , programmer , distributed computing , operating system , database transaction , correctness , cpu cache , cache
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Current distributed-memory multiprocessor computer systems make the development of parallel programs difficult. From a programmer's perspective, it would be most desirable if the underlying hardware and software could provide the programming abstraction commonly referred to as sequential consistency: a single address space shared by multiple threads. Enforcing sequential consistency, however, limits opportunities for architectural and operating-system performance optimizations, leading to poor performance. Recently, Herlihy and Moss introduced a new abstraction for parallel programming called transactional memory. The programming model is shared memory with multiple threads; however, data consistency is obtained through the use of transactions rather than through mutual exclusion based on locking. The transactional approach permits the underlying system to exploit the potential parallelism in transaction processing. The authors explore the feasibility of designing parallel programs using the transaction paradigm for data consistency and a barrier type of thread synchronization.
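To illustrate the abort-and-retry style of data consistency the abstract describes, here is a minimal sketch in Python of an optimistic, transaction-like update on a shared cell. This is an assumption-laden illustration, not the report's (or Herlihy and Moss's) design: their proposal was a hardware mechanism layered on cache coherence, whereas this sketch emulates the commit/abort behavior in software with a version counter; the names `VersionedCell` and `try_commit` are invented for the example.

```python
# Illustrative sketch only: software emulation of transaction-style updates.
# Herlihy and Moss's transactional memory was a hardware mechanism; here a
# version counter detects conflicts, and a failed commit is retried rather
# than blocking on a lock held across the whole critical section.
import threading

class VersionedCell:
    """A shared cell updated via optimistic transactions rather than locking."""
    def __init__(self, value=0):
        # The internal lock only publishes a commit atomically; readers and
        # conflicting writers never block on each other's critical sections.
        self._lock = threading.Lock()
        self.value = value
        self.version = 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def try_commit(self, new_value, expected_version):
        # Commit succeeds only if no other transaction committed in between.
        with self._lock:
            if self.version != expected_version:
                return False  # conflict detected: caller aborts and retries
            self.value = new_value
            self.version += 1
            return True

def transactional_increment(cell, n):
    for _ in range(n):
        while True:  # abort-and-retry loop on commit failure
            value, version = cell.read()
            if cell.try_commit(value + 1, version):
                break

cell = VersionedCell()
threads = [threading.Thread(target=transactional_increment, args=(cell, 1000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)  # 4000: no increments are lost despite conflicts
```

The retry loop is the key contrast with lock-based mutual exclusion: conflicting transactions abort and re-execute instead of serializing behind a lock, which is what lets the underlying system exploit parallelism when conflicts are rare.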