Towards a language for concurrent processes
Author(s) - Harland David M.
Publication year - 1985
Publication title - Software: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.437
H-Index - 70
eISSN - 1097-024X
pISSN - 0038-0644
DOI - 10.1002/spe.4380150903
Subject(s) - computer science , concurrency , asynchronous communication , distributed computing , parallelism (grammar) , communicating sequential processes , synchronizing , message passing , inter process communication , synchronization (alternating current) , premise , exploit , process (computing) , programming language , parallel computing , computer network , semantics (computer science) , philosophy , computer security , operational semantics , transmission (telecommunications) , telecommunications , channel (broadcasting) , linguistics
In this paper we shall discuss concurrency in programming languages, with a view towards designing a process‐oriented language which, by its inherent parallelism, is well suited to exploit the forthcoming generation of distributed processor networks. We shall start by discussing the traditional approach towards managing concurrency, with ‘monitors’ co‐ordinating the interactions of ‘processes’, and shall demonstrate that this approach actually degrades concurrency by imposing sequentiality during interactions because it is based on the premise of co‐ordinating secure access to shared resources. As a tool for interprocess communication it is felt that the ‘monitor’ is too far removed from the abstract nature of the problem, and so, as a purely engineering solution, it imposes too broad and too prolonged an exclusion to be acceptable in general. Instead we turn to a simpler, and ultimately more powerful, notion of ‘message passing’ between parallel processes. We shall show how, if the message system is polymorphic, any data value, however large it is, can pass freely between any pair of processes. By making the processes themselves values in the language we shall discover that message networks can come into being dynamically, and tailor themselves to their applications as and when necessary by ‘short‐circuiting’ extensive communications paths. We shall also see how, if the message system is inherently asynchronous, the degree of parallelism in a system can be enhanced, not degraded, as more and more elaborate communications paths develop, the only sequentiality in the system as a whole being imposed by synchronizing processes, not by the message passing system itself. After discussing the various built‐in system facilities that permit processes to dynamically find out about and study one another, thus permitting processes to set up and thereafter supervise whole subsystems, we shall round off by discussing the advantages of introducing the machines themselves into the language, making it possible for processes to become aware of, and then ‘migrate’ within, the topological structure of a multi‐processor distributed network, moving closer to their application, or just to a less‐loaded processor, as the need arises. To conclude we shall contrast this new‐style process‐oriented language with various existing programming languages which have experimented with concurrency, either implicitly or explicitly, in order to see if, and if so how, this new style is any simpler and more powerful than its precursors.
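
The abstract's central ideas, asynchronous message passing, processes and channels as first-class values, and dynamically formed networks that can 'short-circuit' a long communications path, can be sketched outside the paper's own notation. Below is a minimal Go sketch, not the language the paper proposes; the identifiers (request, broker, worker) and the buffered-channel asynchrony are illustrative assumptions, not details taken from the paper.

    // A minimal Go sketch illustrating asynchronous message passing between
    // concurrent processes, with channels carried as first-class values so the
    // network can 'short-circuit' an intermediary. Names are illustrative only.
    package main

    import "fmt"

    // request carries a first-class reply channel, so whichever process finally
    // handles it can answer the original client directly.
    type request struct {
        payload int
        reply   chan int
    }

    // worker squares each payload and replies on the channel embedded in the message.
    func worker(in <-chan request) {
        for req := range in {
            req.reply <- req.payload * req.payload
        }
    }

    // broker merely forwards requests; because the reply channel travels with the
    // message, the broker never sits on the return path.
    func broker(in <-chan request, out chan<- request) {
        for req := range in {
            out <- req
        }
    }

    func main() {
        toBroker := make(chan request, 8) // buffered: sends do not block the sender
        toWorker := make(chan request, 8)

        go worker(toWorker)
        go broker(toBroker, toWorker)

        reply := make(chan int)
        for i := 1; i <= 3; i++ {
            toBroker <- request{payload: i, reply: reply} // asynchronous send
        }
        for i := 0; i < 3; i++ {
            fmt.Println(<-reply) // replies arrive directly from the worker
        }
    }

Because the reply channel is itself a value inside the message, the worker answers the client directly and the broker never becomes a sequential bottleneck on the return path; any remaining sequentiality comes from processes that choose to synchronize, not from the message system, which is the behaviour the abstract argues for.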
