Abstract: The ever-increasing demand for computational power in many areas of science and engineering is an undeniable fact. In the first few generations of computer systems, increased computational speed was obtained primarily through advances in technology that allowed a higher degree of integration and faster circuits. As technology approaches certain physical limitations, parallelism appears to be the most promising alternative for satisfying the ever-increasing demand for computational speed. Through the replication of computational elements interconnected in some regular structure, programs can execute on multiple processors and access multiple memory banks; in other words, computations (and memory transfers) can be performed in parallel. Parallelism is an old concept and was applied to a limited degree in some early computer systems, first in the form of I/O channels that allowed overlapped computation and I/O, and later in the form of multiple functional units that could operate in parallel. Technology constraints and software limitations, however, did not make it feasible to design and build parallel machines until the late 1970s.
Publication Year: 1988
Publication Date: 1988-01-01
Language: en
Type: book-chapter
Indexed In: ['crossref']