Uniform memory access
Uniform memory access (UMA) is a shared-memory architecture used in parallel computers. All of the processors in the UMA model share the physical memory uniformly: in a UMA architecture, the access time to a memory location is independent of which processor makes the request or which memory chip contains the data. Uniform memory access architectures are often contrasted with non-uniform memory access (NUMA) architectures, in which memory access time depends on the memory location relative to the requesting processor and each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and time-sharing applications by multiple users; it can also be used to speed up the execution of a single large program in time-critical applications.[1]
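The uniformity can be made concrete with a small measurement. The following is a minimal sketch (assuming Linux, GCC, and POSIX threads; compile with `gcc -O2 -pthread`; the core count and array size are arbitrary example values) that pins a probe thread to each core in turn and times reads of one shared array. On a UMA machine every core should report roughly the same latency, whereas on a NUMA machine cores far from the memory node holding the array would report noticeably higher figures.

```c
/*
 * Minimal sketch (assumes Linux, GCC, and POSIX threads).  A probe thread is
 * pinned to each core in turn and times reads of one shared array.  On a UMA
 * machine every core should report roughly the same latency; on a NUMA
 * machine, cores far from the node holding the array report higher figures.
 * The core count and array size below are arbitrary example values.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (32u * 1024 * 1024)       /* ~256 MB of longs, shared by all probes */
static long *shared;

static void *probe(void *arg) {
    int core = (int)(long)arg;

    /* Pin this thread to one core so we know which CPU issues the loads. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile long sum = 0;
    for (size_t i = 0; i < ELEMS; i += 8)   /* stride to reduce cache reuse */
        sum += shared[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("core %d: %.1f ms\n", core, ms);
    return NULL;
}

int main(void) {
    shared = malloc(ELEMS * sizeof(long));
    for (size_t i = 0; i < ELEMS; ++i)
        shared[i] = (long)i;

    int cores = 4;                          /* adjust to the machine at hand */
    for (int c = 0; c < cores; ++c) {       /* run the probes one at a time */
        pthread_t t;
        pthread_create(&t, NULL, probe, (void *)(long)c);
        pthread_join(t, NULL);
    }
    free(shared);
    return 0;
}
```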
Types of architectures
There are three types of UMA architectures:
- UMA using bus-based symmetric multiprocessing (SMP) architectures;
- UMA using crossbar switches;
- UMA using multistage interconnection networks (see the routing sketch below).
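As an illustration of the last option, the sketch below simulates destination-tag routing through an omega network, a classic multistage interconnection network built from log2(N) stages of 2x2 switches with a perfect-shuffle wiring before each stage. The network size (N = 8) and the printed trace are arbitrary example choices; the routing rule itself (use bit k of the destination address, most significant bit first, to pick the upper or lower switch output at stage k) is the standard one.

```c
/*
 * Sketch of destination-tag routing in an omega network, a classic
 * multistage interconnection network.  N = 8 processors/memory modules
 * gives log2(N) = 3 stages of 2x2 switches, with a perfect-shuffle wiring
 * before each stage.  At stage k a switch forwards the request to its
 * upper output if bit k of the destination (MSB first) is 0, and to its
 * lower output if it is 1.
 */
#include <stdio.h>

#define N 8            /* inputs/outputs (must be a power of two) */
#define STAGES 3       /* log2(N) */

/* Perfect shuffle: rotate the log2(N)-bit line address left by one bit. */
static unsigned shuffle(unsigned a) {
    return ((a << 1) | (a >> (STAGES - 1))) & (N - 1);
}

static void route(unsigned src, unsigned dst) {
    unsigned line = src;                        /* wire currently carrying the request */
    printf("P%u -> M%u:", src, dst);
    for (int stage = 0; stage < STAGES; ++stage) {
        line = shuffle(line);                   /* shuffle wiring into this stage */
        unsigned bit = (dst >> (STAGES - 1 - stage)) & 1;
        line = (line & ~1u) | bit;              /* switch output picked by the dst bit */
        printf("  stage %d -> line %u", stage, line);
    }
    printf("  (arrives at M%u)\n", line);
}

int main(void) {
    route(2, 6);
    route(5, 1);
    route(0, 7);
    return 0;
}
```

Because each switch setting depends on only one destination bit, every stage can make its routing decision locally without a central controller, which is what makes this style of network practical for connecting many processors to many memory banks.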
hUMA
In April 2013, the term hUMA (heterogeneous uniform memory access) began to appear in AMD promotional material to refer to the CPU and GPU sharing the same system memory via cache-coherent views. Advantages include an easier programming model and less copying of data between separate memory pools.[2]
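One concrete way to use such a shared pool from application code is shared virtual memory (SVM) as standardized in OpenCL 2.0, which hardware of that generation exposed. The sketch below is a minimal illustration under that assumption (an OpenCL 2.0 platform with fine-grained SVM support), not AMD's hUMA interface itself: the host writes a buffer through an ordinary pointer, and a kernel could operate on the very same pointer, with no explicit copy between separate host and device memory pools.

```c
/*
 * Illustrative sketch only: one standardized way to program a cache-coherent
 * CPU/GPU memory pool is OpenCL 2.0 shared virtual memory (SVM), assumed to
 * be available here (fine-grained SVM is required for the direct host writes
 * below).  This is not AMD's hUMA interface itself.
 */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* One allocation, addressable through the same pointer by both the host
     * CPU and the GPU -- no clEnqueueWriteBuffer/ReadBuffer copies needed. */
    float *data = (float *)clSVMAlloc(ctx,
            CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
            1024 * sizeof(float), 0);
    if (data == NULL) {
        fprintf(stderr, "fine-grained SVM not available on this device\n");
        clReleaseContext(ctx);
        return 1;
    }

    for (int i = 0; i < 1024; ++i)
        data[i] = (float)i;             /* written directly by the CPU */

    /* A kernel would receive the same pointer via
     * clSetKernelArgSVMPointer(kernel, 0, data) and read/write it in place. */

    clSVMFree(ctx, data);
    clReleaseContext(ctx);
    return 0;
}
```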
References
- ↑ Kai Hwang, Advanced Computer Architecture, ISBN 0-07-113342-9.
- ↑ Peter Bright, "AMD's 'heterogeneous Uniform Memory Access' coming this year in Kaveri", Ars Technica, April 30, 2013.
Original source: https://en.wikipedia.org/wiki/Uniform_memory_access