numa(5) numa(5)
NAME
numa - non uniform memory access

DESCRIPTION
This document briefly describes the IRIX NUMA memory management
subsystem and provides a top-level index of the NUMA management tools
available on Origin systems.
Topology
The command topology(1) can be used to get a quick view of the topology
of an Origin system. This command produces output that lists processors,
nodes, routers, and the links that connect all these devices. For more
information, see hinv(1) and hwgraph(4).
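For a quick check from the shell, both commands can be run directly;
the output (not shown here) depends on the system configuration:
$ topology          # processors, nodes, routers, and the links between them
$ hinv -v           # verbose hardware inventory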
Name Spaces for Nodes
There are several related name spaces for nodes. The main name space is
that provided by the hardware graph, where a name is a string of
characters in the form of a path that both identifies a node and defines
its location relative to the overall hardware.
$ find /hw -name node -print
/hw/module/1/slot/n1/node
/hw/module/1/slot/n2/node
/hw/module/1/slot/n3/node
/hw/module/1/slot/n4/node
/hw/module/2/slot/n1/node
/hw/module/2/slot/n2/node
/hw/module/2/slot/n3/node
/hw/module/2/slot/n4/node
Another highly visible name space for nodes is the Compact Node
Identifiers. This space is just a compact enumeration of the nodes
currently available in the system, from 0 to NUMNODES-1. These numbers
are known as cnodeids and their relation to path names is defined by the
hardware graph directory /hw/nodenum.
$ cd /hw/nodenum
$ ls -l
total 0
lrw------- 1 root sys 26 Jul 10 13:36 0 -> /hw/module/1/slot/n1/node
lrw------- 1 root sys 26 Jul 10 13:36 1 -> /hw/module/1/slot/n2/node
lrw------- 1 root sys 26 Jul 10 13:36 2 -> /hw/module/1/slot/n3/node
lrw------- 1 root sys 26 Jul 10 13:36 3 -> /hw/module/1/slot/n4/node
lrw------- 1 root sys 26 Jul 10 13:36 4 -> /hw/module/2/slot/n1/node
lrw------- 1 root sys 26 Jul 10 13:36 5 -> /hw/module/2/slot/n2/node
lrw------- 1 root sys 26 Jul 10 13:36 6 -> /hw/module/2/slot/n3/node
lrw------- 1 root sys 26 Jul 10 13:36 7 -> /hw/module/2/slot/n4/node
The relation between cnodeids and node path names may change across
reboots.
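Because of this, a script should resolve a cnodeid through /hw/nodenum
at run time rather than hard-coding a node path. For example, using
cnodeid 3 from the listing above:
$ ls -l /hw/nodenum/3 | awk '{ print $NF }'
/hw/module/1/slot/n4/node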
There are two additional name spaces used internally by the operating
system: the NUMA Address Space Identifiers, or nasids, which define the
section of the physical memory space covered by each node; and the
Persistent Node Identifiers, which identify hardware components. For a
more detailed description, see the Origin Technical Report.
Name Spaces for Processors
The main name space for processors is provided by the hardware graph,
where a name is a string of characters in the form of a path that both
identifies a processor (CPU) and defines its location relative to the
overall hardware.
$ find /hw -name "[ab]" -print
/hw/module/1/slot/n1/node/cpu/a
/hw/module/1/slot/n1/node/cpu/b
/hw/module/1/slot/n2/node/cpu/a
/hw/module/1/slot/n2/node/cpu/b
/hw/module/1/slot/n3/node/cpu/a
/hw/module/1/slot/n3/node/cpu/b
/hw/module/1/slot/n4/node/cpu/a
/hw/module/1/slot/n4/node/cpu/b
/hw/module/2/slot/n1/node/cpu/a
/hw/module/2/slot/n1/node/cpu/b
/hw/module/2/slot/n2/node/cpu/a
/hw/module/2/slot/n2/node/cpu/b
/hw/module/2/slot/n3/node/cpu/a
/hw/module/2/slot/n3/node/cpu/b
/hw/module/2/slot/n4/node/cpu/a
/hw/module/2/slot/n4/node/cpu/b
The listing above shows all processors in a system, and each path name
also identifies the node the processor is connected to.
Another name space for processors is the Compact Processor Identifiers,
or simply cpuids. This space is just a compact enumeration of the
processors currently available in the system, from 0 to NUMCPUS-1. Their
relation to path names is defined by the hardware graph directory
/hw/cpunum.
$ cd /hw/cpunum
$ ls -l
total 0
lrw------- 1 root sys 32 Jul 10 14:53 0 -> /hw/module/1/slot/n1/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 1 -> /hw/module/1/slot/n1/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 10 -> /hw/module/2/slot/n2/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 11 -> /hw/module/2/slot/n2/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 12 -> /hw/module/2/slot/n3/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 13 -> /hw/module/2/slot/n3/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 14 -> /hw/module/2/slot/n4/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 15 -> /hw/module/2/slot/n4/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 2 -> /hw/module/1/slot/n2/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 3 -> /hw/module/1/slot/n2/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 4 -> /hw/module/1/slot/n3/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 5 -> /hw/module/1/slot/n3/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 6 -> /hw/module/1/slot/n4/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 7 -> /hw/module/1/slot/n4/node/cpu/b
lrw------- 1 root sys 32 Jul 10 14:53 8 -> /hw/module/2/slot/n1/node/cpu/a
lrw------- 1 root sys 32 Jul 10 14:53 9 -> /hw/module/2/slot/n1/node/cpu/b
The relation between cpuids and cpu path names may change across reboots.
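The same run-time resolution applies to processors. A minimal sh sketch
that prints the current cpuid-to-path mapping (output omitted, one line
per processor):
$ for c in /hw/cpunum/*
> do
>     path=`ls -l $c | awk '{ print $NF }'`
>     echo "$c -> $path"
> done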
Locality Management
IRIX provides a rich set of features for managing memory locality, both
automatically and manually. Automatic memory locality management is
adaptive, adjusting placement in response to observed memory behavior;
manual tools are driven by hints provided by users, compilers, or
special high-level memory placement tools.
Automatic Memory Locality Management
Automatic memory locality management in IRIX is based on dynamic memory
migration (see migration(5)), dynamic memory replication (see
replication(5)), and an initial placement policy based on a First Touch
Placement Algorithm. System administrators can tune the aggressiveness
of both migration and replication for a system using the numa tunables
file (/var/sysgen/mtune/numa) or the command sn(1).
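The tunables file is plain text, so the current defaults can be
inspected directly; changes are made with sn(1) or the standard kernel
tunable tools (see systune(1M) and mtune(4)):
$ cat /var/sysgen/mtune/numa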
User Driven Memory Locality Management
IRIX provides a Memory Management Control Interface (mmci(5)) to allow
users control over memory system behavior. This interface covers both
NUMA and generic memory system control. For NUMA, the interface provides
control over placement, migration and replication policies; for generic
memory management, the interface provides control over page size and
paging algorithms.
MMCI can be used directly (mmci(5)), via compiler directives (mp(3F),
mp(3C)), or via high level placement tools (dplace(1), dplace(3),
dplace(5), dprof(1)).
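As an illustration of the high-level route, the sketch below runs a
program under dplace(1) with a placement file; the names myplacement
and a.out are placeholders, and the exact placement-file grammar is
defined in dplace(5):
$ cat myplacement
memories 2 in topology cube
threads 4
distribute threads across memories
$ dplace -place myplacement a.out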
Performance Monitoring
Users can monitor memory reference patterns produced by their
applications using the memory reference counters provided by the Origin
hardware (refcnt(5)). High level tools that simplify this procedure are
dlook(1) and dprof(1).
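Both tools take the program to be examined as an argument; for example
(a.out is a placeholder, and the available options are described in
dlook(1) and dprof(1)):
$ dlook a.out          # report which nodes hold a.out's pages
$ dprof a.out          # histogram of the memory references a.out makes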
Users can also monitor the r10k event counters (r10k_counters(5)). See
perfex(1), ssrun(1), speedshop(1).
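A common way to drive these counters is perfex(1), which runs a program
and reports event counts when it exits; a.out is again a placeholder:
$ perfex -a a.out      # multiplex over all R10000 event counters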
FILES
/var/sysgen/mtune/numa
SEE ALSO
migration(5), replication(5), mtune(4), refcnt(5), mmci(5), nstats(1),
sn(1), mld(3c), mldset(3c), pm(3c), migration(3c), pminfo(3c),
numa_view(1), dplace(1), dprof(1).