sys_attrs_vm(5)                                              Tru64 UNIX


NAME

       sys_attrs_vm - system attributes for the vm kernel subsystem

DESCRIPTION

       This reference page describes system  attributes  for  the
       Virtual Memory (vm) kernel subsystem. See sys_attrs(5) for
       general guidelines about changing system attributes.

       In the following list, an asterisk (*) precedes the  names
       of attributes whose values you can change while the system
       is running. Changes to values of  attributes  whose  names
       are  not preceded by an asterisk take effect only when the
       system is rebooted.

       anon_rss_enforce
               A value that sets no limit (0), a soft limit (1),
               or a hard limit (2) on the resident set size of a
               process.

              Default value: 0 (no limit)

               By default, applications can set a process-specific
               limit on the number of pages resident in memory by
               specifying the RLIMIT_RSS resource value in a
               setrlimit() call. However, applications are not
               required to limit the resident set size of a
               process, and there is no system-wide default limit.
               Therefore, the resident set size for a process is
               limited only by system memory restrictions. If the
               demand for memory exceeds the number of free pages,
               processes with large resident set sizes are likely
               candidates for swapping.

               The anon_rss_enforce attribute enables different
               levels of control over process resident set sizes
               and over when the pages that a process is using in
               anonymous memory are swapped out (blocking the
               process) during times of contention for free pages.
               Setting anon_rss_enforce to either 1 or 2 allows
               you to enforce a system-wide limit on resident set
               size for a process through the vm_rss_max_percent
               attribute. Setting anon_rss_enforce to 1 (a soft
               limit) enables finer control over process blocking
               and paging of anonymous memory by allowing you to
               set the vm_rss_block_target and
               vm_rss_wakeup_target attributes.

              When anon_rss_enforce is set to 2, the resident set
              size  for  a  process cannot exceed the system-wide
              limit set by the vm_rss_max_percent attribute or  a
              process-specific  limit,  if any, that is set by an
              application's setrlimit() call. When  the  resident
              set size exceeds either of these limits, the system
              starts to swap out pages of anonymous  memory  that
              the  process  is already using to keep the resident
              set size within the specified limit.

               When anon_rss_enforce is set to 1, any system-default
               and process-specific limits on resident set
               size still apply and will cause swapping to occur
               when exceeded. Otherwise, a process's pages are
               swapped out when the number of free pages is less
               than the value of the vm_rss_block_target
               attribute. The process remains blocked until the
               number of free pages reaches the value of the
               vm_rss_wakeup_target attribute.

               This attribute supports diskless systems and
               enables the pager to be more responsive. It
               functions under the following conditions: the
               diskless driver is loaded and configured; diskless
               system services are part of the Dataless Management
               Services (DMS), which enables systems to run the
               operating system from a server without requiring a
               local hard disk on each client system; and the
               server is serving a realtime preemptive kernel.

              Default value: 0 (off)

              Maximum value: 1 (on)

       enable_yellow_zone
               A value that enables (1) or disables (0) a soft
               guard page on the program stack. This allows an
               application to enter a signal handler on stack
               overflows, which otherwise would cause a core dump.

              Default value: 0 (disabled)

              The enable_yellow_zone attribute  is  intended  for
              use by systems programmers who are debugging kernel
              applications, such as device drivers.

       gh_chunks
               Number of 4-MB chunks of memory reserved at boot
               time for shared memory use. This memory cannot be
               used for any other purpose, nor can it be returned
               to the system or reclaimed when not being used. On
               NUMA systems, the gh_chunks attribute affects only
               the first Resource Affinity Domain (RAD). See the
               entry for rad_gh_regions for more information.

              Default value: 0 (chunks)  (The  zero  value  means
              that use of granularity hints is disabled.)

              Minimum value: 0

              Maximum value: 9,223,372,036,854,775,807

               The attributes associated with "granularity hints"
               (the gh_* attributes) are sometimes recommended
               specifically for database servers. Using segmented
               shared memory (SSM) is the alternative to using
               granularity hints and is recommended for most
               systems. Therefore, if the gh_chunks attribute is
               not set to zero, the ssm_threshold attribute of the
               ipc subsystem should be set to zero. If the
               gh_chunks attribute is set to zero, the
               ssm_threshold attribute should not be set to zero.

               The gh_* attributes, which include gh_chunks, are
               automatically disabled if the vm_bigpg_enabled
               attribute is set to 1. The vm_bigpg_enabled
               attribute turns on "big pages" memory allocation
               mode, which provides the advantages of using
               extended virtual page sizes without hard-wiring a
               specific amount of physical memory at boot time for
               this purpose.

              See  your  database  product  documentation and the
              System Configuration and  Tuning  manual  for  more
              information about using granularity hints or SSM.

               A value that enables (1) or disables (0) a failure
               return by the shmget() function under certain
               conditions when granularity hints is in use. When
               this attribute is set to 1, the shmget() function
               returns a failure if the requested segment size is
               larger than the value of the gh_min_seg_size
               attribute and there is insufficient memory
               allocated by the gh_chunks attribute to satisfy the
               request.

              Default value: 1 (enabled)

               A value that specifies whether the memory reserved
               for granularity hints is (1) or is not (0)
               allocated from low physical memory addresses.
               Allocation from low physical memory addresses is
               useful if you have an odd number of memory boards.

               Default value: 1 (allocation from low physical
               memory addresses)

               Specifies whether the memory reserved for
               granularity hints is (1) or is not (0) sorted.

              Default value: 0 (not sorted)

       gh_min_seg_size
               Size, in bytes, of the segment in which shared
               memory is allocated from the memory reserved for
               shared memory, according to the value of the
               gh_chunks attribute.

              Default value: 8,388,608 (bytes, or 8 MB)

              Minimum value: 0

              Maximum value: 9,223,372,036,854,775,807

       kernel_stack_pages
               Number of pages per thread that are used for stack
               space in kernel mode.

              Default value: 2 (pages per thread)

              Minimum value: 2

              Maximum value: 3

               It is strongly recommended that you not modify
               kernel_stack_pages unless directed to do so by your
               support representative. In the event of a "kernel
               stack not valid halt" error that is caused by a
               kernel stack overflow, increasing the value of
               kernel_stack_pages may work around the problem.
               This workaround will not succeed if the error
               occurred because the stack pointer became
               corrupted. In any event, a "kernel stack not valid
               halt" error is always an unexpected error that
               should be reported to your support representative
               for further investigation.

       kstack_free_target
               Number of freed kernel stack pages that are saved
               for reuse. Above this limit, freed kernel stack
               pages are immediately deallocated.

              Default value: 5 (pages)

              Minimum value: 0

              Maximum value: 2,147,483,647

               Deallocation of freed kernel stack pages ensures
               that memory is available for other operations.
               However, the processor time required to deallocate
               freed kernel stack pages has a negative performance
               impact that might be more noticeable on NUMA
               systems than on other systems. You can use the
               kstack_free_target value to make the most
               appropriate tradeoff between increased memory
               consumption and time spent by CPUs in a purge
               operation.

              You can change the value of the  kstack_free_target
              attribute while the system is running.

               A value that enables (1) or disables (0) caching of
               malloc memory on a per-CPU basis.

              Default value: 1

              Do  not  modify  the  default  setting   for   this
              attribute  unless  instructed  to  do so by support
              personnel or by patch kit documentation.

              Default value: 1 (on)

              Do  not  modify  the  default  setting   for   this
              attribute  unless  instructed  to  do so by support
              personnel or by patch kit documentation.

              Percentage of the secondary cache that is  reserved
              for  anonymous  (nonshared) memory.  Increasing the
              cache for anonymous memory reduces the cache  space
              available  for  file-backed  memory  (shared). This
              attribute is useful only for benchmarking.

              Default value: 0 (percent)

              Minimum value: 0

              Maximum value: 100

       rad_gh_regions[n]
               For NUMA systems, the granularity hints chunk size
               (in megabytes) for the Resource Affinity Domain
               (RAD) identified by n. There are 64 elements in the
               attribute array, rad_gh_regions[0] to
               rad_gh_regions[63]. Although all elements in the
               array are visible on all systems, the kernel uses
               only the element values corresponding to RADs that
               exist on the system. See the entry for the
               gh_chunks attribute for general information about
               granularity hints memory allocation.

               Default value: 0 (MB) (Granularity hints is
               disabled.)


               The array of rad_gh_regions[n] attributes replaces
               the gh_chunks attribute, which affects only the
               first, or (for non-NUMA systems) only, RAD
               (rad_gh_regions[0]) supported by the system.
               Although gh_chunks and the set of rad_gh_regions
               attributes both specify how much memory is
               manipulated through granularity hints memory
               allocation, the unit of measurement for the former
               is 4-megabyte chunks whereas the unit of
               measurement for the latter is megabytes. Therefore:

              rad_gh_regions[0] = gh_chunks * 4

              Setting    the   gh_chunks   attribute,   not   the
              rad_gh_regions[0] attribute, is recommended if  you
              want  to use granularity hints memory allocation on
              non-NUMA systems.

               The rad_gh_regions[n] attributes are automatically
               disabled if the vm_bigpg_enabled attribute is set
               to 1. The vm_bigpg_enabled attribute turns on "big
               pages" memory allocation mode, which provides the
               advantages of using extended virtual page sizes
               without hard-wiring a specific amount of physical
               memory at boot time for this purpose.

               A value that controls whether user text can or
               cannot be replicated on multiple CPUs of a NUMA
               system. When the value is 1, replication of user
               text is enabled. When the value is 0, replication
               of user text is disabled. This attribute is
               sometimes used by kernel developers when debugging
               software for NUMA systems; however, the attribute
               is not for general use. (The value is ignored on
               non-NUMA systems, and changing it to 0 on NUMA
               systems might degrade performance.)

              Default value: 1

              Do  not  change  the value of this attribute unless
              instructed to do so by support personnel  or  patch
              kit instructions.

               The device partitions reserved for swapping. This
               is a comma-separated string (for example,
               /dev/disk/dsk0g,/dev/disk/dsk0d) that can be up to
               256 bytes in length.

              Percentage of memory above which the  UBC  is  only
              borrowing memory from the virtual memory subsystem.
              Paging does not occur until the  UBC  has  returned
              all its borrowed pages.

              Default value: 20 (percent)

              Minimum value: 0

              Maximum value: 100

               Increasing this value may increase UBC cache
               effectiveness and improve throughput; however, the
               cost is a likely degradation of system response
               time during a low-memory condition.

              Obsolete; currently ignored by the software.

              Specifies the number of pages to consolidate before
              initiating an I/O operation.

              Default value: 32 (pages)

              Minimum value: 0

              Maximum value: 512

               The default value is appropriate for the vast
               majority of systems. Raising this value may improve
               I/O efficiency if relatively few users and
               applications write to only a few very large files,
               and there is a high probability that write
               operations affect contiguous pages. However, the
               cost is increased time spent in memory (and holding
               locks for a longer length of time) while the system
               determines what state pages are in and which ones
               can be clustered.

              A threshold value  that  forces  cleanup  of  AdvFS
              metadata  that  is  being  stored  in  the UBC. The
              default setting forces return of  pages  containing
              AdvFS  metadata  when  they reach 70 percent of the
              UBC.

              This is not a tuning parameter. Do not  modify  the
              default setting unless directed to do so by support
              personnel or patch kit instructions.

              Default value: 70 (percent)

              Minimum value: 0

              Maximum value: 100

               Number of I/O operations (per second) that the
               virtual memory subsystem performs when the number
               of dirty (modified) pages in the UBC exceeds the
               value of the vm_ubcdirtypercent attribute.

              Default value: 5 (operations per second)

              Minimum value: 0

              Maximum value: 2,147,483,647

              Maximum  percentage of physical memory that the UBC
              can use at one time.

              Default value: 100 (percent)

              Minimum value: 0

              Maximum value: 100

               It is recommended that this attribute be set to a
               value in the range of 70 to 80 percent. On an
               overloaded system, values higher than 80 can delay
               the return of excess UBC pages to vm and adversely
               affect performance.


              Minimum percentage of physical memory that the  UBC
              can use.

              Default value: 10 (percent)

              Minimum value: 0

              Maximum value: 100

              A  value  that  enables  (1)  or  disables  (0) the
              ability of the task swapper  to  aggressively  swap
              out idle tasks.

              Default value: 0 (disabled)

               Setting this attribute to 1 helps prevent a
               low-memory condition from occurring and allows more
               jobs to be run simultaneously. However, interactive
               response times are likely to be longer on a system
               that is excessively paging and swapping.

              The  number  of  asynchronous I/O requests per swap
              partition that can  be  outstanding  at  one  time.
              Asynchronous  swap  requests  are  used for pageout
              operations and for prewriting modified pages.

              Default value: 4 (requests)

              Minimum value: 0

              Maximum value: 2,147,483,647

       vm_bigpg_anon
               The minimum amount of anonymous memory (in Kbytes)
               that a user process must request before the kernel
               will map a virtual page in the process address
               space to more than one physical page. Anonymous
               memory is requested by calls to mmap(), nmmap(),
               malloc(), and amalloc().

              Default value: 64 (Kbytes)

               Minimum value: 0 (big pages allocation mode
               disabled for anonymous memory)

              Consult with  your  support  representative  before
              changing vm_bigpg_anon to a value other than the 64
              Kbyte default.

              The vm_bigpg_anon attribute has  no  effect  unless
              the vm_bigpg_enabled attribute is set to 1.

              Currently, big pages allocation of anonymous memory
              is not supported for memory-mapped files.

              If the anon_rss_enforce  attribute  (which  sets  a
              limit on the resident set size of a process) is set
              to 1 or 2, it overrides and disables big pages memory
 allocation mode for anonymous and stack memory.
              Make sure that anon_rss_enforce is set to 0 if  you
              want  big  pages memory allocation to be applied to
              anonymous and stack memory.

       vm_bigpg_enabled
               The master switch that enables (1) or disables (0)
               memory allocation for user processes in "big pages"
               mode.

              Default value: 0 (disabled)

               Big pages memory allocation allows a virtual page
               in the process address space to be mapped to
               multiple pages in the system's physical memory.
               This mapping can be to 8, 64, or 512 pages (64,
               512, or 4096 Kbytes, respectively) of physical
               memory.

               Big pages mode uses threshold values, set on a
               per-memory-type basis, to determine whether a
               memory allocation request is eligible to use the
               extended page sizes. The attributes that set these
               thresholds are vm_bigpg_anon, vm_bigpg_shm,
               vm_bigpg_ssm, vm_bigpg_seg, and vm_bigpg_stack.

              If big pages memory  allocation  is  disabled,  the
              kernel  maps  each virtual page in the user address
              space to 8 Kbytes of memory.

              To   enable   big   pages,   you   must   set   the
              vm_bigpg_enabled attribute at system boot time.

       vm_bigpg_seg
               The minimum amount of memory (in Kbytes) that a
               user process must request for a program text object
               before the kernel will map a virtual page in the
               process address space to more than one physical
               page. Allocations for program text objects are
               generated when the process executes a program or
               loads a shared library. See also the descriptions
               of vm_segment_cache_max and vm_segmentation.

              Default value: 64 (Kbytes)

               Minimum value: 0 (big pages memory allocation
               disabled for program text objects)

              Consult  with  your  support  representative before
              changing vm_bigpg_seg to a value other than the  64
              Kbyte default.

               The vm_bigpg_seg attribute has no effect unless the
               vm_bigpg_enabled attribute is set to 1.

               A value that controls whether big pages allocation
               distributes memory across RADs as a priority over
               getting the largest page size possible.

               Default value: 1 (Use smp)

               Setting the value to 0 enables this feature.

       vm_bigpg_shm
               The minimum amount of System V shared memory (in
               Kbytes) that a user process must request before the
               kernel will map a virtual page in the process
               address space to more than one physical page.
               Requests for System V shared memory are generated
               by calls to shmget(), shmctl(), and nshmget().

              Default value: 64 (Kbytes)

              Minimum value: 0 (big pages allocation disabled for
              System V shared memory)

              Consult with  your  support  representative  before
              changing  vm_bigpg_shm to a value other than the 64
              Kbyte default.

              The vm_bigpg_shm attribute has no effect unless the
              vm_bigpg_enabled attribute is set to 1.

       vm_bigpg_ssm
               The minimum amount (in Kbytes) of segmented shared
               memory (System V shared memory with shared page
               tables) that a user process must request before the
               kernel will map a virtual page in the process
               address space to more than one physical page.
               Requests for this type of memory are generated by
               calls to shmget(), shmctl(), and nshmget().

              Default value: 64 (Kbytes)

              Minimum value: 0 (big pages allocation disabled for
              segmented shared memory)

              Consult with  your  support  representative  before
              changing  vm_bigpg_ssm to a value other than the 64
              Kbyte default.

              The vm_bigpg_ssm attribute has no effect unless the
              vm_bigpg_enabled attribute is set to 1.

               The vm_bigpg_ssm attribute is disabled if the
               ssm_threshold attribute is set to 0 (zero). If you
               want to use big pages memory allocation for
               segmented shared memory, make sure that
               ssm_threshold is set to a value that is at least
               equal to the value of SSM_SIZE. This value is
               defined in the <machine/pmap.h> file. See
               sys_attrs_ipc(5) for more information.

       vm_bigpg_stack
               The minimum amount of memory (in Kbytes) needed for
               the user process stack before the kernel will map a
               virtual page in the process address space to more
               than one physical page. Stack memory is
               automatically allocated by the kernel on the user's
               behalf.

              Default value: 64 (Kbytes)

              Minimum value: 0 (big pages allocation disabled for
              the user process stack)

              Consult with  your  support  representative  before
              changing  vm_bigpg_stack  to a value other than the
              64 Kbyte default.

              The vm_bigpg_stack attribute has no  effect  unless
              the vm_bigpg_enabled attribute is set to 1.

               If the anon_rss_enforce attribute (which sets a
               limit on the resident set size of a process) is set
               to 1 or 2, it overrides and disables big pages
               memory allocation of anonymous and stack memory.
               Make sure that anon_rss_enforce is set to 0 if you
               want big pages memory allocation to be applied to
               anonymous and stack memory.

       vm_bigpg_thresh
               The percentage of physical memory that should be
               maintained on the free page list for each of the
               four possible page sizes (8, 64, 512, and 4096
               Kbytes).

              When a page of memory is freed, an attempt is  made
              to  coalesce the page with adjacent pages to form a
              bigger page. The vm_bigpg_thresh attribute sets the
              threshold  at which coalescing begins. With smaller
              values, more pages are coalesced, hence  there  are
              fewer  pages  available  at the smaller sizes. This
              may result in a performance degradation as a larger
              page  will  then  have  to  be  broken into smaller
              pieces to satisfy an allocation request for one  of
              the  smaller page sizes.  If vm_bigpg_thresh is too
              large, fewer large size pages will be available and
              applications may not be able to take full advantage
              of big pages. Generally,  the  default  value  will
              suffice,  but  this  value  can be increased if the
              system work load requires  more  small  pages  than
              large pages.

              Default value: 6%

              Minimum value: 0%

              Maximum value: 25%

              Size, in bytes, of the kernel cluster submap, which
              is used to  allocate  the  scatter/gather  map  for
              clustered file and swap I/O.

              Default value: 1,048,576 (bytes, or 1 MB)

              Minimum value: 0

               Maximum value: 9,223,372,036,854,775,807

              Maximum  size, in bytes, of a single scatter/gather
              map for a clustered I/O request.

              Default value: 65,536 (bytes, or 64 KB)

              Minimum value: 0

               Maximum value: 9,223,372,036,854,775,807

              Number of times that  the  pages  of  an  anonymous
              object are copy-on-write faulted after a fork operation
 but before they are copied  as  part  of  the
              fork operation.

              Default value: 4 (faults)

              Minimum value: 0

              Maximum value: 2,147,483,647

              Size, in bytes, of the kernel copy submap.

              Default value: 1,048,576 (bytes, or 1 MB)

              Minimum value: 0

               Maximum value: 9,223,372,036,854,775,807

              Obsolete; currently ignored by the software.

              Minimum  amount  of  time,  in seconds, that a task
              remains in the inswapped state before it is considered
 a candidate for outswapping.

              Default value: 1 (second)

              Minimum value: 0

              Maximum value: 60

               Size, in bytes, of the largest pagein (read)
               cluster that is passed to the swap device.

               Default value: 16,384 (bytes, or 16 KB)

              Minimum value: 8192

              Maximum value: 131,072

               Size, in bytes, of the largest pageout (write)
               cluster that is passed to the swap device.

               Default value: 32,768 (bytes, or 32 KB)

              Minimum value: 8192

              Maximum value: 131,072

               Base address of the kernel's virtual address space.
               The value can be either 0xffffffff80000000 or
               0xfffffffe00000000, which sets the size of the
               kernel's virtual address space to 2 GB or 8 GB,
               respectively.

               Default value: 18,446,744,073,709,551,615 (2 to the
               power of 64, minus 1)

              You may  need  to  increase  the  kernel's  virtual
              address  space  on  very large memory (VLM) systems
              (for example, systems  with  several  gigabytes  of
              physical  memory  and  several thousand  large processes).


       vm_page_free_hardswap
               The threshold value that stops paging. When the
               number of pages on the free list reaches this
               value, paging stops.

               Default value: Varies, depending on physical memory
               size; about 16 times the value of
               vm_page_free_target


              Minimum value: 0

              Maximum value: 2,147,483,647

              The vm_page_free_hardswap value  is  computed  from
              the  vm_page_free_target  value,  which  by default
              scales with physical memory  size.  If  you  change
              vm_page_free_target,     your     change    affects
              vm_page_free_hardswap as well.

               The threshold value that starts paging. When the
               number of pages on the free page list falls below
               this value, paging starts.

               Default value: 20 (pages, or twice the value of
               vm_page_free_reserved)

              Minimum value: 0

              Maximum value: 2,147,483,647

              The threshold value that begins hard swapping. When
              the number of pages on the free  list  falls  below
              this  value for five seconds, hard swapping begins.

              Default value: Automatically scaled by  using  this
              formula:

              vm_page_free_min    +    ((vm_page_free_target    -
              vm_page_free_min) / 2)

              Minimum value: 0 (pages)

              Maximum value: 2,147,483,647

              The threshold value that determines when memory  is
              limited  to  privileged  tasks.  When the number of
              pages on the free page list falls below this value,
              only privileged tasks can get memory.

              Default value: 10 (pages)

              Minimum value: 1

              Maximum value: 2,147,483,647

              The  threshold  value  that begins swapping of idle
              tasks. When the number of pages on  the  free  page
              list  falls  below  this  value, idle task swapping
              begins.

              Default value: Automatically scaled by  using  this
              formula:

              vm_page_free_min    +    ((vm_page_free_target    -
              vm_page_free_min) / 2)

              Minimum value: 0

              Maximum value: 2,147,483,647

               The threshold value that stops paging. When the
               number of pages on the free page list reaches this
               value, paging stops.

               Default value: Based on the amount of managed
               memory that is available on the system, as shown in
               the following table:

               ---------------------------------------------------
               Available Memory (MB)  vm_page_free_target (pages)
               ---------------------------------------------------
               Less than 512          128
               512 to 1023            256
               1024 to 2047           512
               2048 to 4095           768
               4096 and higher        1024
               ---------------------------------------------------

              Minimum value: 0 (pages)

              Maximum value: 2,147,483,647
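
              The default table can be read as a simple step
              function on available memory. A sketch (the
              function name is illustrative):

```python
def page_free_target_default(available_mb):
    # Default vm_page_free_target (pages) from the table above,
    # keyed on available managed memory in megabytes.
    if available_mb < 512:
        return 128
    if available_mb < 1024:
        return 256
    if available_mb < 2048:
        return 512
    if available_mb < 4096:
        return 768
    return 1024

print(page_free_target_default(256))   # -> 128
print(page_free_target_default(2048))  # -> 768
```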

              Maximum number of modified UBC pages  that  the  vm
              subsystem  will  prewrite to disk if it anticipates
              running out of memory. The prewritten pages are the
              least recently used (LRU) pages.

              Default value: vm_page_free_target * 2

              Minimum value: 0

              Maximum value: 2,147,483,647

              A threshold number of free pages that will start
              swapping of anonymous memory from the resident set
              of a process. Paging of anonymous memory starts
              when the number of free pages falls below this
              value. The process is blocked until the number of
              free pages reaches the value set by the
              vm_rss_wakeup_target attribute.

              Default value: Same as vm_page_free_optimal

              Minimum value: 0

              Maximum value: 2,147,483,647

              The   default   value  of  the  vm_rss_block_target
              attribute is the same as the default value  of  the
              vm_page_free_optimal  attribute  that  controls the
              threshold value for hard swapping.

              You can increase the value  of  vm_rss_block_target
              to  start  paging  of anonymous memory earlier than
              when hard swapping occurs or decrease the value  to
              delay  paging  of anonymous memory beyond the point
              at which hard swapping occurs.

              A percentage of the total pages of anonymous memory
              on  the system that is the system-wide limit on the
              resident set size for any  process.  The  value  of
              this    attribute    has    an   effect   only   if
              anon_rss_enforce is set to 1 or 2.

              Default value: 100 (percent)

              Minimum value: 1

              Maximum value: 100

              You can decrease this percentage to enforce a
              system-wide limit on the resident set size for any
              process. Be aware, however, that this limit applies
              to privileged, as well as unprivileged, processes
              and will override a larger resident set size that
              may be specified for a process through the
              setrlimit() call.
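
              On systems with the setrlimit() interface, a
              process can request its own resident-set limit; the
              system-wide percentage limit described above may
              still impose a smaller effective cap. A Python
              sketch using the standard resource module (the
              64 MB value is hypothetical):

```python
import resource

# Request a per-process resident-set-size limit of 64 MB
# (hypothetical value), keeping the existing hard limit.
soft = 64 * 1024 * 1024
_, hard = resource.getrlimit(resource.RLIMIT_RSS)
resource.setrlimit(resource.RLIMIT_RSS, (soft, hard))

print(resource.getrlimit(resource.RLIMIT_RSS)[0])  # -> 67108864
```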

              A threshold number of free pages that will unblock
              a process whose anonymous memory is swapped out.
              The process is unblocked when the number of free
              pages reaches this value.

              Default value: Same as vm_page_free_optimal

              Minimum value: 0

              Maximum value: 2,147,483,647

              The   default  value  of  the  vm_rss_wakeup_target
              attribute is the same as the default value  of  the
              vm_page_free_optimal  attribute  that  controls the
              threshold value for hard swapping.

              You can increase the value of  vm_rss_wakeup_target
              to  free more memory before unblocking a process or
              decrease the value to unblock  the  process  sooner
              (with less freed memory).
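
              The block/wakeup pair behaves like low and high
              watermarks with hysteresis. A minimal sketch, with
              hypothetical thresholds:

```python
def rss_blocked(free_pages, blocked, block_target, wakeup_target):
    # Block when free pages fall below vm_rss_block_target; stay
    # blocked until free pages recover to vm_rss_wakeup_target.
    if not blocked and free_pages < block_target:
        return True
    if blocked and free_pages >= wakeup_target:
        return False
    return blocked

state = False
state = rss_blocked(100, state, 128, 256)  # below 128 -> blocked
state = rss_blocked(200, state, 128, 256)  # 200 < 256: still blocked
state = rss_blocked(300, state, 128, 256)  # reaches 256 -> unblocked
print(state)  # -> False
```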

              Number  of  text segments that can be cached in the
              segment cache. (Applies only if you enable  segmentation.)


              Default value:  50 (segments)

              Minimum value: 0

              Maximum value: 8192

              The  vm  subsystem  uses the segment cache to cache
              inactive executables and shared libraries.  Because
              objects  in  the  segment  cache can be accessed by
              mapping a page table entry, this  cache  eliminates
              I/O delays for repeated executions and reloads.

              Reducing  the  number  of  segments  in the segment
              cache can free memory and  help  to  reduce  paging
              overhead.  (The size of each segment depends on the
              text size of the executable or the  shared  library
              that is being cached.)

              A  value that enables (1) or disables (0) the ability
 of shared regions of user address space to also
              share  the  page  tables  that  map to those shared
              regions.

              Default value: 1 (enabled)

              In a TruCluster environment, this value must be the
              same on all cluster members.

              Specifies  the  swap  allocation mode, which can be
              immediate mode (1) or deferred mode (0).  Immediate
              mode  is  commonly  referred to as "eager" mode and
              deferred mode is commonly  referred  to  as  "lazy"
              mode.

              Default value: 1 (eager swap mode)

              In  eager swap mode, the kernel will block a memory
              allocation when it  cannot  reserve  in  advance  a
              matching  amount  of swap space. Eager swap mode is
              recommended for systems  with  variable  workloads,
              particularly  for  those  with  unpredictably  high
              peaks of memory consumption. For eager  swap  mode,
              swap  space  should not be less than 111 percent of
              system memory. A swap space  configuration  of  150
              percent  of memory is recommended for most systems,
              and small memory systems are likely to require swap
              space  in excess of 150 percent of memory. In eager
              swap mode, if  swap  space  is  not  configured  to
              exceed  the amount of memory by a large enough percentage,
 the likelihood that system memory will  be
              underutilized   during  times  of  peak  demand  is
              increased. In fact, configuring swap space that  is
              less  than the amount of memory on the system, even
              if swapping does not  occur,  prevents  the  kernel
              from  using  memory  that represents the difference
              between memory and swap space  amounts.  When  swap
              space  is unavailable in eager swap mode, processes
              start blocking one another and, worst  case,  cause
              the system to hang.

              In  lazy  swap  mode, the kernel does not require a
              matching amount of swap space to  be  available  in
              advance  of  a  memory allocation. However, in lazy
              swap mode, the kernel kills  processes  to  reclaim
              memory  if  an  attempt to swap out a process fails
              because of insufficient  swap  space.  Because  key
              kernel  processes  can  be  killed,  this condition
              increases the likelihood of a  system  crash.  Lazy
              swap  mode is appropriate on very large memory systems
 for which it is impractical to configure  swap
              space  that  is half again as large as memory. Lazy
              swap mode is also appropriate for  smaller  systems
              with a relatively constant and predictable workload
              or for systems on which peak memory consumption  is
              always  well  below  the  amount  of memory that is
              available.  In all cases where lazy  swap  mode  is
              used,  enough  swap  space  must  be  configured to
              accommodate times of peak memory consumption,  plus
              an  extra  amount of swap space to provide a margin
              of safety. To determine the amount  of  swap  space
              that  is needed, monitor memory and swap space consumption
 over time to determine  consumption  peaks
              and then factor in a generous margin of safety.
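
              The eager-mode sizing guidance above is simple
              arithmetic. A small helper (names are
              illustrative; results round down to whole
              megabytes):

```python
def eager_swap_mb(memory_mb):
    # Eager mode guidance from the text: swap should be at least
    # 111 percent of memory; 150 percent is recommended for most
    # systems.
    return memory_mb * 111 // 100, memory_mb * 150 // 100

minimum, recommended = eager_swap_mb(4096)  # hypothetical 4 GB system
print(minimum, recommended)  # -> 4546 6144
```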

              The  number of synchronous I/O requests that can be
              outstanding to the swap  partitions  at  one  time.
              Synchronous  swap  requests  are  used  for page-in
              operations and task swapping.

              Default value: 128 (requests)

              Minimum value: 1

              Maximum value: 2,147,483,647

              Maximum percentage of physical memory that  can  be
              dynamically  wired.   The kernel and user processes
              use this  memory  for  dynamically  allocated  data
              structures and address space, respectively.

              Default value: 80 (percent)

              Minimum value: 1

              Maximum value: 100

              Enables,  disables, and tunes the trolling rate for
              the memory troller on systems supported by the memory
 troller.

              When  enabled, the memory troller continually reads
              the system's memory  to  proactively  discover  and
              handle  memory errors.  The troll rate is expressed
              as  a  percentage  of  the  system's  total  memory
              trolled per hour and you can change it at any time.
              Valid troll rate settings are:

              Default value: 4 (percent per hour)

              This default value applies if you do not specify
              any value for vm_troll_percent in the
              /etc/sysconfigtab file. At the default troll rate,
              each 8-kilobyte memory page is trolled once every
              24 hours.

              Disable value: 0 (zero)

              Specify a value of 0 (zero) to disable memory
              trolling.

              Range: 1 to 100 (percent)

              Specify a value in the range 1 to 100 to set the
              troll rate to a percentage of memory to troll per
              hour. For example, a troll rate of 50 reads half
              the total memory in one hour. After all memory is
              read, the troller starts a new pass at the
              beginning of memory.

              Accelerated trolling: 101 (percent)

              Specify a value greater than 100 percent to invoke
              one-pass accelerated trolling. At this rate, all
              system memory is trolled at a rate of approximately
              6000 pages per second, where one page equals 8
              kilobytes. Trolling is then automatically disabled
              after a single pass. This mode is intended for
              trolling all memory quickly during off-peak hours.

              Low troll rates, such as the 4 percent default,
              have a negligible impact on system performance.
              Processor usage for memory trolling increases as
              the troll rate is increased. Refer to
              memory_trolling(5) for additional performance
              information and memory troller usage instructions.
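
              At the accelerated rate, the time for one full
              pass follows directly from the quoted figures. A
              sketch (the memory size is hypothetical):

```python
def accelerated_troll_seconds(memory_bytes, pages_per_sec=6000,
                              page_bytes=8192):
    # One-pass accelerated trolling reads all of memory at roughly
    # 6000 8-kilobyte pages per second, per the description above.
    return memory_bytes / (pages_per_sec * page_bytes)

# Hypothetical 4 GB system: about a minute and a half per pass.
print(round(accelerated_troll_seconds(4 * 2**30)))  # -> 87
```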

              Specifies  the number of I/O operations that can be
              outstanding while purging  dirty  (modified)  pages
              from  the  UBC. The dirty pages are flushed to disk
              to reclaim memory.  The UBC purge daemon will  stop
              flushing  dirty  pages  when  the  number  of  I/Os
              reaches the vm_ubcbuffers limit  or  there  are  no
              more  dirty  pages  in the UBC. AdvFS software does
              not use this attribute; only UFS software uses  it.

              Default value: 256 (I/Os)

              Minimum value: 0

              Maximum value: 2,147,483,647

              For  systems  running at capacity and on which many
              interactive users are performing  write  operations
              to  UFS  file  systems,  users  might detect slowed
              response times if many pages are  flushed  to  disk
              each  time  the  UBC buffers are purged. Decreasing
              the value of vm_ubcbuffers causes shorter but  more
              frequent  purge  operations,  thereby smoothing out
              system response times. Do  not,  however,  decrease
              vm_ubcbuffers  to  a value that completely disables
              purging of dirty pages. One I/O  for  certain  file
              systems might be associated with many pages because
              of write clustering of dirty pages.

                                     Note

              Changes to this attribute take effect only when
              made at boot time.
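
              For example, a smaller purge limit could be set in
              the vm stanza of /etc/sysconfigtab (the value here
              is hypothetical) so that it is applied at the next
              reboot:

```
vm:
    vm_ubcbuffers = 128
```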

              You  can  also  set the smoothsync_age attribute of
              the vfs kernel subsystem to  address  response-time
              delays  that  can  occur  during periods of intense
              write activity. The smoothsync_age attribute uses a
              different  metric  (age  of dirty pages rather than
              number of I/Os) to balance the frequency and
              duration of purge operations and therefore does
              not support the ability of UFS to flush  all  dirty
              pages  for  the  same  write  operation at the same
              time. However, smoothsync_age can be changed  while
              the  system is running and is used by AdvFS as well
              as UFS software. See sys_attrs_vfs(5) for  information
 about the smoothsync_age attribute.


              The  percentage  of pages that must be dirty (modified)
 before the UBC starts writing them to disk.

              Default value: 10 (percent)

              Minimum value: 0

              Maximum value: 100

              In the context of an application thread, the number
              of  pages  that must be dirty (modified) before the
              UBC update daemon starts writing them.  This  value
              is for internal use only.

              The  minimum  number  of  pages to be available for
              file expansion. When the number of available  pages
              falls  below this number, the UBC steals additional
              pages to anticipate the file's expansion demands.

              Default value: 24 (file pages)

              Minimum value: 0

              Maximum value: 2,147,483,647

              The maximum percentage of UBC memory that can be
              used to cache a single file. See
              vm_ubcseqstartpercent for information about
              controlling when the UBC checks this limit.

              Default value: 10 (percent)

              Minimum value: 0

              Maximum value: 100

              A threshold value (a percentage of the UBC in terms
              of its current size) that determines when  the  UBC
              starts  to check the percentage of UBC pages cached
              for each file object. If the cached page percentage
              for any file exceeds the value of vm_ubcseqpercent,
              the UBC returns that file's UBC LRU pages  to  virtual
 memory.

              Default value: 50 (percent)

              Minimum value: 0

              Maximum value: 100
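
              Taken together, vm_ubcseqpercent and
              vm_ubcseqstartpercent describe a per-file cap that
              is checked only past a UBC-wide threshold. A rough
              sketch of the per-file check, assuming the default
              percentage quoted above:

```python
def file_exceeds_share(file_pages, ubc_pages, seqpercent=10):
    # True when one file's cached pages exceed vm_ubcseqpercent
    # of the UBC's current size; the UBC would then return that
    # file's least recently used pages to virtual memory.
    return file_pages * 100 > ubc_pages * seqpercent

print(file_exceeds_share(1500, 10000))  # -> True  (15% > 10%)
print(file_exceeds_share(500, 10000))   # -> False ( 5% <= 10%)
```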

SEE ALSO    [Toc]    [Back]

      
      
       Commands:   dxkerneltuner(8),  sysconfig(8),  and  sysconfigdb(8).

       Others:   memory_trolling(5),    sys_attrs_proc(5),    and
       sys_attrs(5).

       System Configuration and Tuning

       System Administration



                                                  sys_attrs_vm(5)