Everything Penguin

Focusing on Linux-based Operating Systems

    Performance Tuning (System) - Overview
    Brett Lee
    ==============================================
    
    
    Principles:
    ----------------
    1.  Tuning is not troubleshooting, even though they may use similar tools.
    2.  Go for the low-hanging fruit first: get the biggest bang for the buck.
    3.  Make small, incremental modifications and benchmark after each one.
    4.  For applications: optimize loops, do things in big chunks, use a profiler.
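
    For item 4, a minimal profiling pass might look like the sketch below.
    The tool choice (perf) and the binary name (./myapp) are only examples;
    OProfile and gprof, mentioned later, serve the same purpose.

        # Record call-graph samples while the program runs:
        perf record -g ./myapp
        # Summarize where the CPU time went, by library and symbol:
        perf report --sort=dso,symbol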
    
    
    Major Areas:
    ----------------
    1.  CPU
    2.  Memory
    3.  Disk / Array
    4.  Filesystems
    5.  Networking
    6.  NFS
    7.  OS Specific
    
    
    CPU
    ----------------
    1.  Disable all unnecessary processes
        - they won't be competing for time on the CPU
    2.  Optimize cache hits
        - build systems with sufficient L1/L2 cache
        - utilize CPU affinity
        - when an L2 cache is shared, group related processes onto the CPUs that share it
        - use CPUSETS on large shared memory systems
    3.  Increase the CPU time of critical processes via scheduling classes and
        priorities, and pin IRQ handling to designated CPUs (see the sketch below)
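
    A rough sketch of the affinity and priority knobs above; the PID, CPU
    numbers, IRQ number and job name are placeholders:

        # Pin an existing process (PID 1234) to CPUs 0 and 1:
        taskset -cp 0,1 1234
        # Run a critical job in the SCHED_FIFO real-time class at priority 10:
        chrt -f 10 ./critical_job
        # Steer IRQ 24 (e.g. a NIC interrupt) to CPU 2 (bitmask 0x4):
        echo 4 > /proc/irq/24/smp_affinity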
    
    
    Memory (RAM)
    ----------------
    1.  Sufficient RAM to avoid swapping
    2.  Use multi-channel / high-speed RAM
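
    To confirm the box is not swapping, and to make the kernel less eager to
    swap (the value 10 is just an example):

        # Watch the si/so columns; they should stay at or near zero:
        vmstat 5
        # Show memory and swap usage in megabytes:
        free -m
        # Lower the kernel's tendency to swap (default is 60):
        sysctl -w vm.swappiness=10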
    
    
    Disk / Array
    ----------------
    1.  Build arrays with data striped across a large number of disks
    2.  Increase the array stripe size to be as large as possible (without wasting disk)
        - 8k is common as it is the default block size for UFS / VxFS
    3.  Maximize the array cache
    4.  Opt for RAID 3 or 5 with "hot-standby" disks over RAID with mirroring
    5.  Opt for disks with fast access and seek times
    6.  Tune SATA/IDE disk(s) using `hdparm`
        - enable DMA
          - transfers go over DMA channels, using less CPU
        - enable auto-readahead with sufficient readahead size
          - beneficial for sequential reads of large files
        - enable unmasking of interrupts
    7.  Tune SCSI disk(s) using `sdparm`
    8.  Enable write-back cache (use with caution)
    9.  Disable power management during periods of heavy use
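
    The hdparm items above translate roughly into the commands below; the
    device name /dev/hda is a placeholder, and each change should be
    benchmarked:

        # Show the drive's current settings:
        hdparm /dev/hda
        # Enable DMA, a 256-sector readahead, and interrupt unmasking:
        hdparm -d1 -a256 -u1 /dev/hda
        # Enable the write-back cache (caution: data loss risk on power failure):
        hdparm -W1 /dev/hda
        # Benchmark cached and raw read speeds before and after each change:
        hdparm -tT /dev/hda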
    
    
    Filesystem
    ----------------
    1.  Disable atime
    2.  Tune the block size to reflect the data to be stored on the particular filesystem
    3.  If using an array, opt to have the array stripe size be equal to a multiple of the block size
    4.  Use ramdisks (with caution)
    5.  On large filesystems, decrease the amount of "reserved" space
    6.  On newer filesystems, enable write-back caching (use with caution) and configure less frequent writes (e.g. a longer journal commit interval)
    7.  Use an appropriate I/O scheduler and set job priorities if supported
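
    A sketch of the filesystem knobs above (ext2/ext3 assumed; the device,
    mount point and job name are placeholders):

        # Remount a filesystem with atime updates disabled:
        mount -o remount,noatime /data
        # Drop the reserved space on a large filesystem from 5% to 1%:
        tune2fs -m 1 /dev/sdb1
        # Select the deadline I/O scheduler for one disk:
        echo deadline > /sys/block/sdb/queue/scheduler
        # Run a batch job in the idle I/O class so it yields to interactive I/O:
        ionice -c3 ./batch_job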
    
    
    Networking
    ----------------
    1.  Tune the NIC using ndd, ethtool, mii-tool, etc.
        - top speed, full duplex, packet size, etc.
    2.  Tune TCP / UDP settings and buffers
        - some OSs auto-tune
        - TCP tuning for high-bandwidth / high latency links
          - send buffer size = bandwidth * round-trip time
        - for UDP, increase the receive buffer size
    3.  Increase the MTU
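
    As an example of the buffer math: a 1 Gbit/s link with a 70 ms round-trip
    time needs roughly 1 Gbit/s * 0.07 s = 8.75 MB of buffer, so a 16 MB
    ceiling is comfortable. The interface name and values below are examples:

        # Force gigabit full duplex on the NIC:
        ethtool -s eth0 speed 1000 duplex full autoneg off
        # Raise the socket buffer ceilings to 16 MB:
        sysctl -w net.core.rmem_max=16777216
        sysctl -w net.core.wmem_max=16777216
        # Let TCP auto-tuning grow buffers up to the same limit:
        sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
        # Use jumbo frames (requires switch and peer support):
        ip link set eth0 mtu 9000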
    
    
    NFS (if used)
    ----------------
    1.  Increase rsize and wsize
    2.  Use async (the server replies before committing data to disk)
    3.  On stable networks with heavy congestion, use TCP instead of UDP
        - only the lost segment is retransmitted when packets are dropped
        - better handling of links with different network speeds
    4.  Increase the number of threads for the NFS server
    5.  See: http://nfs.sourceforge.net/nfs-howto/ar01s05.html
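
    A sketch of the client and server sides; the server name, export path and
    thread count are placeholders:

        # Client: mount with larger transfer sizes over TCP:
        mount -o rsize=32768,wsize=32768,tcp server:/export /mnt/nfs
        # Server: export with async writes in /etc/exports (caution: data can
        # be lost if the server crashes before committing):
        #   /export   client(rw,async)
        # Server: raise the number of nfsd threads:
        echo 16 > /proc/fs/nfsd/threads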
    
    
    OS Specific (kinda-sorta, as source is typically available) & Common Tools
    ----------------
    1.  Solaris
        - strace - print STREAMS trace messages
        - truss - trace system calls and signals
        - dtrace - DTrace dynamic tracing compiler and tracing utility
          - http://docs.sun.com/app/docs/doc/817-6223
    
    2.  Linux
        - strace - trace system calls and signals
        - Oprofile - for CPU bound processes
          - http://oprofile.sourceforge.net/news/
        - SystemTap (STAP) - DTrace knockoff
          - http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/SystemTap_Beginners_Guide
    
    3.  Common
        - sar, iostat, vmstat, mpstat, netstat, prstat, top, atop, iotop ...
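
    A quick first pass with the common tools (the intervals and PID are
    arbitrary):

        vmstat 5             # CPU, memory, swap and block I/O at a glance
        iostat -x 5          # per-device utilization and service times
        mpstat -P ALL 5      # per-CPU breakdown
        sar -n DEV 5 3       # per-interface network throughput
        strace -c -p 1234    # syscall summary for one running process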
    
    
    For more, check out IBM:
    
      Linux Performance Tuning - mostly on IBM Power
        http://www.ibm.com/developerworks/wikis/display/LinuxP/Performance+Tuning
    
      IBM HPC Central:
        http://www.ibm.com/developerworks/wikis/display/hpccentral/HPC+Central
    
    
    



    This site contains many of my notes from research into different aspects of the Linux kernel, as well as some of the software provided by GNU and others. Though these notes are not fully comprehensive or even completely accurate, they are part of my ongoing attempt to better understand this complex field. And they are yours to use.

    Should you wish to report any errors or suggestions, please let me know.

    Should you wish to make a donation for anything you may have learned here, please direct that donation to the ASPCA, with my sincere thanks.

    Brett Lee
    Everything Penguin

    The code for this site, which is just a few CGI scripts, may be found on GitHub (https://github.com/userbrett/cgindex).

    For both data encryption and password protection, try Personal Data Security (https://www.trustpds.com).


    "We left all that stuff out. If there's an error, we have this routine called 'panic', and when its called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"

        - Dennis Ritchie on Unix (vs Multics)

