Affecting system operations with operating system resources
The performance of your database server application depends on the following key factors:
- Hardware resources
- Operating system configuration
- Network configuration and traffic
- Memory management
You must consider these factors when you attempt to identify performance problems or make adjustments to your system.
Hardware components include the following:
- Disk I/O subsystems
- Physical memory
The database server depends on the operating system to provide low-level access to devices, process scheduling, interprocess communication, and other vital services.
The configuration of your operating system has a direct impact on how well the database server performs. The operating system kernel uses a significant amount of physical memory that the database server or other applications cannot use. Furthermore, you must reserve adequate kernel resources for the database server.
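On Linux, the shared-memory limits that the kernel reserves for applications can be inspected through the proc filesystem. The following is a minimal sketch; the parameter names and paths are Linux-specific, and the example value shown in the comment is illustrative only:

```shell
# Inspect kernel shared-memory limits on Linux (paths are Linux-specific;
# consult your platform's documentation for equivalents).
cat /proc/sys/kernel/shmmax   # maximum size of one shared-memory segment, in bytes
cat /proc/sys/kernel/shmall   # system-wide shared-memory ceiling, in pages
# To raise a limit for the current boot (requires root):
#   sysctl -w kernel.shmmax=4294967296
# To make a change persistent across reboots, add it to /etc/sysctl.conf.
```

Other platforms expose comparable tunables through different interfaces, so check the documentation for your operating system before adjusting anything.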
Besides kernel tuning, there are soft limits on various resources, such as stack size and the number of open file descriptors. These limits can be examined and adjusted with the ulimit command; kernel tuning itself is not covered in this tutorial.
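A quick sketch of examining these soft limits with ulimit in a POSIX-style shell (the output and available flags vary by shell and platform):

```shell
# Show all resource limits for the current shell session
ulimit -a
# Show just the soft limit on open file descriptors
ulimit -Sn
# Show the soft limit on stack size (reported in kilobytes)
ulimit -Ss
# A soft limit can be raised for the current session, up to the hard
# limit (ulimit -Hn); the value 4096 below is purely illustrative:
#   ulimit -Sn 4096
```

Limits set this way apply only to the current shell and its children; persistent changes are usually made in the system's limits configuration (for example, /etc/security/limits.conf on many Linux systems).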
The temp directory is a shared repository used by many common applications (such as vi, ed, or the kernel). Its size might need to be adjusted, depending on the needs of the users and applications that run on the operating system.
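Free space in the temp directory can be checked with standard tools; this sketch assumes a UNIX-like system where the temp directory is /tmp (the path may differ on your platform):

```shell
# Report free space on the filesystem that holds the temp directory
df -h /tmp
# List the largest entries currently consuming temp space
du -sh /tmp/* 2>/dev/null | sort -rh | head
```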
The database server might need to perform I/O operations on more than one object (such as a table and a logical log file) located on the same disk. Contention between busy processes with high I/O demands can slow them down, so it is important to monitor disk usage. In such a scenario, you can see performance gains by load balancing (for example, moving one of the tables to another disk).
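One way to monitor per-device disk activity on Linux is to read the kernel's I/O counters directly; this is a Linux-specific sketch, and the field positions assumed below come from the kernel's iostats documentation:

```shell
# Per-device I/O counters maintained by the Linux kernel.
# Field 3 is the device name; fields 4 and 8 are completed reads
# and completed writes since boot.
awk '{printf "%-12s reads=%-12s writes=%s\n", $3, $4, $8}' /proc/diskstats
# If the sysstat package is installed, iostat reports the same
# activity as per-interval rates with utilization percentages:
#   iostat -x 5
```

A device whose counters grow much faster than the others is a candidate for load balancing.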
Applications that depend on a network for communication with the database server and systems that rely on data replication to maintain high availability are subject to the performance constraints of that network. Data transfers over a network are typically slower than data transfers from a disk. Network delays can have a significant impact on the performance of the database server and other application programs that run on the host computer.
The operating system must have a page in memory before it can perform any operations on that page. When the operating system needs to allocate memory for a process, it first tries to reclaim any unused pages it can find. If no free pages exist, the memory-management system must choose pages that other processes are still using. The operating system tries to identify the pages that seem least likely to be needed in the short run, so that they can be replaced by the new pages. This process of locating displaceable pages is called a page scan. Page scans can increase CPU utilization.
Most memory-management systems use a least-recently-used algorithm to determine which pages can be replaced in memory. Once identified, these pages are copied out to disk. The memory is then freed for use by other processes. When a page is written out to disk, it is written to a specific area called swap space or swap area, where it is available for reading back into memory. This space is typically a dedicated disk or disk partition. The process is called paging. Paging uses I/O resources and CPU cycles to do its work.
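On Linux, the configured swap areas and their sizes can be listed through the proc filesystem; this sketch is Linux-specific:

```shell
# List the configured swap areas and their sizes (Linux-specific path)
cat /proc/swaps
# If the procps tools are available, "free -h" summarizes physical
# memory and swap usage in one view.
```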
At some point, the page images that were paged out must be copied back in for use by the processes that need them. And so the cycle starts again with other older pages (pages that have not been used relatively recently). If there is enough paging back and forth, the operating system might reach a point at which the kernel is almost totally occupied with copying pages in and out. This state is called thrashing. If the system is thrashing, all useful work comes to a halt.
To prevent thrashing, the memory-management algorithms of some operating systems scale to a coarser granularity at a certain threshold: instead of looking for older pages, the system swaps out all pages of a particular process (to swap space on disk). This process is called swapping.
Every process that is swapped out of memory must eventually be swapped back in. The disk I/O to the swap device dramatically increases the time required to switch between processes, because each context switch must read all of the process's pages back into memory. Performance is then limited by the speed at which those pages can be transferred. A system that is swapping is severely overloaded, and throughput is impaired.
Many operating systems have commands that provide information about paging and swapping activity. Important statistics reported include the following:
- Number of pages paged or swapped out of memory
- Number of page scans (this is an early indicator that memory utilization is becoming a bottleneck)
- Number of pages paged or swapped into memory (this number is often not as reliable an indicator of a problem as paging out, because paging in includes initial loading of processes and loading of paged out pages when active processes terminate and memory is freed)
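On Linux, the statistics above are exposed as cumulative counters in /proc/vmstat; this sketch is Linux-specific, and counter names can vary slightly between kernel versions:

```shell
# Cumulative paging and swapping counters on Linux:
#   pgpgin/pgpgout  - kilobytes paged in/out
#   pswpin/pswpout  - pages swapped in/out
#   pgscan_*        - page scans (an early sign of memory pressure)
grep -E '^(pgpgin|pgpgout|pswpin|pswpout|pgscan)' /proc/vmstat
# vmstat (procps) reports the same activity as per-interval rates:
#   vmstat 5    # the si/so columns show swap-in/swap-out per second
```

A steadily rising pswpout or pgscan count while applications are running is the kind of early bottleneck indicator described above.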